Science.gov

Sample records for 3d visualization methods

  1. Symbolic processing methods for 3D visual processing

    NASA Astrophysics Data System (ADS)

    Tedder, Maurice; Hall, Ernest L.

    2001-10-01

The purpose of this paper is to describe a theory that defines an open method for solving 3D visual data processing and artificial intelligence problems that is independent of hardware or software implementation. The goal of the theory is to generalize and abstract the process of 3D visual processing so that the method can be applied to a wide variety of 3D visual processing problems. Once the theory is described, a heuristic derivation is given. Symbolic processing methods can be generalized into an abstract model composed of eight basic components: input data; input data interface; symbolic data library; symbolic data environment space; relationship matrix; symbolic logic driver; output data interface; and output data. An obstacle detection and avoidance experiment was constructed to demonstrate the symbolic processing method. The results of the robot obstacle avoidance experiment demonstrated that the mobile robot could successfully navigate the obstacle course using symbolic processing methods for the control software. The significance of the symbolic processing approach is that it arrives at a solution by a more formal, quantifiable process. Practical applications of this theory include 3D object recognition, obstacle avoidance, and intelligent robot control.
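The eight-component model enumerated above can be sketched in code. This is a hypothetical illustration only: all class, method, and symbol names are invented, and the obstacle-avoidance logic is reduced to a toy rule, not the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class SymbolicProcessor:
    """Toy stand-in for the eight-component symbolic processing model."""
    library: Dict[str, str]                               # symbolic data library
    rules: Dict[str, str]                                 # relationship matrix (symbol -> action)
    environment: List[str] = field(default_factory=list)  # symbolic environment space

    def input_interface(self, raw: List[float]) -> List[str]:
        # Input data interface: map raw range readings (metres) to symbols.
        return ["OBSTACLE" if r < 1.0 else "CLEAR" for r in raw]

    def logic_driver(self, symbols: List[str]) -> str:
        # Symbolic logic driver: consult the relationship matrix.
        self.environment.extend(symbols)
        return self.rules["OBSTACLE"] if "OBSTACLE" in symbols else self.rules["CLEAR"]

    def output_interface(self, action: str) -> str:
        # Output data interface: on a robot this would emit motor commands.
        return action

# Toy obstacle-avoidance run: three range readings, one object at 0.4 m.
proc = SymbolicProcessor(
    library={"OBSTACLE": "object closer than 1 m", "CLEAR": "free space"},
    rules={"OBSTACLE": "TURN", "CLEAR": "FORWARD"},
)
symbols = proc.input_interface([0.4, 2.5, 3.0])
print(proc.output_interface(proc.logic_driver(symbols)))  # -> TURN
```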

  2. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods in tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike with a manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were subsequently examined, and a region of interest (ROI) was selected in each frame in which a tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent colour-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and for categorizing breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.

  3. 3D reservoir visualization

    SciTech Connect

Van, B.T.; Pajon, J.L.; Joseph, P.

    1991-11-01

    This paper shows how some simple 3D computer graphics tools can be combined to provide efficient software for visualizing and analyzing data obtained from reservoir simulators and geological simulations. The animation and interactive capabilities of the software quickly provide a deep understanding of the fluid-flow behavior and an accurate idea of the internal architecture of a reservoir.

  4. Sector mapping method for 3D detached retina visualization.

    PubMed

    Zhai, Yi-Ran; Zhao, Yong; Zhong, Jie; Li, Ke; Lu, Cui-Xin; Zhang, Bing

    2016-10-01

    A new sphere-mapping algorithm called sector mapping is introduced to map sector images to the sphere of an eyeball. The proposed sector-mapping algorithm is evaluated and compared with the plane-mapping algorithm adopted in previous work. A simulation that maps an image of concentric circles to the sphere of the eyeball and an analysis of the difference in distance between neighboring points in a plane and sector were used to compare the two mapping algorithms. A three-dimensional model of a whole retina with clear retinal detachment was generated using the Visualization Toolkit software. A comparison of the mapping results shows that the central part of the retina near the optic disc is stretched and its edges are compressed when the plane-mapping algorithm is used. A better mapping result is obtained by the sector-mapping algorithm than by the plane-mapping algorithm in both the simulation results and real clinical retinal detachment three-dimensional reconstruction. PMID:27480739
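The geometric idea behind mapping a flat retinal image onto the eyeball can be sketched as follows. This is an illustrative parameterization only, not the paper's algorithm: the 12 mm eyeball radius and the pixel scale are assumed values. The key point is that treating the in-image radius as an arc length along the sphere avoids the central stretching and edge compression reported for plane mapping.

```python
import math

def sector_to_sphere(r_px, theta, eye_radius=12.0, px_per_mm=50.0):
    """Map a pixel at polar coords (r_px, theta) in the fundus image to (x, y, z).

    r_px is the pixel distance from the image centre; the radius is treated
    as arc length along the retina, so distances are preserved radially.
    """
    arc = r_px / px_per_mm           # arc length along the retina, mm
    phi = arc / eye_radius           # polar angle from the posterior pole
    x = eye_radius * math.sin(phi) * math.cos(theta)
    y = eye_radius * math.sin(phi) * math.sin(theta)
    z = -eye_radius * math.cos(phi)  # posterior pole of the eye at z = -R
    return (x, y, z)

p = sector_to_sphere(0.0, 0.0)
print(p)  # the image centre maps to the posterior pole: (0.0, 0.0, -12.0)
```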

  5. [A positioning error measurement method in radiotherapy based on 3D visualization].

    PubMed

    An, Ji-Ye; Li, Yue-Xi; Lu, Xu-Dong; Duan, Hui-Long

    2007-09-01

The positioning error in radiotherapy is one of the most important factors influencing the localization precision of the tumor. Based on CT-on-rails technology, this paper describes research on measuring the positioning error in radiotherapy by comparing the planning CT images with the treatment CT images using 3-dimensional (3D) methods, which can help doctors measure positioning errors more accurately than 2D methods. The system also supports powerful 3D interaction, such as dragging, rotating and picking up objects, so that doctors can visualize and measure positioning errors intuitively.
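The core measurement idea can be sketched with a toy calculation: estimate the 3D positioning error as the mean offset between matched anatomical landmarks in the planning CT and the treatment CT. The coordinates and the simulated shift below are invented; the paper's actual registration and interaction pipeline is far richer.

```python
import numpy as np

# Hypothetical landmark positions in the planning CT (mm).
planning = np.array([[10.0, 20.0, 30.0],
                     [15.0, 25.0, 35.0],
                     [12.0, 22.0, 28.0]])
# Simulated patient setup shift applied to the treatment CT.
treatment = planning + np.array([1.5, -0.5, 2.0])

error_vector = (treatment - planning).mean(axis=0)  # per-axis error, mm
magnitude = float(np.linalg.norm(error_vector))     # overall 3D shift, mm
print(error_vector.tolist(), round(magnitude, 2))   # -> [1.5, -0.5, 2.0] 2.55
```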

  6. A 3-D visualization method for image-guided brain surgery.

    PubMed

    Bourbakis, N G; Awad, M

    2003-01-01

This paper deals with a 3D methodology for brain tumor image-guided surgery. The methodology is based on the development of a visualization process that mimics the human surgeon's behavior and decision-making. In particular, it first constructs a 3D representation of a tumor from segmented versions of the 2D MRI images. It then develops an optimal path for tumor extraction based on minimizing the surgical effort and penetration area. A cost function incorporated in this process minimizes damage to the surrounding healthy tissue, taking into consideration the constraints of a new snake-like surgical tool proposed here. The tumor extraction method presented in this paper is compared with the ordinary method used in brain surgery, which is based on a straight-line surgical tool. Illustrative examples based on realistic simulations present the advantages of the proposed 3D methodology.
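The idea of selecting an extraction path by minimizing a cost over surgical effort and damage to healthy tissue can be illustrated with a toy calculation. The cost terms, weights, and coordinates below are all invented for illustration; the paper's actual cost function and snake-like tool constraints are more elaborate.

```python
import math

def path_cost(entry, tumor, critical, w_depth=1.0, w_damage=10.0):
    """Toy cost: penetration length plus a penalty for passing near a critical region."""
    depth = math.dist(entry, tumor)                    # penetration length
    # Crude "damage" proxy: inverse distance from the path midpoint
    # to a critical (eloquent) region to be avoided.
    mid = [(e + t) / 2 for e, t in zip(entry, tumor)]
    damage = 1.0 / (math.dist(mid, critical) + 1e-6)
    return w_depth * depth + w_damage * damage

tumor = (0.0, 0.0, 5.0)        # tumor centroid (invented coordinates)
critical = (2.0, 0.0, 3.0)     # critical region to avoid
candidates = [(0, 0, 0), (4, 0, 0), (-4, 0, 0)]
best = min(candidates, key=lambda e: path_cost(e, tumor, critical))
print(best)  # -> (-4, 0, 0): the entry point on the side away from the critical region
```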

  7. GPU-Based Visualization of 3D Fluid Interfaces using Level Set Methods

    NASA Astrophysics Data System (ADS)

    Kadlec, B. J.

    2009-12-01

We model a simple 3D fluid-interface problem using the level set method and visualize the interface as a dynamic surface. Level set methods allow implicit handling of complex topologies deformed by evolutions where sharp changes and cusps are present, without destroying the representation. We present a highly optimized visualization and computation algorithm implemented in CUDA to run on the NVIDIA GeForce GTX 295. CUDA is a general-purpose parallel computing architecture that allows the NVIDIA GPU to be treated like a data-parallel supercomputer in order to solve many computational problems in a fraction of the time required on a CPU. CUDA is compared to the new OpenCL™ (Open Computing Language), which is designed to run on heterogeneous computing environments but does not take advantage of low-level features in NVIDIA hardware that provide significant speedups. Therefore, our technique is implemented using CUDA, and results are compared to a single-CPU implementation to show the benefits of using the GPU and CUDA for visualizing fluid-interface problems. We solve a 1024^3 problem and achieve significant speedup using the NVIDIA GeForce GTX 295. Implementation details for mapping the problem to the GPU architecture are described, along with a discussion of porting the technique to heterogeneous devices (AMD, Intel, IBM) using OpenCL. The results present a new interactive system for computing and visualizing the evolution of fluid-interface problems on the GPU.
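The underlying level-set update can be sketched on the CPU in a few lines (the paper's implementation is CUDA-based; this only shows the method). The fluid interface is the zero level set of a signed-distance field phi, advanced with phi_t = -V |grad phi| for an outward normal speed V; grid size and speed here are arbitrary illustrative values.

```python
import numpy as np

# Signed-distance field for a sphere of radius 0.5 on a 64^3 grid.
n = 64
x = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2 + Z**2) - 0.5

V, dt = 0.1, 0.05              # outward normal speed and time step
dx = x[1] - x[0]
gx, gy, gz = np.gradient(phi, dx)
# One explicit Euler step of phi_t = -V |grad phi|.
phi_new = phi - dt * V * np.sqrt(gx**2 + gy**2 + gz**2)

# The zero level set (the interface) expands: more voxels become interior.
print(int((phi < 0).sum()), int((phi_new < 0).sum()))
```

A production implementation would use upwind differencing, narrow-band updates, and periodic reinitialization of phi to a signed distance; this sketch keeps only the core update.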

  8. A segmentation method for 3D visualization of neurons imaged with a confocal laser scanning microscope

    NASA Astrophysics Data System (ADS)

    Anderson, Jeffrey R.; Barrett, Steven F.; Wilcox, Michael J.

    2005-04-01

Our understanding of the world around us is based primarily on three-dimensional information because of the environment in which we live and interact. Medical or biological image information is often collected in the form of two-dimensional, serial section images. As such, it is difficult for the observer to mentally reconstruct the three-dimensional features of each object. Although many image rendering software packages allow for 3D views of the serial sections, they lack the ability to segment, or isolate, different objects in the data set. Segmentation is the key to creating 3D renderings of distinct objects from serial slice images, like separate pieces of a puzzle. This paper describes a segmentation method for objects recorded with serial section images. The user defines threshold levels and object labels on a single image of the data set, which are subsequently used to automatically segment each object in the remaining images of the same data set while maintaining boundaries between contacting objects. The performance of the algorithm is verified using mathematically defined shapes. It is then applied to the visual neurons of the housefly, Musca domestica. Knowledge of the fly's visual system may lead to improved machine vision systems, and this effort provided the impetus to develop the segmentation algorithm. The described segmentation method can be applied to any high-contrast serial slice data set that is well aligned and registered. The medical field alone has many applications for rapid generation of 3D segmented models from MRI and other medical imaging modalities.
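The propagation idea described above can be sketched as follows: the user thresholds and labels objects on one slice, and each subsequent slice is thresholded and its connected components inherit the label of the previous slice's region they overlap. This is an invented simplification (handling of contacting objects and more robust matching are omitted), not the paper's algorithm.

```python
import numpy as np

def label(mask):
    """Tiny 4-connected component labeller (stand-in for scipy.ndimage.label)."""
    lab = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if lab[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            i, j = stack.pop()
            if (0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]
                    and mask[i, j] and lab[i, j] == 0):
                lab[i, j] = current
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return lab

def propagate_labels(prev_labels, next_slice, threshold):
    """Threshold the next slice and inherit labels from the previous slice."""
    comps = label(next_slice > threshold)
    out = np.zeros_like(prev_labels)
    for c in range(1, comps.max() + 1):
        overlap = prev_labels[comps == c]
        overlap = overlap[overlap > 0]
        if overlap.size:                    # inherit the dominant overlapping label
            out[comps == c] = np.bincount(overlap).argmax()
    return out

# Two toy 5x5 slices with one bright object drifting one pixel to the right.
s0 = np.zeros((5, 5)); s0[1:3, 1:3] = 1.0
s1 = np.zeros((5, 5)); s1[1:3, 2:4] = 1.0
seed = label(s0 > 0.5)                      # "user-labelled" first slice
labels1 = propagate_labels(seed, s1, 0.5)
print(labels1[1])  # the drifted object keeps label 1 in the next slice
```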

  9. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    SciTech Connect

    Wong, S.T.C.

    1997-02-01

The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid-1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various other medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, and electron and confocal microscopy. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.
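As a concrete instance of the display-based volume visualization discussed above, a maximum intensity projection (MIP) collapses a 3D volume onto the 2D screen by keeping the brightest voxel along each viewing ray. The toy volume and intensities below are invented.

```python
import numpy as np

# A tiny CT-like volume: 4 slices of 3x3 pixels.
volume = np.zeros((4, 3, 3))
volume[2, 1, 1] = 200.0      # a bright "lesion" deep inside the stack
volume[0, 0, 0] = 50.0       # a fainter structure near the front

# Orthographic rays along axis 0: each output pixel is the max along its ray.
mip = volume.max(axis=0)
print(mip)                   # 3x3 image: the lesion shows through at (1, 1)
```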

  10. Optimization of site characterization and remediation methods using 3-D geoscience modeling and visualization techniques

    SciTech Connect

    Hedegaard, R.F.; Ho, J.; Eisert, J.

    1996-12-31

Three-dimensional (3-D) geoscience volume modeling can be used to improve the efficiency of the environmental investigation and remediation process. At several unsaturated-zone spill sites at two Superfund (CERCLA) sites (military installations) in California, all aspects of subsurface contamination have been characterized using an integrated computerized approach. With the aid of software such as LYNX GMS™, Wavefront's Data Visualizer™ and Gstools (public domain), the authors have created a central platform from which to map a contaminant plume, visualize the same plume three-dimensionally, and calculate volumes of contaminated soil or groundwater above important health-risk thresholds. The developed methodology allows rapid data inspection for decisions, such that the characterization process and remedial action design are optimized. By using the 3-D geoscience modeling and visualization techniques, the technical staff are able to evaluate the completeness and spatial variability of the data and conduct 3-D geostatistical predictions of contaminant and lithologic distributions. The geometry of each plume is estimated using 3-D variography on raw analyte values and indicator thresholds for the kriged model. Three-dimensional lithologic interpretation is based either on "linked" parallel cross sections or on kriged grid estimations derived from borehole data coded with permeability indicator thresholds. Investigative borings, as well as soil vapor extraction/injection wells, are sited, and excavation costs are estimated, using these results. The principal advantages of the technique are the efficiency and rapidity with which meaningful results are obtained and the enhanced visualization capability, which is a desirable medium for communicating with both technical staff and nontechnical audiences.
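The indicator-threshold coding that precedes indicator kriging can be illustrated in a few lines: each analyte sample becomes 1 if it exceeds a health-risk threshold and 0 otherwise, and the kriged indicator field then estimates the probability of exceedance at unsampled locations. The values and threshold below are invented, and the kriging step itself is omitted.

```python
import numpy as np

# Hypothetical soil analyte concentrations (e.g. mg/kg) at five borings.
samples = np.array([12.0, 85.0, 430.0, 5.0, 150.0])
threshold = 100.0                      # assumed health-risk threshold

# Indicator transform: 1 = exceeds the threshold, 0 = below it.
indicators = (samples > threshold).astype(int)
print(indicators.tolist())             # -> [0, 0, 1, 0, 1]
```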

  11. A 3D Visualization Method for Bladder Filling Examination Based on EIT

    PubMed Central

    He, Wei; Ran, Peng; Xu, Zheng; Li, Bing; Li, Song-nong

    2012-01-01

As research on electrical impedance tomography (EIT) applications in medical examination deepens, we attempt to produce 3D visualizations of the human bladder. In this paper, a planar electrode array system is introduced as the measuring platform, and a series of feasible methods is proposed to evaluate the simulated volume of the bladder in order to avoid overfilling. The combined regularization algorithm enhances the spatial resolution and presents a distinguishable sketch of disturbances against the background, which provides us with reliable data from the inverse problem to carry on to the three-dimensional reconstruction. By detecting edge elements and tracking down lost information, we extract quantitative morphological features of the object from the noise and background. Preliminary measurements were conducted, and the results showed that the proposed algorithm overcomes the defects of holes, protrusions, and debris in the reconstruction. In addition, the target's location in space and its rough volume can be calculated according to the finite-element grid of the model, a feature that was never achievable with the previous 2D imaging. PMID:23365617

  12. TRAIL protein localization in human primary T cells by 3D microscopy using 3D interactive surface plot: a new method to visualize plasma membrane.

    PubMed

    Gras, Christophe; Smith, Nikaïa; Sengmanivong, Lucie; Gandini, Mariana; Kubelka, Claire Fernandes; Herbeuval, Jean-Philippe

    2013-01-31

The apoptotic ligand TNF-related apoptosis-inducing ligand (TRAIL) is expressed on the membrane of immune cells during HIV infection. The intracellular store of TRAIL in human primary CD4(+) T cells is not known. Here we investigated whether primary CD4(+) T cells express TRAIL in their intracellular compartment and whether TRAIL is relocalized to the plasma membrane upon HIV activation. We found that TRAIL protein was stored in an intracellular compartment in non-activated CD4(+) T cells and that the total level of TRAIL protein was not increased under HIV-1 stimulation. However, TRAIL was massively relocalized to the plasma membrane when cells were cultured with HIV. Using three-dimensional (3D) microscopy we localized TRAIL protein in human T cells and developed a new method to visualize the plasma membrane without the need for a membrane marker. This method uses the 3D interactive surface plot and bright-light acquired images. PMID:23085529

  14. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface-vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of a rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities than ever before, significantly increasing the execution of science activities. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis, instead of in weeks or months as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and to achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  15. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

Geographic Information Systems (GIS) have progressed from traditional map-making to a modern technology in which information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of the real world; hence visualization plays an essential role in GIS applications. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology, and there are currently many off-the-shelf technologies available to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS products and extensions for 3D modeling and visualization and to use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  16. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automatic Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact with and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration, and several methods of recording and playback are investigated, including: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  17. Identifying Key Structural Features and Spatial Relationships in Archean Microbialites Using 2D and 3D Visualization Methods

    NASA Astrophysics Data System (ADS)

    Stevens, E. W.; Sumner, D. Y.

    2009-12-01

Microbialites in the 2521 ± 3 Ma Gamohaan Formation, South Africa, have several different end-member morphologies which show distinct growth structures and spatial relationships. We characterized several growth structures and spatial relationships in two samples (DK20 and 2_06) using a combination of 2D and 3D analytical techniques. There are two main goals in studying complicated microbialites with a combination of 2D and 3D methods. First, one can better understand microbialite growth by identifying important structures and structural relationships. Once structures are identified, the order in which the structures formed and how they are related can be inferred from observations of crosscutting relationships. Second, it is important to use both 2D and 3D methods to correlate 3D observations with those in 2D that are more common in the field. Combining analyses provides significantly more insight into the 3D morphology of microbial structures. In our studies, 2D analysis consisted of describing polished slabs and serial sections created by grinding down the rock 100 microns at a time. 3D analysis was performed on serial sections visualized in 3D using Vrui and 3DVisualizer software developed at KeckCAVES, UCD (http://keckcaves.org). Data were visualized on a laptop and in an immersive cave system. Both samples contain microbial laminae and more vertically oriented microbial "walls" called supports. The relationships between these features created voids now filled with herringbone and blocky calcite crystals. DK20, a classic plumose structure, contains two types of support structures. Both are 1st-order structures (1st-order structures with organic inclusions and 1st-order structures without organic inclusions) interpreted as planar features based on 2D analysis. 
In the 2D analysis the 1st-order structures show V-branching relationships as well as single cuspate relationships (two 1st-order structures with inclusions merging upward), and single tented relationships (three supports

  18. 3D Heart: a new visual training method for electrocardiographic analysis.

    PubMed

    Olson, Charles W; Lange, David; Chan, Jack-Kang; Olson, Kim E; Albano, Alfred; Wagner, Galen S; Selvester, Ronald H S

    2007-01-01

This new training method is based on developing a sound understanding of the sequence in which electrical excitation spreads through both the normal and the infarcted myocardium. The student is made aware of the heart's electrical performance through a series of 3-dimensional pictures of the excitation process. The 3D Heart electrocardiogram program contains a variety of activation simulations. Currently, the program enables the user to view activation simulations for all of the following pathology examples: normal activation; large, medium, and small anterior myocardial infarction (MI); large, medium, and small posterolateral MI; and large, medium, and small inferior MI. Simulations of other cardiac abnormalities, such as bundle branch block, left ventricular hypertrophy, and fascicular block, are being developed as part of a National Institutes of Health (NIH) Phase 1 Small Business Innovation Research (SBIR) program. PMID:17604044

  19. Teaching Geography with 3-D Visualization Technology

    ERIC Educational Resources Information Center

    Anthamatten, Peter; Ziegler, Susy S.

    2006-01-01

    Technology that helps students view images in three dimensions (3-D) can support a broad range of learning styles. "Geo-Wall systems" are visualization tools that allow scientists, teachers, and students to project stereographic images and view them in 3-D. We developed and presented 3-D visualization exercises in several undergraduate courses.…

  20. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.

    2009-04-01

Selma Kadioglu, Ankara University, Faculty of Engineering, Department of Geophysical Engineering, 06100 Tandogan/Ankara, Turkey (kadioglu@eng.ankara.edu.tr). Anatolia has always been a point of transit, a bridge between West and East, and a home for ideas moving in all directions. So it is that in the Roman and post-Roman periods the role of Anatolia in general, and of Ancyra (the Roman name of Ankara) in particular, was of the greatest importance. Today, the visible archaeological remains of the Roman period in Ankara are the Roman Bath, the Gymnasium, the Temple of Augustus and Rome, the Street, the Theatre, and the City Defence Wall. Caesar Augustus, the first Roman Emperor, conquered Asia Minor in 25 BC. A marble temple was then built in Ancyra, the administrative capital of the province and today the capital of the Turkish Republic, Ankara. This monument was consecrated to the Emperor and to the Goddess Rome. The temple is supposed to have been built over an earlier temple dedicated to Kybele and Men between 25 and 20 BC. After the death of Augustus in 14 AD, a copy of the text of "Res Gestae Divi Augusti" was inscribed on the interior of the pronaos in Latin, while a Greek translation is also present on an exterior wall of the cella. In the 5th century, the temple was converted into a church by the Byzantines. The aim of this study is to determine old buried archaeological remains at the Augustus temple, the Roman Bath, and the governorship agora in the Ulus district. These remains were imaged with transparent three-dimensional (3D) visualization of ground penetrating radar (GPR) data. Parallel two-dimensional (2D) GPR profile data were acquired in the study areas, and a 3D data volume was then built from the parallel 2D GPR data. 
A simplified amplitude-colour range and an appropriate opacity function were constructed, and transparent 3D images were obtained to activate buried

  1. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space was actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions. PMID:19008558

  2. Method for visualization and presentation of priceless old prints based on precise 3D scan

    NASA Astrophysics Data System (ADS)

    Bunsch, Eryk; Sitnik, Robert

    2014-02-01

Graphic prints and manuscripts constitute a major part of the cultural heritage objects created by most known civilizations. Their presentation has always been a problem due to their high sensitivity to light and to changes in external conditions (temperature, humidity). Today it is possible to use advanced digitization techniques for the documentation and visualization of such objects. When presentation of the original heritage object is impossible, there is a need for a method allowing documentation, and then presentation to the audience, of all the aesthetic features of the object. During the course of the project, scans of several pages of one of the most valuable books in the collection of the Museum of Warsaw Archdiocese were performed. The book, known as the "Great Dürer Trilogy", consists of three series of woodcuts by Albrecht Dürer. The measurement system used consists of a custom-designed, structured-light-based, high-resolution measurement head with an automated digitization system mounted on an industrial robot. This device was custom built to meet conservators' requirements, in particular the absence of ultraviolet or infrared radiation emitted in the direction of the measured object. Documentation of one page of the book requires about 380 directional measurements, which constitute about 3 billion sample points. The distance between the points in the cloud is 20 μm; only a measurement with such an MSD (measurement sampling density) of 2500 points per square millimetre makes it possible to show the public the spatial structure of this graphic print. An important aspect is the complexity of the software environment created for data processing, in which massive data sets can be automatically processed and visualized. A very important advantage of software that operates directly on clouds of points is the ability to freely manipulate the virtual light source.

  3. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
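    Streamline tracing of the kind described above reduces to numerical integration of particle paths through the velocity field. A minimal sketch, assuming nothing about the authors' actual model or software (the analytic rotation field below is only a stand-in for circulation-model output):

```python
import numpy as np

def streamline_rk4(velocity, seed, h=0.1, n_steps=200):
    """Trace a streamline through a steady 3-D velocity field with
    classic fourth-order Runge-Kutta integration.

    `velocity` is any callable mapping a 3-vector position to a
    3-vector velocity.
    """
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(n_steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * h * k1)
        k3 = velocity(p + 0.5 * h * k2)
        k4 = velocity(p + h * k3)
        p = p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(p.copy())
    return np.array(path)

# Toy solid-body rotation about the z axis: streamlines are circles.
def rotation_field(p):
    x, y, z = p
    return np.array([-y, x, 0.0])

line = streamline_rk4(rotation_field, seed=[1.0, 0.0, 0.0])
radii = np.linalg.norm(line[:, :2], axis=1)
```

    Seeding such integrators on density isosurfaces and marking where streamlines pierce neighbouring surfaces is one way to expose the points of vertical exchange the abstract describes.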

  4. 3D visualization of polymer nanostructure

    SciTech Connect

    Werner, James H

    2009-01-01

Soft materials and structured polymers are extremely useful nanotechnology building blocks. Block copolymers, in particular, have served as 2D masks for nanolithography and 3D scaffolds for photonic crystals, nanoparticle fabrication, and solar cells. For many of these applications, the precise three-dimensional structure and the number and type of defects in the polymer are important for ultimate function. However, directly visualizing the 3D structure of a soft material from the nanometer to millimeter length scales is a significant technical challenge. Here, we propose to develop the instrumentation needed for direct 3D structure determination at near-nanometer resolution throughout a nearly millimeter-cubed volume of a soft, potentially heterogeneous, material. This new capability will be a valuable research tool for LANL missions in chemistry, materials science, and nanoscience. Our approach to soft materials visualization builds upon exciting developments in super-resolution optical microscopy that have occurred over the past two years. To date, these new, truly revolutionary, imaging methods have been developed and almost exclusively used for biological applications. However, in addition to biological cells, these super-resolution imaging techniques hold extreme promise for direct visualization of many important nanostructured polymers and other heterogeneous chemical systems. Los Alamos has a unique opportunity to lead the development of these super-resolution imaging methods for problems of chemical rather than biological significance.
While these optical methods are limited to systems transparent to visible wavelengths, we stress that many important functional chemicals such as polymers, glasses, sol-gels, aerogels, or colloidal assemblies meet this requirement, with specific examples including materials designed for optical communication, manipulation, or light-harvesting. Our research goals are: (1) develop the instrumentation necessary for imaging materials

  5. Visualization methods for high-resolution, transient, 3-D, finite element situations

    SciTech Connect

    Christon, M.A.

    1995-01-10

Scientific visualization is the process whereby numerical data are transformed into a visual form to augment the process of discovery and understanding. Visualizing the data generated by large-scale, transient, three-dimensional finite element simulations poses many challenges due to geometric complexity, the presence of multiple materials and multiple element types, and the inherent unstructured nature of the meshes. In this paper, the direct use of finite element data structures, nodal assembly procedures, and element interpolants for volumetric adaptive surface extraction, surface rendering, vector grids and particle tracing is discussed. A brief description of a "direct-to-disk" animation system is presented, and the use of isosurfaces, vector plots, cutting planes, reference surfaces and particle tracing is then discussed in the context of several case studies of transient incompressible viscous flow and acoustic fluid-structure interaction simulations. An overview of the implications of massively parallel computers for visualization highlights the issues in parallel visualization methodology and algorithms, data locality, and the ultimate requirements for temporary and archival data storage and network bandwidth.

  6. 3-D Flyover Visualization of Veil Nebula

    NASA Video Gallery

    This 3-D visualization flies across a small portion of the Veil Nebula as photographed by the Hubble Space Telescope. This region is a small part of a huge expanding remnant from a star that exploded…

  7. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  8. 3-D visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-02-01

We present Met.3D, a new open-source tool for the interactive 3-D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3-D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium-Range Weather Forecasts and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 campaign.

  9. Diffractive optical element for creating visual 3D images.

    PubMed

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature: the vertical 3D/3D switch effect. The security feature consists of the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce the 3D/3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using the standard equipment employed for manufacturing security holograms. The new optical security feature is easy to check visually, well protected against counterfeiting, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  10. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
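    The degenerate points discussed above are locations where two eigenvalues of the tensor coincide. A small sketch of that test for a single symmetric 3-D tensor (illustrative only; the paper's method additionally classifies the degenerate points and traces the surrounding topology):

```python
import numpy as np

def eigenvalue_gaps(tensor):
    """Gaps between the sorted eigenvalues of a symmetric 3x3 tensor.
    A gap of (near) zero means two eigenvalues coincide, i.e. the
    tensor sits at a degenerate point of the field."""
    w = np.linalg.eigvalsh(tensor)  # eigenvalues in ascending order
    return w[1] - w[0], w[2] - w[1]

def is_degenerate(tensor, tol=1e-8):
    g1, g2 = eigenvalue_gaps(tensor)
    return g1 < tol or g2 < tol

# An axially symmetric tensor (two equal eigenvalues) is degenerate...
T_degen = np.diag([2.0, 2.0, 5.0])
# ...while a tensor with three distinct eigenvalues is not.
T_generic = np.diag([1.0, 2.0, 3.0])
```

    Scanning a sampled field for sign changes of these gaps is the natural discrete analogue of locating degenerate points.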

  11. Techniques for interactive 3-D scientific visualization

    SciTech Connect

    Glinert, E.P. (Dept. of Computer Science); Blattner, M.M. (Hospital and Tumor Inst., Houston, TX, Dept. of Biomathematics; California Univ., Davis, CA, Dept. of Applied Science; Lawrence Livermore National Lab., CA); Becker, B.G. (Dept. of Applied Science, Lawrence Livermore National Lab., CA)

    1990-09-24

Interest in interactive 3-D graphics has exploded of late, fueled by (a) the allure of using scientific visualization to go where "no-one has gone before" and (b) the development of new input devices which overcome some of the limitations imposed in the past by technology, yet which may be ill-suited to the kinds of interaction required by researchers active in scientific visualization. To resolve this tension, we propose a "flat 5-D" environment in which 2-D graphics are augmented by exploiting multiple human sensory modalities using cheap, conventional hardware readily available with personal computers and workstations. We discuss how interactions basic to 3-D scientific visualization, like searching a solution space and comparing two such spaces, are effectively carried out in our environment. Finally, we describe 3DMOVE, an experimental microworld we have implemented to test some of our ideas. 40 refs., 4 figs.

  12. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research

    PubMed Central

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2013-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists’ demands for qualitative analysis of confocal microscopy data. PMID:23584131
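    A 2-D image-space enhancement operates on the rendered projection rather than on the volume itself. A minimal sketch of post-render gamma tone mapping, with a maximum-intensity projection standing in for a volume renderer (the function name and workflow are illustrative, not FluoRender's API):

```python
import numpy as np

def tone_map_2d(image, gamma=0.5):
    """Gamma tone mapping applied in 2-D image space, *after* the
    volume has been projected -- as opposed to adjusting the 3-D
    transfer function before rendering."""
    img = image.astype(float)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return norm ** gamma  # gamma < 1 brightens dim structures

# Stand-in "rendered" image: a maximum-intensity projection of a volume.
volume = np.random.default_rng(0).random((32, 64, 64))
projection = volume.max(axis=0)          # 2-D image-space input
mapped = tone_map_2d(projection, gamma=0.5)
```

    Because the operation touches only one 2-D image per frame instead of the full volume, it is far cheaper than the equivalent 3-D adjustment, which is the efficiency argument the abstract makes.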

  13. FluoRender: An Application of 2D Image Space Methods for 3D and 4D Confocal Microscopy Data Visualization in Neurobiology Research.

    PubMed

    Wan, Yong; Otsuna, Hideo; Chien, Chi-Bin; Hansen, Charles

    2012-01-01

    2D image space methods are processing methods applied after the volumetric data are projected and rendered into the 2D image space, such as 2D filtering, tone mapping and compositing. In the application domain of volume visualization, most 2D image space methods can be carried out more efficiently than their 3D counterparts. Most importantly, 2D image space methods can be used to enhance volume visualization quality when applied together with volume rendering methods. In this paper, we present and discuss the applications of a series of 2D image space methods as enhancements to confocal microscopy visualizations, including 2D tone mapping, 2D compositing, and 2D color mapping. These methods are easily integrated with our existing confocal visualization tool, FluoRender, and the outcome is a full-featured visualization system that meets neurobiologists' demands for qualitative analysis of confocal microscopy data.

  14. Faster, higher quality volume visualization for 3D medical imaging

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Laine, Andrew F.; Song, Ting

    2008-03-01

The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP) and Volume Rendering (VR), both of which involve creating sets of 2D projections from 3D images. We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to tomographic image reconstruction. This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two computationally expensive steps currently required in the complete process of biomedical visualization, that is, (i) reconstructing the 3D image from 2D projection data, and (ii) computing the set of 2D projections from the reconstructed 3D image. As well as improvements in computation speed, this method also results in improvements in visualization quality, and in the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage. In this paper, we demonstrate the benefits of developing biomedical visualization techniques that directly process the sensor data acquired by body scanners, rather than processing the image data reconstructed from the sensor data. We show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as for MRI.
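    The two projection methods named above can each be sketched in a few lines; both map a 3-D volume to a 2-D image along a viewing axis (this toy version deliberately ignores the paper's sensor-domain shortcut and just shows what MIP and compositing compute):

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum Intensity Projection: the brightest voxel along each ray."""
    return volume.max(axis=axis)

def composite_vr(volume, axis=0, alpha=0.05):
    """Front-to-back emission/absorption compositing with a constant
    per-voxel opacity -- a minimal stand-in for full volume rendering."""
    vol = np.moveaxis(volume.astype(float), axis, 0)
    color = np.zeros(vol.shape[1:])
    transmittance = np.ones(vol.shape[1:])
    for slab in vol:                       # march front to back
        color += transmittance * alpha * slab
        transmittance *= (1.0 - alpha)
    return color

volume = np.zeros((16, 8, 8))
volume[5, 4, 4] = 1.0                      # a single bright voxel
mip_img = mip(volume)
vr_img = composite_vr(volume)
```

    MIP keeps only the brightest sample per ray, whereas compositing attenuates contributions by the opacity accumulated in front of them, which is why the bright voxel appears dimmer in the VR image.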

  15. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  16. Real-time depth map manipulation for 3D visualization

    NASA Astrophysics Data System (ADS)

    Ideses, Ianir; Fishbain, Barak; Yaroslavsky, Leonid

    2009-02-01

One of the key aspects of 3D visualization is the computation of depth maps. Depth maps enable the synthesis of 3D video from 2D video and the use of multi-view displays. Depth maps can be acquired in several ways. One method is to measure the real 3D properties of the scene objects. Other methods rely on using two cameras and computing the correspondence for each pixel. Once a depth map is acquired for every frame, it can be used to construct an artificial stereo pair. There are many known methods for computing the optical flow between adjacent video frames. The drawback of these methods is that they require extensive computation power and are not well suited to high-quality real-time 3D rendering. One efficient method for computing depth maps is the extraction of motion vector information from standard video encoders. In this paper we present methods to improve the quality of 3D visualization acquired from compression codecs by spatial, temporal and logical operations and manipulations. We show how an efficient real-time implementation of spatio-temporal local order statistics, such as the median, and local adaptive filtering in the 3D-DCT domain can substantially improve the quality of depth maps, and consequently of the 3D video, while retaining real-time rendering. Real-time performance is achieved by utilizing multi-core technology using standard parallelization algorithms and libraries (OpenMP, IPP).
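    The order-statistic filtering mentioned above can be illustrated with a joint spatio-temporal median, which removes impulsive outliers from a depth-map sequence while preserving constant regions (a naive sketch; the paper's implementation is real-time, parallelized, and works in the 3D-DCT domain):

```python
import numpy as np

def spatiotemporal_median(depth_seq, size=3):
    """Median-filter a stack of depth maps jointly over time and space
    (axes t, y, x), suppressing the frame-to-frame flicker typical of
    depth maps recovered from codec motion vectors."""
    t, h, w = depth_seq.shape
    r = size // 2
    padded = np.pad(depth_seq, r, mode="edge")
    out = np.empty_like(depth_seq)
    for i in range(t):
        for y in range(h):
            for x in range(w):
                window = padded[i:i + size, y:y + size, x:x + size]
                out[i, y, x] = np.median(window)
    return out

# A constant-depth sequence corrupted by one impulsive outlier.
seq = np.full((5, 8, 8), 10.0)
seq[2, 4, 4] = 200.0                      # spurious depth spike
clean = spatiotemporal_median(seq)
```

    The median's robustness is the point: a single spike is outvoted by its 26 spatio-temporal neighbours, whereas a linear (averaging) filter would smear it into adjacent frames.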

  17. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    ERIC Educational Resources Information Center

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  18. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an n-body snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, real-time gyrfalcON simulation), which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so that it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card used in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt 4 API.

  19. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.), and the cost of the required infrastructure reserved it to large laboratories or companies. Recently we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field if these tools become common. Introducing this technology into existing applications or new developments makes sense only if the interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile, lightweight planetariums, or to reproduce poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and it requires studies to determine the most appropriate applications and to assess the contributions compared to other display modes.

  20. Met.3D - a new open-source tool for interactive 3D visualization of ensemble weather forecasts

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Kern, Michael; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

We introduce Met.3D, a new open-source tool for the interactive 3D visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is also applicable to other forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output: 3D visualization, ensemble visualization, and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2D visualization methods commonly used in meteorology to 3D visualization by combining both visualization types in a 3D context. It implements methods that address the issue of spatial perception in the 3D view as well as approaches to using the ensemble in order to assess forecast uncertainty. Interactivity is key to the Met.3D approach. The tool uses modern graphics hardware technology to achieve interactive visualization of present-day numerical weather prediction datasets on standard consumer hardware. Met.3D supports forecast data from the European Centre for Medium-Range Weather Forecasts and operates directly on ECMWF hybrid sigma-pressure level grids. In this presentation, we provide an overview of the software, illustrated with short video examples, and give information on its availability.

  1. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  2. Dynamic 3D Visualization of Vocal Tract Shaping During Speech

    PubMed Central

    Zhu, Yinghua; Kim, Yoon-Chul; Proctor, Michael I.; Narayanan, Shrikanth S.; Nayak, Krishna S.

    2014-01-01

    Noninvasive imaging is widely used in speech research as a means to investigate the shaping and dynamics of the vocal tract during speech production. 3D dynamic MRI would be a major advance, as it would provide 3D dynamic visualization of the entire vocal tract. We present a novel method for the creation of 3D dynamic movies of vocal tract shaping based on the acquisition of 2D dynamic data from parallel slices and temporal alignment of the image sequences using audio information. Multiple sagittal 2D real-time movies with synchronized audio recordings are acquired for English vowel-consonant-vowel stimuli /ala/, /aɹa/, /asa/ and /aʃa/. Audio data are aligned using mel-frequency cepstral coefficients (MFCC) extracted from windowed intervals of the speech signal. Sagittal image sequences acquired from all slices are then aligned using dynamic time warping (DTW). The aligned image sequences enable dynamic 3D visualization by creating synthesized movies of the moving airway in the coronal planes, visualizing desired tissue surfaces and tube-shaped vocal tract airway after manual segmentation of targeted articulators and smoothing. The resulting volumes allow for dynamic 3D visualization of salient aspects of lingual articulation, including the formation of tongue grooves and sublingual cavities, with a temporal resolution of 78 ms. PMID:23204279
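    The temporal alignment step described above relies on dynamic time warping. A self-contained sketch over 1-D feature sequences (simple stand-ins for the per-frame MFCC vectors used in the paper):

```python
import numpy as np

def dtw_path(a, b):
    """Dynamic time warping between two 1-D feature sequences.
    Returns the total alignment cost and the optimal warping path as
    (index_in_a, index_in_b) pairs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

# The same "utterance" produced at two different speeds.
fast = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
slow = np.array([0.0, 0.0, 1.0, 1.0, 2.0, 2.0, 1.0, 0.0])
cost, path = dtw_path(fast, slow)
```

    Warping each slice's image sequence onto a common time axis in this way is what lets independently acquired sagittal movies be fused into one synchronized 3D movie.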

  3. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.

  4. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.

  5. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to our left and right eyes. As a consequence, we see slightly different images with our two eyes, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, in the cinema, etc., are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters, and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO, we test the method with data from SOHO, which provides us different viewpoints through solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data will not be affected by these limitations, however.
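    Of the listed techniques, the two-colour anaglyph is the simplest to sketch: each eye's view contributes disjoint colour channels of the output image (a schematic example, unrelated to actual STEREO data handling):

```python
import numpy as np

def anaglyph(left_rgb, right_rgb):
    """Red-cyan anaglyph: the red channel comes from the left-eye image,
    green and blue from the right-eye image; red-cyan glasses then route
    each view to the corresponding eye."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]
    return out

# Two flat test "views": a reddish left image and a bluish right image.
h, w = 4, 6
left = np.zeros((h, w, 3), dtype=np.uint8)
left[..., 0] = 200
right = np.zeros((h, w, 3), dtype=np.uint8)
right[..., 2] = 150
merged = anaglyph(left, right)
```

    With real stereo pairs, the horizontal offset between the red and cyan components is what carries the depth information.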

  6. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
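    An MST-based clustering of a 3D catalog can be sketched with standard tools (the points below are synthetic stand-ins for galaxy positions; the authors' actual pipeline renders its results in Blender):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Two well-separated synthetic "galaxy groups" in 3-D.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 0.1, (10, 3)),
                 rng.normal(5.0, 0.1, (10, 3))])

dist = squareform(pdist(pts))            # pairwise 3-D separations
mst = minimum_spanning_tree(dist).toarray()

# An MST over N points has N-1 edges; cutting the single longest edge
# (the "bridge" between the groups) separates the two clusters.
edges = mst[mst > 0]
longest = edges.max()
```

    This edge-cutting step is the classic way MSTs yield an unsupervised clustering: intra-cluster edges stay short while inter-cluster bridges stand out.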

  7. Improvements in the Visualization of Stereoscopic 3D Imagery

    NASA Astrophysics Data System (ADS)

    Gurrieri, Luis E.

    2015-09-01

A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has embedded mechanisms that let us perceive depth for extended periods of time without eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain display scenarios. An important source of eye fatigue originates in the conflict between vergence eye movements and focusing mechanisms. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be dynamically corrected based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays based on emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods that improve visual comfort by introducing depth distortions into the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.

  8. Visualization of 2-D and 3-D Tensor Fields

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1997-01-01

In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyper-streamlines about these points give rise to tri-sector or wedge points. These singularities and their connecting hyper-streamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.

  9. Visualization of 2-D and 3-D Tensor Fields

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1995-01-01

    In previous work we have developed a novel approach to visualizing second order symmetric 2-D tensor fields based on degenerate point analysis. At degenerate points the eigenvalues are either zero or equal to each other, and the hyperstreamlines about these points give rise to trisector or wedge points. These singularities and their connecting hyperstreamlines determine the topology of the tensor field. In this study we are developing new methods for analyzing and displaying 3-D tensor fields. This problem is considerably more difficult than the 2-D one, as the richness of the data set is much larger. Here we report on our progress and a novel method to find, analyze and display 3-D degenerate points. First we discuss the theory, then an application involving a 3-D tensor field, the Boussinesq problem with two forces.

  10. Evaluation of the User Strategy on 2d and 3d City Maps Based on Novel Scanpath Comparison Method and Graph Visualization

    NASA Astrophysics Data System (ADS)

    Dolezalova, J.; Popelka, S.

    2016-06-01

    This paper deals with scanpath comparison of eye-tracking data recorded during a case study focused on the evaluation of 2D and 3D city maps. The experiment contained screenshots from three map portals. Two types of maps were used: a standard map and a 3D visualization. The respondents' task was to find a particular point symbol on the map as fast as possible. Scanpath comparison is one group of eye-tracking data analysis methods used for revealing the strategies of respondents. In cartographic studies, the most commonly used application for scanpath comparison is eyePatterns, whose output is a hierarchical clustering and a tree graph representing the relationships between the analysed sequences. During an analysis of the algorithm generating the tree graph, it was found that the outputs do not correspond to reality. We therefore created a new tool called ScanGraph. This tool uses visualization of cliques in simple graphs and is freely available at www.eyetracking.upol.cz/scangraph. The results of the study proved the functionality of the tool and its suitability for analysing the different strategies of map readers. Based on the tool's results, similar scanpaths were selected and groups of respondents with similar strategies were identified. With this knowledge, it is possible to analyse the relationship between membership in a group with a similar strategy and data gathered from a questionnaire (age, sex, cartographic knowledge, etc.) or the type of stimuli (2D or 3D map).
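
    As background on what sequence-based scanpath comparison computes: scanpaths are commonly encoded as strings of area-of-interest (AOI) labels and compared with a string edit distance, as in eyePatterns-style analysis. A hedged sketch (illustrative only; not the clique-based ScanGraph algorithm):

```python
# Illustrative sketch of sequence-based scanpath comparison: each scanpath is
# a string of AOI labels, and similarity is derived from the Levenshtein
# (edit) distance between the strings.

def edit_distance(s, t):
    """Classic Levenshtein distance between two AOI sequences."""
    prev = list(range(len(t) + 1))
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (cs != ct)))   # substitution
        prev = cur
    return prev[-1]

def similarity(s, t):
    """Normalised similarity in [0, 1]; 1 means identical scanpaths."""
    if not s and not t:
        return 1.0
    return 1.0 - edit_distance(s, t) / max(len(s), len(t))

print(edit_distance("ABCAD", "ABAD"))           # -> 1 (one fixation deleted)
print(round(similarity("ABCAD", "ABAD"), 2))    # -> 0.8
```

Respondents whose pairwise similarity exceeds some chosen cut-off could then be grouped, which is conceptually what selecting cliques of similar scanpaths does.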

  11. 3D visualization of port simulation.

    SciTech Connect

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three-dimensional visualization technology can be applied to large-scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners, not only in the use of the simulation model but also in the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and effort by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually, without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises, which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  12. 3D visualization for medical volume segmentation validation

    NASA Astrophysics Data System (ADS)

    Eldeib, Ayman M.

    2002-05-01

    This paper presents a 3-D visualization tool that manipulates and/or enhances, based on user input, the segmented targets and other organs. The 3-D visualization tool is developed to create a precise and realistic 3-D model from a CT/MR data set for manipulation in 3-D, permitting the physician or planner to look through, around, and inside the various structures. The tool is designed to assist and to evaluate the segmentation process. It can control the transparency of each 3-D object. It displays in one view a 2-D slice (axial, coronal, and/or sagittal) within a 3-D model of the segmented tumor or structures. This helps the radiotherapist or the operator to evaluate the adequacy of the generated target compared to the original 2-D slices. The graphical interface enables the operator to easily select a specific 2-D slice of the 3-D volume data set, and to manually override and adjust the automated segmentation results. After correction, the operator can see the 3-D model again and go back and forth until a satisfactory segmentation is obtained. The novelty of this research work is in using state-of-the-art image processing and 3-D visualization techniques to facilitate the validation of medical volume segmentation and assure the accuracy of the volume measurement of the structure of interest.

  13. 3D-printer visualization of neuron models

    PubMed Central

    McDougal, Robert A.; Shepherd, Gordon M.

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases. PMID:26175684

  14. 3D-printer visualization of neuron models.

    PubMed

    McDougal, Robert A; Shepherd, Gordon M

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.
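
    The pipeline described (expand diameters, then generate a surface) can be illustrated with a deliberately simplified stand-in for CTNG: one tracing segment becomes a triangulated open cylinder written out as ASCII STL, the format consumed by most 3D-printing toolchains. The segment is z-aligned for brevity; everything here is my own toy sketch, not the authors' code:

```python
# Much-simplified sketch of turning one neuron-tracing segment into printable
# geometry: expand the traced radius (so thin neurites survive printing) and
# triangulate the segment as an open cylinder, emitted as ASCII STL facets.
import math

def segment_to_stl(p0, p1, radius, expand=2.0, sides=8):
    """Triangulate one z-aligned tracing segment as ASCII STL text."""
    r = radius * expand                      # diameter expansion step
    ring = [(math.cos(2 * math.pi * k / sides) * r,
             math.sin(2 * math.pi * k / sides) * r) for k in range(sides)]
    tris = []
    for k in range(sides):
        (x0, y0), (x1, y1) = ring[k], ring[(k + 1) % sides]
        a = (p0[0] + x0, p0[1] + y0, p0[2])
        b = (p0[0] + x1, p0[1] + y1, p0[2])
        c = (p1[0] + x0, p1[1] + y0, p1[2])
        d = (p1[0] + x1, p1[1] + y1, p1[2])
        tris += [(a, b, c), (b, d, c)]       # two triangles per side
    lines = ["solid segment"]
    for t in tris:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        lines += [f"      vertex {x:.4f} {y:.4f} {z:.4f}" for x, y, z in t]
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append("endsolid segment")
    return "\n".join(lines)

stl = segment_to_stl((0, 0, 0), (0, 0, 10), radius=0.5)
print(stl.count("facet normal"))  # -> 16 (two triangles per side, 8 sides)
```

A real exporter must also join segments smoothly at branch points, which is precisely the hard part that algorithms like CTNG solve.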

  15. Dynamic 3D visual analytic tools: a method for maintaining situational awareness during high tempo warfare or mass casualty operations

    NASA Astrophysics Data System (ADS)

    Lizotte, Todd E.

    2010-04-01

    Maintaining Situational Awareness (SA) is crucial to the success of high tempo operations, such as war fighting and mass casualty events (bioterrorism, natural disasters). Modern computer and software applications attempt to provide command and control managers with situational awareness via the collection, integration, interrogation and display of vast amounts of analytic data in real-time from a multitude of data sources and formats [1]. At what point do the data volume and displays begin to erode the hierarchical distributive intelligence, command and control structure of the operation taking place? In many cases, people tasked with making decisions have insufficient experience in SA of high tempo operations and are easily overwhelmed as vast amounts of data are displayed in real-time while an operation unfolds. In these situations, where data is plentiful and its relevance changes rapidly, individuals may fixate on the data sources they are most familiar with, excluding other data that might be just as important to the success of the operation. To counter these issues, it is important that the computer and software applications provide a means for prompting users to take notice of adverse conditions or trends that are critical to the operation. This paper will discuss a new method of displaying data, called a Crisis View™, that monitors critical variables that are dynamically changing and allows preset thresholds to be created to prompt the user when decisions need to be made and when adverse or positive trends are detected. The new method will be explained in basic terms, with examples of its attributes and how it can be implemented.
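
    The threshold-prompting idea can be sketched in a few lines. The class, its API, and the variable name below are all invented for illustration; the abstract does not publish code:

```python
# Hypothetical sketch of threshold-based prompting: each monitored variable
# carries preset bounds, and any update outside them is recorded as an alert
# so the operator's attention is drawn to adverse conditions or trends.

class ThresholdMonitor:
    def __init__(self):
        self.bounds = {}      # variable name -> (low, high)
        self.alerts = []      # (variable name, out-of-bounds value)

    def watch(self, name, low, high):
        self.bounds[name] = (low, high)

    def update(self, name, value):
        low, high = self.bounds[name]
        if not (low <= value <= high):
            self.alerts.append((name, value))
        return value

m = ThresholdMonitor()
m.watch("casualty_arrival_rate", low=0, high=20)   # per hour (made-up bound)
for v in [5, 12, 27, 14]:
    m.update("casualty_arrival_rate", v)
print(m.alerts)  # -> [('casualty_arrival_rate', 27)]
```

A trend detector in the same spirit would compare a moving average against the bounds rather than each raw sample, reducing false prompts from noisy data.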

  16. 3D reconstruction and visualization of plant leaves

    NASA Astrophysics Data System (ADS)

    Gu, Xiaomeng; Xu, Lihong; Li, Dawei; Zhang, Peng

    2015-03-01

    In this paper, a three-dimensional reconstruction method based on point clouds and texture images is used to visualize the leaves of greenhouse crops. We take Epipremnum aureum as the object of study and focus on applying a triangular meshing method to organize and categorize the scattered point cloud data of leaves, and then construct a triangulated surface with interconnection topology to simulate the real surface of the object. Finally, we texture-map the leaf surface with real images to present a lifelike 3D model that can be used to simulate the growth of greenhouse plants.
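
    As a simplified illustration of the meshing step (the paper triangulates scattered points; here the points are assumed to be already organised on a rows × cols grid, e.g. after resampling), each grid quad splits into two triangles that share an edge, giving the interconnection topology:

```python
# Simplified sketch: triangulating an organised rows x cols point grid.
# Points are indexed row-major (index = i * cols + j); each quad of four
# neighbouring points yields two triangles with a shared diagonal edge.

def grid_triangles(rows, cols):
    """Return triangle index triples for an organised rows x cols point grid."""
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j          # top-left corner of the quad
            b = a + 1                 # top-right
            c = a + cols              # bottom-left
            d = c + 1                 # bottom-right
            tris += [(a, b, c), (b, d, c)]
    return tris

print(grid_triangles(2, 3))
# -> [(0, 1, 3), (1, 4, 3), (1, 2, 4), (2, 5, 4)]
```

For genuinely scattered leaf points, a Delaunay triangulation (e.g. scipy.spatial.Delaunay) would replace this grid shortcut.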

  17. Visualization and Analysis of 3D Gene Expression Data

    SciTech Connect

    Bethel, E. Wes; Rubel, Oliver; Weber, Gunther H.; Hamann, Bernd; Hagen, Hans

    2007-10-25

    Recent methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data open the way for new analysis of the complex gene regulatory networks controlling animal development. To support analysis of this novel and highly complex data we developed PointCloudXplore (PCX), an integrated visualization framework that supports dedicated multi-modal, physical and information visualization views along with algorithms to aid in analyzing the relationships between gene expression levels. Using PCX, we helped our science stakeholders to address many questions in 3D gene expression research, e.g., to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.

  18. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  19. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  20. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website where visitors can learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization: visitors do not have to travel to the actual archaeological buildings, but can simply use the Web in their own homes to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations such as the historical world of Fort Frontenac. As a result, it allows viewers to effectively understand the fort's social system, habits, and historical events.

  1. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  2. A Volume Rendering Framework for Visualizing 3D Flow Fields

    NASA Astrophysics Data System (ADS)

    Hsieh, Hsien-Hsi; Li, Liya; Shen, Han-Wei; Tai, Wen-Kai

    In this paper, we present a volume rendering framework for visualizing 3D flow fields. We introduce the concept of coherence field which evaluates the representativeness of a given streamline set for the underlying 3D vector field. Visualization of the coherence field can provide effective visual feedback to the user for incremental insertion of more streamline seeds. Given an initial set of streamlines, a coherence volume is constructed from a distance field to measure the similarity between the existing streamlines and those in their nearby regions based on the difference between the approximate and the actual vector directions. With the visual feedback obtained from rendering the coherence volume, new streamline seeds can be selected by the user or by a heuristic seed selection algorithm to adaptively improve the coherence volume. An improved volume rendering technique that can render user-defined appearance textures is proposed to facilitate macro-visualization of 3D vector fields.
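
    The building block underneath any such framework is streamline integration: repeatedly stepping along the local vector direction from a seed point. A minimal 2-D forward-Euler sketch (my own illustration; the paper works with 3-D fields and layers a coherence measure on top):

```python
# Minimal sketch of streamline tracing: from a seed point, repeatedly take
# small unit-speed steps along the local vector direction (forward Euler).

def trace_streamline(vec, seed, step=0.1, n=100):
    """Integrate a streamline from `seed`; `vec(x, y)` returns (vx, vy)."""
    x, y = seed
    path = [(x, y)]
    for _ in range(n):
        vx, vy = vec(x, y)
        mag = (vx * vx + vy * vy) ** 0.5
        if mag < 1e-12:          # stop at critical points of the field
            break
        x += step * vx / mag     # unit-speed step along the field direction
        y += step * vy / mag
        path.append((x, y))
    return path

# Circular field v = (-y, x): streamlines orbit the origin at constant radius.
path = trace_streamline(lambda x, y: (-y, x), seed=(1.0, 0.0), step=0.01, n=50)
r = (path[-1][0] ** 2 + path[-1][1] ** 2) ** 0.5
print(round(r, 2))  # -> 1.0 (small Euler drift aside)
```

Production tracers use higher-order integrators (e.g. RK4) for exactly this reason: forward Euler slowly spirals outward on rotational fields unless the step is kept very small.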

  3. Visual search is influenced by 3D spatial layout.

    PubMed

    Finlayson, Nonie J; Grove, Philip M

    2015-10-01

    Many activities necessitate the deployment of attention to specific distances and directions in our three-dimensional (3D) environment. However, most research on how attention is deployed is conducted with two-dimensional (2D) computer displays, leaving a large gap in our understanding of the deployment of attention in 3D space. We report how each of four parameters of 3D visual space influences visual search: 3D display volume, distance in depth, number of depth planes, and relative target position in depth. Using a search task, we find that visual search performance depends on 3D volume, relative target position in depth, and number of depth planes. Our results demonstrate an asymmetrical preference for targets in the front of a display unique to 3D search and show that arranging items into more depth planes reduces search efficiency. Consistent with research using 2D displays, we found slower response times to find targets in displays with larger 3D volumes compared with smaller 3D volumes. Finally, in contrast to the importance of target depth relative to other distractors, target depth relative to the fixation point did not affect response times or search efficiency.

  4. Examination of 3D visual attention in stereoscopic video content

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Schiatti, Luca

    2011-03-01

    Recent advances in video technology and digital cinema have made it possible to produce entertaining 3D stereoscopic content that can be viewed for an extended duration without necessarily causing extreme fatigue, visual strain and discomfort. Viewers naturally focus their attention on specific areas of interest in their visual field. Visual attention is an important aspect of perception, and understanding it is therefore important for the creation of 3D stereoscopic content. Most studies of visual attention have focused on still images or 2D video; only a few have investigated eye movement patterns in 3D stereoscopic moving sequences and how these may differ from viewing 2D video content. In this paper, we present and discuss the results of a subjective experiment in which we used an eye-tracking apparatus to record observers' gaze patterns. Participants were asked to watch the same set of video clips in a free-viewing task. Each clip was shown in a 3D stereoscopic version and a 2D version. Our results indicate that the extent of areas of interest is not necessarily wider in 3D. We found a very strong content dependency in the difference of density and locations of fixations between 2D and 3D stereoscopic content. However, we found that saccades were overall faster and fixation durations overall shorter when observers viewed the 3D stereoscopic version.

  5. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images. It allows the application of MRI mammography to breasts with dense tissue, post-operative scarring, and silicon implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, there is a need for reliable computer diagnosis to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method. Two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries of the breast. The glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value was spatially adaptive using a sliding window. The 3D visualization of the segmented 2D slices of MRI mammography was implemented in the IDL environment, with rendering, slicing and animation of the breast and glandular tissue.
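
    The glandular-tissue step can be illustrated with a hedged sketch of sliding-window adaptive thresholding. The paper's exact window statistics are not stated here, so a local mean plus a fixed offset is assumed:

```python
# Hedged sketch of spatially adaptive thresholding: each pixel is compared
# against a threshold computed from its own sliding-window neighbourhood
# (here: local mean + offset), rather than one global threshold.

def adaptive_threshold(img, win=1, offset=0.0):
    """Binarise `img` (list of lists); threshold = local window mean + offset."""
    rows, cols = len(img), len(img[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            vals = [img[a][b]
                    for a in range(max(0, i - win), min(rows, i + win + 1))
                    for b in range(max(0, j - win), min(cols, j + win + 1))]
            if img[i][j] > sum(vals) / len(vals) + offset:
                out[i][j] = 1
    return out

img = [[10, 10, 10],
       [10, 80, 10],
       [10, 10, 10]]
print(adaptive_threshold(img))  # -> [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The spatial adaptivity is what lets such a method cope with the intensity inhomogeneity typical of MR images, where a single global threshold would fail.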

  6. 3D visualization of middle ear structures

    NASA Astrophysics Data System (ADS)

    Vogel, Uwe; Schmitt, Thomas

    1998-06-01

    application of a micro-tomographic imaging device. Here an X-ray beam focused down to a few microns passes through the object in a tomographic arrangement, and the slices are subsequently reconstructed. Spatial resolution down to 10 micrometer can generally be obtained with this procedure, but only a few such devices exist; it is not available as standard equipment. The best results concerning spatial resolution should be achieved by applying conventional histologic sectioning techniques. Of course, the target is destroyed during the procedure: it is cut into sections (e.g., 10 micrometer thick), every layer is stained, and the image is acquired and stored by a digital still-camera with appropriate resolution (e.g., 2024 x 3036). Three-dimensional reconstruction is then done on the computer. The staining allows visual selection of bones and soft tissues, and resolutions down to 10 micrometer are possible without target segmentation. However, some practical problems arise. Mainly, the geometric context of the layers is affected by the cutting procedure, especially when cutting bone. Another problem is the adjustment of the (possibly distorted) slices to each other. Artificial markers are necessary, which could also allow automatic adjustment; but introducing and imaging the markers is difficult inside the temporal bone specimen, which is interspersed with several cavities, and of course the internal target structures must not be destroyed by the marker introduction. Furthermore, the embedding compound could disturb the image acquisition, e.g., by optical scattering of paraffin. A related alternative is layered ablation/grinding and imaging of the top layer. This preserves the geometric consistency, but requires very tricky and time-consuming embedding procedures. Both approaches require considerable expenditures. The possible approaches are evaluated in detail and first results are compared. So far none of the above-mentioned procedures has been established as a

  7. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort, a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole-room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, viewed through colored glasses, or two squares of cellophane, from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
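
    The anaglyph principle the authors describe fits in a few lines: route the left-eye view to the red channel and the right-eye view to the green and blue channels, so red/cyan glasses deliver one image to each eye. A toy sketch on plain pixel grids (my own illustration, not AViz's implementation):

```python
# Toy red/cyan anaglyph: take the red channel from the left-eye image and
# the green/blue channels from the right-eye image, pixel by pixel, so
# coloured glasses route each slightly-displaced view to the matching eye.

def anaglyph(left, right):
    """Combine two RGB pixel grids (lists of rows of (r, g, b) tuples)."""
    return [[(lp[0], rp[1], rp[2])          # R from left; G, B from right
             for lp, rp in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

left  = [[(200,  10,  10), (180,  20,  20)]]   # left-eye view, one pixel row
right = [[( 10, 150, 160), ( 20, 140, 150)]]   # right-eye view, same row
print(anaglyph(left, right))
# -> [[(200, 150, 160), (180, 140, 150)]]
```

The two input views themselves come from rendering the same scene from two camera positions separated by a small horizontal offset.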

  8. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision), or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work should further help to improve current editing systems and identifies a need for future 3D-TV editing systems, e.g., live editing and real-time alignment of visual information in 3D footage.

  9. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  10. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  11. 3D Visualization of Astronomical Data with Blender

    NASA Astrophysics Data System (ADS)

    Kent, B. R.

    2015-09-01

    We present the innovative use of Blender, a 3D graphics package, for astronomical visualization. With a Python API and feature rich interface, Blender lends itself well to many 3D data visualization scenarios including data cube rendering, N-body simulations, catalog displays, and surface maps. We focus on the aspects of the software most useful to astronomers such as visual data exploration, applying data to Blender object constructs, and using graphics processing units (GPUs) for rendering. We share examples from both observational data and theoretical models to illustrate how the software can fit into an astronomer's toolkit.

  12. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    NASA Astrophysics Data System (ADS)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is combining the information of the 3D and 4D MRA image sequences. Initially, the vessel system is segmented in the 3D MRA dataset and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences: a fit of each temporal intensity curve to a patient-individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets, the extracted hemodynamic information is transferred to the surface model, where the time points of inflow can be visualized, color-coded, dynamically over time. The dynamic visualizations computed using the curve-fitting method for estimating the bolus arrival times were rated superior to those computed using conventional approaches for bolus arrival time estimation. In summary, the suggested procedure allows a dynamic visualization of the individual hemodynamic situation and a better understanding during the visual evaluation of cerebral vascular diseases.
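
    A common, simpler stand-in for the bolus-arrival step can make the idea concrete. The paper fits each voxel's curve to a patient-individual reference curve; the sketch below instead assumes the frequently used time-to-half-maximum criterion with linear interpolation (my own illustration):

```python
# Simplified bolus-arrival sketch: a voxel's arrival time is estimated as
# the (interpolated) time its intensity first reaches half of its peak
# enhancement above baseline. (The paper fits against a reference curve.)

def arrival_time(times, intensities):
    """Return the time-to-half-maximum of one temporal intensity curve."""
    base = intensities[0]
    half = base + (max(intensities) - base) / 2.0
    for k in range(1, len(intensities)):
        if intensities[k] >= half:
            t0, t1 = times[k - 1], times[k]
            y0, y1 = intensities[k - 1], intensities[k]
            if y1 == y0:                 # flat curve: no enhancement to time
                return t0
            # linear interpolation inside the crossing interval
            return t0 + (half - y0) * (t1 - t0) / (y1 - y0)
    return None

times = [0, 1, 2, 3, 4, 5]               # acquisition time points
curve = [100, 100, 120, 180, 200, 195]   # baseline 100, peak 200 -> half 150
print(arrival_time(times, curve))        # -> 2.5
```

Fitting against a patient-individual reference curve, as the paper does, is more robust than this per-voxel criterion when curves are noisy or partially enhanced.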

  13. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, it is useful in showing a more overall representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study's findings. PMID:26594340

  14. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful in programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among large numbers of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  15. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
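Because X3D is a plain XML format, a minimal scene can be assembled with standard tools. The sketch below (an illustration, not code from the article; the node and attribute names follow the X3D Interchange profile) serializes a set of 3D data points as a PointSet:

```python
import xml.etree.ElementTree as ET

def points_to_x3d(points, color="0.8 0.2 0.2"):
    """Serialize a list of (x, y, z) points as a minimal X3D scene
    containing a single PointSet node."""
    x3d = ET.Element("X3D", version="3.3", profile="Interchange")
    scene = ET.SubElement(x3d, "Scene")
    shape = ET.SubElement(scene, "Shape")
    appearance = ET.SubElement(shape, "Appearance")
    ET.SubElement(appearance, "Material", emissiveColor=color)
    pset = ET.SubElement(shape, "PointSet")
    coords = " ".join("%g %g %g" % p for p in points)
    ET.SubElement(pset, "Coordinate", point=coords)
    return ET.tostring(x3d, encoding="unicode")
```

A file written this way can then be fed into the X3D product tree the authors describe (interactive HTML via X3DOM-style embedding, 3D printing, or animation pipelines).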

  16. Three-dimensional visualization of ensemble weather forecasts - Part 1: The visualization tool Met.3D (version 1.0)

    NASA Astrophysics Data System (ADS)

    Rautenhaus, M.; Kern, M.; Schäfler, A.; Westermann, R.

    2015-07-01

    We present "Met.3D", a new open-source tool for the interactive three-dimensional (3-D) visualization of numerical ensemble weather predictions. The tool has been developed to support weather forecasting during aircraft-based atmospheric field campaigns; however, it is applicable to further forecasting, research and teaching activities. Our work approaches challenging topics related to the visual analysis of numerical atmospheric model output - 3-D visualization, ensemble visualization and how both can be used in a meaningful way suited to weather forecasting. Met.3D builds a bridge from proven 2-D visualization methods commonly used in meteorology to 3-D visualization by combining both visualization types in a 3-D context. We address the issue of spatial perception in the 3-D view and present approaches to using the ensemble to allow the user to assess forecast uncertainty. Interactivity is key to our approach. Met.3D uses modern graphics technology to achieve interactive visualization on standard consumer hardware. The tool supports forecast data from the European Centre for Medium Range Weather Forecasts (ECMWF) and can operate directly on ECMWF hybrid sigma-pressure level grids. We describe the employed visualization algorithms, and analyse the impact of the ECMWF grid topology on computing 3-D ensemble statistical quantities. Our techniques are demonstrated with examples from the T-NAWDEX-Falcon 2012 (THORPEX - North Atlantic Waveguide and Downstream Impact Experiment) campaign.

  17. 3D visual presentation of shoulder joint motion.

    PubMed

    Totterman, S; Tamez-Pena, J; Kwok, E; Strang, J; Smith, J; Rubens, D; Parker, K

    1998-01-01

    The 3D visual presentation of biodynamic events of human joints is a challenging task. Although the 3D reconstruction of high-contrast structures from CT data has been widely explored, there is much less experience in reconstructing small, low-contrast soft tissue structures from inhomogeneous and sometimes noisy MR data. Further, there are no algorithms for tracking the motion of moving anatomic structures through MR data. We present a comprehensive approach to 3D musculoskeletal imagery that addresses these challenges. Specific imaging protocols, segmentation algorithms and rendering techniques are developed and applied to render complex 3D musculoskeletal systems for their 4D visual presentation. Applications of our approach include analysis of rotational motion of the shoulder, knee flexion, and other complex musculoskeletal motions, and the development of interactive virtual human joints.

  18. Visualizing 3D velocity fields near contour surfaces. Revision 1

    SciTech Connect

    Max, N.; Crawfis, R.; Grant, C.

    1994-08-08

    Vector field rendering is difficult in 3D because the vector icons overlap and hide each other. We propose four different techniques for visualizing vector fields only near surfaces. The first uses motion blurred particles in a thickened region around the surface. The second uses a voxel grid to contain integral curves of the vector field. The third uses many antialiased lines through the surface, and the fourth uses hairs sprouting from the surface and then bending in the direction of the vector field. All the methods use the graphics pipeline, allowing real time rotation and interaction, and the first two methods can animate the texture to move in the flow determined by the velocity field.
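The second technique relies on integral curves of the vector field. The core of computing such a curve is simple numerical integration; the sketch below (an illustration of the concept, not the authors' voxel-grid implementation) traces a streamline by forward Euler steps:

```python
def trace_streamline(field, start, step=0.1, n_steps=100):
    """Trace an integral curve of a 3D vector field by forward Euler
    integration. `field` maps (x, y, z) -> (u, v, w); returns the
    list of points along the curve."""
    x, y, z = start
    curve = [(x, y, z)]
    for _ in range(n_steps):
        u, v, w = field(x, y, z)
        x, y, z = x + step * u, y + step * v, z + step * w
        curve.append((x, y, z))
    return curve
```

Production flow-visualization code would typically use a higher-order integrator (e.g. Runge-Kutta) and trilinear interpolation of a sampled field, but the structure is the same.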

  19. Visualizing 3D velocity fields near contour surfaces

    SciTech Connect

    Max, N.; Crawfis, R.; Grant, C.

    1994-03-01

    Vector field rendering is difficult in 3D because the vector icons overlap and hide each other. We propose four different techniques for visualizing vector fields only near surfaces. The first uses motion blurred particles in a thickened region around the surface. The second uses a voxel grid to contain integral curves of the vector field. The third uses many antialiased lines through the surface, and the fourth uses hairs sprouting from the surface and then bending in the direction of the vector field. All the methods use the graphics pipeline, allowing real time rotation and interaction, and the first two methods can animate the texture to move in the flow determined by the velocity field.

  20. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  1. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  2. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate: the flow of data originating from heterogeneous data sources; interoperability across different operating systems; and 3D visual representations that enhance the end-users' interactions.

  3. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    SciTech Connect

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-09-15

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.
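The interobserver agreement measure used in this study, Kendall's coefficient of concordance (W), is straightforward to compute for complete rankings without ties. The function below is an illustrative implementation of the standard formula, not code from the study:

```python
def kendalls_w(rankings):
    """Kendall's coefficient of concordance W for m raters each
    ranking the same n items (no ties). `rankings` is a list of m
    rank lists; W = 1 means perfect agreement, W = 0 none."""
    m = len(rankings)
    n = len(rankings[0])
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean = sum(rank_sums) / n
    s = sum((rs - mean) ** 2 for rs in rank_sums)  # deviation of rank sums
    return 12.0 * s / (m * m * (n ** 3 - n))
```

With only two reviewers, as here, W is closely related to the correlation between their rankings; tie-corrected variants exist for scored (rather than strictly ranked) assessments.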

  4. 3D volume visualization in remote radiation treatment planning

    NASA Astrophysics Data System (ADS)

    Yun, David Y.; Garcia, Hong-Mei C.; Mun, Seong K.; Rogers, James E.; Tohme, Walid G.; Carlson, Wayne E.; May, Stephen; Yagel, Roni

    1996-03-01

    This paper reports a novel application of 3D visualization in an ARPA-funded remote radiation treatment planning (RTP) experiment, utilizing supercomputer 3D volumetric modeling power and NASA ACTS (Advanced Communication Technology Satellite) communication bandwidths in the Ka-band range. The objective of radiation treatment is to deliver a tumoricidal dose of radiation to a tumor volume while minimizing doses to surrounding normal tissues. High performance graphics computers are required to allow physicians to view a 3D anatomy, specify proposed radiation beams, and evaluate the dose distribution around the tumor. Supercomputing power is needed to compute and even optimize dose distribution according to pre-specified requirements. High speed communications offer possibilities for sharing scarce and expensive computing resources (e.g., hardware, software, personnel, etc.) as well as medical expertise for 3D treatment planning among hospitals. This paper provides initial technical insights into the feasibility of such resource sharing. The overall deployment of the RTP experiment, visualization procedures, and parallel volume rendering in support of remote interactive 3D volume visualization will be described.

  5. Point Cloud Visualization in AN Open Source 3d Globe

    NASA Astrophysics Data System (ADS)

    De La Calle, M.; Gómez-Deck, D.; Koehler, O.; Pulido, F.

    2011-09-01

    In recent years the use of 3D applications in GIS has become more popular. Since the appearance of Google Earth, users are familiar with 3D environments. On the other hand, computers with 3D acceleration are now common, broadband access is widespread, and the public information available to GIS clients that can use data from the Internet is constantly increasing. There are currently several libraries suitable for this kind of application. Based on these facts, and using libraries that are already developed and connected to our own developments, we are working on the implementation of a real 3D GIS with analysis capabilities. Since a 3D GIS such as this can be very interesting for tasks like rendering and analysing LiDAR or laser scanner point clouds, special attention is given to optimal handling of very large data sets. Glob3 will be a multidimensional GIS in which 3D point clouds can be explored and analysed, even if they consist of several million points. The latest addition to our visualization libraries is a point cloud server that works regardless of the cloud's size. The server receives and processes requests from a 3D client (for example glob3, but it could be any other, such as one based on WebGL) and delivers the data in the form of pre-processed tiles, depending on the required level of detail.
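The pre-processing behind such level-of-detail tile delivery can be sketched as follows. This is a deliberately simplified illustration (the actual server's tiling scheme is not described in the abstract): each level doubles the tile-grid resolution, and coarser levels keep a decimated subset of the points.

```python
def build_lod_tiles(points, levels=3, extent=1.0):
    """Organize a point cloud into per-level tiles for level-of-detail
    delivery: level 0 is one coarse tile, and each finer level doubles
    the grid resolution and keeps more points per region."""
    tiles = {}
    for level in range(levels):
        n_cells = 2 ** level
        cell = extent / n_cells
        # decimate: keep every k-th point, finer levels keep more
        keep_every = 2 ** (levels - 1 - level)
        for idx, (x, y, z) in enumerate(points):
            if idx % keep_every:
                continue
            key = (level, int(x / cell), int(y / cell))
            tiles.setdefault(key, []).append((x, y, z))
    return tiles
```

A server built on this idea answers a client request for (level, i, j) with a pre-computed tile, so the client never has to download the full cloud.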

  6. NIHmagic: 3D visualization, registration, and segmentation tool

    NASA Astrophysics Data System (ADS)

    Freidlin, Raisa Z.; Ohazama, Chikai J.; Arai, Andrew E.; McGarry, Delia P.; Panza, Julio A.; Trus, Benes L.

    2000-05-01

    Interactive visualization of multi-dimensional biological images has revolutionized diagnostic and therapy planning. Extracting complementary anatomical and functional information from different imaging modalities provides a synergistic analysis capability for quantitative and qualitative evaluation of the objects under examination. We have been developing NIHmagic, a visualization tool for research and clinical use, on the SGI OnyxII Infinite Reality platform. Images are reconstructed into a 3D volume by volume rendering, a display technique that employs 3D texture mapping to provide a translucent appearance to the object. A stack of slices is rendered into a volume by an opacity mapping function, where the opacity is determined by the intensity of the voxel and its distance from the viewer. NIHmagic incorporates 3D visualization of time-sequenced images, manual registration of 2D slices, segmentation of anatomical structures, and color-coded re-mapping of intensities. Visualization of MRI, PET, CT, ultrasound, and 3D reconstructed electron microscopy images has been accomplished using NIHmagic.

  7. New software for visualizing 3D geological data in coal mines

    NASA Astrophysics Data System (ADS)

    Lee, Sungjae; Choi, Yosoon

    2015-04-01

    This study developed new software to visualize 3D geological data in coal mines. The Visualization Tool Kit (VTK) library and Visual Basic.NET 2010 were used to implement the software. The software consists of several modules providing the following functionalities: (1) importing and editing borehole data; (2) modelling of coal seams in 3D; (3) modelling of coal properties using the 3D ordinary Kriging method; (4) calculating economic values of 3D blocks; (5) pit boundary optimization for identifying economical coal reserves based on the Lerchs-Grossmann algorithm; and (6) visualizing 3D geological, geometrical and economic data. An application to a small-scale open-pit coal mine in Indonesia revealed that the software can provide useful information supporting the planning and design of open-pit coal mines.
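Ordinary Kriging, used here to model coal properties between boreholes, estimates a value at an unsampled location as a weighted sum of nearby samples, with weights obtained from a variogram model and an unbiasedness constraint. The sketch below is a minimal, single-point illustration (a linear variogram by default, a naive dense solver), not the software's VTK-based implementation:

```python
import math

def _solve(A, b):
    """Naive Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ordinary_kriging(coords, values, target, variogram=lambda h: h):
    """Ordinary Kriging estimate at a target point from scattered
    samples. The system is bordered with a Lagrange multiplier row to
    enforce that the weights sum to one (unbiasedness)."""
    n = len(coords)
    A = [[variogram(math.dist(coords[i], coords[j])) for j in range(n)] + [1.0]
         for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [variogram(math.dist(c, target)) for c in coords] + [1.0]
    w = _solve(A, b)[:n]
    return sum(wi * vi for wi, vi in zip(w, values))
```

Because Kriging is an exact interpolator, an estimate at a sampled location reproduces the sample; block models for mine planning repeat this estimate over every 3D block centre.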

  8. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.
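The essence of clipmapping is that terrain detail falls off with distance from the viewer: each level of the clipmap halves the sampling resolution, so the appropriate level grows roughly with the logarithm of distance. The helper below is a toy illustration of that level-selection rule only (the real tool does the heavy lifting on the GPU, which this sketch does not attempt):

```python
import math

def clipmap_level(distance, finest_cell=1.0, max_level=10):
    """Choose a clipmap level of detail for a terrain region at a
    given viewer distance: cell size doubles with each level, so the
    level grows with log2 of the distance."""
    if distance <= finest_cell:
        return 0
    return min(max_level, int(math.log2(distance / finest_cell)))
```

Nearby terrain thus renders from the finest rings of the clipmap while distant terrain uses exponentially coarser data, which is what keeps billion-vertex data sets renderable in real time.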

  9. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  10. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally, the only way to look inside the human body without opening it up was by means of two dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that images are no longer printed on film, they are still viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics as well as for (bio)medical research.

  11. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    SciTech Connect

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  12. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes to viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. PMID:20533989

  14. Visualization of 3D ensemble weather forecasts to predict uncertain warm conveyor belt situations

    NASA Astrophysics Data System (ADS)

    Rautenhaus, Marc; Grams, Christian M.; Schäfler, Andreas; Westermann, Rüdiger

    2015-04-01

    We present the application of interactive 3D visualization of ensemble weather predictions to forecasting warm conveyor belt (WCB) situations during aircraft-based atmospheric research campaigns, taking forecast uncertainty into account. Based on requirements of the 2012 T-NAWDEX-Falcon campaign, a method based on ensemble Lagrangian particle trajectories has been developed to predict 3D probabilities of the spatial occurrence of WCBs. The method has been integrated into the new open-source 3D ensemble visualization tool Met.3D. The integration facilitates interactive visual exploration of predicted WCB features and derived probabilities in the context of ensemble forecasts from the European Centre for Medium-Range Weather Forecasts. To judge forecast uncertainty, Met.3D's interactivity enables the user to compute and visualize ensemble statistical quantities on demand and to navigate the ensemble members. A new visual analysis method for quantitatively analysing the contribution of ensemble members to a probability region assists the forecaster in interpreting the obtained probabilities. In this presentation, we focus on a case study that illustrates how we envision the use of 3D ensemble visualization for weather forecasting. The case study revisits a forecast case from T-NAWDEX-Falcon and demonstrates the practical application of the proposed uncertainty visualization methods.
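Turning ensemble Lagrangian trajectories into 3D occurrence probabilities can be sketched as a gridding step. The function below illustrates the general idea only (it is not the Met.3D method, which involves WCB-specific trajectory criteria): a grid cell's probability is the fraction of ensemble members with at least one trajectory point inside it.

```python
def wcb_probability_grid(member_trajectories, n_members, cell=1.0):
    """Occurrence probabilities on a 3D grid from ensemble Lagrangian
    trajectories. `member_trajectories` is a list (one entry per
    member) of lists of trajectories, each a list of (x, y, z) points.
    Returns {cell index: fraction of members visiting that cell}."""
    cells_per_member = {}
    for member, trajectories in enumerate(member_trajectories):
        for traj in trajectories:
            for (x, y, z) in traj:
                key = (int(x // cell), int(y // cell), int(z // cell))
                cells_per_member.setdefault(key, set()).add(member)
    return {k: len(v) / n_members for k, v in cells_per_member.items()}
```

Iso-surfaces of such a probability field are one natural way to show "where the ensemble expects a WCB" in a 3D view.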

  15. Automated objective characterization of visual field defects in 3D

    NASA Technical Reports Server (NTRS)

    Fink, Wolfgang (Inventor)

    2006-01-01

    A method and apparatus for electronically performing a visual field test for a patient. A visual field test pattern is displayed to the patient on an electronic display device and the patient's responses to the visual field test pattern are recorded. A visual field representation is generated from the patient's responses. The visual field representation is then used as an input into a variety of automated diagnostic processes. In one process, the visual field representation is used to generate a statistical description of the rapidity of change of a patient's visual field at the boundary of a visual field defect. In another process, the area of a visual field defect is calculated using the visual field representation. In another process, the visual field representation is used to generate a statistical description of the volume of a patient's visual field defect.
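Two of the automated measures described, defect area and the rapidity of change at the defect boundary, can be illustrated on a 2D grid of measured sensitivities. These toy functions show one plausible reading of those measures, not the patented algorithms:

```python
def defect_area(field, threshold, cell_area=1.0):
    """Area of a visual field defect from a 2D grid of measured
    sensitivities: count grid locations below a defect threshold."""
    return cell_area * sum(1 for row in field for v in row if v < threshold)

def boundary_steepness(field, threshold):
    """Mean absolute sensitivity difference across horizontally and
    vertically adjacent cells that straddle the defect boundary -- a
    crude statistic for how rapidly the field changes there."""
    diffs = []
    rows, cols = len(field), len(field[0])
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < rows and nj < cols:
                    a, b = field[i][j], field[ni][nj]
                    if (a < threshold) != (b < threshold):
                        diffs.append(abs(a - b))
    return sum(diffs) / len(diffs) if diffs else 0.0
```

The patent's volume statistic extends the same idea by integrating the sensitivity loss over the defect region rather than merely counting cells.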

  16. Mayavi2: 3D Scientific Data Visualization and Plotting

    NASA Astrophysics Data System (ADS)

    Ramachandran, Prabhu; Varoquaux, Gaël

    2012-05-01

    Mayavi is an open-source, general-purpose, 3D scientific visualization package. It seeks to provide easy and interactive tools for data visualization that fit with the scientific user's workflow. Mayavi provides several entry points: a full-blown interactive application; a Python library with both a MATLAB-like interface focused on easy scripting and a feature-rich object hierarchy; widgets associated with these objects for assembling in a domain-specific application, and plugins that work with a general purpose application-building framework.

  17. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, disclose important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  18. 3-D Flow Visualization of a Turbulent Boundary Layer

    NASA Astrophysics Data System (ADS)

    Thurow, Brian; Williams, Steven; Lynch, Kyle

    2009-11-01

    A recently developed 3-D flow visualization technique is used to visualize large-scale structures in a turbulent boundary layer. The technique is based on scanning a laser light sheet through the flow field, similar to that of Delo and Smits (1997). High acquisition speeds are possible using a recently developed MHz-rate pulse burst laser system, an ultra-high-speed camera capable of 500,000 fps, and a galvanometric scanning mirror, yielding a total acquisition time of 136 microseconds for a 220 x 220 x 68 voxel image. In these experiments, smoke is seeded into the boundary layer formed on the wall of a low-speed wind tunnel. The boundary layer is approximately 1.5'' thick at the imaging location with a free stream velocity of 24 ft/s, yielding a Reynolds number of 18,000 based on boundary layer thickness. The 3-D image volume is approximately 4'' x 4'' x 4''. Preliminary results using 3-D iso-surface visualizations show a collection of elongated large-scale structures inclined in the streamwise direction. The spanwise width of the structures, which are located in the outer region, is on the order of 25 -- 50% of the boundary layer thickness.

  19. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or add-ons. Additionally, we are able to run the earth data visualization client on a wide range of platforms with very different software and hardware capabilities, such as smartphones (e.g. iOS, Android) and different desktop systems. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can be realized now, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies.
Underlying the EarthServer web client

  20. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
The program could be a core for building application programs for systems
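As a hedged illustration of the pointing step, once the pose change has been applied to the target point, the pan and tilt angles reduce to two arctangents (the axis conventions and function names here are my assumptions; the flight software models the full mast kinematics, including joint offsets):

```python
import math

def update_target(point, rotation, translation):
    """Re-express a 3D point in the camera frame after the rover pose
    changes by `rotation` (3x3, row-major) and `translation`:
    p' = R^T (p - t)."""
    d = [point[i] - translation[i] for i in range(3)]
    # multiply by the transpose (inverse) of the rotation matrix
    return tuple(sum(rotation[j][i] * d[j] for j in range(3))
                 for i in range(3))

def pan_tilt_to_target(x, y, z):
    """Pan/tilt angles (radians) that center a target at (x, y, z) in a
    simplified mast frame: x forward, y left, z up; pan about the z axis,
    tilt above the horizontal plane."""
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt
```

For example, after the rover drives 1 m forward (identity rotation) toward a target originally at (11, 0, 10), the updated point (10, 0, 10) gives zero pan and a 45-degree tilt.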

  1. Comparative visual analysis of 3D urban wind simulations

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Salim, Mohamed; Grawe, David; Leitl, Bernd; Böttinger, Michael; Schlünzen, Heinke

    2016-04-01

    Climate simulations are conducted in large numbers for a variety of applications. Many of these simulations focus on global developments and study the Earth's climate system using a coupled atmosphere-ocean model. Other simulations are performed on much smaller regional scales to study fine-grained climatic effects. These microscale climate simulations pose similar, yet also different, challenges for the visualization and analysis of the simulation data. Modern interactive visualization and data analysis techniques are very powerful tools to assist the researcher in answering and communicating complex research questions. This presentation discusses comparative visualization for several different wind simulations, which were created using the microscale climate model MITRAS. The simulations differ in wind direction and speed, but are all centered on the same simulation domain: an area of Hamburg-Wilhelmsburg that hosted the IGA/IBA exhibition in 2013. The experiments contain a scenario case to analyze the effects of single buildings, as well as to examine the impact of the Coriolis force within the simulation. The scenario case is additionally compared with real measurements from a wind tunnel experiment to ascertain the accuracy of the simulation and the model itself. We also compare different approaches for tree modeling and evaluate the stability of the model. In this presentation, we not only describe our workflow to efficiently and effectively visualize microscale climate simulation data using common 3D visualization and data analysis techniques, but also discuss how to compare variations of a simulation and how to highlight the subtle differences between them. For the visualizations we use a range of different 3D tools that feature techniques for statistical data analysis, data selection, as well as linking and brushing.

  2. Visual storytelling in 2D and stereoscopic 3D video: effect of blur on visual attention

    NASA Astrophysics Data System (ADS)

    Huynh-Thu, Quan; Vienne, Cyril; Blondé, Laurent

    2013-03-01

    Visual attention is an inherent mechanism that plays an important role in human visual perception. As our visual system has limited capacity and cannot efficiently process the information from the entire visual field, we focus our attention on specific areas of interest in the image for detailed analysis. In the context of media entertainment, the viewers' visual attention deployment is also influenced by the art of visual storytelling. To date, visual editing and composition of scenes in stereoscopic 3D content creation still mostly follow those used in 2D. In particular, out-of-focus blur is often used in 2D motion pictures and photography to drive the viewer's attention towards a sharp area of the image. In this paper, we study specifically the impact of defocused foreground objects on visual attention deployment in stereoscopic 3D content. For that purpose, we conducted a subjective experiment using an eye tracker. Our results bring more insight into the deployment of visual attention in stereoscopic 3D content viewing, and provide further understanding of visual attention behavior differences between 2D and 3D. They show that a traditional 2D scene compositing approach, such as the use of foreground blur, does not necessarily produce the same effect on visual attention deployment in 2D and 3D. Implications for stereoscopic content creation and visual fatigue are discussed.

  3. Visualizing 3D Turbulence On Temporally Adaptive Wavelet Collocation Grids

    NASA Astrophysics Data System (ADS)

    Goldstein, D. E.; Kadlec, B. J.; Yuen, D. A.; Erlebacher, G.

    2005-12-01

    Today there is an explosion in data from high-resolution computations of nonlinear phenomena in many fields, including the geo- and environmental sciences. The efficient storage and subsequent visualization of these large data sets is a trade-off between storage costs and data quality. New dynamically adaptive simulation methodologies promise significant computational cost savings and have the added benefit of producing results on adapted grids that significantly reduce storage and data manipulation costs. Yet with these adaptive simulation methodologies come new challenges in the visualization of temporally adaptive data sets. In this work, turbulence data sets from Stochastic Coherent Adaptive Large Eddy Simulations (SCALES) are visualized with the open-source tool ParaView, as a challenging case study. SCALES simulations use a temporally adaptive collocation grid defined by wavelet threshold filtering to resolve the most energetic coherent structures in a turbulence field. A subgrid-scale model is used to account for the effect of unresolved subgrid-scale modes. The results from the SCALES simulations are saved on a thresholded dyadic wavelet collocation grid, which by its nature does not include cell information. ParaView is an open-source visualization package developed by Kitware that is based on the widely used VTK graphics toolkit. The efficient generation of cell information, required with current ParaView data formats, is explored using custom algorithms and VTK toolkit routines. Adaptive 3D visualizations using isosurfaces and volume visualizations are compared with non-adaptive visualizations. To explore the localized multiscale structures in the turbulent data sets, the wavelet coefficients are also visualized, allowing visualization of the energy contained in local physical regions as well as in local wave-number space.

  4. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules that handle management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems, and can provide useful information for tackling engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  5. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

    3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, and video games. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material into geoscience courses in order to support and improve the teaching-learning process, especially for topics that students find notoriously difficult. As part of the project, professors and students are trained in visualization techniques; their data are then adapted and visualized in Ixtli as part of a class or seminar, where all attendants can interact, not only with each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data, as well as examples from ongoing applied projects, such as a modeled upward SH wave, the occurrence of an earthquake cluster in 1999 in the Popocatépetl volcano, and a risk atlas from Delegación Álvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions via videoconference with other universities and researchers.

  6. [3D Super-resolution Reconstruction and Visualization of Pulmonary Nodules from CT Image].

    PubMed

    Wang, Bing; Fan, Xing; Yang, Ying; Tian, Xuedong; Gu, Lixu

    2015-08-01

    The aim of this study was to propose a three-dimensional projection-onto-convex-sets (3D POCS) algorithm to achieve super-resolution reconstruction of 3D lung computed tomography (CT) images, and to introduce a multi-resolution mixed display mode for 3D visualization of pulmonary nodules. Firstly, we built low-resolution 3D images that have sub-pixel spatial displacements between each other and generated the reference image. Then, we mapped the low-resolution images onto the high-resolution reference image using 3D motion estimation and revised the reference image based on the consistency-constraint convex sets to reconstruct the 3D high-resolution images iteratively. Finally, we displayed the different resolution images simultaneously. We then evaluated the performance of the proposed method on 5 image sets and compared it with those of 3 interpolation reconstruction methods. The experiments showed that the 3D POCS algorithm outperformed the 3 interpolation reconstruction methods in both subjective and objective terms, and that the mixed display mode is suitable for 3D visualization of high-resolution pulmonary nodules.
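The paper's consistency-constraint projection operates on 3D CT volumes with motion estimation. A toy 1-D analogue of the POCS idea, in which each low-resolution sample is modeled as the mean of several high-resolution samples and the estimate is cyclically projected onto each acquisition's consistency set, can be sketched as follows (the observation model and all names are simplifying assumptions, not the paper's algorithm):

```python
def project_onto_observation(hr, obs, factor):
    """Orthogonal projection of the high-res estimate `hr` onto the
    consistency set of one low-res acquisition: each LR sample must equal
    the mean of the `factor` HR samples starting at its offset."""
    for start, value in obs:
        block = range(start, start + factor)
        corr = value - sum(hr[j] for j in block) / factor
        for j in block:
            hr[j] += corr  # spread the residual equally over the block

def pocs_superres(observations, n_hr, factor, n_iter=100):
    """observations: one list of (start_index, lr_value) pairs per shifted
    low-res acquisition. Cyclic projections drive the estimate towards a
    signal consistent with all acquisitions."""
    hr = [0.0] * n_hr
    for _ in range(n_iter):
        for obs in observations:
            project_onto_observation(hr, obs, factor)
    return hr
```

For instance, two acquisitions of a 4-sample signal offset by one sample, `pocs_superres([[(0, 1.5), (2, 3.5)], [(1, 2.5)]], 4, 2)`, yield an estimate satisfying all three block-mean constraints.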

  7. Interactive 3D visualization speeds well, reservoir planning

    SciTech Connect

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  8. 3D photo mosaicing of Tagiri shallow vent field by an autonomous underwater vehicle (3rd report) - Mosaicing method based on navigation data and visual features -

    NASA Astrophysics Data System (ADS)

    Maki, Toshihiro; Ura, Tamaki; Singh, Hanumant; Sakamaki, Takashi

    Large-area seafloor imaging will bring significant benefits to various fields such as academic research, resource surveying, marine development, security, and search-and-rescue. The authors have proposed a navigation method for an autonomous underwater vehicle (AUV) for seafloor imaging, and verified its performance by mapping tubeworm colonies over an area of 3,000 square meters using the AUV Tri-Dog 1 at the Tagiri vent field, Kagoshima Bay, Japan (Maki et al., 2008, 2009). This paper proposes a post-processing method to build a natural photo mosaic from a number of pictures taken by an underwater platform. The method first removes lens distortion and inconsistencies in color and lighting from each image, and then performs ortho-rectification based on the camera pose and seafloor geometry estimated from navigation data. The image alignment is based on both navigation data and visual characteristics, implemented as an extension of the image-based method of Pizarro et al. (2003). Using the two types of information yields an image alignment that is consistent both globally and locally, and makes the method applicable to data sets with few visual cues. The method was evaluated using a data set obtained by the AUV Tri-Dog 1 at the vent field in September 2009. A seamless, uniformly illuminated photo mosaic covering an area of around 500 square meters was created from 391 pictures, capturing unique features of the field such as bacteria mats and tubeworm colonies.

  9. Combined 3-D auditory-visual cueing for a visual target acquisition task

    NASA Astrophysics Data System (ADS)

    Westergren, Rachael L.; Havig, Paul R.; Heft, Eric L.

    2007-04-01

    Previous studies have shown that helmet-mounted displays (HMDs) are advantageous in maintaining situation awareness and increasing the amount of time pilots spend looking off-boresight (Geiselman & Osgood, 1994; Geiselman & Osgood, 1995). However, space is limited on an HMD, and any symbology that is presented takes up valuable space and can occlude a pilot's vision. There has been much research in the areas of visual cueing and visual search as they relate to seeking out visual targets in the sky. However, localized auditory cueing, as it could apply in the realm of air-to-air targeting, is an area less studied. One question is: how can we present information such that a pilot's attention will be directed to the object of interest most quickly? Different types of target-location cueing symbology have been studied to find the aspects of symbology that most aid a pilot in acquiring a target. The purpose of this study is to determine the best method of cueing a person to visual targets in the shortest amount of time using auditory and visual cues in combination. Specifically, participants were presented with different combinations of reflected line cues, standard line cues, and localized auditory cues for primary and secondary targets. The cues were presented using an HMD and 3-D auditory headphones, with a magnetic head tracker used to determine when the participant had visually acquired the targets. The possible benefits of these cues, based on acquisition times, are discussed.

  10. JHelioviewer: Visualizing the Sun and Heliosphere in 3D

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Spoerri, S.; Pagel, S.

    2012-12-01

    The next generation of heliophysics missions, Solar Orbiter and Solar Probe Plus, will focus on exploring the linkage between the Sun and the heliosphere. These new missions will collect unique data that will allow us to study, e.g., the coupling between macroscopic physical processes and those on kinetic scales, the generation of solar energetic particles and their propagation into the heliosphere, and the origin and acceleration of solar wind plasma. Already today, NASA's Solar Dynamics Observatory returns 1.4 TB/day of high-resolution solar images, magnetograms and EUV irradiance data. Within a few years, the scientific community will thus have access to petabytes of multidimensional remote-sensing and complex in-situ observations from different vantage points, complemented by petabytes of simulation data. Answering overarching science questions like "How do solar transients drive heliospheric variability and space weather?" will only be possible if the community has the necessary tools at hand. As of today, there is an obvious lack of capability to both visualize these data and assimilate them into sophisticated models to advance our knowledge. A key piece needed to bridge the gap between observables, derived quantities like vector fields, and model output is a tool to routinely and intuitively visualize large, heterogeneous, multidimensional, time-dependent data sets. While a few tools exist to visualize, e.g., 3D data sets for a small number of time steps, the space sciences community lacks the equipment to do this (i) on a routine basis, (ii) for complex multidimensional data sets from various instruments and vantage points and (iii) in an extensible and modular way that is open for future improvements and interdisciplinary usage. In this contribution, we will present recent progress in visualizing the Sun and its magnetic field in 3D using the open-source JHelioviewer framework, which is part of the ESA/NASA Helioviewer Project. Among other features

  11. Planetary subsurface investigation by 3D visualization model.

    NASA Astrophysics Data System (ADS)

    Seu, R.; Catallo, C.; Tragni, M.; Abbattista, C.; Cinquepalmi, L.

    Subsurface data analysis and visualization represent one of the main aspects of planetary observation (e.g. the search for water or geological characterization). The data are collected by subsurface sounding radars carried as instruments on board deep-space missions. These data are generally represented as 2D radargrams in the perspective of the spacecraft track and the z axis (perpendicular to the surface), but without direct correlation to other data acquisitions or knowledge of the planet. In many cases there are plenty of data from other sensors of the same mission, or from other missions, with high continuity in time and space, especially around scientific sites of interest (i.e. candidate landing areas or sites of particular scientific interest). The 2D perspective is good for analysing single acquisitions and performing detailed analysis on the returned echo, but is of little use for comparing the very large datasets now available for many planets and moons of the solar system. A better approach is to base the analysis on a 3D visualization model generated from the entire stack of data. First of all, this approach allows the user to navigate the subsurface in all directions and analyse different sections and slices, or to navigate the iso-surfaces with respect to a value (or interval). The latter makes it possible to isolate one or more iso-surfaces and remove, in the visualization, other data not relevant to the analysis; finally, it helps to identify underground 3D bodies. Another aspect is the need to link on-ground data, such as imaging, to the underground data by geographical position and contextual field of view.

  12. Method and simulation to study 3D crosstalk perception

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Blondé, Laurent; Huynh-Thu, Quan; Vienne, Cyril; Doyen, Didier

    2012-03-01

    To various degrees, all modern 3DTV displays suffer from crosstalk, which can lead to a decrease of both visual quality and visual comfort, and can also affect the perception of depth. In the absence of a perfect 3D display technology, crosstalk has to be taken into account when studying perception of 3D stereoscopic content. In order to improve 3D presentation systems and understand how to efficiently eliminate crosstalk, it is necessary to understand its impact on human perception. In this paper, we present a practical method to study the perception of crosstalk. The approach consists of four steps: (1) physical measurements of a 3DTV, (2) building of a crosstalk surface based on those measurements, representing the specific behavior of that 3DTV, (3) manipulation of the crosstalk function and application to reference images to produce test images degraded by crosstalk in various ways, and (4) psychophysical tests. Our approach allows both a realistic representation of the behavior of a 3DTV and the easy manipulation of its resulting crosstalk in order to conduct psycho-visual experiments. It can be used in all studies requiring an understanding of how crosstalk affects the perception of stereoscopic content and how it can be corrected efficiently.
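The crosstalk surface in step (2) is measurement-based and display-specific. As a hedged starting point, the simplest linear leakage model often used in crosstalk studies can be written as follows (the coefficient value and function name are assumptions, not the paper's measured model):

```python
def apply_crosstalk(left, right, c=0.05):
    """Linear crosstalk model for a stereo pair of grayscale images
    (nested lists, luminance in 0..1): each eye receives a fraction `c`
    of the unintended view's luminance in place of its own."""
    def mix(own, other):
        return [[(1 - c) * a + c * b for a, b in zip(row_own, row_other)]
                for row_own, row_other in zip(own, other)]
    return mix(left, right), mix(right, left)
```

A white pixel shown only to the left eye then leaks a faint ghost into the right eye's image, which is the visible artifact the psychophysical tests probe.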

  13. A geoscience perspective on immersive 3D gridded data visualization

    NASA Astrophysics Data System (ADS)

    Billen, Magali I.; Kreylos, Oliver; Hamann, Bernd; Jadamec, Margarete A.; Kellogg, Louise H.; Staadt, Oliver; Sumner, Dawn Y.

    2008-09-01

    We describe visualization software, Visualizer, that was developed specifically for interactive, visual exploration in immersive virtual reality (VR) environments. Visualizer uses carefully optimized algorithms and data structures to support the high frame rates required for immersion and the real-time feedback required for interactivity. As an application developed for VR from the ground up, Visualizer realizes benefits that usually cannot be achieved by software initially developed for the desktop and later ported to VR. However, Visualizer can also be used on desktop systems (unix/linux-based operating systems including Mac OS X) with a similar level of real-time interactivity, bridging the "software gap" between desktop and VR that has been an obstacle for the adoption of VR methods in the Geosciences. While many of the capabilities of Visualizer are already available in other software packages used in a desktop environment, the features that distinguish Visualizer are: (1) Visualizer can be used in any VR environment including the desktop, GeoWall, or CAVE, (2) in non-desktop environments the user interacts with the data set directly using a wand or other input devices instead of working indirectly via dialog boxes or text input, (3) on the desktop, Visualizer provides real-time interaction with very large data sets that cannot easily be viewed or manipulated in other software packages. Three case studies are presented that illustrate the direct scientific benefits realized by analyzing data or simulation results with Visualizer in a VR environment. We also address some of the main obstacles to widespread use of VR environments in scientific research with a user study that shows Visualizer is easy to learn and to use in a VR environment and can be as effective on desktop systems as native desktop applications.

  14. Novel 3D Compression Methods for Geometry, Connectivity and Texture

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2016-06-01

    A large number of applications in medical visualization, games, engineering design, entertainment, heritage, e-commerce and so on require the transmission of 3D models over the Internet or over local networks. 3D data compression is an important requirement for fast data storage, access and transmission within bandwidth limitations. The Wavefront OBJ (object) file format is commonly used to share models due to its clear, simple design. Normally each OBJ file contains a large amount of data (e.g. vertices and triangulated faces, normals, texture coordinates and other parameters) describing the mesh surface. In this paper we introduce a new method to compress geometry, connectivity and texture coordinates by a novel Geometry Minimization Algorithm (GM-Algorithm) in connection with arithmetic coding. First, each vertex's (x, y, z) coordinates are encoded to a single value by the GM-Algorithm. Second, triangle faces are encoded by computing the differences between two adjacent vertex locations, which are compressed by arithmetic coding together with texture coordinates. We demonstrate the method on large data sets, achieving compression ratios between 87% and 99% without reduction in the number of reconstructed vertices and triangle faces. The decompression step is based on a Parallel Fast Matching Search Algorithm (Parallel-FMS) to recover the structure of the 3D mesh. A comparative analysis of compression ratios is provided with a number of commonly used 3D file formats such as VRML, OpenCTM and STL, highlighting the performance and effectiveness of the proposed method.
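The GM-Algorithm itself is not reproduced in the abstract. The differencing stage described for faces can, however, be illustrated with a generic delta coder whose small residuals an arithmetic coder would then compress (the function names are mine, not the paper's):

```python
def delta_encode(values):
    """Replace each value by its difference from its predecessor.
    Adjacent vertex indices in a mesh tend to be close, so the residuals
    are small and compress well under an entropy coder such as
    arithmetic coding."""
    out, prev = [], 0
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def delta_decode(deltas):
    """Inverse of delta_encode: running sum of the residuals."""
    out, acc = [], 0
    for d in deltas:
        acc += d
        out.append(acc)
    return out
```

For example, `delta_encode([10, 12, 11, 15])` gives `[10, 2, -1, 4]`, and decoding restores the original sequence exactly, matching the paper's claim of no loss of vertices or faces.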

  15. Visualizing 3D fracture morphology in granular media

    NASA Astrophysics Data System (ADS)

    Dalbe, Marie-Julie; Juanes, Ruben

    2015-11-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed and control the level of confining stress. We use an index-matching technique to directly visualize the injection of a liquid into a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  17. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3 and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  18. A workflow for the 3D visualization of meteorological data

    NASA Astrophysics Data System (ADS)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and large data sets are produced as a result of simulations. The combination of various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the weather research and forecasting (WRF) model of two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) is developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest such as regions of convection or wind turbulence. Then, subsets of data sets are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether they are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have

  19. Improving Semantic Updating Method on 3d City Models Using Hybrid Semantic-Geometric 3d Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    to LoD4. The accuracy and structural complexity of the 3D objects increases with the LoD level, where LoD0 is the simplest LoD (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex LoD (architectural details with interior structures). Semantic information is one of the main components in CityGML and 3D City Models, and provides important information for any analyses. However, more often than not, the semantic information is not available for the 3D city model due to the unstandardized modelling process. One of the examples is where a building is normally generated as one object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which will make it easier for the users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that deals with hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect any changes on the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models and visualize the results in a user-friendly graphical user interface (GUI).

  20. Web Based Interactive Anaglyph Stereo Visualization of 3D Model of Geoscience Data

    NASA Astrophysics Data System (ADS)

    Han, J.

    2014-12-01

    The objectives of this study were to create an interactive online tool for generating and viewing anaglyph 3D stereo images in a Web browser via the Internet. To achieve this, we designed and developed a prototype system. Three-dimensional visualization is well known and has become popular in recent years for understanding a target object and the related physical phenomena. Geoscience data have a complex data model, which combines large extents with rich small-scale visual details, so real-time visualization of 3D geoscience data on the Internet is a challenging task. In this paper, we show the results of creating anaglyph 3D stereo images of geoscience data that can be viewed in any web browser which supports WebGL. We developed an anaglyph image viewing prototype system, and some representative results are displayed as anaglyph 3D stereo images generated in red-cyan colour from pairs of air-photo/digital elevation model and geological map/digital elevation model respectively. The best viewing is achieved by using suitable 3D red-cyan glasses, although red-blue or red-green spectacles can also be used. The middle mouse wheel can be used to zoom in/out on the anaglyph image in a Web browser. Anaglyph 3D stereo imaging is an important and easy way to understand the underground geologic system and active tectonic geomorphology. The integrated strata with fine three-dimensional topography and geologic map data can help to characterise mineral potential areas and active tectonic anomalies. To conclude, anaglyph 3D stereo images provide a simple and feasible method to improve the relief effect of geoscience data such as geomorphology and geology. We believe that with further development, the anaglyph 3D stereo imaging system could, as a complement to 3D geologic modeling, constitute a useful tool for better understanding of the underground geology and the active tectonic
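
    The red-cyan composition used above is straightforward: the left image of the stereo pair supplies the red channel and the right image supplies the green and blue channels. A minimal sketch (not the authors' WebGL code; images are modeled as nested lists of (r, g, b) tuples purely for illustration):

```python
def make_anaglyph(left, right):
    """Combine a stereo pair into a red-cyan anaglyph.

    left/right: equally sized 2D grids of (r, g, b) tuples.
    Red comes from the left view; green and blue from the right view.
    """
    rows = []
    for lrow, rrow in zip(left, right):
        rows.append([(lp[0], rp[1], rp[2]) for lp, rp in zip(lrow, rrow)])
    return rows
```

    Viewed through red-cyan glasses, each eye then sees only its own view, producing the stereo depth effect described in the abstract.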

  1. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    PubMed Central

    2010-01-01

    Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623
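
    The validation against 1000 random network permutations can be sketched as a standard permutation test. The snippet below is a generic illustration, not the study's code: it shuffles one variable (e.g. gene regulation values) relative to the other (e.g. gene specificity) and asks how often random relabelings produce a correlation at least as strong as the one observed.

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def permutation_p(x, y, n_perm=1000, seed=0):
    """P-value: fraction of shuffles with |correlation| >= observed."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    hits = sum(
        1 for _ in range(n_perm)
        if abs(pearson(x, rng.sample(y, len(y)))) >= observed
    )
    return (hits + 1) / (n_perm + 1)  # add-one to avoid a zero p-value
```

    A small p-value indicates the observed specificity-regulation relationship is unlikely to arise in random networks of the same size and distribution.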

  2. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-average dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system for any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be directly loaded into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capabilities are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from the MODIS (Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
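
    A hedged sketch of the KML generation step: the orbit ground track becomes a LineString placemark that Google Earth can load directly. The function name and coordinate values here are illustrative assumptions, not the mission software's actual API; KML expects coordinates as comma-separated "lon,lat,alt" triplets.

```python
def orbit_to_kml(points, name="orbit"):
    """Serialize (lon, lat, alt) tuples as a minimal KML LineString document."""
    coords = " ".join(f"{lon},{lat},{alt}" for lon, lat, alt in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{name}</name><LineString>'
        f'<coordinates>{coords}</coordinates>'
        '</LineString></Placemark></Document></kml>'
    )
```

    In practice one such placemark per mission day, written to a .kml file, is all Google Earth needs to render the daily orbit path.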

  3. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  4. Demonstration of three gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D model library is built up with digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  5. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.
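
    A one-dimensional analogue of such a single-parameter, size-selective filter (an illustration only, not the patented 3D wavelet filter itself) can be built as a band-pass from two Gaussian smoothings: structure near the chosen size survives, while both finer and much coarser structure is suppressed.

```python
import math

def gaussian_smooth(signal, sigma):
    """Smooth a 1D signal with a truncated, normalized Gaussian kernel."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(x * x) / (2 * sigma * sigma))
              for x in range(-radius, radius + 1)]
    total = sum(kernel)
    kernel = [k / total for k in kernel]
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp edges
            acc += k * signal[idx]
        out.append(acc)
    return out

def size_filter(signal, size):
    """Band-pass response: difference of a fine and a coarse smoothing."""
    fine = gaussian_smooth(signal, size / 2)
    coarse = gaussian_smooth(signal, size)
    return [f - c for f, c in zip(fine, coarse)]
```

    A feature roughly `size` samples wide produces a strong response while flat regions return near zero, mirroring the filter's "only regions correlated with the characteristic size" behavior.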

  6. 3D visualization of unsteady 2D airplane wake vortices

    NASA Technical Reports Server (NTRS)

    Ma, Kwan-Liu; Zheng, Z. C.

    1994-01-01

    Air flowing around the wing tips of an airplane forms horizontal tornado-like vortices that can be dangerous to following aircraft. The dynamics of such vortices, including ground and atmospheric effects, can be predicted by numerical simulation, allowing the safety and capacity of airports to be improved. In this paper, we introduce three-dimensional techniques for visualizing time-dependent, two-dimensional wake vortex computations, and the hazard strength of such vortices near the ground. We describe a vortex core tracing algorithm and a local tiling method to visualize the vortex evolution. The tiling method converts time-dependent, two-dimensional vortex cores into three-dimensional vortex tubes. Finally, a novel approach calculates the induced rolling moment on the following airplane at each grid point within a region near the vortex tubes and thus allows three-dimensional visualization of the hazard strength of the vortices. We also suggest ways of combining multiple visualization methods to present more information simultaneously.

  7. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.

  8. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  9. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  10. A perceptual preprocess method for 3D-HEVC

    NASA Astrophysics Data System (ADS)

    Shi, Yawen; Wang, Yongfang; Wang, Yubing

    2015-08-01

    A perceptual preprocessing method for 3D-HEVC coding is proposed in this paper. First, we propose a new JND model, which accounts for the luminance contrast masking effect, spatial masking effect, and temporal masking effect, as well as saliency characteristics and depth information. We utilize the spectral residual approach to obtain the saliency map and build a visual saliency factor based on it. In order to distinguish the sensitivity of objects at different depths, we segment each texture frame into foreground and background by an automatic threshold selection algorithm using the corresponding depth information, and then build a depth weighting factor. A JND modulation factor is built as a linear combination of the visual saliency factor and the depth weighting factor to adjust the JND threshold. Then, we apply the proposed JND model to 3D-HEVC for residual filtering and distortion coefficient processing. In the filtering process, the residual value is set to zero if the JND threshold is greater than the residual value, or the JND threshold is subtracted from the residual value if the threshold is less than the residual value. Experimental results demonstrate that the proposed method achieves an average bit rate reduction of 15.11% compared to the original coding scheme with HTM12.1, while maintaining the same subjective quality.
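
    The residual-filtering rule described above can be sketched as follows (one assumption is made for illustration: the comparison uses the residual's magnitude, and the sign is preserved when the threshold is subtracted, so the rule behaves symmetrically for negative residuals):

```python
def jnd_filter(residual, jnd):
    """Suppress perceptually invisible residuals.

    Residuals with magnitude at or below the JND threshold are zeroed;
    larger residuals are shrunk toward zero by the threshold.
    """
    if abs(residual) <= jnd:
        return 0
    return residual - jnd if residual > 0 else residual + jnd
```

    Zeroing or shrinking sub-threshold residuals is what yields the bit-rate savings: the encoder spends no bits on differences the viewer cannot perceive.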

  11. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  12. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    EPA Science Inventory

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  13. Tools for 3D scientific visualization in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    The purpose is to describe the tools and techniques in use at the NASA Ames Research Center for performing visualization of computational aerodynamics, for example visualization of flow fields from computer simulations of fluid dynamics about vehicles such as the Space Shuttle. The hardware used for visualization is a high-performance graphics workstation connected to a supercomputer with a high-speed channel. At present, the workstation is a Silicon Graphics IRIS 3130, the supercomputer is a CRAY-2, and the high-speed channel is a hyperchannel. The three techniques used for visualization are post-processing, tracking, and steering. Post-processing analysis is done after the simulation. Tracking analysis is done during a simulation but is not interactive, whereas steering analysis involves modifying the simulation interactively during the simulation. Using post-processing methods, a flow simulation is executed on a supercomputer and, after the simulation is complete, the results of the simulation are processed for viewing. The software in use and under development at NASA Ames Research Center for performing these types of tasks in computational aerodynamics is described. Workstation performance issues, benchmarking, and high-performance networks for this purpose are also discussed, as well as descriptions of other hardware for digital video and film recording.

  14. Educational Visualizations in 3D Collaborative Virtual Environments: A Methodology

    ERIC Educational Resources Information Center

    Fominykh, Mikhail; Prasolova-Forland, Ekaterina

    2012-01-01

    Purpose: Collaborative virtual environments (CVEs) have become increasingly popular in educational settings and the role of 3D content is becoming more and more important. Still, there are many challenges in this area, such as lack of empirical studies that provide design for educational activities in 3D CVEs and lack of norms of how to support…

  15. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  16. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
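
    As a toy illustration of the clustering component (the paper's framework is far richer and operates on 3D spatial expression data), a minimal 1D k-means groups expression values into k clusters whose boundaries could then be rendered as spatial pattern domains:

```python
def kmeans_1d(values, k, iters=20):
    """Cluster scalar values into k groups; returns the cluster centers.

    Initialization uses the first k values sorted -- adequate for a sketch,
    not for production use.
    """
    centers = sorted(values[:k])
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            groups[nearest].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers
```

    Choosing k is the hard part, which is exactly why the paper pairs clustering with interactive visualization to evaluate candidate values of k against the spatial data.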

  17. Visual Short-Term Memory Benefit for Objects on Different 3-D Surfaces

    ERIC Educational Resources Information Center

    Xu, Yaoda; Nakayama, Ken

    2007-01-01

    Visual short-term memory (VSTM) plays an important role in visual cognition. Although objects are located on different 3-dimensional (3-D) surfaces in the real world, how VSTM capacity may be influenced by the presence of multiple 3-D surfaces has never been examined. By manipulating binocular disparities of visual displays, the authors found that…

  18. 3D Face modeling using the multi-deformable method.

    PubMed

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-01-01

    In this paper, we focus on the problem of the accuracy performance of 3D face modeling techniques using corresponding features in multiple views, which is quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. By using this texture map, we generate realistic 3D faces for individuals at the end of the paper. PMID:23201976

  20. NASA VERVE: Interactive 3D Visualization Within Eclipse

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar; Allan, Mark B.

    2014-01-01

    At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scenario, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, where astronauts control a free flying robot on board the ISS. We will show in detail how to code with VERVE, how to connect SWT controls to the Ardor3D scenario, and share example code.

  1. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software suited for the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data

  2. Self-Discovery of Structural Geology Concepts using Interactive 3D Visualization

    NASA Astrophysics Data System (ADS)

    Billen, M. I.; Saunders, J.

    2010-12-01

    Mastering structural geology concepts that depend on understanding three-dimensional (3D) geometries and imagining relationships among unseen subsurface structures are fundamental skills for geologists. Traditionally these skills are developed first through the use of 2D drawings of 3D structures, which can be difficult to decipher, or 3D physical block models, which show only a limited set of relationships on the surfaces of the blocks, followed by application and testing of concepts in field settings. We hypothesize that this learning process can be improved by providing repeated opportunities to evaluate and explore synthetic 3D structures using interactive 3D visualization software. We present laboratory modules designed for the undergraduate structural geology curriculum using a self-discovery approach to teach concepts such as the Rule of V's, structure separation versus fault slip, and the more general dependence of structural exposure on surface topography. The laboratory modules are structured to allow students to discover and articulate each concept from observations of synthetic data, both on traditional maps and using the volume visualization software 3DVisualizer. Modules lead students through exploration of data (e.g., a dipping layered structure exposed in ridge-valley topography or obliquely offset across a fault) by allowing them to interactively view (rotate, pan, zoom) the exposure of structures on topographic surfaces and to toggle on/off the full 3D structure as a transparent colored volume. This tool allows students to easily visualize the relationships between, for example, a dipping structure and its exposure on valley walls, as well as how the structure extends beneath the surface. Using this method gives students more opportunities to build a mental library of previously-seen relationships to draw on when applying concepts in the field setting. These laboratory modules, the data, and the software are freely available from KeckCAVES.

  3. A method for the calibration of 3D ultrasound transducers

    NASA Astrophysics Data System (ADS)

    Hastenteufel, Mark; Mottl-Link, Sibylle; Wolf, Ivo; de Simone, Raffaele; Meinzer, Hans-Peter

    2003-05-01

    Background: Three-dimensional (3D) ultrasound has great potential in medical diagnostics. However, there are also some limitations of 3D ultrasound; e.g., in some situations morphology cannot be imaged accurately due to acoustic shadows. Acquiring 3D datasets from multiple positions can overcome some of these limitations, but prior to that a calibration of the ultrasound probe is necessary. Most calibration methods described rely on two-dimensional data; we describe a calibration method that uses 3D data. Methods: We have developed a 3D calibration method based on single-point cross-wire calibration, using registration techniques for automatic detection of the cross center. For the calibration, a cross consisting of three orthogonal wires is imaged. A model-to-image registration method is used to determine the cross center. Results: Due to the use of 3D data, fewer acquisitions and no special protocols are necessary. The influence of noise is reduced. By means of the registration method, the time-consuming steps of image plane alignment and manual cross-center determination become dispensable. Conclusion: A 3D calibration method for ultrasound transducers is described. The calibration method is the basis for extending state-of-the-art 3D ultrasound devices, i.e., to acquire multiple 3D datasets, either morphological or functional (Doppler).
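    As a hedged illustration of the model-to-image registration idea, the sketch below rigidly aligns a point model of the three-wire cross to detected points with the Kabsch algorithm; the recovered translation locates the cross centre. This is a simplified stand-in for the paper's method, and the cross geometry and clean point correspondences are illustrative assumptions.

```python
import numpy as np

def kabsch(P, Q):
    """Rotation R and translation t such that Q ~= P @ R.T + t; P, Q are (N, 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Six endpoints of a cross of three orthogonal wires, centred at the origin.
model = np.array([[1, 0, 0], [-1, 0, 0],
                  [0, 1, 0], [0, -1, 0],
                  [0, 0, 1], [0, 0, -1]], dtype=float)
detected = model + np.array([1.0, 2.0, 3.0])   # simulated cross centre at (1, 2, 3)
R, t = kabsch(model, detected)                 # t recovers the cross centre
```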

  4. Visualizing the process of interaction in a 3D environment

    NASA Astrophysics Data System (ADS)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Krishnan, Kajoli; Mullick, Rakesh

    2007-03-01

    As the imaging modalities used in medicine transition to increasingly three-dimensional data, the question of how best to interact with and analyze these data becomes ever more pressing. Immersive virtual reality systems seem to hold promise in tackling this, but how individuals learn and interact in these environments is not fully understood. Here we will attempt to show some methods by which user interaction in a virtual reality environment can be visualized and how this can allow us to gain greater insight into the process of interaction/learning in these systems. Also explored is the possibility of using this method to improve understanding and management of ergonomic issues within an interface.

  5. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Clearly, working only with 2D measurements seriously limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with CT scout data. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.
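    The final step of the second method, computing 2D convex hulls from detected vertebral edge points, can be sketched with a standard monotone-chain hull; this is a generic algorithm, not necessarily the authors' implementation.

```python
def convex_hull(points):
    """Return hull vertices of a set of (x, y) tuples in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints shared, so drop duplicates

# Edge points of a toy 'vertebra': four corners plus an interior point the hull drops.
hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
```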

  6. Pancreaticoduodenectomy assisted by 3-D visualization reconstruction and portal vein arterialization

    PubMed Central

    Su, Zhao-jie; Li, Wen-gang; Huang, Jun-li; Xiao, Lin-feng; Chen, Fu-zhen; Wang, Bo-liang

    2016-01-01

    Abstract Background: Three-dimensional visualization reconstruction, a 3-D visualization model reconstructed by software from 2-D CT images, has been widely applied in medicine but rarely in pancreaticoduodenectomy. Although the hepatic artery is vital to the liver, it must be removed when tumor invades it. Therefore, portal vein arterialization has been used in the clinic as a remedial measure, though there is still professional debate about it. Methods: Here, we report 1 case diagnosed as poorly differentiated adenocarcinoma of the duodenum. The tumor was large and invaded surrounding organs and vessels. Results: Preliminary diagnoses were poorly differentiated adenocarcinoma of the duodenum and viral hepatitis B. Pancreaticoduodenectomy assisted by 3-D visualization reconstruction and portal vein arterialization was performed in this case. The tumor was removed. Liver function returned to normal 1 week after the operation. Digital subtraction arteriography showed compensatory artery branches within the liver 1 month after the operation. Conclusion: 3-D visualization reconstruction can provide reliable assistance for accurate assessment and surgical design before pancreatoduodenectomy, and portal vein arterialization is certainly worth adopting when retention of the hepatic artery is impossible or conventional arterial anastomosis is required during pancreatoduodenectomy. PMID:27603365

  7. Tensor3D: A computer graphics program to simulate 3D real-time deformation and visualization of geometric bodies

    NASA Astrophysics Data System (ADS)

    Pallozzi Lavorante, Luca; Dirk Ebert, Hans

    2008-07-01

    Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformation that is specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides the geometric objects that are immediately available in the program window, the program can read other models from disk, and can thus import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later, and deformed again, in order to study different steps of progressive strain or to make the data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization Toolkit, a powerful scientific visualization library in the public domain. This development choice, together with the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for studying geometric deformations directly in three dimensions in both teaching and research activities.
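    The core operation, applying a deformation tensor to a triangulated model's vertices, can be sketched as follows. This is a minimal illustration with an assumed simple-shear tensor, not Tensor3D's actual code.

```python
import numpy as np

def deform(vertices, F):
    """Apply a 3x3 deformation tensor F to each row of an (N, 3) vertex array."""
    return vertices @ F.T

gamma = 1.0
F = np.array([[1.0, gamma, 0.0],   # x' = x + gamma * y  (simple shear)
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])
vertices = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
sheared = deform(vertices, F)      # vertex (0, 1, 0) maps to (1, 1, 0)
```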

  8. Method for 3D Airway Topology Extraction

    PubMed Central

    Grothausmann, Roman; Kellner, Manuela; Heidrich, Marko; Lorbeer, Raoul-Amadeus; Ripken, Tammo; Meyer, Heiko; Kuehnel, Mark P.; Ochs, Matthias; Rosenhahn, Bodo

    2015-01-01

    In lungs, the number of conducting airway generations as well as the bifurcation pattern varies across species and shows specific characteristics related to illnesses or gene variations. A method to characterize the topology of the mouse airway tree using scanning laser optical tomography (SLOT) tomograms is presented in this paper. It is used to test discrimination between two types of mice based on detected differences in their conducting airway patterns. Based on segmentations of the airways in these tomograms, the main spanning tree of the volume skeleton is computed. The resulting graph structure is used to distinguish between wild-type and surfactant protein D (SP-D)-deficient knock-out mice. PMID:25767561
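    The spanning-tree step can be sketched on a toy graph; this generic BFS spanning tree illustrates the idea, not the authors' implementation on volume skeletons.

```python
from collections import deque

def spanning_tree(adjacency, root):
    """Return {node: parent} for a BFS spanning tree of an undirected graph."""
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in parent:      # first visit fixes the tree edge
                parent[v] = u
                queue.append(v)
    return parent

# Toy skeleton with one cycle; the spanning tree drops one edge of the cycle.
skeleton = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
tree = spanning_tree(skeleton, 0)
```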

  9. A Fast Method for Measuring the Similarity Between 3D Model and 3D Point Cloud

    NASA Astrophysics Data System (ADS)

    Zhang, Zongliang; Li, Jonathan; Li, Xin; Lin, Yangbin; Zhang, Shanxin; Wang, Cheng

    2016-06-01

    This paper proposes a fast method for measuring the partial Similarity between 3D Model and 3D point Cloud (SimMC). Measuring SimMC is crucial for many point-cloud-related applications such as 3D object retrieval and inverse procedural modelling. In our proposed method, the surface area of the model and the Distance from Model to point Cloud (DistMC) are exploited as measurements to calculate SimMC. Here, DistMC is defined as the weighted average of the distances from points sampled on the model to the point cloud. Similarly, the Distance from point Cloud to Model (DistCM) is defined as the average of the distances from points in the point cloud to the model. In order to avoid the huge computational burden that the calculation of DistCM imposes on some traditional methods, we define SimMC as the ratio of the weighted surface area of the model to DistMC. Compared to those traditional SimMC measuring methods, which are only able to measure global similarity, our method is capable of measuring partial similarity by employing a distance-weighted strategy. Moreover, our method is faster than other partial similarity assessment methods. We demonstrate the superiority of our method on both synthetic data and laser scanning data.
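    A minimal, unweighted sketch of the DistMC/SimMC idea follows, using brute-force nearest neighbours; the paper's weighting and sampling scheme is omitted, and the toy points and area are illustrative.

```python
import math

def dist_model_to_cloud(model_pts, cloud_pts):
    """Mean distance from points sampled on the model to their nearest cloud point."""
    return sum(min(math.dist(p, q) for q in cloud_pts)
               for p in model_pts) / len(model_pts)

def sim_mc(model_area, model_pts, cloud_pts):
    """Similarity as the ratio of model surface area to DistMC (larger = more similar)."""
    return model_area / dist_model_to_cloud(model_pts, cloud_pts)

# Two sampled model points, each 1 unit from the single cloud point.
score = sim_mc(4.0, [(0, 0, 0), (2, 0, 0)], [(1, 0, 0)])
```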

  10. Real-time auto-stereoscopic visualization of 3D medical images

    NASA Astrophysics Data System (ADS)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated concurrent views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models, and didactical animations and movies have been realized as well.

  11. Scalable nanohelices for predictive studies and enhanced 3D visualization.

    PubMed

    Meagher, Kwyn A; Doblack, Benjamin N; Ramirez, Mercedes; Davila, Lilian P

    2014-11-12

    Spring-like materials are ubiquitous in nature and of interest in nanotechnology for energy harvesting, hydrogen storage, and biological sensing applications. For predictive simulations, it has become increasingly important to be able to model the structure of nanohelices accurately. To study the effect of local structure on the properties of these complex geometries one must develop realistic models. To date, software packages are rather limited in creating atomistic helical models. This work focuses on producing atomistic models of silica glass (SiO₂) nanoribbons and nanosprings for molecular dynamics (MD) simulations. Using an MD model of "bulk" silica glass, two computational procedures to precisely create the shape of nanoribbons and nanosprings are presented. The first method employs the AWK programming language and open-source software to effectively carve various shapes of silica nanoribbons from the initial bulk model, using desired dimensions and parametric equations to define a helix. With this method, accurate atomistic silica nanoribbons can be generated for a range of pitch values and dimensions. The second method involves a more robust code which allows flexibility in modeling nanohelical structures. This approach utilizes a C++ code particularly written to implement pre-screening methods as well as the mathematical equations for a helix, resulting in greater precision and efficiency when creating nanospring models. Using these codes, well-defined and scalable nanoribbons and nanosprings suited for atomistic simulations can be effectively created. An added value in both open-source codes is that they can be adapted to reproduce different helical structures, independent of material. In addition, a MATLAB graphical user interface (GUI) is used to enhance learning through visualization and interaction for a general user with the atomistic helical structures. One application of these methods is the recent study of nanohelices via MD simulations for
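    The parametric carving idea, keeping only the atoms of a bulk model that lie near a helix defined by radius and pitch, can be sketched as below; the dimensions and tube width are illustrative values, not the paper's parameters.

```python
import math

def on_helix(atom, radius=10.0, pitch=5.0, turns=4, tube=2.0, samples=400):
    """True if atom (x, y, z) lies within 'tube' of the helix
    (radius*cos t, radius*sin t, pitch*t/(2*pi))."""
    for i in range(samples):
        t = 2 * math.pi * turns * i / samples
        point = (radius * math.cos(t), radius * math.sin(t),
                 pitch * t / (2 * math.pi))
        if math.dist(atom, point) <= tube:
            return True
    return False

# Carving: filter a (tiny, illustrative) bulk atom list down to the helical shape.
bulk = [(10.0, 0.0, 0.0), (0.0, 0.0, 0.0)]
spring = [a for a in bulk if on_helix(a)]
```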

  13. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    PubMed

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this context, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for the 3D visualization: it incorporates the DICOM parameters; different color-scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which temperature changes are clinically significant.

  14. Visualization of hepatic arteries with 3D ultrasound during intra-arterial therapies

    NASA Astrophysics Data System (ADS)

    Gérard, Maxime; Tang, An; Badoual, Anaïs; Michaud, François; Bigot, Alexandre; Soulez, Gilles; Kadoury, Samuel

    2016-03-01

    Liver cancer represents the second most common cause of cancer-related mortality worldwide. The prognosis is poor, with an overall mortality of 95%. Moreover, most hepatic tumors are unresectable due to their advanced stage at discovery or poor underlying liver function. Tumor embolization by intra-arterial approaches is the current standard of care for advanced cases of hepatocellular carcinoma. These therapies rely on the fact that the blood supply of primary hepatic tumors is predominantly arterial. Feedback on blood flow velocities in the hepatic arteries is crucial to ensure maximal treatment efficacy on the targeted masses. Based on these velocities, the intra-arterial injection rate is modulated for optimal infusion of the chemotherapeutic drugs into the tumorous tissue. While Doppler ultrasound is a well-documented technique for the assessment of blood flow, 3D visualization of vascular anatomy with ultrasound remains challenging. In this paper we present an image-guidance pipeline that enables the localization of the hepatic arterial branches within a 3D ultrasound image of the liver. A diagnostic magnetic resonance angiography (MRA) scan is first processed to automatically segment the hepatic arteries. A non-rigid registration method is then applied to the portal phase of the MRA volume and a 3D ultrasound volume to enable the visualization of the 3D mesh of the hepatic arteries in the Doppler images. To evaluate the performance of the proposed workflow, we present initial results from porcine models and patient images.

  15. 3D model of the Bernese Part of the Swiss Molasse Basin: visualization of uncertainties in a 3D model

    NASA Astrophysics Data System (ADS)

    Mock, Samuel; Allenbach, Robin; Reynolds, Lance; Wehrens, Philip; Kurmann-Matzenauer, Eva; Kuhn, Pascal; Michael, Salomè; Di Tommaso, Gennaro; Herwegh, Marco

    2016-04-01

    The Swiss Molasse Basin comprises the western and central part of the North Alpine Foreland Basin. In recent years it has come under closer scrutiny due to its promising geopotentials, such as geothermal energy and CO2 sequestration. In order to address these topics, good knowledge of the subsurface is a key prerequisite. For that matter, geological 3D models serve as valuable tools. In collaboration with the Swiss Geological Survey (swisstopo) and as part of the project GeoMol CH, a geological 3D model of the Swiss Molasse Basin in the Canton of Bern has been built. The model covers an area of 1810 km² and reaches depths of up to 6.7 km. It comprises 10 major Cenozoic and Mesozoic units and numerous faults. The 3D model is mainly based on 2D seismic data, complemented by information from a few deep wells. Additionally, data from geological maps and profiles were used for refinement at shallow depths. In total, 1163 km of reflection seismic data, along 77 seismic lines, have been interpreted by different authors with respect to stratigraphy and structures. Both horizons and faults have been interpreted in 2D and modelled in 3D using IHS's Kingdom Suite and Midland Valley's MOVE software packages, respectively. Given the variable degree of subsurface information available, each 3D model is subject to uncertainty. With the primary input data coming from interpretation of reflection seismic data, a variety of uncertainties comes into play. Some of them are difficult to address (e.g., an author's style of interpretation) while others can be quantified (e.g., mis-tie correction, well-tie). An important source of uncertainty is the quality of the seismic data; this affects the traceability and lateral continuation of seismic reflectors. By defining quality classes, we can semi-quantify this source of uncertainty. In order to visualize the quality and density of the input data in a meaningful way, we introduce quality-weighted data density maps. In combination with the geological 3D

  16. Registration and 3D visualization of large microscopy images

    NASA Astrophysics Data System (ADS)

    Mosaliganti, Kishore; Pan, Tony; Sharp, Richard; Ridgway, Randall; Iyengar, Srivathsan; Gulacy, Alexandra; Wenzel, Pamela; de Bruin, Alain; Machiraju, Raghu; Huang, Kun; Leone, Gustavo; Saltz, Joel

    2006-03-01

    Inactivation of the retinoblastoma gene in mouse embryos causes tissue infiltrations into critical sections of the placenta, which has been shown to affect fetal survivability. Our collaborators in cancer genetics are extremely interested in examining the three-dimensional nature of these infiltrations given a stack of two-dimensional light microscopy images. Three sets of wildtype and mutant placentas were sectioned serially and digitized using a commercial light microscopy scanner. Each individual placenta dataset consisted of approximately 1000 images totaling 700 GB in size, which were registered into a volumetric dataset using the National Library of Medicine's (NIH/NLM) Insight Segmentation and Registration Toolkit (ITK). This paper describes our method for image registration to aid in volume visualization of tissue-level intermixing for both wildtype and Rb- specimens. The registration process faces many challenges arising from the large image sizes, damage during sectioning, staining gradients both within and across sections, and background noise. These issues limit the direct application of standard registration techniques due to frequent convergence to local solutions. In this work, we develop a mixture of automated and semi-automated enhancements with ground-truth validation for the mutual information-based registration algorithm. Our final volume renderings clearly show tissue intermixing differences between wildtype and Rb- specimens that are not obvious prior to registration.
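    The similarity measure at the heart of this registration, mutual information, can be sketched for two discretized intensity sequences; this is a generic histogram estimate, not ITK's implementation.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """Histogram-based mutual information (in nats) between two equal-length
    intensity sequences; higher values indicate better alignment."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa, pb = Counter(img_a), Counter(img_b)
    mi = 0.0
    for (x, y), c in joint.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with counts as probabilities
        mi += (c / n) * math.log(c * n / (pa[x] * pb[y]))
    return mi

# Perfectly aligned binary images share log(2) nats of information.
aligned = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
```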

  17. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of 3 different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  18. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
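    The defining property of a vortex, a nonzero winding of the order-parameter phase around a closed loop, can be sketched as follows; this is the generic phase-winding sum, not the authors' mesh-based extractor.

```python
import cmath
import math

def winding_number(psi_loop):
    """Integer winding of the phase of complex order-parameter values around a
    closed loop of grid points; a nonzero result flags a flux vortex."""
    total = 0.0
    n = len(psi_loop)
    for i in range(n):
        d = cmath.phase(psi_loop[(i + 1) % n]) - cmath.phase(psi_loop[i])
        d = (d + math.pi) % (2 * math.pi) - math.pi   # wrap increment into [-pi, pi)
        total += d
    return round(total / (2 * math.pi))

# Four grid points encircling a vortex core: phases 0, pi/2, pi, 3*pi/2.
vortex_loop = [cmath.exp(1j * k * math.pi / 2) for k in range(4)]
```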

  19. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. The considerable improvements in tracking robustness facing specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts.
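    As a hedged sketch, a dual Fourier series for quasi-periodic heart motion can be written as a sum of harmonics of two base frequencies (cardiac and respiratory); the constant coefficients and frequencies below are illustrative, and the paper's model is time-varying rather than this simplified form.

```python
import math

def dual_fourier(t, f_cardiac, f_resp, a, b):
    """Evaluate sum over harmonics k of a[k]*cos + b[k]*sin for both base frequencies."""
    value = 0.0
    for k, (ak, bk) in enumerate(zip(a, b), start=1):
        for f in (f_cardiac, f_resp):
            w = 2 * math.pi * k * f * t
            value += ak * math.cos(w) + bk * math.sin(w)
    return value

# One harmonic per frequency; at t = 0 only the cosine terms contribute.
displacement = dual_fourier(0.0, 1.2, 0.25, a=[1.0], b=[0.5])
```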

  20. Towards robust 3D visual tracking for motion compensation in beating heart surgery.

    PubMed

    Richa, Rogério; Bó, Antônio P L; Poignet, Philippe

    2011-06-01

    In the context of minimally invasive cardiac surgery, active vision-based motion compensation schemes have been proposed for mitigating problems related to physiological motion. However, robust and accurate visual tracking remains a difficult task. The purpose of this paper is to present a robust visual tracking method that estimates the 3D temporal and spatial deformation of the heart surface using stereo endoscopic images. The novelty is the combination of a visual tracking method based on a Thin-Plate Spline (TPS) model for representing the heart surface deformations with a temporal heart motion model based on a time-varying dual Fourier series for overcoming tracking disturbances or failures. The considerable improvements in tracking robustness in the presence of specular reflections and occlusions are demonstrated through experiments using images of in vivo porcine and human beating hearts. PMID:21277821
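
The temporal model described in this entry, a dual Fourier series capturing quasi-periodic cardiac and respiratory motion, can be sketched as a linear least-squares fit. This is an illustrative sketch, not the authors' implementation: the function name, the assumption that both fundamental frequencies are known (e.g., from ECG and ventilation signals), and the harmonic count are all hypothetical.

```python
import numpy as np

def fit_dual_fourier(t, x, f_cardiac, f_resp, harmonics=3):
    """Least-squares fit of a dual Fourier series (cardiac + respiratory).

    t: sample times; x: observed 1D motion signal;
    f_cardiac, f_resp: assumed-known fundamental frequencies (Hz).
    Returns the coefficient vector and the fitted signal.
    """
    cols = [np.ones_like(t)]                   # constant offset
    for f in (f_cardiac, f_resp):
        for k in range(1, harmonics + 1):      # harmonics of each fundamental
            w = 2.0 * np.pi * k * f
            cols.append(np.cos(w * t))
            cols.append(np.sin(w * t))
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, x, rcond=None)
    return coeffs, A @ coeffs
```

With the two fundamental frequencies fixed, the design matrix is linear in the coefficients, so an ordinary least-squares solve suffices; the fitted model can then predict the surface position through tracking dropouts.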

  1. Visualization package for 3D laser-scanned geometry

    NASA Astrophysics Data System (ADS)

    Neumann, Paul F.; Sadler, Lewis L.

    1993-06-01

    A computer software package named LEGO was designed and implemented to enable medical personnel to explore and manipulate laser-scanned 3D geometry obtained from a Cyberware 4020PS scanner. This type of scanner reconstructs a real-world object into a mathematical computer model by collecting thousands of depth measurements using a low-powered laser. LEGO consists of a collection of tools that can be interactively combined to accomplish complex tasks. Tools fall into five major categories: viewing, simple, quantitative, manipulative, and miscellaneous. This paper is based on a master's thesis from the University of Illinois at Chicago.

  2. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, we propose a general framework for depth mapping that optimizes visual comfort on S3D displays. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on an adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images. PMID:27410090
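
The global remapping stage can be sketched as a linear rescaling of the scene depth range into a display comfort zone (an illustrative sketch; the paper's framework additionally adjusts the zero-disparity plane and runs a two-stage global/local optimization):

```python
import numpy as np

def remap_depth_global(depth, target_min, target_max):
    """Linearly remap a scene depth map into a range comfortable for S3D display.

    depth: 2D array of scene depths; target_min/target_max: the chosen
    comfort zone (e.g., a small disparity range around the screen plane).
    """
    d_min, d_max = float(depth.min()), float(depth.max())
    norm = (depth - d_min) / (d_max - d_min)     # normalize to [0, 1]
    return target_min + norm * (target_max - target_min)
```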

  3. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
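
The constraint underlying all autostereograms, that pixels separated by a depth-dependent parallax must be identical, can be sketched on the CPU (this is the classic per-row linking approach, not the paper's hardware texture-based renderer; the separation formula and parameter values are illustrative):

```python
import numpy as np

def autostereogram(depth, e=80, mu=0.33, rng=None):
    """Render a random-dot autostereogram from a depth map in [0, 1]."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = depth.shape
    img = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        same = np.arange(w)                      # each pixel initially free
        for x in range(w):
            # parallax separation shrinks as the point comes nearer
            z = depth[y, x]
            s = int(round(e * (1 - mu * z) / (2 - mu * z)))
            left, right = x - s // 2, x - s // 2 + s
            if 0 <= left and right < w:
                same[right] = left               # link the stereo pair
        row = np.zeros(w, dtype=np.uint8)
        for x in range(w):
            if same[x] == x:
                row[x] = rng.integers(0, 2) * 255  # unconstrained: random dot
            else:
                row[x] = row[same[x]]              # constrained: copy partner
        img[y] = row
    return img
```

Each row propagates colors along the links, so any two pixels tied to the same chain receive the same value and fuse at the intended depth when viewed with decoupled convergence.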

  4. KENO3D Visualization Tool for KENO V.a and KENO-VI Geometry Models

    SciTech Connect

    Horwedel, J.E.; Bowman, S.M.

    2000-06-01

    Criticality safety analyses often require detailed modeling of complex geometries. Effective visualization tools can enhance checking the accuracy of these models. This report describes the KENO3D visualization tool developed at the Oak Ridge National Laboratory (ORNL) to provide visualization of KENO V.a and KENO-VI criticality safety models. The development of KENO3D is part of the current efforts to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system.

  5. 3D visualization for the MARS14 Code

    SciTech Connect

    Rzepecki, Jaroslaw P.; Kostin, Mikhail A; Mokhov, Nikolai V.

    2003-01-23

    A new three-dimensional visualization engine has been developed for the MARS14 code system. It is based on the OPENINVENTOR graphics library and integrated with the MARS built-in two-dimensional Graphical-User Interface, MARS-GUI-SLICE. The integrated package allows thorough checking of complex geometry systems and their fragments, materials, magnetic fields, particle tracks along with a visualization of calculated 2-D histograms. The algorithms and their optimization are described for two geometry classes along with examples in accelerator and detector applications.

  6. Trapezoidal phase-shifting method for 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Huang, Peisen S.; Zhang, Song; Chiang, Fu-Pen

    2004-12-01

    We propose a novel structured-light method, the trapezoidal phase-shifting method, for 3-D shape measurement. This method uses three patterns coded with phase-shifted, trapezoidal-shaped gray levels. The 3-D information of the object is extracted by direct calculation of an intensity ratio. Theoretical analysis showed that this new method is significantly less sensitive to defocus of the captured images than traditional intensity-ratio-based methods. This important advantage makes large-depth 3-D shape measurement possible. Compared to the sinusoidal phase-shifting method, the resolution is similar, but the processing speed is at least 4.5 times faster. The feasibility of this method was demonstrated in a previously developed real-time 3-D shape measurement system. The reconstructed 3-D results showed quality similar to those obtained by the sinusoidal phase-shifting method. However, since the processing speed was much faster, we were able not only to acquire the images in real time, but also to reconstruct the 3-D shapes in real time (40 fps at a resolution of 532 x 500 pixels). This real-time capability allows us to measure dynamically changing objects, such as human faces. The potential applications of this new method include industrial inspection, reverse engineering, robotic vision, computer graphics, and medical diagnosis.
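
The core of the method, a per-pixel intensity ratio from the three captured patterns, can be sketched as follows (within one of the six trapezoid regions; unwrapping the region index into a full-range ratio is omitted, and the function is an illustrative sketch rather than the authors' code):

```python
import numpy as np

def intensity_ratio(I1, I2, I3):
    """Per-pixel intensity ratio from three phase-shifted trapezoidal patterns.

    Returns a ratio in [0, 1] within each trapezoid region; the median
    intensity is recovered as (sum - max - min).
    """
    I = np.stack([I1, I2, I3], axis=0)
    Imax, Imin = I.max(axis=0), I.min(axis=0)
    Imed = I.sum(axis=0) - Imax - Imin
    # small epsilon guards against division by zero in untextured areas
    return (Imed - Imin) / np.maximum(Imax - Imin, 1e-12)
```

Because the ratio is a direct arithmetic expression rather than an arctangent, it is cheap enough to evaluate per pixel at video rates, which is what enables the real-time reconstruction reported above.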

  7. Evaluation of three 3D US calibration methods

    NASA Astrophysics Data System (ADS)

    Hummel, Johann; Kaar, Marcus; Hoffmann, Rainer; Bhatia, Amon; Birkfellner, Wolfgang; Figl, Michael

    2013-03-01

    With the introduction of 3D US imaging devices, the demand for accurate and fast 3D calibration methods arose. We implemented three different calibration methods and compared the results in terms of fiducial registration error (FRE) and target registration error (TRE). The three calibration methods were a multi-point phantom (MP), a feature-based model (FM), and a membrane model (MM). For the MP method, a simple point-to-point registration was applied. For the feature-based model, we employed a phantom consisting of spheres, pyramids, and cones; these objects were imaged from different angles, and a 3D-3D registration was applied for all possible image combinations. The last method was accomplished by imaging a simple membrane, which allows calculation of the calibration matrix. For a first evaluation, we computed the FRE for each method. To assess calibration success on real patient data, we used ten 3D-3D registrations between images of the prostate. The FRE amounted to 1.40 mm for the MP method, 1.05 mm for the FM method, and 1.12 mm for the MM method. The deviations arising from the ten 3D-3D patient registrations were 3.44 mm (MP), 2.93 mm (FM), and 2.84 mm (MM). The MM proved to be the most accurate of the evaluated procedures, while the MP showed significantly higher errors. The results from the FM were close to those from the MM and also significantly better than those from the MP; between the FM and the MM, no significant difference was detected.
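
The simple point-to-point registration used for the MP method, together with the FRE it yields, can be sketched with a standard Kabsch/SVD rigid fit (an illustrative sketch, not the authors' implementation):

```python
import numpy as np

def rigid_register(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q.

    P, Q: (n, 3) arrays of corresponding fiducial positions.
    Returns R, t, and the fiducial registration error (RMS residual).
    """
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    residuals = P @ R.T + t - Q
    fre = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))
    return R, t, fre
```

Note that a low FRE does not guarantee a low TRE, which is why the abstract evaluates both on independent patient registrations.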

  8. Augmented depth perception visualization in 2D/3D image fusion.

    PubMed

    Wang, Jian; Kreiser, Matthias; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal

    2014-12-01

    2D/3D image fusion applications are widely used in endovascular interventions. Complaints from interventionists about existing state-of-art visualization software are usually related to the strong compromise between 2D and 3D visibility or the lack of depth perception. In this paper, we investigate several concepts enabling improvement of current image fusion visualization found in the operating room. First, a contour enhanced visualization is used to circumvent hidden information in the X-ray image. Second, an occlusion and depth color-coding scheme is considered to improve depth perception. To validate our visualization technique both phantom and clinical data are considered. An evaluation is performed in the form of a questionnaire which included 24 participants: ten clinicians and fourteen non-clinicians. Results indicate that the occlusion correction method provides 100% correctness when determining the true position of an aneurysm in X-ray. Further, when integrating an RGB or RB color-depth encoding in the image fusion both perception and intuitiveness are improved.

  9. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background: The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings: A prospective crossover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) made up 54.8% of the sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times baseline after exposure to the 3D movie (versus 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions: Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D viewing on spectators. PMID:23418530

  10. ProteinVista: a fast molecular visualization system using Microsoft Direct3D.

    PubMed

    Park, Chan-Yong; Park, Sung-Hee; Park, Soo-Jun; Park, Sun-Hee; Hwang, Chi-Jung

    2008-09-01

    Many tools have been developed to visualize protein and molecular structures, and most high-quality protein visualization tools use the OpenGL graphics library as their 3D graphics system. The performance of 3D graphics hardware has recently improved rapidly, and recent high-performance 3D graphics hardware supports the Microsoft Direct3D graphics library better than OpenGL and has become very popular in personal computers (PCs). In this paper, a molecular visualization system termed ProteinVista is proposed. ProteinVista is a well-designed visualization system built on the Microsoft Direct3D graphics library. It provides various visualization styles, such as the wireframe, stick, ball-and-stick, space-fill, ribbon, and surface model styles, in addition to display options for 3D visualization. Because ProteinVista is optimized for recent 3D graphics hardware platforms and uses a geometry instancing technique, its rendering speed is 2.7 times faster than that of other visualization tools.

  11. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control of an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities aiding the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization and quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics to enable collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  12. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used, PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis software Toolkit). Using post-processing methods a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  13. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  14. New techniques in 3D scalar and vector field visualization

    SciTech Connect

    Max, N.; Crawfis, R.; Becker, B.

    1993-05-05

    At Lawrence Livermore National Laboratory (LLNL) we have recently developed several techniques for volume visualization of scalar and vector fields, all of which use back-to-front compositing. The first renders volume density clouds by compositing polyhedral volume cells or their faces. The second is a "splatting" scheme which composites textures used to reconstruct the scalar or vector fields. One version calculates the necessary texture values in software, and another takes advantage of hardware texture mapping. The next technique renders contour surface polygons using semi-transparent textures, which adjust appropriately when the surfaces deform in a flow, or change topology. The final one renders the "flow volume" of smoke or dye tracer swept out by a fluid flowing through a small generating polygon. All of these techniques are applied to a climate model data set, to visualize cloud density and wind velocity.
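
All four techniques share the back-to-front compositing step, which is the "over" operator applied from the farthest sample to the nearest; a minimal sketch:

```python
import numpy as np

def composite_back_to_front(colors, alphas):
    """'Over' compositing of samples ordered back (index 0) to front (last).

    colors: list of arrays (e.g., RGB per slice); alphas: matching opacities.
    """
    acc = np.zeros_like(colors[0])
    for c, a in zip(colors, alphas):
        # each nearer sample partially covers everything accumulated behind it
        acc = c * a + acc * (1.0 - a)
    return acc
```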

  15. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

    An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  16. Hand/eye calibration of a robot arm with a 3D visual sensor

    NASA Astrophysics Data System (ADS)

    Kim, Min-Young; Cho, Hyungsuck; Kim, Jae H.

    2001-10-01

    Hand/eye calibration is useful in many industrial applications, for instance grasping objects or reconstructing 3D scenes. The calibration of a robot system with a visual sensor is essentially the calibration of the robot, the sensor, and the hand-to-eye relation. This paper describes a new technique for computing the 3D position and orientation of a 3D visual sensor system relative to the end effector of a robot manipulator in an eye-on-hand configuration. When the positions of feature points on a calibration target are known in sensor coordinates at each robot movement, along with their positions in world coordinates and the relative robot movement between two motions, a homogeneous equation of the form AX = XB can be derived. To obtain a unique solution for X, it is necessary to make two relative robot arm movements and form a system of two equations: A1X = XB1 and A2X = XB2. In this paper, a closed-form solution of this calibration system is derived, and the constraints for the existence of a unique solution are described in detail. Test results obtained through a series of simulations show that this technique is a simple, efficient, and accurate method for hand/eye calibration.
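
The system A1X = XB1, A2X = XB2 admits closed-form solutions; the sketch below uses one common approach, aligning the rotation axes of the motion pairs with a Kabsch/SVD fit and solving a stacked linear system for the translation. This is an illustrative sketch of the general technique, not necessarily the derivation used in this paper.

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix from an axis and angle (Rodrigues' formula)."""
    a = np.asarray(axis, float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def log_so3(R):
    """Rotation vector (axis * angle) of a rotation matrix, angle in (0, pi)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2 * np.sin(theta)) * v

def solve_ax_xb(As, Bs):
    """Closed-form hand/eye calibration from motion pairs with A_i X = X B_i.

    As, Bs: lists of (R, t) pairs; at least two motions with non-parallel
    rotation axes are required for a unique solution.
    """
    # Rotation: alpha_i = R_X beta_i, where alpha/beta are the rotation
    # vectors of R_Ai / R_Bi; align them with a Kabsch/SVD fit.
    H = np.zeros((3, 3))
    for (Ra, _), (Rb, _) in zip(As, Bs):
        H += np.outer(log_so3(Rb), log_so3(Ra))
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    Rx = Vt.T @ D @ U.T
    # Translation: (I - R_Ai) t_X = t_Ai - R_X t_Bi, stacked least squares.
    C = np.vstack([np.eye(3) - Ra for (Ra, _) in As])
    d = np.concatenate([ta - Rx @ tb for (_, ta), (_, tb) in zip(As, Bs)])
    tx, *_ = np.linalg.lstsq(C, d, rcond=None)
    return Rx, tx
```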

  17. A 3D visualization system for molecular structures

    NASA Technical Reports Server (NTRS)

    Green, Terry J.

    1989-01-01

    The properties of molecules derive in part from their structures. Because of the importance of understanding molecular structures, various methodologies, ranging from first principles to empirical techniques, were developed for computing the structure of molecules. For large molecules such as polymer model compounds, the structural information is difficult to comprehend by examining tabulated data. Therefore, a molecular graphics display system, called MOLDS, was developed to help interpret the data. MOLDS is a menu-driven program developed to run on the LADC SNS computer systems. This program can read a data file generated by the modeling programs, or data can be entered using the keyboard. MOLDS has the following capabilities: draws a 3-D representation of a molecule using a stick, ball-and-stick, or space-filled model from Cartesian coordinates; draws different perspective views of the molecule; rotates the molecule about the X, Y, or Z axis or about an arbitrary line in space; zooms in on a small area of the molecule in order to obtain a better view of a specific region; and makes hard-copy representations of molecules on a graphics printer. In addition, MOLDS can be easily updated and readily adapted to run on most computer systems.

  18. Open source 3D visualization and interaction dedicated to hydrological models

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2014-05-01

    Climate change and surface urbanization strongly modify the hydrological cycle in urban areas, amplifying the consequences of extreme events such as floods or droughts. These issues led to the development of the Multi-Hydro model at the Ecole des Ponts ParisTech (A. Giangola-Murzyn et al., 2012). This fully distributed model computes the hydrological response of urban and peri-urban areas. Unfortunately, such models are seldom user friendly: generating the inputs before launching a new simulation is usually a tricky task, and understanding and interpreting the outputs remains a specialist task not accessible to the wider public. The MH-AssimTool was developed to overcome these issues. To enable an easier and improved understanding of the model outputs, we decided to convert the raw output data (grid files in ASCII format) to a 3D display. Some commercial models provide 3D visualization, but because of the cost of their licenses, such tools may not be accessible to the most concerned stakeholders. We are therefore developing a new tool based on C++ for the computation, Qt for the graphical user interface, QGIS for the geographical side, and OpenGL for the 3D display. All these languages and libraries are open source and multi-platform. We will discuss some preprocessing issues in the data conversion from 2.5D to 3D. The GIS data are 2.5D (i.e., a 2D polygon plus one height), and transforming them for 3D display requires a number of algorithms. For example, to visualize one building in 3D, each point needs coordinates and an elevation consistent with the topography; furthermore, new points have to be created to represent the walls. Finally, we will discuss the interactions between the model and stakeholders through this new interface, and how this helps convert a research tool into an efficient operational decision tool.
This ongoing research on the improvement of visualization methods is supported by the
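
The wall-creation step mentioned above, turning a 2.5D footprint plus height into 3D geometry, can be sketched as a simple extrusion (names are illustrative; the actual tool must also drape vertices on the DEM topography):

```python
def extrude_walls(footprint, base_height, building_height):
    """Turn a 2.5D building (2D polygon + height) into 3D wall quads.

    footprint: list of (x, y) vertices in order; returns one quad per edge,
    each quad a list of four (x, y, z) corners.
    """
    quads = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        quads.append([(x0, y0, base_height),        # bottom edge of the wall
                      (x1, y1, base_height),
                      (x1, y1, building_height),    # top edge of the wall
                      (x0, y0, building_height)])
    return quads
```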

  19. A 3D Immersive Fault Visualizer and Editor

    NASA Astrophysics Data System (ADS)

    Yikilmaz, M. B.; van Aalsburg, J.; Kreylos, O.; Kellogg, L. H.; Rundle, J. B.

    2007-12-01

    Digital fault models are an important resource for the study of earthquake dynamics, fault-earthquake interactions and seismicity. Once digitized, these fault models can be used in Finite Element Model (FEM) programs or earthquake simulations such as Virtual California (VC). However, these models are often difficult to create, requiring a substantial amount of time to generate the fault topology and compute the properties of the individual segments. To aid in the construction of such models we have developed an immersive virtual reality (VR) application to visualize and edit fault models. Our program is designed to run in a CAVE (walk-in VR environment), but also works in a wide range of other environments, including desktop systems and GeoWalls. It is being developed at the UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://www.keckcaves.org). Immersive VR environments are ideal for visualizing and manipulating three-dimensional data sets. Our program allows users to create new models or modify existing ones, for example by repositioning individual fault segments, by changing the dip angle, or by modifying (or assigning) the value of a property associated with a particular fault segment (i.e. slip rate). With the addition of high-resolution Digital Elevation Models (DEM) the user can accurately add new segments to an existing model or create a fault model entirely from scratch. Interactively created or modified models can be written to XML files at any time; from there the data may easily be converted into various formats required by the analysis software or simulation. We believe that the ease of interaction provided by VR technology is ideally suited to the problem of creating and editing digital fault models. Our software provides the user with an intuitive environment for visualizing and editing fault model data. This translates not only into less time spent creating fault models, but also enables the researcher to

  20. Investigating 3d Reconstruction Methods for Small Artifacts

    NASA Astrophysics Data System (ADS)

    Evgenikou, V.; Georgopoulos, A.

    2015-02-01

    Small artifacts have always been a real challenge for 3D modelling, usually presenting severe difficulties for 3D reconstruction. Lately, the demand for 3D models of small artifacts, especially in the cultural heritage domain, has increased dramatically. As in many such cases, there are no specifications and standards for this task. This paper investigates the efficiency of several, mainly low-cost, methods for producing 3D models of such small artifacts. Moreover, the material, the color, and the surface complexity of these objects is also investigated. Both image-based and laser-scanning methods have been considered as alternative data acquisition methods. The evaluation has been confined to the 3D meshes, as texture depends on the imaging properties, which are not investigated in this project. The resulting meshes have been compared to each other for completeness and accuracy. It is hoped that the outcomes of this investigation will be useful to researchers who are planning to embark on mass production of 3D models of small artifacts.

  1. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  2. Evaluation of passive polarized stereoscopic 3D display for visual & mental fatigues.

    PubMed

    Amin, Hafeez Ullah; Malik, Aamir Saeed; Mumtaz, Wajid; Badruddin, Nasreen; Kamel, Nidal

    2015-01-01

    Visual and mental fatigues induced by active shutter stereoscopic 3D (S3D) display have been reported using event-related brain potentials (ERP). An important question, that is whether such effects (visual & mental fatigues) can be found in passive polarized S3D display, is answered here. Sixty-eight healthy participants are divided into 2D and S3D groups and subjected to an oddball paradigm after being exposed to S3D videos with passive polarized display or 2D display. The age and fluid intelligence ability of the participants are controlled between the groups. ERP results do not show any significant differences between S3D and 2D groups to find the aftereffects of S3D in terms of visual and mental fatigues. Hence, we conclude that passive polarized S3D display technology may not induce visual and/or mental fatigue which may increase the cognitive load and suppress the ERP components. PMID:26738049

  3. Multimodal visualization of 3D enhanced MRI and CT of acoustic schwannoma and related structures

    NASA Astrophysics Data System (ADS)

    Kucharski, Tomasz; Kujawinska, Malgorzata; Niemczyk, Kazimierz; Marchel, Andrzej

    2005-09-01

    Given the need to support vestibular schwannoma surgery, there is a demand for a convenient method of medical data visualization. Choosing the optimal operating access route has so far been cumbersome for surgeons, because two independent 3D image series (CT, in which bone tissue is visible, and MRI, in which soft tissue is visible) must be analyzed in the region of ponto-cerebellar angle tumors. The authors propose a solution that improves this process. The system used is equipped with a stereoscopic helmet-mounted display, which allows merged CT and MRI data representing tissues in the region of the ponto-cerebellar angle to be visualized stereoscopically. The process of data preparation for visualization includes automated segmentation algorithms and fusion of different types of 3D images (CT, MRI). The authors focused on the development of novel algorithms for segmentation of vestibular schwannoma. This is an important and difficult task due to the different types of tumors and their inhomogeneous character, which depends on their growth models. The authors propose algorithms based on the histogram spectrum and the multimodal character of MRI imaging (T1 and T2 modes). Due to the variety of objects, however, a library of algorithms with specific modifications matched to selected types of images is proposed. The applicability and functionality of the algorithms and library were demonstrated on series of data provided by the Warsaw Central Medical University Hospital.

  4. Visualization of a newborn's hip joint using 3D ultrasound and automatic image processing

    NASA Astrophysics Data System (ADS)

    Overhoff, Heinrich M.; Lazovic, Djordje; von Jan, Ute

    1999-05-01

    Graf's method is a well-established procedure for diagnostic screening of developmental dysplasia of the hip. In a defined 2-D ultrasound (US) scan, which virtually cuts the hip joint, landmarks are interactively identified to derive congruence indicators. Because these indicators do not reflect the spatial joint structure, and the femoral head is not clearly visible in the US scan, 3-D US is used here to gain insight into the spatial form of the hip joint. Hip joints of newborns were free-hand scanned using a conventional ultrasound transducer with a localizer system fixed to the scanhead. To avoid examiner-dependent findings, the landmarks were detected by automatic segmentation of the image volume. The landmark image volumes and an automatically determined virtual sphere approximating the femoral head were visualized color-coded on a computer screen. The visualization was found to be intuitive and to simplify diagnosis substantially. By visualizing the 3-D relations between acetabulum and femoral head, the entire joint geometry is captured and the reliability of diagnosis is improved.
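The virtual sphere approximating the femoral head can be obtained with an ordinary least-squares fit to segmented surface points. A minimal sketch of such a fit (the algebraic linearization and NumPy-based implementation are assumptions for illustration, not the authors' code):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit.

    ||p - c||^2 = r^2 is linearized as 2 p.c + (r^2 - ||c||^2) = ||p||^2,
    giving a linear system in the center c and one scalar offset term."""
    p = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# synthetic check: noisy samples on a sphere of radius 12 centered at (5, -3, 7)
rng = np.random.default_rng(0)
d = rng.normal(size=(200, 3))
d /= np.linalg.norm(d, axis=1, keepdims=True)
pts = np.array([5.0, -3.0, 7.0]) + 12.0 * d + rng.normal(scale=0.05, size=(200, 3))
c, r = fit_sphere(pts)
```

In the paper's setting the input points would come from the automatically segmented femoral-head surface rather than synthetic data.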

  5. 3-D UNSTRUCTURED HEXAHEDRAL-MESH Sn TRANSPORT METHODS

    SciTech Connect

    J. MOREL; J. MCGHEE; ET AL

    2000-11-01

    This is the final report of a three-year, Laboratory-Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). We have developed a method for solving the neutral-particle transport equation on 3-D unstructured hexahedral meshes using an S{sub n} discretization in angle in conjunction with a discontinuous finite-element discretization in space and a multigroup discretization in energy. Previous methods for solving this equation in 3-D have been limited to rectangular meshes. The unstructured-mesh method that we have developed is far more efficient for solving problems with complex 3-D geometric features than rectangular-mesh methods. In spite of having to make several compromises in our spatial discretization technique and our iterative solution technique, our method has been found to be both accurate and efficient for a broad class of problems.

  6. A package for 3-D unstructured grid generation, finite-element flow solution and flow field visualization

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald

    1990-01-01

    A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method, in which point and element generation are accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprising the space-shuttle orbiter, the solid-rocket boosters, and the external tank. Flow solutions were obtained for various configurations in the subsonic, transonic, and supersonic flow regimes.

  7. Improving automated 3D reconstruction methods via vision metrology

    NASA Astrophysics Data System (ADS)

    Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart

    2015-05-01

    This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.

  8. PointCloudXplore: a visualization tool for 3D gene expression data

    SciTech Connect

    Rubel, Oliver; Weber, Gunther H.; Keranen, Soile V. E.; Fowlkes, Charles C.; Luengo Hendriks, Cristian L.; Simirenko, Lisa; Shah, Nameeta Y.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Sudar, Damir J.; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2006-10-01

    The Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods that support quantitative, computational analysis of three-dimensional (3D) gene expression patterns with cellular resolution in early Drosophila embryos, aiming at a more in-depth understanding of gene regulatory networks. We describe a new tool, called PointCloudXplore (PCX), that supports effective 3D gene expression data exploration. PCX is a visualization tool that uses the established visualization techniques of multiple views, brushing, and linking to support the analysis of high-dimensional datasets that describe the expression of many genes. Each of the views in PointCloudXplore shows a different property of the gene expression data. Brushing is used to select and emphasize data associated with defined subsets of embryo cells within a view. Linking is used to show, in additional views, the expression data for a group of cells first highlighted as a brush in a single view, allowing further data subset properties to be determined. In PCX, physical views of the data are linked to abstract data displays such as parallel coordinates. Physical views show the spatial relationships between different genes' expression patterns within an embryo. Abstract gene expression data displays, on the other hand, allow for an analysis of relationships between different genes directly in gene expression space. We discuss parallel coordinates as one example of an abstract data view currently available in PCX. We have developed several extensions to standard parallel coordinates to facilitate brushing and the visualization of 3D gene expression data.
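Brushing and linking, as described above, reduce to sharing a single boolean selection mask across all views. A toy sketch with synthetic data (the array names and layout are hypothetical, not PCX's API):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 500
xyz = rng.uniform(size=(n_cells, 3))   # physical cell positions (hypothetical)
expr = rng.random((n_cells, 4))        # expression of 4 genes per cell

# Brush: select cells whose gene-0 expression lies in a user-chosen range.
brush = (expr[:, 0] > 0.7) & (expr[:, 0] < 0.9)

# Linking: the same boolean mask indexes every other view, so a physical
# view and a parallel-coordinates view highlight the same set of cells.
highlighted_positions = xyz[brush]     # for the physical (spatial) view
highlighted_profiles = expr[brush]     # for the abstract (expression-space) view
```

The design point is that the brush lives on the cells, not on any single view, so every display derived from the same cell table stays consistent.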

  9. [An integrated segmentation method for 3D ultrasound carotid artery].

    PubMed

    Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie

    2013-07-01

    An integrated segmentation method for 3D ultrasound images of the carotid artery is proposed. The 3D ultrasound image was sliced into transverse, coronal and sagittal 2D images through the carotid bifurcation point. The three images were then processed separately, and the carotid artery contours and wall thickness were finally obtained. The method aims to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. The proposed method obtains the overall carotid artery information rapidly, accurately and completely, and could be transferred to clinical use for atherosclerosis diagnosis and prevention. PMID:24195385
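Slicing a volume into the three orthogonal planes through a reference point is plain array indexing. A minimal sketch, assuming a NumPy volume in (z, y, x) order and a hypothetical bifurcation voxel (both assumptions for illustration):

```python
import numpy as np

# Placeholder 3-D ultrasound volume in (z, y, x) order and a hypothetical
# carotid-bifurcation voxel; real data would come from the scanner.
vol = np.random.rand(64, 64, 64)
bif = (32, 30, 34)

transverse = vol[bif[0], :, :]   # axial plane through the bifurcation
coronal    = vol[:, bif[1], :]   # coronal plane through the bifurcation
sagittal   = vol[:, :, bif[2]]   # sagittal plane through the bifurcation
```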

  10. A novel and stable approach to anatomical structure morphing for enhanced intraoperative 3D visualization

    NASA Astrophysics Data System (ADS)

    Rajamani, Kumar T.; Gonzalez Ballester, Miguel A.; Nolte, Lutz-Peter; Styner, Martin

    2005-04-01

    The use of three-dimensional models in planning and navigating computer-assisted surgeries is now well established. These models provide intuitive visualization to surgeons, contributing to significantly better surgical outcomes. Models obtained from specifically acquired CT scans have the disadvantage of inducing a high radiation dose to the patient. In this paper we propose a novel and stable method to construct a patient-specific model that provides an appropriate intra-operative 3D visualization without the need for pre- or intra-operative imaging. The patient-specific data consist of digitized landmarks and surface points obtained intra-operatively. The 3D model is reconstructed by fitting a statistical deformable model to this minimal sparse digitized data. The statistical model is constructed using Principal Component Analysis from training objects. Our morphing scheme efficiently and accurately computes a Mahalanobis-distance-weighted least-squares fit of the deformable model to the 3D data by solving a linear equation system. Relaxing the Mahalanobis distance term as additional points are incorporated enables our method to handle small and large sets of digitized points efficiently. Our novel incorporation of M-estimator-based weighting of the digitized points enables us to reject outliers effectively and compute stable models. Normalization of the input model data and the digitized points makes our method size-invariant and hence directly applicable to any anatomical shape. The method also allows incorporation of non-spatial data such as patient height and weight. The predominant applications are hip and knee surgeries.
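The Mahalanobis-weighted least-squares fit described above can be sketched as a single regularized linear solve: fit the PCA coefficients to the observed sparse coordinates, with a prior that penalizes each mode by its inverse eigenvalue. The notation below (mean shape, mode matrix, relaxation weight `rho`) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def fit_ssm(mean, modes, eigvals, idx, y, rho=1.0):
    """Mahalanobis-weighted least-squares fit of a PCA shape model to
    sparse digitized coordinates (a sketch of the approach in the abstract).

    mean: (3N,) mean shape; modes: (3N, m) PCA modes; eigvals: (m,) PCA
    eigenvalues; idx: indices of the observed coordinate rows; y: observations."""
    Ps = modes[idx]                  # rows of the modes seen by the digitizer
    r = y - mean[idx]                # residual of observations vs mean shape
    D = np.diag(rho / eigvals)       # Mahalanobis prior; rho is relaxed as points grow
    b = np.linalg.solve(Ps.T @ Ps + D, Ps.T @ r)
    return mean + modes @ b          # reconstructed full shape

# synthetic check: recover a 3-mode model from 6 observed coordinates
rng = np.random.default_rng(2)
mean = rng.normal(size=30)
modes = rng.normal(size=(30, 3))
eigvals = np.array([4.0, 2.0, 1.0])
b_true = np.array([0.5, -0.3, 0.2])
shape = mean + modes @ b_true
idx = np.arange(0, 30, 5)
est = fit_ssm(mean, modes, eigvals, idx, shape[idx], rho=1e-6)
```

Shrinking `rho` as more points are digitized mirrors the abstract's relaxation of the Mahalanobis term; the M-estimator outlier weighting is omitted here for brevity.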

  11. 3D colour visualization of label images using volume rendering techniques.

    PubMed

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge operator shading methods are employed for a fast and information preserving representation of surfaces. Control parameters of the algorithm can be tuned to have either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For a simultaneous representation of objects in different depths, hiding each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes or one arbitrarily placed cutting plane can be applied to the rendered objects in order to get additional information about inner structures, contours, and relative positions.
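The core of such a renderer is compositing scalar samples front-to-back along each ray. A minimal orthographic sketch (the paper uses perspective ray casting, shading and shadow ray tracing; the linear transfer function here is an assumption):

```python
import numpy as np

def composite(volume, opacity_scale=0.1):
    """Front-to-back compositing of scalar samples along axis 0
    (orthographic rays; the perspective case marches per-pixel rays instead)."""
    acc_c = np.zeros(volume.shape[1:])   # accumulated grayscale color
    acc_a = np.zeros(volume.shape[1:])   # accumulated opacity
    for sl in volume:                    # march all rays one slice at a time
        a = np.clip(sl * opacity_scale, 0.0, 1.0)   # assumed linear transfer function
        acc_c += (1.0 - acc_a) * a * sl  # attenuate by what is already opaque
        acc_a += (1.0 - acc_a) * a
        if np.all(acc_a > 0.99):         # early ray termination
            break
    return acc_c

img = composite(np.random.rand(32, 16, 16))
```

A fully opaque first slice terminates every ray immediately, which is the behavior the early-termination test relies on.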

  12. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.

  13. Sensor Spatial Distortion, Visual Latency, and Update Rate Effects on 3D Tracking in Virtual Environments

    NASA Technical Reports Server (NTRS)

    Ellis, S. R.; Adelstein, B. D.; Baumeler, S.; Jense, G. J.; Jacoby, R. H.; Trejo, Leonard (Technical Monitor)

    1998-01-01

    Several common defects that we have sought to minimize in immersive virtual environments are static sensor spatial distortion, visual latency, and low update rates. Human performance within our environments during large-amplitude 3D tracking was assessed by objective and subjective methods in the presence and absence of these defects. Results show that 1) removal of our relatively small spatial sensor distortion had minor effects on the tracking activity, 2) an Adapted Cooper-Harper controllability scale proved the most sensitive subjective indicator of the degradation of dynamic fidelity caused by increasing latency and decreasing frame rates, and 3) performance, as measured by normalized RMS tracking error or by subjective impressions, was more markedly influenced by changing visual latency than by update rate.

  14. 3D Vision on Mars: Stereo processing and visualizations for NASA and ESA rover missions

    NASA Astrophysics Data System (ADS)

    Huber, Ben

    2016-07-01

    Three-dimensional (3D) vision processing is an essential component of planetary rover mission planning and scientific data analysis. Standard ground vision processing products are digital terrain maps, panoramas, and virtual views of the environment. Such processing is currently being developed for the PanCam instrument of ESA's ExoMars rover mission by the PanCam 3D Vision Team under JOANNEUM RESEARCH coordination. Camera calibration, quality estimation of the expected results, and the interfaces to other mission elements such as operations planning, the rover navigation system, and global Mars mapping are a specific focus of the current work. The main goals of the 3D Vision Team in this context are: instrument design support and calibration processing; development of 3D vision functionality; development of a 3D visualization tool for scientific data analysis; 3D reconstruction from stereo image data during the mission; and support for 3D scientific exploitation to characterize the overall landscape geomorphology, processes, and the nature of the geologic record using the reconstructed 3D models. The developed processing framework, PRoViP, establishes an extensible framework for 3D vision processing in planetary robotic missions. Examples of processing products and capabilities are digital terrain models, ortho images, 3D meshes, and occlusion, solar illumination, slope, roughness, and hazard maps. Another important processing capability is the fusion of rover- and orbiter-based images, with support for multiple missions and sensors (e.g., MSL Mastcam stereo processing). For 3D visualization, a tool called PRo3D has been developed to analyze and directly interpret digital outcrop models. Stereo image products derived from Mars rover data can be rendered in PRo3D, enabling the user to zoom, rotate and translate the generated 3D outcrop models. Interpretations can be digitized directly onto the 3D surface, along with simple measurements of the outcrop and sedimentary features.

  15. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    NASA Astrophysics Data System (ADS)

    Babu, Sabarish; Liao, Pao-Chuan; Shin, Min C.; Tsap, Leonid V.

    2006-12-01

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  16. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    SciTech Connect

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  17. A method to fabricate disconnected silver nanostructures in 3D.

    PubMed

    Vora, Kevin; Kang, SeungYeon; Mazur, Eric

    2012-01-01

    The standard nanofabrication toolkit includes techniques primarily aimed at creating 2D patterns in dielectric media. Creating metal patterns on a submicron scale requires a combination of nanofabrication tools and several material processing steps. For example, steps to create planar metal structures using ultraviolet photolithography and electron-beam lithography can include sample exposure, sample development, metal deposition, and metal liftoff. To create 3D metal structures, the sequence is repeated multiple times. The complexity and difficulty of stacking and aligning multiple layers limits practical implementations of 3D metal structuring using standard nanofabrication tools. Femtosecond-laser direct-writing has emerged as a pre-eminent technique for 3D nanofabrication.(1,2) Femtosecond lasers are frequently used to create 3D patterns in polymers and glasses.(3-7) However, 3D metal direct-writing remains a challenge. Here, we describe a method to fabricate silver nanostructures embedded inside a polymer matrix using a femtosecond laser centered at 800 nm. The method enables the fabrication of patterns not feasible using other techniques, such as 3D arrays of disconnected silver voxels.(8) Disconnected 3D metal patterns are useful for metamaterials where unit cells are not in contact with each other,(9) such as coupled metal dot(10,11) or coupled metal rod(12,13) resonators. Potential applications include negative index metamaterials, invisibility cloaks, and perfect lenses. In femtosecond-laser direct-writing, the laser wavelength is chosen such that photons are not linearly absorbed in the target medium. When the laser pulse duration is compressed to the femtosecond time scale and the radiation is tightly focused inside the target, the extremely high intensity induces nonlinear absorption. Multiple photons are absorbed simultaneously to cause electronic transitions that lead to material modification within the focused region. Using this approach, one can

  18. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  19. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by the LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
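Converting point-cloud data to an image whose pixel values encode altitude amounts to rasterizing the returns onto a grid. A sketch in Python rather than IDL (the function name and the keep-the-highest-return convention are assumptions; the toolbox's own mapping may differ):

```python
import numpy as np

def points_to_elevation(points, cell=1.0):
    """Rasterize (x, y, z) LiDAR returns to a grid whose pixel value is the
    highest z observed in each cell; empty cells stay NaN."""
    p = np.asarray(points, dtype=float)
    ix = ((p[:, 0] - p[:, 0].min()) / cell).astype(int)   # column index per point
    iy = ((p[:, 1] - p[:, 1].min()) / cell).astype(int)   # row index per point
    img = np.full((iy.max() + 1, ix.max() + 1), np.nan)
    for x, y, z in zip(ix, iy, p[:, 2]):
        if np.isnan(img[y, x]) or z > img[y, x]:
            img[y, x] = z                                 # keep the highest return
    return img

# three returns: two land in the same cell, one in a far cell
pts = [(0.2, 0.3, 5.0), (0.8, 0.4, 7.0), (2.5, 1.7, 12.0)]
img = points_to_elevation(pts, cell=1.0)
```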

  20. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  1. A new approach to building a 3D visualization framework for multimodal medical image display and computer-assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component that provides not only standard 3D display functions but also multimodal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that radiologists and physicians found it easy to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurement. Users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work at a PACS display workstation at any time.

  2. Visualization and 3D Reconstruction of Flame Cells of Taenia solium (Cestoda)

    PubMed Central

    Valverde-Islas, Laura E.; Arrangoiz, Esteban; Vega, Elio; Robert, Lilia; Villanueva, Rafael; Reynoso-Ducoing, Olivia; Willms, Kaethe; Zepeda-Rodríguez, Armando; Fortoul, Teresa I.; Ambrosio, Javier R.

    2011-01-01

    Background Flame cells are the terminal cells of protonephridial systems, which are part of the excretory systems of invertebrates. Although knowledge of their biological role is incomplete, there is a consensus that these cells perform excretion/secretion activities. It has been suggested that flame cells participate in maintaining the osmotic environment that cestodes require to live inside their hosts. In live Platyhelminthes observed by light microscopy, the cells appear to beat their flames rapidly and, at the ultrastructural level, each cell has a large body enclosing a tuft of cilia. Few studies have defined the localization of the cytoskeletal proteins of these cells, and it is unclear how these proteins are involved in cell function. Methodology/Principal Findings Parasites of two different developmental stages of T. solium were used: cysticerci recovered from naturally infected pigs and intestinal adults obtained from immunosuppressed, experimentally infected golden hamsters. Hamsters were fed viable cysticerci, and adult parasites were recovered after one month of infection. The present study focused on the flame cells of cysticercus tissues. Using several methods, such as video, confocal and electron microscopy, together with computational analysis for reconstruction and modeling, we provide a 3D visual rendition of the cytoskeletal architecture of Taenia solium flame cells. Conclusions/Significance We consider that visual representations of cells open a new way of understanding the role of these cells in the excretory systems of Platyhelminthes. After reconstruction, the observation of high-resolution 3D images allowed virtual examination of the interior composition of the cells. A combination of microscopic images, computational reconstructions and 3D modeling of cells appears to be useful for inferring the cellular dynamics of the flame cell cytoskeleton. PMID:21412407

  3. Thoracic Cavity Definition for 3D PET/CT Analysis and Visualization

    PubMed Central

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W.; Higgins, William E.

    2015-01-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical detail on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage = 99.2% and leakage = 0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. PMID:25957746
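One of the digital topological and morphological operations such a segmentation pipeline relies on is keeping the largest connected component of a thresholded mask (e.g. retaining the body region and discarding small artifacts). A simplified 2-D stand-in for that step, not the paper's full method:

```python
from collections import deque
import numpy as np

def largest_component(mask):
    """Keep only the largest 4-connected component of a 2-D boolean mask."""
    labels = np.zeros(mask.shape, dtype=int)
    sizes, cur = {}, 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # already flooded from another seed
        cur += 1
        q = deque([seed])
        labels[seed] = cur
        n = 0
        while q:                          # breadth-first flood fill
            y, x = q.popleft()
            n += 1
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                yy, xx = y + dy, x + dx
                if (0 <= yy < mask.shape[0] and 0 <= xx < mask.shape[1]
                        and mask[yy, xx] and not labels[yy, xx]):
                    labels[yy, xx] = cur
                    q.append((yy, xx))
        sizes[cur] = n
    best = max(sizes, key=sizes.get)
    return labels == best

# toy "slice": a 25-pixel body region plus a 1-pixel artifact
m = np.zeros((8, 8), bool)
m[1:6, 1:6] = True
m[7, 7] = True
clean = largest_component(m)
```

Production pipelines would use 3-D connectivity and a library labeling routine; the explicit flood fill here just makes the operation concrete.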

  4. Comparing and visualizing titanium implant integration in rat bone using 2D and 3D techniques.

    PubMed

    Arvidsson, Anna; Sarve, Hamid; Johansson, Carina B

    2015-01-01

    The aim was to compare the osseointegration of grit-blasted implants with and without a hydrogen fluoride treatment in rat tibia and femur, and to visualize bone formation using state-of-the-art 3D visualization techniques. Grit-blasted implants were inserted in femur and tibia of 10 Sprague-Dawley rats (4 implants/rat). Four weeks after insertion, bone implant samples were retrieved. Selected samples were imaged in 3D using Synchrotron Radiation-based μCT (SRμCT). The 3D data was quantified and visualized using two novel visualization techniques, thread fly-through and 2D unfolding. All samples were processed to cut and ground sections and 2D histomorphometrical comparisons of bone implant contact (BIC), bone area (BA), and mirror image area (MI) were performed. BA values were statistically significantly higher for test implants than controls (p < 0.05), but BIC and MI data did not differ significantly. Thus, the results partly indicate improved bone formation at blasted and hydrogen fluoride treated implants, compared to blasted implants. The 3D analysis was a valuable complement to 2D analysis, facilitating improved visualization. However, further studies are required to evaluate aspects of 3D quantitative techniques, with relation to light microscopy that traditionally is used for osseointegration studies.

  6. Enhanced sensory re-learning after nerve repair using 3D audio-visual signals and kinaesthesia--preliminary results.

    PubMed

    Schmidhammer, R; Hausner, T; Kröpfl, A; Huber, W; Hopf, R; Leixnering, M; Herz, H; Redl, H

    2007-01-01

    Sensory re-learning methods, and the basics of cortical reorganization after peripheral nerve lesion, are well documented. The aim of enhanced sensory re-learning using 3D audio-visual signals and kinaesthetic training is the augmentation of cognitive memory (visual and acoustic sensory memory) and cognitive function to improve cerebral plasticity processes; training starts as soon as possible after nerve repair. Preliminary results are shown.

  7. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound (3D-ioUS) imaging system during aneurysm clipping, using rotational digital subtraction angiography (rDSA) as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume yielded a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although its spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is good image quality and a successful match with the preoperative angiography data.
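    The Dice coefficient used above to score the ultrasound/angiography overlap is a standard overlap measure; a minimal sketch on synthetic binary volumes (the array names are illustrative, not from the study):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary volumes."""
    a = a.astype(bool)
    b = b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4x4-voxel cubes, one shifted by a voxel along the first axis.
us = np.zeros((8, 8, 8), dtype=bool)
us[2:6, 2:6, 2:6] = True
dsa = np.zeros((8, 8, 8), dtype=bool)
dsa[3:7, 2:6, 2:6] = True
print(round(dice_coefficient(us, dsa), 2))  # 0.75
```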

  8. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  9. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  10. Three-Dimensional Phylogeny Explorer: Distinguishing paralogs, lateral transfer, and violation of "molecular clock" assumption with 3D visualization

    PubMed Central

    Kim, Namshin; Lee, Christopher

    2007-01-01

    Background Construction and interpretation of phylogenetic trees has been a major research topic for understanding the evolution of genes. Increases in sequence data and complexity are creating a need for more powerful and insightful tree visualization tools. Results We have developed 3D Phylogeny Explorer (3DPE), a novel phylogeny tree viewer that maps trees onto three spatial axes (species on the X-axis; paralogs on Z; evolutionary distance on Y), enabling one to distinguish at a glance evolutionary features such as speciation; gene duplication and paralog evolution; lateral gene transfer; and violation of the "molecular clock" assumption. Users can input any tree on the online 3DPE, then rotate, scroll, rescale, and explore it interactively as "live" 3D views. All objects in 3DPE are clickable to display subtrees, connectivity path highlighting, sequence alignments, gene summary views, etc. To illustrate the value of this visualization approach for microbial genomes, we also generated 3D phylogeny analyses for all clusters from the public COG database. We constructed tree views using well-established methods and graph algorithms. We used Scientific Python to generate VRML2 3D views viewable in any web browser. Conclusion 3DPE provides a novel phylogenetic tree projection method into 3D space and its web-based implementation with live 3D features for reconstruction of phylogenetic trees of the COG database. PMID:17584922

  11. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed again for the same parameters. The results indicate that 3-D animations produced visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and these differential effects were more evident under the high demands of near-vision work. The current results indicate that a set of indexes may be promoted in the design of 3-D displays or equipment.

  12. A novel 3D wavelet based filter for visualizing features in noisy biological data

    SciTech Connect

    Moss, W C; Haase, S; Lyle, J M; Agard, D A; Sedat, J W

    2005-01-05

    We have developed a 3D wavelet-based filter for visualizing structural features in volumetric data. The only variable parameter is a characteristic linear size of the feature of interest. The filtered output contains only those regions that are correlated with the characteristic size, thus denoising the image. We demonstrate the use of the filter by applying it to 3D data from a variety of electron microscopy samples including low contrast vitreous ice cryogenic preparations, as well as 3D optical microscopy specimens.
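    The filter above is tuned by a single characteristic feature size. As a rough, hedged analogue (not the authors' wavelet construction), a difference-of-Gaussians band-pass tuned to a characteristic size behaves similarly, responding most strongly to features near that scale:

```python
import numpy as np

def gaussian_blur_3d(vol, sigma):
    """Separable 3D Gaussian blur via 1D convolutions along each axis."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-x**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    for axis in range(3):
        vol = np.apply_along_axis(
            lambda line: np.convolve(line, kernel, mode="same"), axis, vol)
    return vol

def size_selective_filter(vol, size):
    """Difference-of-Gaussians band-pass keeping features whose linear size
    is near `size` -- a crude stand-in for a size-tuned 3D wavelet filter."""
    sigma = size / 2.0
    return gaussian_blur_3d(vol, sigma) - gaussian_blur_3d(vol, 2.0 * sigma)

# A single bright voxel ("feature") in an empty volume: the filter response
# peaks at the feature location.
vol = np.zeros((32, 32, 32))
vol[16, 16, 16] = 1.0
filtered = size_selective_filter(vol, size=4.0)
print(np.unravel_index(np.argmax(filtered), filtered.shape))  # (16, 16, 16)
```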

  13. SAMA: A Method for 3D Morphological Analysis.

    PubMed

    Paulose, Tessie; Montévil, Maël; Speroni, Lucia; Cerruti, Florent; Sonnenschein, Carlos; Soto, Ana M

    2016-01-01

    Three-dimensional (3D) culture models are critical tools for understanding tissue morphogenesis. A key requirement for their analysis is the ability to reconstruct the tissue into computational models that allow quantitative evaluation of the formed structures. Here, we present Software for Automated Morphological Analysis (SAMA), a method by which epithelial structures grown in 3D cultures can be imaged, reconstructed and analyzed with minimum human intervention. SAMA allows quantitative analysis of key features of epithelial morphogenesis such as ductal elongation, branching and lumen formation that distinguish different hormonal treatments. SAMA is a user-friendly set of customized macros operated via FIJI (http://fiji.sc/Fiji), an open-source image analysis platform in combination with a set of functions in R (http://www.r-project.org/), an open-source program for statistical analysis. SAMA enables a rapid, exhaustive and quantitative 3D analysis of the shape of a population of structures in a 3D image. SAMA is cross-platform, licensed under the GPLv3 and available at http://montevil.theobio.org/content/sama. PMID:27035711

  15. V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets.

    PubMed

    Peng, Hanchuan; Ruan, Zongcai; Long, Fuhui; Simpson, Julie H; Myers, Eugene W

    2010-04-01

    The V3D system provides three-dimensional (3D) visualization of gigabyte-sized microscopy image stacks in real time on current laptops and desktops. V3D streamlines the online analysis, measurement and proofreading of complicated image patterns by combining ergonomic functions for selecting a location in an image directly in 3D space and for displaying biological measurements, such as from fluorescent probes, using the overlaid surface objects. V3D runs on all major computer platforms and can be enhanced by software plug-ins to address specific biological problems. To demonstrate this extensibility, we built a V3D-based application, V3D-Neuron, to reconstruct complex 3D neuronal structures from high-resolution brain images. V3D-Neuron can precisely digitize the morphology of a single neuron in a fruitfly brain in minutes, with about a 17-fold improvement in reliability and tenfold savings in time compared with other neuron reconstruction tools. Using V3D-Neuron, we demonstrate the feasibility of building a 3D digital atlas of neurite tracts in the fruitfly brain. PMID:20231818

  16. Application of 3D reflection seismic methods to mineral exploration

    NASA Astrophysics Data System (ADS)

    Urosevic, Milovan

    2013-04-01

    Seismic exploration for mineral deposits is often challenged by excessively complex structures, regolith heterogeneity, intrinsically low signal-to-noise ratio, ground relief and accessibility. In brownfields, where the majority of seismic surveys have been conducted, existing infrastructure, old pits and tailings, heavy machinery in operation, mine drainage and other mine-related activities further challenge the application of seismic methods and increase their cost. It is therefore not surprising that the mining industry has been reluctant to use seismic methods, particularly 3D, for mineral exploration, primarily due to the high cost, but also because of variable performance and, in some cases, ambiguous interpretation results. However, shallow mineral reserves are becoming depleted and exploration is moving towards deeper targets. Seismic methods will be more important for deeper investigations and may become the primary exploration tool in the near future. The big issue is whether we have an appropriate seismic "strategy" for exploration of deep, complex mineral reserves. From the existing case histories worldwide we know that massive ore deposits (VMS, VHMS) constitute the best-case scenario for the application of 3D seismic. Direct targeting of massive ore bodies from seismic data has been documented in several case histories. Sediment-hosted deposits can, in some cases, also produce a detectable seismic signature. Other deposit types such as IOCG and skarn are much more challenging for the application of seismic methods. The complexity of these deposits requires new thinking. Several 3D surveys acquired over different deposit types will be presented and discussed.

  17. Effects of CT image segmentation methods on the accuracy of long bone 3D reconstructions.

    PubMed

    Rathnayaka, Kanchana; Sahama, Tony; Schuetz, Michael A; Schmutz, Beat

    2011-03-01

    An accurate and accessible image segmentation method is in high demand for generating 3D bone models from CT scan data, as such models are required in many areas of medical research. Even though numerous sophisticated segmentation methods have been published over the years, most of them are not readily available to the general research community. Therefore, this study aimed to quantify the accuracy of three popular image segmentation methods, two implementations of intensity thresholding and Canny edge detection, for generating 3D models of long bones. In order to reduce user dependent errors associated with visually selecting a threshold value, we present a new approach of selecting an appropriate threshold value based on the Canny filter. A mechanical contact scanner in conjunction with a microCT scanner was utilised to generate the reference models for validating the 3D bone models generated from CT data of five intact ovine hind limbs. When the overall accuracy of the bone model is considered, the three investigated segmentation methods generated comparable results with mean errors in the range of 0.18-0.24 mm. However, for the bone diaphysis, Canny edge detection and Canny filter based thresholding generated 3D models with a significantly higher accuracy compared to those generated through visually selected thresholds. This study demonstrates that 3D models with sub-voxel accuracy can be generated utilising relatively simple segmentation methods that are available to the general research community.
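    The edge-based threshold selection described above can be approximated in a few lines. This sketch substitutes a plain gradient-magnitude edge proxy for the full Canny filter and uses synthetic data; it is illustrative only, not the authors' implementation:

```python
import numpy as np

def edge_based_threshold(volume, edge_fraction=0.05):
    """Select a global threshold from voxels lying on strong edges, in the
    spirit of Canny-based threshold selection. A gradient-magnitude proxy
    stands in for the full Canny edge detector (illustrative assumption)."""
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    # Treat the strongest few percent of gradients as the bone edge.
    cutoff = np.quantile(grad_mag, 1.0 - edge_fraction)
    edge_voxels = volume[grad_mag >= cutoff]
    # Mean intensity along the edge approximates the half-maximum
    # threshold between the two tissue classes.
    return edge_voxels.mean()

# Synthetic "CT": a bright cylinder (bone) in a dark background.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.where((y - 16)**2 + (x - 16)**2 < 64, 1000.0, 0.0)
t = edge_based_threshold(vol)
print(0.0 < t < 1000.0)  # True: threshold falls between the two classes
```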

  18. Adaptive enhancement and visualization techniques for 3D THz images of breast cancer tumors

    NASA Astrophysics Data System (ADS)

    Wu, Yuhao; Bowman, Tyler; Gauch, John; El-Shenawee, Magda

    2016-03-01

    This paper evaluates image enhancement and visualization techniques for pulsed terahertz (THz) images of tissue samples. Specifically, our research objective is to effectively differentiate between heterogeneous regions of breast tissues that contain tumors diagnosed as triple negative infiltrating ductal carcinoma (IDC). Tissue slices and blocks of varying thicknesses were prepared and scanned using our lab's THz pulsed imaging system. One of the challenges we have encountered in visualizing the obtained images and differentiating between healthy and cancerous regions of the tissues is that most THz images have a low level of detail and narrow contrast, making it difficult to accurately identify and visualize the margins around the IDC. To overcome this problem, we have applied and evaluated a number of image processing techniques on the scanned 3D THz images. In particular, we employed various spatial filtering and intensity transformation techniques to emphasize the small details in the images and adjust the image contrast. For each of these methods, we investigated how varying filter sizes and parameters affect the amount of enhancement applied to the images. Our experimentation shows that several image processing techniques are effective in producing THz images of breast tissue samples that contain distinguishable details, making further segmentation of the different image regions promising.
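    One of the intensity transformations mentioned, contrast adjustment for narrow-contrast images, can be illustrated with percentile-based contrast stretching (a generic technique, not necessarily the exact transform the authors used):

```python
import numpy as np

def stretch_contrast(img, low_pct=2.0, high_pct=98.0):
    """Percentile-based contrast stretching: map [p_low, p_high] to [0, 1],
    clipping outliers, to widen a narrow dynamic range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

# Synthetic narrow-contrast image: values confined to [0.45, 0.50).
rng = np.random.default_rng(0)
img = 0.45 + 0.05 * rng.random((64, 64))
out = stretch_contrast(img)
print(out.min(), out.max())  # 0.0 1.0 -- full dynamic range after stretching
```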

  19. Distortion-free wide-angle 3D imaging and visualization using off-axially distributed image sensing.

    PubMed

    Zhang, Miao; Piao, Yongri; Kim, Nam-Woo; Kim, Eun-Soo

    2014-07-15

    We propose a new off-axially distributed image sensing (ODIS) using a wide-angle lens for reconstructing distortion-free wide-angle slice images computationally. In the proposed system, the wide-angle image sensor captures a wide-angle 3D scene, and thus the collected information of the 3D objects is severely distorted. To correct this distortion, we introduce a new correction process involving a wide-angle lens to the computational reconstruction in ODIS. This enables us to reconstruct distortion-free, wide-angle slice images for visualization of 3D objects. Experiments were carried out to verify the proposed method. To the best of our knowledge, this is the first time the use of a wide-angle lens in a multiple-perspective 3D imaging system is described.

  20. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  1. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been pursued rigorously. Studies in wind turbine blade modeling mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, a modeling study of wind turbine blades with visualization experiments is needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep is added to the rotating blades. The added sweep is expected to enhance or diminish outward flow disturbance or stall development propagation on the spanwise blade surfaces, giving a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force of the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter, employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying the tuft-visualization technique, to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the 3-dimensional blade system.

  2. A lightweight tangible 3D interface for interactive visualization of thin fiber structures.

    PubMed

    Jackson, Bret; Lau, Tung Yuen; Schroeder, David; Toussaint, Kimani C; Keefe, Daniel F

    2013-12-01

    We present a prop-based, tangible interface for 3D interactive visualization of thin fiber structures. These data are commonly found in current bioimaging datasets, for example second-harmonic generation microscopy of collagen fibers in tissue. Our approach uses commodity visualization technologies such as a depth sensing camera and low-cost 3D display. Unlike most current uses of these emerging technologies in the games and graphics communities, we employ the depth sensing camera to create a fish-tank stereoscopic virtual reality system at the scientist's desk that supports tracking of small-scale gestures with objects already found in the work space. We apply the new interface to the problem of interactive exploratory visualization of three-dimensional thin fiber data. A critical task for the visual analysis of these data is understanding patterns in fiber orientation throughout a volume. The interface enables a new, fluid style of data exploration and fiber orientation analysis by using props to provide needed passive-haptic feedback, making 3D interactions with these fiber structures more controlled. We also contribute a low-level algorithm for extracting fiber centerlines from volumetric imaging. The system was designed and evaluated with two biophotonic experts who currently use it in their lab. As compared to typical practice within their field, the new visualization system provides a more effective way to examine and understand the 3D bioimaging datasets they collect.

  3. Method for modeling post-mortem biometric 3D fingerprints

    NASA Astrophysics Data System (ADS)

    Rajeev, Srijith; Shreyas, Kamath K. M.; Agaian, Sos S.

    2016-05-01

    Despite the advancements of fingerprint recognition in the 2-D and 3-D domains, authenticating deformed/post-mortem fingerprints continues to be an important challenge. Prior cleansing and reconditioning of the deceased finger is required before acquisition of the fingerprint. The victim's finger needs to be precisely and carefully handled by an operator to record the fingerprint impression. This process may damage the structure of the finger, which subsequently leads to higher false rejection rates. This paper proposes a non-invasive method to perform 3-D deformed/post-mortem finger modeling, which produces a 2-D rolled-equivalent fingerprint for automated verification. The presented novel modeling method involves masking, filtering, and unrolling. Computer simulations were conducted on finger models with different depth variations obtained from Flashscan3D LLC. Results illustrate that the modeling scheme provides a viable 2-D fingerprint of deformed models for automated verification. The quality and adaptability of the obtained unrolled 2-D fingerprints were analyzed using NIST fingerprint software. The presented method could eventually be extended to other biometric traits such as the palm, foot, and tongue for security and administrative applications.

  4. Discrete Method of Images for 3D Radio Propagation Modeling

    NASA Astrophysics Data System (ADS)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  5. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    NASA Astrophysics Data System (ADS)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  6. Parallel 3D Mortar Element Method for Adaptive Nonconforming Meshes

    NASA Technical Reports Server (NTRS)

    Feng, Huiyu; Mavriplis, Catherine; VanderWijngaart, Rob; Biswas, Rupak

    2004-01-01

    High order methods are frequently used in computational simulation for their high accuracy. An efficient way to avoid unnecessary computation in smooth regions of the solution is to use adaptive meshes, which employ fine grids only in areas where they are needed. Nonconforming spectral elements allow the grid to be flexibly adjusted to satisfy the computational accuracy requirements. The method is suitable for computational simulations of unsteady problems with very disparate length scales or unsteady moving features, such as heat transfer, fluid dynamics or flame combustion. In this work, we select the Mortar Element Method (MEM) to handle the nonconforming interfaces between elements. A new technique is introduced to efficiently implement MEM on 3-D nonconforming meshes. By introducing an "intermediate mortar", the proposed method decomposes the projection between 3-D elements and mortars into two steps. In each step, projection matrices derived in 2-D are used. The two-step method avoids explicitly forming/deriving large projection matrices for 3-D meshes, and also helps to simplify the implementation. This new technique can be used for both h- and p-type adaptation. This method is applied to an unsteady 3-D moving heat source problem. With our new MEM implementation, mesh adaptation is able to efficiently refine the grid near the heat source and coarsen the grid once the heat source passes. The savings in computational work resulting from the dynamic mesh adaptation are demonstrated by the reduction in the number of elements used and CPU time spent. MEM and mesh adaptation, respectively, bring irregularity and dynamics to the computer memory access pattern. Hence, they provide a good way to gauge the performance of computer systems when running scientific applications whose memory access patterns are irregular and unpredictable. We select a 3-D moving heat source problem as the Unstructured Adaptive (UA) grid benchmark, a new component of the NAS Parallel Benchmarks.

  7. Reconstruction and Visualization of Coordinated 3D Cell Migration Based on Optical Flow.

    PubMed

    Kappe, Christopher P; Schütz, Lucas; Gunther, Stefan; Hufnagel, Lars; Lemke, Steffen; Leitte, Heike

    2016-01-01

    Animal development is marked by the repeated reorganization of cells and cell populations, which ultimately determine form and shape of the growing organism. One of the central questions in developmental biology is to understand precisely how cells reorganize, as well as how and to what extent this reorganization is coordinated. While modern microscopes can record video data for every cell during animal development in 3D+t, analyzing these videos remains a major challenge: reconstruction of comprehensive cell tracks turned out to be very demanding especially with decreasing data quality and increasing cell densities. In this paper, we present an analysis pipeline for coordinated cellular motions in developing embryos based on the optical flow of a series of 3D images. We use numerical integration to reconstruct cellular long-term motions in the optical flow of the video, we take care of data validation, and we derive a LIC-based, dense flow visualization for the resulting pathlines. This approach allows us to handle low video quality such as noisy data or poorly separated cells, and it allows the biologists to get a comprehensive understanding of their data by capturing dynamic growth processes in stills. We validate our methods using three videos of growing fruit fly embryos.
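    The core numerical-integration step, reconstructing long-term motion by integrating seed points through per-frame optical flow, can be sketched with forward-Euler integration. This is a simplified 2D stand-in for the paper's 3D+t pipeline, and the names are illustrative:

```python
import numpy as np

def integrate_pathline(flow_frames, start, dt=1.0):
    """Forward-Euler integration of one seed point through a sequence of
    per-frame displacement fields of shape (ny, nx, 2)."""
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for flow in flow_frames:
        # Nearest-neighbour flow sample, clamped to the image bounds.
        iy = max(0, min(int(round(float(pos[0]))), flow.shape[0] - 1))
        ix = max(0, min(int(round(float(pos[1]))), flow.shape[1] - 1))
        pos = pos + dt * flow[iy, ix]
        path.append(pos.copy())
    return np.array(path)

# Uniform rightward flow of 1 px/frame over 5 frames: the seed drifts right.
frames = [np.tile(np.array([0.0, 1.0]), (16, 16, 1)) for _ in range(5)]
path = integrate_pathline(frames, start=(8.0, 2.0))
print(path[-1])  # [8. 7.]
```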

  8. PointCloudExplore 2: Visual exploration of 3D gene expression

    SciTech Connect

    International Research Training Group Visualization of Large and Unstructured Data Sets, University of Kaiserslautern, Germany; Institute for Data Analysis and Visualization, University of California, Davis, CA; Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA; Genomics Division, LBNL; Computer Science Department, University of California, Irvine, CA; Computer Science Division, University of California, Berkeley, CA; Life Sciences Division, LBNL; Department of Molecular and Cellular Biology and the Center for Integrative Genomics, University of California, Berkeley, CA; Ruebel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Keranen, Soile V.E.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; DePace, Angela H.; Simirenko, L.; Eisen, Michael B.; Biggin, Mark D.; Hagen, Hans; Malik, Jitendra; Knowles, David W.; Hamann, Bernd

    2008-03-31

    To better understand how developmental regulatory networks are defined in the genome sequence, the Berkeley Drosophila Transcription Network Project (BDTNP) has developed a suite of methods to describe 3D gene expression data, i.e., the output of the network at cellular resolution for multiple time points. To allow researchers to explore these novel data sets we have developed PointCloudXplore (PCX). In PCX we have linked physical and information visualization views via the concept of brushing (cell selection). For each view, dedicated operations for performing selection of cells are available. In PCX, all cell selections are stored in a central management system. Cells selected in one view can in this way be highlighted in any view, allowing further cell subset properties to be determined. Complex cell queries can be defined by combining different cell selections using logical operations such as AND, OR, and NOT. Here we provide an overview of PointCloudXplore 2 (PCX2), the latest publicly available version of PCX. PCX2 has shown to be an effective tool for visual exploration of 3D gene expression data. We discuss (i) all views available in PCX2, (ii) different strategies to perform cell selection, (iii) the basic architecture of PCX2, and (iv) illustrate the usefulness of PCX2 using selected examples.
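The brushing-and-combination idea can be illustrated with plain boolean masks: each "brush" is a per-cell selection, and AND/OR/NOT combinations yield compound queries. The attribute names and thresholds below are invented for the example; PCX itself manages selections across linked views.

```python
import numpy as np

# Hypothetical per-cell attributes for a handful of cells.
expr = np.array([0.1, 0.8, 0.5, 0.9, 0.2])   # expression level
xpos = np.array([10., 40., 60., 80., 90.])   # cell x position

# Two independent "brushes" (cell selections), one per view.
high_expr = expr > 0.4        # selected in a scatter-plot view
posterior = xpos > 50.0       # selected in a physical view

# Logical combinations, as in PCX's central selection management.
both = high_expr & posterior                  # AND
either = high_expr | posterior                # OR
high_not_posterior = high_expr & ~posterior   # AND NOT
```

A cell highlighted by any combined mask can then be re-highlighted in every linked view, which is the essence of brushing.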

  9. Evaluation of neuroanatomical training using a 3D visual reality model.

    PubMed

    Brewer, Danielle N; Wilson, Timothy D; Eagleson, Roy; de Ribaupierre, Sandrine

    2012-01-01

    As one of the more difficult components of any curriculum, neuroanatomy poses many challenges to students - not only because of the numerous discrete structures, but also due to the complicated spatial relations between them, which must be learned. Traditional anatomical education uses 2D images with a focus on dissection. This approach tends to underestimate the cognitive leaps required between textbook, lecture, and dissection cases. With reduced anatomical teaching time available, and varying student spatial abilities, new techniques are needed for training. The goal of this study is to assess the improvement of trainee understanding of 3D brain anatomy, orientation, visualization, and navigation through the use of digital training regimes in comparison with current methods. Two subsets of health science and medical students were tested individually after being given a group lecture and either a pre- or post-dissection digital lab. Results suggest that exposure to a 3D digital lab may improve knowledge acquisition and understanding by the students, particularly for first-time learners. PMID:22356963

  10. Intraoperative 3D stereo visualization for image-guided cardiac ablation

    NASA Astrophysics Data System (ADS)

    Azizian, Mahdi; Patel, Rajni

    2011-03-01

    There are commercial products which provide 3D rendered volumes, reconstructed from electro-anatomical mapping and/or pre-operative CT/MR images of a patient's heart with tools for highlighting target locations for cardiac ablation applications. However, it is not possible to update the three-dimensional (3D) volume intraoperatively to provide the interventional cardiologist with more up-to-date feedback at each instant of time. In this paper, we describe the system we have developed for real-time three-dimensional stereo visualization for cardiac ablation. A 4D ultrasound probe is used to acquire and update a 3D image volume. A magnetic tracking device is used to track the distal part of the ablation catheter in real time and a master-slave robot-assisted system is developed for actuation of a steerable catheter. Three-dimensional ultrasound image volumes go through some processing to make the heart tissue and the catheter more visible. The rendered volume is shown in a virtual environment. The catheter can also be added as a virtual tool to this environment to achieve a higher update rate on the catheter's position. The ultrasound probe is also equipped with an EM tracker which is used for online registration of the ultrasound images and the catheter tracking data. The whole augmented reality scene can be shown stereoscopically to enhance depth perception for the user. We have used transthoracic echocardiography (TTE) instead of the conventional transoesophageal (TEE) or intracardiac (ICE) echocardiogram. A beating heart model has been used to perform the experiments. This method can be used both for diagnostic and therapeutic applications as well as training interventional cardiologists.

  11. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to traditional local visualization solutions, our solution doesn't require users to install extra software or download the whole volume dataset from the PACS server. With this web-based solution, users can access the 3D medical image visualization service wherever internet access is available.

  12. Wavefront scanning method for minimum traveltime calculations in 3-D

    SciTech Connect

    Meng, F.; Liu, H.; Li, Y.

    1994-12-31

    This paper proposes an efficient way to calculate the shortest traveltime and its corresponding raypath in three dimensions, by using point secondary approximation to depict the wavefront and propagating the traveltime computation along recursively expanding and contracting cubic boxes. The method has the following advantages: (1) the computation order is O(N), where N is the total number of discrete secondary nodes; (2) the memory occupation is relatively small; (3) the algorithm is robust even for high velocity contrasts; (4) the minimum traveltime and raypath are computed accurately. These properties make this 3-D wavefront scanning raytracing method a practical tool for 3-D seismic prestack migration and velocity analysis, as well as forward waveform modeling by Maslov asymptotic ray theory.
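The minimum-traveltime computation on a gridded slowness model can be illustrated with a standard Dijkstra-style solver. This is a hedged stand-in, not the authors' wavefront-scanning algorithm: it is O(N log N) with 6-connected neighbors, whereas the paper's scheme achieves O(N) by expanding cubic boxes.

```python
import heapq
import numpy as np

def min_traveltime(slowness, src):
    """Dijkstra shortest traveltime on a 3-D grid.
    Edge cost between neighbors = average slowness * unit spacing."""
    nz, ny, nx = slowness.shape
    t = np.full(slowness.shape, np.inf)
    t[src] = 0.0
    heap = [(0.0, src)]
    nbrs = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while heap:
        d, (k, j, i) = heapq.heappop(heap)
        if d > t[k, j, i]:
            continue  # stale heap entry
        for dk, dj, di in nbrs:
            kk, jj, ii = k + dk, j + dj, i + di
            if 0 <= kk < nz and 0 <= jj < ny and 0 <= ii < nx:
                nd = d + 0.5 * (slowness[k, j, i] + slowness[kk, jj, ii])
                if nd < t[kk, jj, ii]:
                    t[kk, jj, ii] = nd
                    heapq.heappush(heap, (nd, (kk, jj, ii)))
    return t
```

The raypath is then recovered by walking back from any receiver along steepest traveltime descent, mirroring the paper's raypath extraction.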

  13. 3D flow visualization and tomographic particle image velocimetry for vortex breakdown over a non-slender delta wing

    NASA Astrophysics Data System (ADS)

    Wang, ChengYue; Gao, Qi; Wei, RunJie; Li, Tian; Wang, JinJun

    2016-06-01

    Volumetric measurement of the leading-edge vortex (LEV) breakdown over a delta wing has been conducted by three-dimensional (3D) flow visualization and tomographic particle image velocimetry (TPIV). The 3D flow visualization is employed to show the vortex structures, recorded by four high-resolution cameras. 3D dye streaklines of the visualization are reconstructed in a manner similar to particle reconstruction in TPIV. Tomographic PIV is carried out at the same time, using the same cameras as the dye visualization. The Q criterion is employed to identify the LEV. Results of tomographic PIV agree well with the reconstructed 3D dye streaklines, which supports the validity of the measurements. The time-averaged flow field based on TPIV is shown and described by sections of velocity and streamwise vorticity. Combining the two measurement methods sheds light on the complex structures of both the bubble and spiral types of breakdown. The breakdown position is recognized by investigating both the streaklines and the TPIV velocity fields. Proper orthogonal decomposition is applied to extract a pair of conjugated helical instability modes from the TPIV data. The dominant frequency of the instability modes is then obtained from the corresponding POD coefficients of the modes using wavelet transform analysis.
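The proper orthogonal decomposition step can be sketched with the standard snapshot-SVD formulation (a generic sketch, not the authors' exact processing): stack velocity snapshots as columns, subtract the mean flow, and take the SVD. The leading columns of `U` are the spatial modes and the rows of the coefficient matrix are the temporal mode amplitudes whose spectra reveal dominant frequencies.

```python
import numpy as np

def pod(snapshots):
    """Snapshot POD of a (n_points, n_snapshots) velocity matrix.
    Returns spatial modes, singular values (mode energies), and
    temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                     # subtract mean flow
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    return U, s, s[:, None] * Vt                 # modes, energies, coeffs
```

For a conjugated mode pair (as in the helical instability), the two corresponding coefficient time series are roughly 90 degrees out of phase at the same dominant frequency.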

  14. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  15. Trans3D: a free tool for dynamical visualization of EEG activity transmission in the brain.

    PubMed

    Blinowski, Grzegorz; Kamiński, Maciej; Wawer, Dariusz

    2014-08-01

    The problem of functional connectivity in the brain is in the focus of attention nowadays, since it is crucial for understanding information processing in the brain. A large repertoire of measures of connectivity has been devised, some of them capable of estimating time-varying directed connectivity. Hence, there is a need for a dedicated software tool for visualizing the propagation of electrical activity in the brain. To this aim, the Trans3D application was developed. It is an open access tool based on widely available libraries and supporting Windows XP/Vista/7™, Linux and Mac environments. Trans3D can create animations of activity propagation between electrodes/sensors, which can be placed by the user on the scalp/cortex of a 3D model of the head. Various interactive graphic functions for manipulating and visualizing components of the 3D model and input data are available. An application of the Trans3D tool has helped to elucidate the dynamics of the phenomena of information processing in motor and cognitive tasks, which otherwise would have been very difficult to observe. Trans3D is available at: http://www.eeg.pl/.

  16. Procession: using intelligent 3D information visualization to support client understanding during construction projects

    NASA Astrophysics Data System (ADS)

    North, Steve

    2000-02-01

    The latest results in the development of the software tool 'Procession' are presented. The research underlying Procession delivers a conceptual 3D framework for the interpretation of non-physical construction industry processes. Procession is the implementation of the proposed 3D framework as an information visualization software tool. The conceptual transformation of construction clients' informational needs into 3D visual structures is documented. Also discussed is the development of an 'intelligent' software process to calculate the relevance of individual project elements. This is used to determine the representation of project elements within a 3D surface. Construction is not short of technologies for visualizing physical building models. However, it would seem that little or no consideration has been given to improving the intelligibility of non-physical construction processes. This type of information is usually known as Project Planning data and is concerned with the individual tasks that make up construction projects. While there are software applications that allow the professional members of the project team to access this data, clients are currently without a suitable tool. Procession's data surface is an abstract representation of three selected project dimensions. Its 3D progress reports provide construction clients with an 'at-a-glance' indication of project 'health'.

  17. Incremental learning of 3D-DCT compact representations for robust visual tracking.

    PubMed

    Li, Xi; Dick, Anthony; Shen, Chunhua; van den Hengel, Anton; Wang, Hanzi

    2013-04-01

    Visual tracking usually requires an object appearance model that is robust to changing illumination, pose, and other factors encountered in video. Many recent trackers utilize appearance samples in previous frames to form the bases upon which the object appearance model is built. This approach has the following limitations: 1) The bases are data driven, so they can be easily corrupted, and 2) it is difficult to robustly update the bases in challenging situations. In this paper, we construct an appearance model using the 3D discrete cosine transform (3D-DCT). The 3D-DCT is based on a set of cosine basis functions which are determined by the dimensions of the 3D signal and thus independent of the input video data. In addition, the 3D-DCT can generate a compact energy spectrum whose high-frequency coefficients are sparse if the appearance samples are similar. By discarding these high-frequency coefficients, we simultaneously obtain a compact 3D-DCT-based object representation and a signal reconstruction-based similarity measure (reflecting the information loss from signal reconstruction). To efficiently update the object representation, we propose an incremental 3D-DCT algorithm which decomposes the 3D-DCT into successive operations of the 2D discrete cosine transform (2D-DCT) and 1D discrete cosine transform (1D-DCT) on the input video data. As a result, the incremental 3D-DCT algorithm only needs to compute the 2D-DCT for newly added frames as well as the 1D-DCT along the third dimension, which significantly reduces the computational complexity. Based on this incremental 3D-DCT algorithm, we design a discriminative criterion to evaluate the likelihood of a test sample belonging to the foreground object. We then embed the discriminative criterion into a particle filtering framework for object state inference over time. Experimental results demonstrate the effectiveness and robustness of the proposed tracker.
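The separability property that the incremental algorithm exploits — a 3-D DCT factors into per-frame 2-D DCTs followed by a 1-D DCT along the time axis — can be verified in a few lines with explicit orthonormal DCT-II matrices. This is a sketch of the factorization only, not the full incremental tracker.

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix of size N."""
    n = np.arange(N)
    C = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    C[0] *= np.sqrt(1.0 / N)
    C[1:] *= np.sqrt(2.0 / N)
    return C

def dct3_separable(cube):
    """3-D DCT of an (h, w, T) appearance cube computed as per-frame
    2-D DCTs followed by a 1-D DCT along the third (time) axis --
    the factorization behind the incremental update."""
    h, w, T = cube.shape
    Ch, Cw, Ct = dct_matrix(h), dct_matrix(w), dct_matrix(T)
    # Step 1: 2-D DCT of each frame (only new frames need this).
    frames = np.einsum('ij,jkt,lk->ilt', Ch, cube, Cw)
    # Step 2: 1-D DCT along the time axis.
    return np.einsum('ts,ils->ilt', Ct, frames)
```

Because step 1 is independent per frame, appending a frame only requires one new 2-D DCT plus the cheap 1-D transforms along time, which is the source of the claimed complexity reduction.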

  18. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, to support decision making and set a mark in urban development. MOMRA is responsible for large-scale mapping at 1:1,000; 1:2,500; 1:10,000 and 1:20,000 scales, for 10cm, 20cm and 40cm GSD with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization capabilities and animation support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surface and undulations. Real-time 3D visualization and interactive exploration thus support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for handling exotic conditions through better and more advanced viewing technology. Riyadh on one side is 5700m above sea level while Abha city is 2300m; this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities.
In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distance) with Aerial Triangulation is examined for 3D visualization in different regions of the Kingdom, to check which scale yields better results while remaining cost-manageable, with GSD (7.5cm, 10cm, 20cm and 40cm
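The GSD values discussed above relate flying height, focal length, and detector pixel pitch in the usual photogrammetric way; a tiny illustration (the camera parameters below are invented for the example, not MOMRA's):

```python
def ground_sample_distance(height_m, focal_m, pixel_pitch_m):
    """GSD = flying height * detector pixel pitch / focal length.
    All inputs in metres; returns the ground footprint of one pixel."""
    return height_m * pixel_pitch_m / focal_m

# A camera with a 100 mm focal length and 6 micron pixels flown at
# 1250 m above ground yields a 7.5 cm GSD (illustrative numbers).
gsd = ground_sample_distance(1250.0, 0.100, 6e-6)
```

Halving the flying height (or doubling the focal length) halves the GSD, which is why finer GSDs cost more flight lines per unit area.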

  19. Research and implementation of visualization techniques for 3D explosion fields

    NASA Astrophysics Data System (ADS)

    Ning, Jianguo; Xu, Xiangzhao; Ma, Tianbao; Yu, Wen

    2015-12-01

    The visualization of scalar data in 3D explosion fields was devised to address the complex physics and huge data volumes in numerical simulations of explosion mechanics problems. To enhance the explosion effects and reduce the burden of image analysis, an adjustment coefficient was added to the original Phong illumination model. A variety of accelerated volume rendering algorithms and multithreading techniques were used to realize fast rendering and real-time interactive control of 3D explosion fields. A cutaway view was implemented, so an arbitrary section of the 3D explosion field can be inspected conveniently. Slices can be extracted along the three axes of the 3D explosion field, and the value at an arbitrary point on a slice can be obtained. The experimental results show that the accelerated volume rendering algorithms generate high-quality images and increase the speed of image generation, while achieving fast interactive control.
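The abstract does not give the exact form of the adjustment coefficient, so the sketch below is a hedged guess: a hypothetical `adjust` factor scaling the lit (diffuse plus specular) terms of a standard Phong model, leaving the ambient term fixed.

```python
import numpy as np

def phong(normal, light, view, ka=0.1, kd=0.6, ks=0.3,
          shininess=16, adjust=1.0):
    """Phong illumination for given normal/light/view directions.
    `adjust` is a hypothetical coefficient scaling the diffuse and
    specular terms, standing in for the paper's enhancement."""
    n = normal / np.linalg.norm(normal)
    l = light / np.linalg.norm(light)
    v = view / np.linalg.norm(view)
    diff = max(np.dot(n, l), 0.0)
    r = 2.0 * np.dot(n, l) * n - l          # reflection direction
    spec = max(np.dot(r, v), 0.0) ** shininess
    return ka + adjust * (kd * diff + ks * spec)
```

With `adjust=1.0` this reduces to the standard Phong model; raising it exaggerates shading contrast, one plausible way to "enhance the explosion effects" in a rendered field.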

  20. Cluster Analysis and Web-Based 3-D Visualization of Large-scale Geophysical Data

    NASA Astrophysics Data System (ADS)

    Kadlec, B. J.; Yuen, D. A.; Bollig, E. F.; Dzwinel, W.; da Silva, C. R.

    2004-05-01

    We present a problem-solving environment, WEB-IS (Web-based Data Interrogative System), which we have developed for remote analysis and visualization of geophysical data [Garbow et al., 2003]. WEB-IS employs agglomerative clustering methods intended for feature extraction and studying the predictions of large-magnitude earthquake events. Data mining is accomplished using a mutual nearest neighbor (MNN) algorithm for extracting event clusters of different densities and shapes based on a hierarchical proximity measure. Clustering schemes used in molecular dynamics [Da Silva et al., 2002] are also considered for increasing computational efficiency, using a linked cell algorithm for creating a Verlet neighbor list (VNL) and extracting different cluster structures by applying a canonical backtracking search on the VNL. Space and time correlations between the events are visualized dynamically in 3-D through a filter by showing clusters at different timescales, according to defined units of time ranging from days to years. This WEB-IS functionality was tested both on synthetic [Eneva and Ben-Zion, 1997] and actual earthquake catalogs of Japanese earthquakes, and can be applied to the soft-computing data mining methods used in hydrology and geoinformatics. Da Silva, C.R.S., Justo, J.F., Fazzio, A., Phys. Rev. B, 65, 2002. Eneva, M., Ben-Zion, Y., J. Geophys. Res., 102, 17785-17795, 1997. Garbow, Z.A., Yuen, D.A., Erlebacher, G., Bollig, E.F., Kadlec, B.J., Vis. Geosci., 2003.
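A minimal sketch of mutual-nearest-neighbor clustering: link points that appear in each other's k-nearest-neighbor lists, then take connected components via union-find. This is a generic illustration of the MNN idea, not the hierarchical-proximity variant used in WEB-IS.

```python
import numpy as np

def mnn_clusters(points, k=2):
    """Cluster points by linking mutual k-nearest neighbors and
    extracting connected components (union-find)."""
    n = len(points)
    d = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    np.fill_diagonal(d, np.inf)            # exclude self-distance
    knn = np.argsort(d, axis=1)[:, :k]     # k nearest neighbors each
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]  # path halving
            a = parent[a]
        return a
    for i in range(n):
        for j in knn[i]:
            if i in knn[j]:                # mutual neighbors -> link
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]
```

Because links require mutuality, dense clusters are kept together while isolated events do not bridge distant groups, which is what lets MNN extract clusters of different density and shape.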

  1. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  2. Role of Interaction in Enhancing the Epistemic Utility of 3D Mathematical Visualizations

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2010-01-01

    Many epistemic activities, such as spatial reasoning, sense-making, problem solving, and learning, are information-based. In the context of epistemic activities involving mathematical information, learners often use interactive 3D mathematical visualizations (MVs). However, performing such activities is not always easy. Although it is generally…

  3. 2D but not 3D: pictorial-depth deficits in a case of visual agnosia.

    PubMed

    Turnbull, Oliver H; Driver, Jon; McCarthy, Rosaleen A

    2004-01-01

    Patients with visual agnosia exhibit acquired impairments in visual object recognition, which may or may not involve deficits in low-level perceptual abilities. Here we report a case (patient DM) who after head injury presented with object-recognition deficits. He still appears able to extract 2D information from the visual world in a relatively intact manner, but his ability to extract pictorial information about 3D object structure is greatly compromised. His copying of line drawings is relatively good, and he is accurate and shows apparently normal mental rotation when matching or judging objects tilted in the picture-plane. But he performs poorly on a variety of tasks requiring 3D representations to be derived from 2D stimuli, including: performing mental rotation in depth, rather than in the picture-plane; judging the relative depth of two regions depicted in line-drawings of objects; and deciding whether a line-drawing represents an object that is 'impossible' in 3D. Interestingly, DM failed to show several visual illusions experienced by normal observers (Müller-Lyer and Ponzo), which some authors have attributed to pictorial depth cues. Taken together, these findings indicate a deficit in achieving 3D interpretations of objects from 2D pictorial cues, which may contribute to object-recognition problems in agnosia.

  4. System and method for 3D printing of aerogels

    DOEpatents

    Worsley, Marcus A.; Duoss, Eric; Kuntz, Joshua; Spadaccini, Christopher; Zhu, Cheng

    2016-03-08

    A method of forming an aerogel. The method may involve providing a graphene oxide powder and mixing the graphene oxide powder with a solution to form an ink. A 3D printing technique may be used to write the ink into a catalytic solution that is contained in a fluid containment member to form a wet part. The wet part may then be cured in a sealed container for a predetermined period of time at a predetermined temperature. The cured wet part may then be dried to form a finished aerogel part.

  5. Real-time 3D visualization of volumetric video motion sensor data

    SciTech Connect

    Carlson, J.; Stansfield, S.; Shawver, D.; Flachs, G.M.; Jordan, J.B.; Bao, Z.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can 'move' through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.

  6. 3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution

    NASA Astrophysics Data System (ADS)

    Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David

    2013-01-01

    Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. In their first application, we have used these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.

  7. Method for extracting the aorta from 3D CT images

    NASA Astrophysics Data System (ADS)

    Taeprasartsit, Pinyo; Higgins, William E.

    2007-03-01

    Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately, many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta, causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line Model Construction is done once, using several representative human MDCT images, and consists of the following steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the following operations: construct a likelihood image, perform global fitting of the precomputed models to the current case's likelihood image to find the best-fitting model, perform local fitting to adjust the medial axis to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta. The region recovery method consists of two steps: a model-based step and a region-growing step. The region-growing step can recover regions outside the model coverage, as well as non-circular tube structures. In our experiments, we used three models and achieved satisfactory results on twelve of thirteen test cases.
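The region-growing step can be illustrated generically: grow a 6-connected region outward from a seed voxel, accepting neighbors whose intensity falls within a window. The thresholds and function name are illustrative only; the paper's version is constrained by the fitted model and likelihood image.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, low, high):
    """Grow a 6-connected region from `seed`, accepting voxels whose
    intensity lies in [low, high]. Returns a boolean mask."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    q = deque([seed])
    mask[seed] = True
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
            p = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(p, volume.shape)) \
                    and not mask[p] and low <= volume[p] <= high:
                mask[p] = True
                q.append(p)
    return mask
```

Seeding from voxels already covered by the fitted medial-axis model would then let growth recover aortic regions the model misses, in the spirit of the two-step recovery described above.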

  8. Physical sensor difference-based method and virtual sensor difference-based method for visual and quantitative estimation of lower limb 3D gait posture using accelerometers and magnetometers.

    PubMed

    Liu, Kun; Inoue, Yoshio; Shibata, Kyoko

    2012-01-01

    An approach using a physical sensor difference-based algorithm and a virtual sensor difference-based algorithm to visually and quantitatively confirm lower limb posture was proposed. Three accelerometers and two MAG(3)s (inertial sensor modules) were used to measure the accelerations and magnetic field data for the calculation of flexion/extension (FE) and abduction/adduction (AA) angles of the hip joint and FE, AA and internal/external rotation (IE) angles of the knee joint; then, the trajectories of the knee and ankle joints were obtained from the joint angles and segment lengths. No integration of acceleration or angular velocity was required for the joint rotations and positions, which is an improvement over previous methods in the recent literature. Compared with the camera motion capture system, the correlation coefficients in five trials were above 0.91 and 0.92 for the hip FE and AA, respectively, and higher than 0.94, 0.93 and 0.93 for the knee joint FE, AA and IE, respectively.
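Computing joint trajectories from joint angles and segment lengths is forward kinematics. A simplified sagittal-plane (FE-only) sketch with the hip at the origin is shown below; the paper additionally uses AA and IE angles to place the joints in full 3D, and the function name here is illustrative.

```python
import numpy as np

def limb_positions(hip_fe, knee_fe, thigh_len, shank_len):
    """Sagittal-plane knee and ankle positions from hip and knee
    flexion/extension angles (radians), hip at the origin.
    0 rad = segment hanging straight down; +x is forward."""
    knee = np.array([thigh_len * np.sin(hip_fe),
                     -thigh_len * np.cos(hip_fe)])
    shank_angle = hip_fe + knee_fe          # absolute shank angle
    ankle = knee + np.array([shank_len * np.sin(shank_angle),
                             -shank_len * np.cos(shank_angle)])
    return knee, ankle
```

Evaluating this chain at each time sample of the measured angles yields the knee and ankle trajectories directly, with no integration of acceleration or angular velocity.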

  9. 3-D visualization and identification of biological microorganisms using partially temporal incoherent light in-line computational holographic imaging.

    PubMed

    Moon, Inkyu; Javidi, Bahram

    2008-12-01

    We present a new method for three-dimensional (3-D) visualization and identification of biological microorganisms using partially temporal incoherent light in-line (PTILI) computational holographic imaging and multivariate statistical methods. For 3-D data acquisition of biological microorganisms, the band-pass filtered white light is used to illuminate a biological sample. The transversely and longitudinally diffracted pattern of the biological sample is magnified by microscope objective (MO) and is optically recorded with an image sensor array interfaced with a computer. Three-dimensional reconstruction of the biological sample from the diffraction pattern is accomplished by using computational Fresnel propagation method. Principal components analysis and nonparametric inference algorithms are applied to the 3-D complex amplitude biological sample for identification purposes. Experiments indicate that the proposed system can be useful for identifying biological microorganisms. To the best of our knowledge, this is the first report on using PTILI computational holographic microscopy for identification of biological microorganisms.
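Computational Fresnel propagation of a recorded diffraction pattern can be sketched with the closely related angular-spectrum method: transform the field, multiply by a free-space transfer function, and transform back. This is a generic FFT-based sketch, not the authors' exact reconstruction or their partially coherent illumination model.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a complex field a distance z using the angular
    spectrum method (a common route to Fresnel reconstruction).
    dx is the pixel pitch; all lengths in metres."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Propagating with negative z refocuses the hologram back to the sample plane; since the transfer function is unitary on the propagating band, a forward-then-backward round trip recovers the original field.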

  10. Photographing Internal Fractures of the Archaeological Statues with 3D Visualization of Ground Penetrating Radar Data

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.; Kadioglu, Y. K.

    2009-04-01

    The aim of this study is to illustrate a new approach for imaging discontinuities in archaeological statues before restoration, using the ground penetrating radar (GPR) method. The method was successfully applied to detect and map the fractures and cavities of the two monument groups and the lion statues in Mustafa Kemal ATATURK's tomb (ANITKABIR) in Ankara, Turkey. The tomb, built between 1944 and 1953, represents the Turkish people and Ataturk, the founder of the Republic of Turkey; the monument is therefore very important to the Turkish people. The monument groups and lion statues were built from travertine rocks. These travertines have a vesicular texture of about 12 percent and are mainly composed of calcite and aragonite with rare amounts of plant relicts and clay minerals. The concentrations of Fe, Mg, Cl and Mn may account for their colours, which range from white through pale green to beige. The atmospheric contamination of Ankara has covered parts of the travertine surfaces with a thin, blackish film of Pb. Micro-fractures were observed under the polarizing microscope, especially at the rims of the vesicles. Parallel two-dimensional (2D) GPR profiles with 10 cm profile spacing were acquired with a RAMAC CU II system and a 1600 MHz shielded antenna on the monument groups (three women, three men and 24 lion statues), and a three-dimensional (3D) data volume was then built from the parallel 2D GPR data. Air-filled fractures and cavities in the
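
    Building the 3D volume from parallel 2D profiles, as described above, amounts to stacking the profiles along the acquisition axis and interpolating between them. The data layout and function names below are illustrative assumptions, not the survey software's actual code.

    ```python
    import math

    def build_volume(profiles, profile_spacing=0.10):
        """Stack parallel 2-D GPR profiles (each a [trace][sample] amplitude
        grid) into a 3-D volume; the outer index is the profile (cross-line)
        axis, spaced profile_spacing metres apart."""
        n_traces, n_samples = len(profiles[0]), len(profiles[0][0])
        for p in profiles:
            assert len(p) == n_traces and all(len(tr) == n_samples for tr in p)
        return {"spacing_m": profile_spacing,
                "data": [[row[:] for row in p] for p in profiles]}

    def interp_slice(volume, y_m):
        """Linearly interpolate a cross-line slice at position y_m (metres)
        between the two nearest acquired profiles."""
        d, data = volume["spacing_m"], volume["data"]
        pos = y_m / d
        i0 = min(int(math.floor(pos)), len(data) - 2)
        w = pos - i0
        return [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
                for r0, r1 in zip(data[i0], data[i0 + 1])]
    ```

    Iso-surfaces extracted from the interpolated volume are what make internal fractures and cavities visible in 3D.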

  11. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than instantaneous streamlines do. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several million grid points and tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
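
    The streamline/streakline distinction drawn above can be sketched directly: a streamline integrates through one frozen time step, while a streakline releases a particle from the seed at every time step and advects all of them through the time-dependent field. The explicit Euler integrator and 2-D field interface below are simplifying assumptions (the NASA system's integrator is not specified in this abstract).

    ```python
    def advect(p, velocity, t, dt):
        """One explicit Euler step of a particle through a velocity field."""
        vx, vy = velocity(p[0], p[1], t)
        return (p[0] + vx * dt, p[1] + vy * dt)

    def streamline(seed, velocity, t_fixed, n, dt):
        """Instantaneous streamline: integrate through the field frozen at t_fixed."""
        pts = [seed]
        for _ in range(n):
            pts.append(advect(pts[-1], velocity, t_fixed, dt))
        return pts

    def streakline(seed, velocity, t0, t1, dt):
        """Release a new particle from `seed` at every time step and advect
        ALL released particles forward through the unsteady field; the final
        positions trace the streakline."""
        particles, t = [], t0
        while t < t1 - 1e-12:
            particles.append(seed)          # release a new particle
            particles = [advect(p, velocity, t, dt) for p in particles]
            t += dt
        return particles
    ```

    For a steady field the two curves coincide; for an unsteady field they diverge, which is exactly why streaklines reveal information that instantaneous streamlines miss.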

  12. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for the spatial difficulties encountered with traditional static learning, as they provide direct visualization of change across viewpoints. However, little research has explored the interplay between the presentation format of learning material, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performance, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability was found for a specific anatomical task. This result highlighted the influence of presentation formats when spatial abilities are involved, as well as the differentiated influence of spatial abilities on anatomical tasks.

  13. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates.

  14. Optical clearing based cellular-level 3D visualization of intact lymph node cortex

    PubMed Central

    Song, Eunjoo; Seo, Howon; Choe, Kibaek; Hwang, Yoonha; Ahn, Jinhyo; Ahn, Soyeon; Kim, Pilhan

    2015-01-01

    The lymph node (LN) is an important immune organ that controls adaptive immune responses against foreign pathogens and abnormal cells. To facilitate efficient immune function, the LN has highly organized 3D cellular structures and vascular and lymphatic systems. Unfortunately, conventional histological analysis relying on thin-sliced tissue has limitations for 3D cellular analysis due to structural disruption and tissue loss during fixation and slicing. Optical sectioning confocal microscopy has been utilized to analyze the 3D structure of intact LN tissue without physical slicing. However, light scattering within biological tissues limits the imaging depth to the superficial portion of the LN cortex. Recently, optical clearing techniques have shown enhanced imaging depth in various biological tissues, but their efficacy for the LN remained to be investigated. In this work, we established an optical clearing procedure for the LN and achieved 3D volumetric visualization of the whole LN cortex. A more than 4-fold improvement in imaging depth was confirmed using LNs obtained from H2B-GFP/actin-DsRed double reporter transgenic mice. With adoptive transfer of GFP-expressing B cells and DsRed-expressing T cells and fluorescent vascular labeling by anti-CD31 and anti-LYVE-1 antibody conjugates, we successfully visualized major cellular-level structures such as the T-cell zone, B-cell follicles and germinal centers. Further, we visualized a GFP-expressing metastatic melanoma cell colony, vasculature and lymphatic vessels in the LN cortex. PMID:26504662

  16. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase in spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is operated by highly trained personnel under an abundance of software and tools, lacking interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme upon a three-dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully accepted due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system, which includes a time-of-flight camera, allowing for robust and real-time depth map generation, along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed for the 3D user interface are the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. The first results shown concern a projected GIS representation where the user selects points

  17. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor-related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event-related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI-based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic ERD patterns of the upper alpha band components for left and right limb MI over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI-based BCI rehabilitation.
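
    The ERD measure used above, a power decrease in the upper alpha band during MI relative to a rest interval, can be computed as in this sketch. The naive DFT band-power estimate and the reference/activity interval handling are illustrative assumptions rather than the study's exact pipeline.

    ```python
    import cmath, math

    def band_power(x, fs, f_lo, f_hi):
        """Mean squared spectral magnitude of samples x between f_lo and
        f_hi Hz, computed bin by bin with a naive DFT."""
        N = len(x)
        powers = []
        for k in range(N // 2 + 1):
            f = k * fs / N
            if f_lo <= f <= f_hi:
                X = sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                        for n in range(N))
                powers.append(abs(X) ** 2 / N)
        return sum(powers) / len(powers)

    def erd_percent(reference, activity, fs, f_lo=10.0, f_hi=12.0):
        """Pfurtscheller-style ERD%: band-power decrease during the task
        relative to a resting reference interval; positive values mean
        desynchronization."""
        r = band_power(reference, fs, f_lo, f_hi)
        a = band_power(activity, fs, f_lo, f_hi)
        return (r - a) / r * 100.0
    ```

    Halving the alpha amplitude during MI quarters the band power, giving an ERD of 75%.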

  19. Unified framework for generation of 3D web visualization for mechatronic systems

    NASA Astrophysics Data System (ADS)

    Severa, O.; Goubej, M.; Konigsmarkova, J.

    2015-11-01

    The paper deals with the development of a unified framework for the generation of 3D visualizations of complex mechatronic systems. It provides a high-fidelity representation of executed motion by allowing direct employment of a machine geometry model acquired from a CAD system. An open-architecture, multi-platform solution based on the latest web standards is achieved by utilizing a web browser as the final 3D renderer. The results are applicable both to simulations and to the development of real-time human-machine interfaces. A case study of autonomous underwater vehicle control is provided to demonstrate the applicability of the proposed approach.

  20. Real-time visualization of 3-D dynamic microscopic objects using optical diffraction tomography.

    PubMed

    Kim, Kyoohyun; Kim, Kyung Sang; Park, Hyunjoo; Ye, Jong Chul; Park, Yongkeun

    2013-12-30

    3-D refractive index (RI) distribution is an intrinsic bio-marker for the chemical and structural information about biological cells. Here we develop an optical diffraction tomography technique for the real-time reconstruction of 3-D RI distribution, employing sparse angle illumination and a graphics processing unit (GPU) implementation. The execution time for the tomographic reconstruction is 0.21 s for 96³ voxels, which is 17 times faster than that of a conventional approach. We demonstrated the real-time visualization capability by imaging the dynamics of the Brownian motion of an anisotropic colloidal dimer and the dynamic shape change of a red blood cell under shear flow.

  1. Texture-based visualization of unsteady 3D flow by real-time advection and volumetric illumination.

    PubMed

    Weiskopf, Daniel; Schafhitzel, Tobias; Ertl, Thomas

    2007-01-01

    This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures by physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: First, gradient-based illumination that employs a real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.
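
    A CPU sketch of the core texture-advection step may help here: the paper's version runs on the GPU with 3D grids packed into 2D textures, but the underlying operation is a backward (semi-Lagrangian) lookup with bilinear filtering. The 2-D grid-of-lists layout below is an illustrative assumption.

    ```python
    def bilinear(tex, x, y):
        """Sample a 2-D texture (list of rows) with bilinear filtering,
        clamping coordinates to the texture bounds."""
        h, w = len(tex), len(tex[0])
        x = min(max(x, 0.0), w - 1.0)
        y = min(max(y, 0.0), h - 1.0)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0
        top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx
        bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx
        return top * (1 - fy) + bot * fy

    def advect_texture(tex, velocity, t, dt):
        """Backward semi-Lagrangian advection: each cell pulls its value from
        the position the flow carried it from, which is unconditionally
        stable regardless of dt."""
        h, w = len(tex), len(tex[0])
        out = []
        for j in range(h):
            row = []
            for i in range(w):
                vx, vy = velocity(i, j, t)
                row.append(bilinear(tex, i - vx * dt, j - vy * dt))
            out.append(row)
        return out
    ```

    On the GPU this per-cell gather maps naturally onto texture fetches, which is what makes dense advection of noise textures interactive.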

  2. A New Navigation Method for 3D Virtual Environment Exploration

    NASA Astrophysics Data System (ADS)

    Haydar, Mahmoud; Maidi, Madjid; Roussel, David; Mallem, Malik

    2009-03-01

    Navigation in virtual environments is a complex task which imposes a high cognitive load on the user: it consists of maintaining knowledge of the user's current position and orientation while moving through the space. In this paper, we present a novel approach for navigation in 3D virtual environments. The method is based on the principle of skiing: the idea is to give the user total control of navigation speed and rotation using the two hands. This technique enables user-steered exploration by determining the direction and speed of motion from the positions of the user's hands. A speed-control module lets the user easily adjust the speed via the angle between the hands, while the direction of motion is given by the axis orthogonal to the segment joining the two hands. A user study shows the efficiency of the method in performing exploration tasks in complex, large-scale 3D environments. Furthermore, we propose an experimental protocol to show that this technique provides a high level of navigation guidance and control, achieving significantly better performance than simple navigation techniques.
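
    The hand-based steering described above can be sketched as follows. The abstract does not specify the exact mapping from the inter-hand angle to speed, so the linear mapping, 2-D hand coordinates, and saturation at 90° below are assumptions for illustration.

    ```python
    import math

    def navigation_command(left, right, max_speed=1.0):
        """Derive a motion direction and speed from two 2-D hand positions.
        Direction is orthogonal to the segment joining the hands; speed
        grows with the tilt angle of that segment (assumed mapping)."""
        dx, dy = right[0] - left[0], right[1] - left[1]
        norm = math.hypot(dx, dy)
        if norm == 0.0:
            return (0.0, 0.0), 0.0          # hands coincide: stand still
        direction = (-dy / norm, dx / norm)  # orthogonal to the hand segment
        angle = abs(math.atan2(dy, dx))      # tilt of the hand segment
        speed = max_speed * min(angle / (math.pi / 2), 1.0)
        return direction, speed
    ```

    With level hands the user coasts; tilting the hand segment, as when angling skis, increases speed toward max_speed while rotating the segment steers.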

  3. The development of a 3D risk analysis method.

    PubMed

    I, Yet-Pole; Cheng, Te-Lung

    2008-05-01

    Much attention has been paid to quantitative risk analysis (QRA) research in recent years due to the increasingly severe disasters that have happened in the process industries. Owing to the computational complexity, very few software packages, such as SAFETI, can really make the risk presentation meet practical requirements. However, the traditional risk presentation method, like the individual risk contour in SAFETI, is mainly based on the consequence analysis results of dispersion modeling, which usually assumes that the vapor cloud disperses over a constant ground roughness on a flat terrain with no obstructions or concentration fluctuations; this is quite different from the real situation of a chemical process plant. These models usually over-predict the hazardous regions in order to remain conservative, which also increases the uncertainty of the simulation results. On the other hand, a more rigorous model such as a computational fluid dynamics (CFD) model can overcome these limitations; however, it cannot by itself resolve the complexity of the risk calculations. In this research, a conceptual three-dimensional (3D) risk calculation method is proposed, combining the results of a series of CFD simulations with post-processing procedures to obtain 3D individual-risk iso-surfaces. It is believed that such a technique will not be limited to risk analysis at ground level, but can also be extended to aerial, submarine, or space risk analyses in the near future.

  5. A 3D Vector/Scalar Visualization and Particle Tracking Package

    SciTech Connect

    Freitag, Lori; Disz, Terry; Papka, Mike; Heath, Daniel; Diachin, Darin; Herzog, Jim; Ryan, and Bob

    1999-08-19

    BOILERMAKER is an interactive visualization system consisting of three components: a visualization component, a particle tracking component, and a communication layer. The software, to date, has been used primarily in the visualization of vector and scalar fields associated with computational fluid dynamics (CFD) models of flue gas flows in industrial boilers and incinerators. Users can interactively request and toggle static vector fields, dynamic streamlines, and flowing vector fields. In addition, the user can interactively place injector nozzles on boiler walls and visualize massed, evaporating sprays emanating from them. Some characteristics of the spray can be adjusted from within the visualization environment including spray shape and particle size. Also included with this release is software that supports 3D menu capabilities, scrollbars, communication and navigation.

  6. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

    Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with external electronic devices. Steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option not only for the handicapped but also for the public to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is only little literature on SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, various stimuli and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show a significant difference (p < 0.05) between large and small disparity angles, and the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. Based on the 3D perception results and SSVEP responses (SNR), 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications. Furthermore, the 3D perception of users can be inferred from their SSVEP responses, allowing the disparity of 3D images to be adjusted automatically in the future.
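
    The SNR figure of merit used above is commonly computed as the spectral power at the stimulus frequency divided by the mean power at neighbouring frequencies. The single-bin DFT estimator below, and the neighbour spacing and count, are assumed parameters rather than this study's exact definition.

    ```python
    import cmath, math

    def bin_power(x, fs, f):
        """Spectral power of samples x at frequency f via a single-bin DFT."""
        N = len(x)
        X = sum(x[n] * cmath.exp(-2j * math.pi * f * n / fs) for n in range(N))
        return abs(X) ** 2 / N

    def ssvep_snr(x, fs, f_stim, n_side=2, df=0.5):
        """SNR as power at the stimulus frequency divided by the mean power
        at n_side neighbouring frequencies on each side, spaced df Hz apart."""
        signal = bin_power(x, fs, f_stim)
        noise = [bin_power(x, fs, f_stim + k * df)
                 for k in range(-n_side, n_side + 1) if k != 0]
        return signal / (sum(noise) / len(noise))
    ```

    A strong SSVEP at the flicker frequency stands far above the neighbouring bins, giving a large SNR; weak 3D percepts shrink it.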

  7. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  8. Visualization of 3D geometric models of the breast created from contrast-enhanced MRI

    NASA Astrophysics Data System (ADS)

    Leader, J. Ken, III; Wang, Xiao Hui; Chang, Yuan-Hsiang; Chapman, Brian E.

    2002-05-01

    Contrast enhanced breast MRI is currently used as an adjuvant modality to x-ray mammography because of its ability to resolve ambiguities and determine the extent of malignancy. This study described techniques to create and visualize 3D geometric models of abnormal breast tissue. MRIs were performed on a General Electric 1.5 Tesla scanner using dual phased array breast coils. Image processing tasks included: 1) correction of image inhomogeneity caused by the coils, 2) segmentation of normal and abnormal tissue, and 3) modeling and visualization of the segmented tissue. The models were visualized using object-based surface rendering which revealed characteristics critical to differentiating benign from malignant tissue. Surface rendering illustrated the enhancement distribution and enhancement patterns. The modeling process condensed the multi-slice MRI data information and standardized its interpretation. Visualizing the 3D models should improve the radiologist's and/or surgeon's impression of the 3D shape, extent, and accessibility of the malignancy compared to viewing breast MRI data slice by slice.

  9. Fusion of CTA and XA data using 3D centerline registration for plaque visualization during coronary intervention

    NASA Astrophysics Data System (ADS)

    Kaila, Gaurav; Kitslaar, Pieter; Tu, Shengxian; Penicka, Martin; Dijkstra, Jouke; Lelieveldt, Boudewijn

    2016-03-01

    Coronary Artery Disease (CAD) results in the buildup of plaque below the intima layer inside the vessel wall of the coronary arteries, narrowing the vessel and obstructing blood flow. Percutaneous coronary intervention (PCI) is usually done to enlarge the vessel lumen and restore normal blood flow to the heart. During PCI, X-ray imaging is done to assist guide-wire movement through the vessels to the area of stenosis. While X-ray imaging allows for good lumen visualization, information on plaque type is unavailable. Also, due to the projection nature of X-ray imaging, additional drawbacks such as foreshortening and overlap of vessels limit the efficacy of the cardiac intervention. Reconstruction of 3D vessel geometry from biplane X-ray acquisitions helps to overcome some of these projection drawbacks; however, the plaque type information remains an issue. In contrast, imaging using computed tomography angiography (CTA) can provide information on both lumen and plaque type and allows us to generate a complete 3D coronary vessel tree unaffected by the foreshortening and overlap problems of X-ray imaging. In this paper, we combine biplane X-ray images with CT angiography to visualize three plaque types (dense calcium, fibrous fatty and necrotic core) on X-ray images. 3D registration using three different registration methods is done between coronary centerlines available from the X-ray images and from the CTA volume, along with 3D plaque information available from CTA. We compare the different registration methods and evaluate their performance based on 3D root-mean-squared errors. Two methods are used to project this 3D information onto the 2D plane of the biplane X-ray images. Validation of our approach is performed using artificial biplane X-ray datasets.
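
    The evaluation metric named above, 3D root-mean-squared error between registered centerlines, is easy to sketch once point correspondences are established; the correspondence assumption and the helper for ranking candidate registrations are illustrative, not the paper's code.

    ```python
    import math

    def rmse_3d(points_a, points_b):
        """Root-mean-squared Euclidean distance between corresponding
        3-D points (assumes equal-length, matched point lists)."""
        assert len(points_a) == len(points_b) and points_a
        total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                    for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b))
        return math.sqrt(total / len(points_a))

    def best_registration(fixed, candidates):
        """Pick the candidate registration (a list of transformed centerline
        points) with the lowest 3-D RMSE against the fixed centerline."""
        return min(range(len(candidates)),
                   key=lambda i: rmse_3d(fixed, candidates[i]))
    ```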

  10. Remote Visualization and Navigation of 3d Models of Archeological Sites

    NASA Astrophysics Data System (ADS)

    Callieri, M.; Dellepiane, M.; Scopigno, R.

    2015-02-01

    The remote visualization and navigation of 3D data directly inside the web browser is becoming a viable option, due to the recent efforts in standardizing the components for 3D rendering on the web platform. Nevertheless, handling complex models may be a challenge, especially when a more generic solution is needed to handle different cases. In particular, archeological and architectural models are usually hard to handle, since their navigation can be managed in several ways, and a completely free navigation may be misleading and unrealistic. In this paper we present a solution for the remote navigation of these datasets in a WebGL component. The navigation has two possible modes: the "bird's eye" mode, where the user sees the model from above, and the "first person" mode, where the user moves inside the structure. The two modalities are linked by a point of interest, which helps the user control the navigation in an intuitive fashion. Since the terrain may not be flat, and the architecture may be complex, it is necessary to handle these issues, possibly without implementing complex mesh-based collision mechanisms. Hence, a complete navigation is obtained by storing the height and collision information in an image, which provides a very simple source of data. Moreover, the same image-based approach can be used to store additional information that could enhance the navigation experience. The method has been tested in two complex test cases, showing that a simple yet powerful interaction can be obtained with limited pre-processing of data.
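    The image-based height lookup can be illustrated with a short sketch. This is a hypothetical reconstruction, not the paper's code: it bilinearly samples a precomputed height image at the walker's (x, y) position to set the first-person camera height, avoiding mesh-based collision tests entirely:

```python
import numpy as np

def eye_height(height_img, x, y, eye_offset=1.7):
    """Bilinearly sample a precomputed terrain-height image at (x, y)
    and return the first-person camera height above that point."""
    h, w = height_img.shape
    x = min(max(x, 0.0), w - 1.001)                 # clamp to valid range
    y = min(max(y, 0.0), h - 1.001)
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0                         # fractional offsets
    top = height_img[y0, x0] * (1 - fx) + height_img[y0, x0 + 1] * fx
    bot = height_img[y0 + 1, x0] * (1 - fx) + height_img[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy + eye_offset   # blend rows, add eye level
```

A collision mask encoding walkable pixels could be stored in another channel of the same image and tested with the same lookup.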

  11. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    1997-04-21

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero-area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n²) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high-performance interactive display and manipulation of 3D triangle mesh models.
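    The two error checks performed by transl8g, duplicate triangles and zero-area triangles, can be sketched as follows. This is a hypothetical illustration, not the original code; hashing the sorted vertex indices keeps the duplicate test near-linear, in the spirit of the O(n log n) reworking described above:

```python
import numpy as np

def check_mesh(vertices, triangles, eps=1e-12):
    """Flag duplicate triangles (same vertex set regardless of order)
    and zero-area (degenerate) triangles; returns two index lists."""
    seen, dupes, degenerate = set(), [], []
    for i, tri in enumerate(triangles):
        key = tuple(sorted(tri))                   # orientation-independent key
        if key in seen:
            dupes.append(i)
        seen.add(key)
        a, b, c = (np.asarray(vertices[v], dtype=float) for v in tri)
        # half the cross-product magnitude is the triangle's area
        if 0.5 * np.linalg.norm(np.cross(b - a, c - a)) < eps:
            degenerate.append(i)
    return dupes, degenerate
```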

  12. RUNTHRU6.0. Translation, Enhancement, Filtering, and Visualization of Large 3D Triangle Mesh

    SciTech Connect

    Janucik, F.X.; Ross, D.M.; Sischo, K.F.

    1997-01-01

    The runthru system consists of five programs: workcell filter, just do it, transl8g, decim8, and runthru. The workcell filter program is useful if the source of your 3D triangle mesh model is IGRIP. It will traverse a directory structure of Deneb IGRIP files and filter out any IGRIP part files that are not referenced by an accompanying IGRIP work cell file. The just do it program automates translating and/or filtering of large numbers of parts that are organized in hierarchical directory structures. The transl8g program facilitates the interchange, topology generation, error checking, and enhancement of large 3D triangle meshes. Such data is frequently used to represent conceptual designs, scientific visualization volume modeling, or discrete sample data. Interchange is provided between several popular commercial and de facto standard geometry formats. Error checking is included to identify duplicate and zero-area triangles. Model enhancement features include common vertex joining, consistent triangle vertex ordering, vertex normal vector averaging, and triangle strip generation. Many of the traditional O(n²) algorithms required to provide the above features have been recast and are O(n log n), which supports large mesh sizes. The decim8 program is based on a data filter algorithm that significantly reduces the number of triangles required to represent 3D models of geometry, scientific visualization results, and discretely sampled data. It eliminates local patches of triangles whose geometries are not appreciably different and replaces them with fewer, larger triangles. The algorithm has been used to reduce triangles in large conceptual design models to facilitate virtual walk-throughs and to enable interactive viewing of large 3D iso-surface volume visualizations. The runthru program provides high-performance interactive display and manipulation of 3D triangle mesh models.

  14. Visualization of 3D osteon morphology by synchrotron radiation micro-CT

    PubMed Central

    Cooper, D M L; Erickson, B; Peele, A G; Hannah, K; Thomas, C D L; Clement, J G

    2011-01-01

    Cortical bone histology has been the subject of scientific inquiry since the advent of the earliest microscopes. Histology – literally the study of tissue – is a field nearly synonymous with 2D thin sections. That said, progressive developments in high-resolution X-ray imaging are enabling 3D visualization to reach ever smaller structures. Micro-computed tomography (micro-CT), employing conventional X-ray sources, has become the gold standard for 3D analysis of trabecular bone and is capable of detecting the structure of vascular (osteonal) porosity in cortical bone. To date, however, direct 3D visualization of secondary osteons has eluded micro-CT based upon absorption-derived contrast. Synchrotron radiation micro-CT, through greater image quality, resolution and alternative contrast mechanisms (e.g. phase contrast), holds great potential for non-destructive 3D visualization of secondary osteons. Our objective was to demonstrate this potential and to discuss areas of bone research that can be advanced through the application of this approach. We imaged human mid-femoral cortical bone specimens derived from a 20-year-old male (Melbourne Femur Collection) at the Advanced Photon Source synchrotron (Chicago, IL, USA) using the 2BM beam line. A 60-mm distance between the target and the detector was employed to enhance visualization of internal structures through propagation phase contrast. Scan times were 1 h and images were acquired with 1.4-μm nominal isotropic resolution. Computer-aided manual segmentation and volumetric 3D rendering were employed to visualize secondary osteons and porous structures, respectively. Osteonal borders were evident via two contrast mechanisms. First, relatively new (hypomineralized) osteons were evident due to differences in X-ray attenuation relative to the surrounding bone. Second, osteon boundaries (cement lines) were delineated by phase contrast. Phase contrast also enabled the detection of soft tissue remnants within the

  15. Suitability of online 3D visualization technique in oil palm plantation management

    NASA Astrophysics Data System (ADS)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

    The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Therefore, many studies focus on how to help this industry increase its productivity. To increase productivity, the management of oil palm plantations needs to be improved and strengthened. One solution for helping oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. The potential of this application is that it can help in fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations from the viewpoint of oil palm managers, through interviews. The results of these interviews will help identify the issues that could be highlighted when implementing an online 3D visualization technique for oil palm plantation management.

  16. 3D visualization environment for analysis of telehealth indicators in public health.

    PubMed

    Filho, Amadeu S Campos; Novaes, Magdala A; Gomes, Alex S

    2013-01-01

    With the growth of telehealth applications, public health managers need tools that facilitate visualization of the indicators produced by telehealth services, so that interventions can be better planned. Furthermore, many health professionals consider health information systems difficult to use for finding the right information [1] because of the complexity of their Graphical User Interfaces (GUIs) and the high cognitive load needed to handle them. To overcome this problem, we have proposed a 3D environment for the analysis of telehealth indicators in public health by managers of public health sites. The users of the environment are public health managers of family health sites that participate in the Network of Telehealth Centers of Pernambuco (RedeNUTES) [2], part of the Brazilian telehealth program. This paper presents this 3D environment for the analysis of telehealth indicators by public health managers.

  17. Ergodic theory and experimental visualization of chaos in 3D flows

    NASA Astrophysics Data System (ADS)

    Sotiropoulos, Fotis; Mezic, Igor

    2000-11-01

    In his motivation for the ergodic hypothesis Gibbs invoked an analogy with fluid mixing: “…Yet no fact is more familiar to us than that stirring tends to bring a liquid to a state of uniform mixture, or uniform densities of its components…”. Although proof of the ergodic hypothesis is possible only for the simplest of systems using methods from ergodic theory, the use of the hypothesis has led to many accurate predictions in statistical mechanics. The problem of fluid mixing, however, turned out to be considerably more complicated than Gibbs envisioned. Chaotic advection can indeed lead to efficient mixing even in non-turbulent flows, but many non-mixed islands are known to persist within well-mixed regions. In numerical studies, Poincaré maps can be used to reveal the structure of such islands but their visualization in the laboratory requires laborious experimental procedures and is possible only for certain types of flows. Here we propose the first non-intrusive, simple to implement, and generally applicable technique for constructing experimental Poincaré maps and apply it to a steady, 3D, vortex breakdown bubble. We employ standard laser-induced fluorescence (LIF) and construct Poincaré maps by time averaging a sufficiently long sequence of instantaneous LIF images. We also show that ergodic theory methods provide a rigorous theoretical justification for this approach whose main objective is to reveal the non-ergodic regions of the flow.
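    The core of the measurement technique, time-averaging a long sequence of instantaneous LIF images so that regions the dye never visits remain dark, can be sketched in a few lines. This is an illustrative reconstruction with an assumed threshold parameter, not the authors' processing code:

```python
import numpy as np

def experimental_poincare_map(frames, island_thresh=0.1):
    """Approximate an experimental Poincare map by time-averaging a
    sequence of instantaneous LIF images; pixels whose mean intensity
    stays below `island_thresh` mark candidate non-mixed islands."""
    mean_img = np.mean(np.asarray(frames, dtype=float), axis=0)
    return mean_img, mean_img < island_thresh
```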

  18. The COMET method in 3-D hexagonal geometry

    SciTech Connect

    Connolly, K. J.; Rahnema, F.

    2012-07-01

    The hybrid stochastic-deterministic coarse mesh radiation transport (COMET) method developed at Georgia Tech now solves reactor core problems in 3-D hexagonal geometry. In this paper, the method is used to solve three preliminary test problems designed to challenge the method with steep flux gradients, high leakage, and strong asymmetry and heterogeneity in the core. The test problems are composed of blocks taken from a high temperature test reactor benchmark problem. As the method is still in development, these problems and their results are strictly preliminary. Results are compared to whole core Monte Carlo reference solutions in order to verify the method. Relative errors are on the order of 50 pcm in core eigenvalue, and mean relative error in pin fission density calculations is less than 1% in these difficult test cores. The method requires the one-time pre-computation of a response expansion coefficient library, which may be compiled in a comparable amount of time to a single whole core Monte Carlo calculation. After the library has been computed, COMET may solve any number of core configurations in on the order of an hour each, representing a significant gain in efficiency over other methods for whole core transport calculations. (authors)
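    For readers unfamiliar with the unit, eigenvalue discrepancies quoted in pcm (per cent mille, 1 pcm = 10⁻⁵) can be computed as below. This uses one common convention (relative k-effective difference); the authors' exact definition may differ:

```python
def pcm_error(k_eff, k_ref):
    """Eigenvalue discrepancy in pcm (1 pcm = 1e-5), expressed as the
    relative difference of k-effective against a reference solution."""
    return 1e5 * (k_eff - k_ref) / k_ref
```

So a COMET eigenvalue of 1.00050 against a Monte Carlo reference of 1.00000 corresponds to an error of about 50 pcm.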

  19. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text, contributes to the learning process of 13- and 14- years-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  20. Image processing and 3D visualization in the interpretation of patterned injury of the skin

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1995-09-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.

  1. Real-Time Modeling and 3D Visualization of Source Dynamics and Connectivity Using Wearable EEG

    PubMed Central

    Mullen, Tim; Kothe, Christian; Chi, Yu Mike; Ojeda, Alejandro; Kerth, Trevor; Makeig, Scott; Cauwenberghs, Gert; Jung, Tzyy-Ping

    2014-01-01

    This report summarizes our recent efforts to deliver real-time data extraction, preprocessing, artifact rejection, source reconstruction, multivariate dynamical system analysis (including spectral Granger causality) and 3D visualization as well as classification within the open-source SIFT and BCILAB toolboxes. We report the application of such a pipeline to simulated data and real EEG data obtained from a novel wearable high-density (64-channel) dry EEG system. PMID:24110155

  2. The 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Maffeo, R. J.; Tipton, M. T.; Weber, G.

    1992-01-01

    A two-year program to develop advanced 3D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades, and vanes is described. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded midsurface shell element; a nine-noded midsurface shell element; and a twenty-noded isoparametric solid element. A separate computer program has been developed for each combination of constitutive model-formulation model. Each program provides a functional stand alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.

  3. The 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Dame, L. T.; Mcknight, R. L.

    1983-01-01

    The objective of this research is to develop an analytical tool capable of economically evaluating the cyclic time dependent plasticity which occurs in hot section engine components in areas of strain concentration resulting from the combination of both mechanical and thermal stresses. The techniques developed must be capable of accommodating large excursions in temperatures with the associated variations in material properties including plasticity and creep. The overall objective of this proposed program is to develop advanced 3-D inelastic structural/stress analysis methods and solution strategies for more accurate and yet more cost effective analysis of combustors, turbine blades, and vanes. The approach will be to develop four different theories, one linear and three higher order with increasing complexities including embedded singularities.

  4. On 3D inelastic analysis methods for hot section components

    NASA Technical Reports Server (NTRS)

    Mcknight, R. L.; Chen, P. C.; Dame, L. T.; Holt, R. V.; Huang, H.; Hartle, M.; Gellin, S.; Allen, D. H.; Haisler, W. E.

    1986-01-01

    Accomplishments are described for the 2-year program to develop advanced 3-D inelastic structural stress analysis methods and solution strategies for more accurate and cost effective analysis of combustors, turbine blades and vanes. The approach was to develop a matrix of formulation elements and constitutive models. Three constitutive models were developed in conjunction with optimized iterating techniques, accelerators, and convergence criteria within a framework of dynamic time incrementing. Three formulation models were developed: an eight-noded mid-surface shell element, a nine-noded mid-surface shell element, and a twenty-noded isoparametric solid element. A separate computer program was developed for each combination of constitutive model-formulation model. Each program provides a functional stand alone capability for performing cyclic nonlinear structural analysis. In addition, the analysis capabilities incorporated into each program can be abstracted in subroutine form for incorporation into other codes or to form new combinations.

  5. Correlation between a perspective distortion in a S3D content and the visual discomfort perceived

    NASA Astrophysics Data System (ADS)

    Doyen, D.; Sacré, J.-J.; Blondé, L.

    2012-03-01

    Perspective distortion will occur in stereoscopic 3D (S3D) when the relative disparity between elements generates a depth not in accordance with the relative size of the presented objects. Subjective tests have been conducted using test sequences where the shooting parameters are perfectly known and where the vergence/accommodation conflict is not predominant. Perspective distortions occur with some of the sequences, depending on viewing conditions. People were asked to rate the sequences in terms of naturalness and visual comfort. The results of the tests revealed a clear correlation between perspective conflict and perceived visual discomfort. Whatever the shooting condition, parallel or toed-in cameras, the results are similar. A factor between depth and perspective can be calculated for each shooting configuration and viewing condition. This factor seems a relevant indicator for evaluating the comfort of S3D content perception. The subjective tests allowed us to better understand the link between perspective conflicts and visual comfort. Next, studies will be conducted to extend these tests to cinema conditions, where the range of viewing conditions is larger.
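    The geometry linking on-screen disparity to perceived depth, which underlies any such depth-versus-perspective factor, follows from similar triangles. The sketch below uses standard stereoscopic viewing geometry with assumed default values (3 m viewing distance, 65 mm interocular); it is a generic relation, not the authors' specific factor:

```python
def perceived_depth(disparity_m, viewing_distance_m=3.0, interocular_m=0.065):
    """Perceived depth of a fused point from its on-screen disparity,
    by similar triangles (positive disparity = behind the screen plane,
    zero disparity = on the screen)."""
    return viewing_distance_m * interocular_m / (interocular_m - disparity_m)
```

Doubling the viewing distance doubles the perceived depth for the same screen disparity, which is why a factor valid in one viewing condition must be recomputed for cinema conditions.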

  6. UCVM: An Open Source Software Package for Querying and Visualizing 3D Velocity Models

    NASA Astrophysics Data System (ADS)

    Gill, D.; Small, P.; Maechling, P. J.; Jordan, T. H.; Shaw, J. H.; Plesch, A.; Chen, P.; Lee, E. J.; Taborda, R.; Olsen, K. B.; Callaghan, S.

    2015-12-01

    Three-dimensional (3D) seismic velocity models provide foundational data for ground motion simulations that calculate the propagation of earthquake waves through the Earth. The Southern California Earthquake Center (SCEC) has developed the Unified Community Velocity Model (UCVM) package for both Linux and OS X. This unique framework provides a cohesive way for querying and visualizing 3D models. UCVM v14.3.0 supports many Southern California velocity models including CVM-S4, CVM-H 11.9.1, and CVM-S4.26. The last model was derived from 26 full-3D tomographic iterations on CVM-S4. Recently, UCVM has been used to deliver a prototype of a new 3D model of central California (CCA), also based on full-3D tomographic inversions. UCVM was used to provide initial plots of this model and will be used to deliver CCA to users when the model is publicly released. Visualizing models is also possible with UCVM. Integrated within the platform are plotting utilities that can generate 2D cross-sections, horizontal slices, and basin depth maps. UCVM can also export models in NetCDF format for easy import into IDV and ParaView. UCVM has also been prototyped to export models that are compatible with IRIS' new Earth Model Collaboration (EMC) visualization utility. This capability allows for user-specified horizontal slices and cross-sections to be plotted in the same 3D Earth space. UCVM was designed to help a wide variety of researchers. It is currently being used to generate velocity meshes for many SCEC wave propagation codes, including AWP-ODC-SGT and Hercules. It is also used to provide the initial input to SCEC's CyberShake platform. For those interested in specific data points, the software framework makes it easy to extract P and S wave propagation speeds and other material properties from 3D velocity models by providing a common interface through which researchers can query earth models for a given location and depth.
Also included in the last release was the ability to add small
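    The core query operation, extracting material properties at a given location and depth, amounts to interpolating a gridded model. The sketch below is a generic trilinear interpolation over a regular, unit-spaced grid; it illustrates the idea only and is not UCVM's actual API:

```python
import numpy as np

def query_vs(grid, x, y, z):
    """Trilinearly interpolate a wave-speed value from a regular 3-D
    grid indexed as grid[ix, iy, iz] with unit spacing, for interior
    (non-boundary) query points."""
    ix, iy, iz = int(x), int(y), int(z)
    fx, fy, fz = x - ix, y - iy, z - iz            # fractional offsets
    c = grid[ix:ix + 2, iy:iy + 2, iz:iz + 2]      # surrounding 2x2x2 cell
    cx = c[0] * (1 - fx) + c[1] * fx               # collapse x axis
    cxy = cx[0] * (1 - fy) + cx[1] * fy            # collapse y axis
    return cxy[0] * (1 - fz) + cxy[1] * fz         # collapse z axis
```

A real query interface would also convert geographic coordinates to grid indices and fall back to a background model outside the grid.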

  7. Methods of constructing a 3D geological model from scatter data

    SciTech Connect

    Horsman, J.; Bethel, W.

    1995-04-01

    Most geoscience applications, such as assessment of an oil reservoir or hazardous waste site, require geological characterization of the site. Geological characterization involves analysis of spatial distributions of lithology, porosity, etc. Because of the complexity of the spatial relationships, the authors find that a 3-D model of geology is better suited for integration of many different types of data and provides a better representation of a site than a 2-D one. A 3-D model of geology is constructed from sample data obtained from field measurements, which are usually scattered. To create a volume model from scattered data, interpolation between points is required. The interpolation can be computed using one of several computational algorithms. Alternatively, a manual method may be employed, in which an interactive graphics device is used to input by hand the information that lies between the data points. For example, a mouse can be used to draw lines connecting data points with equal values. The combination of these two methods presents yet another approach. In this study, the authors compare selected methods of 3-D geological modeling. They used a flow-based, modular visualization environment (AVS) to construct the geological models computationally. Within this system, they used three modules, scat_3d, trivar, and scatter_to_ucd, as examples of computational methods. They compare these methods to the combined manual and computational approach. Because there are no tools readily available in AVS for this type of construction, they used a geological modeling system to demonstrate this method.
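    Scattered-to-volume interpolation of the kind performed by these modules can be illustrated with inverse-distance weighting, one of the simplest computational algorithms for the job. This is an illustrative choice; the AVS modules named above implement their own schemes:

```python
import numpy as np

def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at `query` from scattered
    3-D sample points: nearer samples dominate the weighted average."""
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(query, float),
                       axis=1)
    if d.min() < eps:                       # query coincides with a sample
        return float(values[int(d.argmin())])
    w = 1.0 / d ** power                    # weight falls off with distance
    return float(np.dot(w, values) / w.sum())
```

Evaluating this on every node of a regular grid turns the scattered field measurements into a volume model ready for rendering.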

  8. Multimodal 3-D reconstruction of human anatomical structures using SurLens Visualization System.

    PubMed

    Adeshina, A M; Hashim, R; Khalid, N E A; Abidin, S Z Z

    2013-03-01

    In medical diagnosis and treatment planning, radiologists and surgeons rely heavily on the slices produced by medical imaging devices. Unfortunately, these image scanners can only present the 3-D human anatomical structure in 2-D. Traditionally, this requires the medical professionals concerned to study and analyze the 2-D images based on their expert experience. This is tedious, time consuming, and prone to error, especially when certain features are occluding the desired region of interest. Reconstruction procedures were proposed earlier to handle such situations. However, a 3-D reconstruction system requires high-performance computation and longer processing time. Integrating an efficient reconstruction system into clinical procedures involves a high resulting cost. Previously, reconstruction of the brain's blood vessels from MRA was achieved using the SurLens Visualization System. However, adapting such a system to other image modalities, applicable to the entire human anatomical structure, would be a meaningful contribution towards achieving a resourceful system for medical diagnosis and disease therapy. This paper attempts to adapt SurLens to the visualization of abnormalities in human anatomical structures using CT and MR images. The study was evaluated with brain MR images from the Department of Surgery, University of North Carolina, United States, and abdominal-pelvic CT images from the Swedish National Infrastructure for Computing. The MR images contain around 109 datasets each of T1-FLASH, T2-Weighted, DTI and T1-MPRAGE. Significantly, visualization of the human anatomical structure was achieved without prior segmentation. SurLens was adapted to visualize and display abnormalities, such as indications of Waldenström's macroglobulinemia, stroke and penetrating brain injury, in the human brain using Magnetic Resonance (MR) images. Moreover, possible abnormalities in the abdominal pelvis were also visualized using Computed Tomography (CT) slices. 
The study shows SurLens' functionality as

  9. Lattice Boltzmann Method for 3-D Flows with Curved Boundary

    NASA Technical Reports Server (NTRS)

    Mei, Renwei; Shyy, Wei; Yu, Dazhi; Luo, Li-Shi

    2002-01-01

    In this work, we investigate two issues that are important to computational efficiency and reliability in fluid dynamics applications of the lattice Boltzmann equation (LBE): (1) the computational stability and accuracy of different lattice Boltzmann models and (2) the treatment of boundary conditions on curved solid boundaries and their 3-D implementations. Three athermal 3-D LBE models (D3Q15, D3Q19, and D3Q27) are studied and compared in terms of efficiency, accuracy, and robustness. The boundary treatment recently developed by Filippova and Hänel and Mei et al. in 2-D is extended to and implemented for 3-D. The convergence, stability, and computational efficiency of the 3-D LBE models with the boundary treatment for curved boundaries were tested in simulations of four 3-D flows: (1) fully developed flow in a square duct, (2) flow in a 3-D lid-driven cavity, (3) fully developed flow in a circular pipe, and (4) uniform flow over a sphere. We found that while the fifteen-velocity 3-D (D3Q15) model is more prone to numerical instability and the D3Q27 model is more computationally intensive, the D3Q19 model provides a balance between computational reliability and efficiency. Through numerical simulations, we demonstrated that the boundary treatment for 3-D arbitrary curved geometry has second-order accuracy and possesses satisfactory stability characteristics.
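    The D3Q19 model singled out above uses one rest velocity, six face directions, and twelve edge directions with standard athermal weights. The snippet below writes the set down and checks the zeroth and second velocity moments that make the model isotropic (lattice sound speed c_s² = 1/3 in lattice units):

```python
import numpy as np

# D3Q19 velocity set: rest particle, 6 face neighbors, 12 edge neighbors.
C = np.array(
    [[0, 0, 0]]
    + [[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]]
    + [[1, 1, 0], [-1, -1, 0], [1, -1, 0], [-1, 1, 0],
       [1, 0, 1], [-1, 0, -1], [1, 0, -1], [-1, 0, 1],
       [0, 1, 1], [0, -1, -1], [0, 1, -1], [0, -1, 1]])

# Standard weights: 1/3 for the rest particle, 1/18 per face, 1/36 per edge.
W = np.array([1 / 3] + [1 / 18] * 6 + [1 / 36] * 12)

# Second velocity moment; isotropy requires sum_i W_i c_i c_i = (1/3) I.
M2 = np.einsum('i,ia,ib->ab', W, C, C)
```

The same check applied to D3Q15 and D3Q27 (with their own weight sets) confirms that all three models share the same sound speed, so the choice among them is purely a stability/cost trade-off.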

  10. Reconstructing the Curve-Skeletons of 3D Shapes Using the Visual Hull.

    PubMed

    Livesu, Marco; Guggeri, Fabio; Scateni, Riccardo

    2012-11-01

    Curve-skeletons are the most important descriptors for shapes, capable of capturing in a synthetic manner the most relevant features. They are useful for many different applications: from shape matching and retrieval, to medical imaging, to animation. This has led, over the years, to the development of several different techniques for extraction, each trying to comply with specific goals. We propose a novel technique which stems from the intuition of reproducing what a human being does to deduce the shape of an object: holding it in his or her hand and rotating it. To accomplish this, we use the formal definitions of epipolar geometry and visual hull. We show how it is possible to infer the curve-skeleton of a broad class of 3D shapes, along with an estimation of the radii of the maximal inscribed balls, by gathering information about the medial axes of their projections on the image planes of the stereographic vision. It is worth pointing out that our method works equally well on (even unoriented) polygonal meshes, voxel models, and point clouds. Moreover, it is insensitive to noise, pose-invariant, resolution-invariant, and robust when applied to incomplete data sets.

  11. Optoacoustic 3D visualization of changes in physiological properties of mouse tissues from live to postmortem

    NASA Astrophysics Data System (ADS)

    Su, Richard; Ermilov, Sergey A.; Liopo, Anton V.; Oraevsky, Alexander A.

    2012-02-01

    Using the method of 3D optoacoustic tomography, we studied changes in tissues of the whole body of nude mice as these changes manifested themselves from live to postmortem. The studies provided the necessary baseline for optoacoustic imaging of necrotizing tissue, acute and chronic hypoxia, and reperfusion. They also establish a new optoacoustic model of early postmortem conditions of the whole mouse body. Animals were scanned in a 37°C water bath using a three-dimensional optoacoustic tomography system previously shown to provide high-contrast maps of vasculature and organs based on changes in optical absorbance. The scans were performed right before and 5 minutes, 2 hours, and 1 day after a lethal injection of KCl. The near-infrared laser wavelength of 765 nm was used to evaluate physiological features of postmortem changes. Our data showed that optoacoustic imaging is well suited for visualization of both live and postmortem tissues. The images revealed changes of the optical properties in mouse organs and tissues. Specifically, we observed improvements in contrast of the vascular network and organs after the death of the animal. We associated these with reduced optical scattering, loss of motion artifacts, and blood coagulation.

  12. Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.

    2016-06-01

    Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of these studies. The main objective of this paper is to try to identify potential differences in user performance with static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task, and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. Movement and actions in the virtual environment were also recorded in the interactive variant. The results show that participants dealt with the tasks faster when using the static visualization; the average error rate, however, was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  13. Interactive Visualization of 3-D Mantle Convection Extended Through AJAX Applications

    NASA Astrophysics Data System (ADS)

    McLane, J. C.; Czech, W.; Yuen, D.; Greensky, J.; Knox, M. R.

    2008-12-01

    We have designed a new software system for real-time interactive visualization of results taken directly from large-scale simulations of 3-D mantle convection and other large-scale simulations. This approach allows for intense visualization sessions of a couple of hours, as opposed to storing massive amounts of data in a storage system. Our data sets consist of 3-D data for volume rendering with over 10 million unknowns at each timestep. Large-scale visualization on a display wall holding around 13 million pixels has already been accomplished, with extension to hand-held devices such as the OQO, the Nokia N800, and recently the iPhone. We are developing web-based software in Java to extend the use of this system across long distances. The software is aimed at creating an interactive and functional application capable of running on multiple browsers by taking advantage of two AJAX-enabled web frameworks: Echo2 and Google Web Toolkit. The software runs in two modes, allowing a user either to control an interactive session or to observe a session controlled by another user. The modular build of the system allows components to be swapped out for new ones, so that other forms of visualization can be accommodated, such as molecular dynamics in mineral physics or 2-D data sets from lithospheric regional models.

  14. Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology

    PubMed Central

    Fai, Stephen; Bennett, Steffany A.L.

    2010-01-01

    The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to reiterate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic editing and 3D modelling software, rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere. This

  15. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data has a far wider range of aspects which influence its quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107: a valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges, no gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closure of the bounding linear ring and planarity. 
On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges
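
    As an illustration of the polygon-level checks described above (planarity within a tolerance, closure of the bounding linear ring), here is a minimal sketch; the function names are ours, not CityDoctor's API:

```python
import numpy as np

def is_planar(points, tol=1e-3):
    """Polygon-level planarity check: fit a plane through the centroid via
    SVD and test the maximum vertex deviation against a tolerance."""
    P = np.asarray(points, dtype=float)
    Q = P - P.mean(axis=0)
    normal = np.linalg.svd(Q, full_matrices=False)[2][-1]  # least-variance axis
    return float(np.max(np.abs(Q @ normal))) <= tol

def ring_is_closed(points):
    """Closure check: a valid bounding linear ring ends on its first vertex."""
    P = np.asarray(points, dtype=float)
    return bool(np.allclose(P[0], P[-1]))

# A flat quad passes the planarity check; lifting one vertex breaks it.
assert is_planar([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)])
assert not is_planar([(0, 0, 0), (1, 0, 0), (1, 1, 1), (0, 1, 0)])
```

    The tolerance value `tol` is exactly the kind of parameter whose choice the paper discusses: too tight and measurement noise fails valid polygons, too loose and genuinely bent faces pass.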

  16. Analysis and Visualization of 3D Motion Data for UPDRS Rating of Patients with Parkinson's Disease.

    PubMed

    Piro, Neltje E; Piro, Lennart K; Kassubek, Jan; Blechschmidt-Trapp, Ronald A

    2016-01-01

    Remote monitoring of Parkinson's Disease (PD) patients with inertial sensors is a relevant method for a better assessment of symptoms. We present a new approach for symptom quantification based on motion data: automatic Unified Parkinson's Disease Rating Scale (UPDRS) classification in combination with an animated 3D avatar giving the neurologist the impression of having the patient live in front of them. In this study we compared the UPDRS ratings of the pronation-supination task derived from: (a) an examination based on video recordings as a clinical reference; (b) an automatically classified UPDRS; and (c) a UPDRS rating from the assessment of the animated 3D avatar. Data were recorded using Magnetic, Angular Rate, Gravity (MARG) sensors with 15 subjects performing a pronation-supination movement of the hand. After preprocessing, the data were classified with a J48 classifier and animated as a 3D avatar. The video recordings of the movements, as well as the 3D avatar, were examined by movement disorder specialists and rated by UPDRS. The mean agreement between the video-based ratings and (b) the automatically classified UPDRS is 0.48, and with (c) the 3D avatar it is 0.47. The 3D avatar is thus similarly suitable for assessing the UPDRS as video recordings for the examined task and will be further developed by the research team. PMID:27338400
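
    As a hypothetical illustration of the kind of kinematic features that could feed such a classifier (the abstract does not describe the authors' exact feature set), one might extract the dominant movement frequency and an amplitude estimate from the angular-rate signal of the pronation-supination task:

```python
import numpy as np

def pronosupination_features(gyro, fs):
    """Two illustrative features from an angular-rate trace: the dominant
    movement frequency (Hz) and an RMS-based cycle amplitude."""
    g = np.asarray(gyro, dtype=float)
    g = g - g.mean()
    spec = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fs)
    f_dom = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin
    amplitude = np.sqrt(2.0) * g.std()      # exact for a pure sinusoid
    return f_dom, amplitude
```

    Features of this kind (slowing frequency, decreasing amplitude) are what a decision-tree classifier such as J48 would map onto discrete UPDRS scores.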

  17. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focused on Web technologies for 3D visualization of spatial data and interaction with them via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterwards, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house testing web tool was developed and used, based on JavaScript, PHP, X3DOM, and the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  18. Visualization of anthropometric measures of workers in computer 3D modeling of work place.

    PubMed

    Mijović, B; Ujević, D; Baksa, S

    2001-12-01

    In this work, 3D visualization of a work place has been performed by means of a computer-made 3D machine model and computer animation of a worker. By visualization of 3D characters in inverse kinematic and dynamic relation with the operating part of a machine, the biomechanical characteristics of the worker's body have been determined. The dimensions of the machine have been determined by an inspection of technical documentation as well as by direct measurements and recordings of the machine by camera. On the basis of the measured body height of workers, all relevant anthropometric measures have been determined by a computer program developed by the authors. From the anthropometric measures, the fields of vision, and the reach zones used in forming work places, the exact postures of workers performing technological procedures were determined. The minimal and maximal rotation angles and the translation of the upper and lower arm, which are the basis for the analysis of worker load, were analyzed. The dimensions of the space occupied by the body are obtained by computer anthropometric analysis of movement, e.g. range of the arms, position of the legs, head and back. The influence of the forming of a work place on correct postures of workers during work has been reconsidered, so that energy consumption and fatigue can be reduced to a minimum. PMID:11811295
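
    Deriving anthropometric measures from measured body height, as the authors' program does, can be sketched with stature-proportional segment ratios such as the classical Drillis-Contini coefficients; the ratios below are illustrative, and the authors' program may use different ones:

```python
# Illustrative whole-body segment ratios (after Drillis & Contini);
# the coefficients used by the authors' program are not given in the abstract.
RATIOS = {
    "shoulder_height": 0.818,
    "eye_height": 0.936,
    "upper_arm": 0.186,
    "forearm": 0.146,
    "hand": 0.108,
}

def anthropometry(stature_cm):
    """Derive workplace-relevant body measures (cm) from measured body height."""
    return {segment: round(ratio * stature_cm, 1)
            for segment, ratio in RATIOS.items()}
```

    Measures like these, combined with joint rotation limits, are what fix the reach zones and fields of vision used when laying out the work place.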

  19. Interactive 3D Visualization of Humboldt Bay Bridge Earthquake Simulation With High Definition Stereo Output

    NASA Astrophysics Data System (ADS)

    Ang, P. B.; Nayak, A.; Yan, J.; Elgamal, A.

    2006-12-01

    This visualization project involves the study of the Humboldt Bay Middle Channel Bridge, a Pacific Earthquake Engineering Research (PEER) testbed site, subjected to an earthquake simulated by the Department of Structural Engineering, UCSD. The numerical simulation and data generation were carried out using the OpenSees finite element analysis platform, and GiD was employed for mesh generation in preprocessing. In collaboration with the Scripps Visualization Center, the data was transformed into a virtual 3D world that a viewer can rotate around, zoom into, pan about, step through timestep by timestep, or examine in true stereo. The data consist of the static mesh of the bridge-foundation-ground elements, material indices for each type of element, the displacement of each element node over time, and the shear stress levels for each ground element over time. The Coin3D C++ Open Inventor API was used to parse the data and to render the bridge system in full 3D at 1130 individual time steps, showing how the bridge structure and the surrounding soil elements interact during the full course of an earthquake. The results can be viewed interactively while using the program, saved as images, or processed into animated movies, at resolutions as high as High Definition (1920x1080) or in stereo modes such as red-blue anaglyph.

  20. XML-based 3D model visualization and simulation framework for dynamic models

    NASA Astrophysics Data System (ADS)

    Kim, Taewoo; Fishwick, Paul A.

    2002-07-01

    Relatively recent advances in computer technology enable us to create three-dimensional (3D) dynamic models and simulate them within a 3D web environment. The use of such models is especially valuable when teaching simulation and the concepts behind dynamic models, since the models are made more accessible to the students. Students tend to enjoy a construction process in which they are able to employ their own cultural and aesthetic forms. The challenge is to create a language that allows for a grammar for modeling, while simultaneously permitting arbitrary presentation styles. For further flexibility, we need an effective way to represent and simulate dynamic models that can be shared by modelers over the Internet. We present an Extensible Markup Language (XML)-based framework that guides a modeler in creating personalized 3D models, visualizing their dynamic behaviors, and simulating the created models. A model author uses XML files to represent the geometry and topology of a dynamic model. The Model Fusion Engine, written in Extensible Stylesheet Language Transformation (XSLT), expedites the modeling process by automating the creation of dynamic models from the user-defined XML files. Modelers can also link simulation programs with a created model to analyze its characteristics. The advantages of this system lie in the teaching of modeling and simulating dynamic models, and in the visualization of dynamic model behaviors.

  1. Efficient Structure-Aware Selection Techniques for 3D Point Cloud Visualizations with 2DOF Input.

    PubMed

    Yu, Lingyun; Efstathiou, K; Isenberg, P; Isenberg, T

    2012-12-01

    Data selection is a fundamental task in visualization because it serves as a pre-requisite to many follow-up interactions. Efficient spatial selection in 3D point cloud datasets consisting of thousands or millions of particles can be particularly challenging. We present two new techniques, TeddySelection and CloudLasso, that support the selection of subsets in large particle 3D datasets in an interactive and visually intuitive manner. Specifically, we describe how to spatially select a subset of a 3D particle cloud by simply encircling the target particles on screen using either the mouse or direct-touch input. Based on the drawn lasso, our techniques automatically determine a bounding selection surface around the encircled particles based on their density. This kind of selection technique can be applied to particle datasets in several application domains. TeddySelection and CloudLasso reduce, and in some cases even eliminate, the need for complex multi-step selection processes involving Boolean operations. This was confirmed in a formal, controlled user study in which we compared the more flexible CloudLasso technique to the standard cylinder-based selection technique. This study showed that the former is consistently more efficient than the latter - in several cases the CloudLasso selection time was half that of the corresponding cylinder-based selection.
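
    The first, screen-space step shared by lasso techniques of this kind, deciding which projected particles fall inside the drawn lasso, can be sketched with a standard even-odd point-in-polygon test (our illustration, not the authors' implementation; the density-based surface fitting that distinguishes CloudLasso comes afterwards):

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test for a 2D point against a closed polygon."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def lasso_select(points_2d, lasso):
    """Screen-space step of a lasso selection: indices of the projected
    particles that fall inside the drawn lasso polygon."""
    return [i for i, p in enumerate(points_2d) if point_in_polygon(p, lasso)]
```

    Restricting the subsequent 3D density analysis to the particles returned here is what lets such techniques avoid the multi-step Boolean refinement the study compares against.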

  2. Design and application of real-time visual attention model for the exploration of 3D virtual environments.

    PubMed

    Hillaire, Sébastien; Lécuyer, Anatole; Regia-Corte, Tony; Cozot, Rémi; Royan, Jérôme; Breton, Gaspard

    2012-03-01

    This paper studies the design and application of a novel visual attention model designed to compute the user's gaze position automatically, i.e., without using a gaze-tracking system. The model we propose is specifically designed for real-time first-person exploration of 3D virtual environments. It is the first model adapted to this context which can compute in real time a continuous gaze-point position instead of a set of 3D objects potentially observed by the user. To do so, contrary to previous models which use a mesh-based representation of visual objects, we introduce a representation based on surface elements. Our model also simulates the visual reflexes and cognitive processes which take place in the brain, such as the gaze behavior associated with first-person navigation in the virtual environment. Our visual attention model combines both bottom-up and top-down components to compute a continuous gaze-point position on screen that is intended to match the user's actual gaze. We conducted an experiment to study and compare the performance of our method with a state-of-the-art approach. Our results are significantly better, with accuracy gains sometimes exceeding 100 percent. This suggests that computing a gaze point in a 3D virtual environment in real time is possible and is a valid approach compared to object-based approaches. Finally, we describe different applications of our model when exploring virtual environments. We present different algorithms which can improve or adapt the visual feedback of virtual environments based on gaze information. We first propose a level-of-detail approach that heavily relies on multiple-texture sampling. We show that it is possible to use the gaze information of our visual attention model to increase visual quality where the user is looking, while maintaining a high refresh rate. 
    Second, we introduce the use of the visual attention model in three visual effects inspired by the human visual system, namely: depth-of-field blur, camera
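
    The combination of bottom-up and top-down components into a single on-screen gaze point can be sketched as a weighted sum of normalized attention maps followed by an argmax; the fixed weight here is illustrative, and the paper's actual combination is richer than this:

```python
import numpy as np

def gaze_point(bottom_up, top_down, w=0.5):
    """Blend a bottom-up saliency map with a top-down relevance map (both
    2D arrays over screen pixels) and return the most attended (row, col).
    The fixed weight w is an illustrative simplification."""
    def norm(m):
        m = m - m.min()
        peak = m.max()
        return m / peak if peak > 0 else m

    s = (w * norm(np.asarray(bottom_up, dtype=float))
         + (1 - w) * norm(np.asarray(top_down, dtype=float)))
    return np.unravel_index(np.argmax(s), s.shape)
```

    A gaze point computed this way can then drive the level-of-detail and depth-of-field effects the abstract describes, concentrating rendering quality where the model predicts the user is looking.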

  4. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills: from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as a transparent feature to explore how the layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided using either movies of the visualization (which can also be used as examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  5. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool in order to immediately observe the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of the attached passive markers, the torso movement and deformation is compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid model. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during the torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during a breathing exercise on an indoor bicycle or a treadmill.
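
    Rigid movement extraction from tracked markers of this kind is typically computed with a least-squares fit such as the Kabsch algorithm; here is a minimal sketch (our illustration of the standard technique, not the authors' code):

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid fit (Kabsch): rotation R and translation t such
    that R @ p + t best maps marker positions P (N x 3) onto their tracked
    positions Q; the residual, non-rigid part is then the deformation."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

    Subtracting the fitted rigid transform from the full marker motion leaves the torsional and bending components, which a non-rigid model can then remove in turn.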

  6. Unlocking the scientific potential of complex 3D point cloud dataset : new classification and 3D comparison methods

    NASA Astrophysics Data System (ADS)

    Lague, D.; Brodu, N.; Leroux, J.

    2012-12-01

    Ground-based lidar and photogrammetric techniques are increasingly used to track the evolution of natural surfaces in 3D at an unprecedented resolution and precision. The range of applications encompasses many types of natural surfaces with different geometries and roughness characteristics (landslides, cliff erosion, river beds, bank erosion, ...). Unravelling surface change in these contexts requires comparing large point clouds in 2D or 3D. The method most commonly used in geomorphology is based on a 2D difference of the gridded point clouds. Yet this is hardly suited to many 3D natural environments such as rivers (with horizontal beds and vertical banks), and gridding complex rough surfaces is itself a difficult task. On the other hand, tools for 3D comparison are scarce and may require meshing the point clouds, which is difficult on rough natural surfaces. Moreover, existing 3D comparison tools do not provide an explicit calculation of confidence intervals that would factor in registration errors, roughness effects, and instrument-related position uncertainties. To unlock this problem, we developed the first algorithm combining a 3D measurement of surface change directly on point clouds with an estimate of spatially variable confidence intervals (called M3C2). The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local roughness; (2) measurement of mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing 3D methods based on a closest-point calculation demonstrates the higher precision of the M3C2 method when mm-scale changes need to be detected. The M3C2 method is also simple to use as it does not require surface meshing or gridding, and is not sensitive to missing data or changes in point density. We also present a 3D classification tool (CANUPO) for vegetation removal based on a new geometrical measure: the multi
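
    A single-scale, single-core-point sketch of the two M3C2 steps (normal estimation, then mean change along the normal within a cylindrical neighbourhood) might look like the following; parameter names are illustrative and the confidence-interval calculation is omitted:

```python
import numpy as np

def m3c2_distance(core, cloud1, cloud2, d_normal=1.0, d_cyl=0.5):
    """Simplified M3C2 at one core point: (1) estimate the surface normal
    from cloud1 neighbours within d_normal via SVD, (2) compare the mean
    positions of both clouds along that normal inside a cylinder of
    radius d_cyl around the core point."""
    near = cloud1[np.linalg.norm(cloud1 - core, axis=1) < d_normal]
    Q = near - near.mean(axis=0)
    normal = np.linalg.svd(Q, full_matrices=False)[2][-1]  # least-variance axis

    def mean_along_normal(cloud):
        rel = cloud - core
        axial = rel @ normal
        radial = np.linalg.norm(rel - np.outer(axial, normal), axis=1)
        return axial[radial < d_cyl].mean()

    return mean_along_normal(cloud2) - mean_along_normal(cloud1)
```

    Because the comparison is done directly on the points along a locally estimated normal, neither cloud needs to be gridded or meshed, which is the property that makes the approach robust on rough natural surfaces.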

  7. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    PubMed

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data, or of hippocampal activity, in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. the timing of task execution and the dose applied. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying neural correlates of olfactory and visual memory performance. PMID:27492601

  9. TractRender: a new generalized 3D medical image visualization and output platform

    NASA Astrophysics Data System (ADS)

    Hwang, Darryl H.; Tsao, Sinchai; Gajawelli, Niharika; Law, Meng; Lepore, Natasha

    2015-01-01

    Diffusion MRI provides not only voxelized diffusion characteristics but also the potential to delineate neuronal fiber paths through tractography. There is a dearth of flexible open-source tractography software programs for visualizing these complicated 3D structures. Moreover, rendering these structures using various shadings, lightings, and representations results in a vastly different graphical feel. In addition, the ability to output these objects in various formats increases the utility of this platform. We have created TractRender, which leverages OpenGL features through Matlab, allowing for maximum ease of use while still maintaining the flexibility of custom scene rendering.

  10. Arena3D: visualizing time-driven phenotypic differences in biological systems

    PubMed Central

    2012-01-01

    Background Elucidating the genotype-phenotype connection is one of the big challenges of modern molecular biology. To fully understand this connection, it is necessary to consider the underlying networks and the time factor. In this context of data deluge and heterogeneous information, visualization plays an essential role in interpreting complex and dynamic topologies. Thus, software that is able to bring the network, phenotypic and temporal information together is needed. Arena3D has been previously introduced as a tool that facilitates link discovery between processes. It uses a layered display to separate different levels of information while emphasizing the connections between them. We present novel developments of the tool for the visualization and analysis of dynamic genotype-phenotype landscapes. Results Version 2.0 introduces novel features that allow handling time course data in a phenotypic context. Gene expression levels or other measures can be loaded and visualized at different time points and phenotypic comparison is facilitated through clustering and correlation display or highlighting of impacting changes through time. Similarity scoring allows the identification of global patterns in dynamic heterogeneous data. In this paper we demonstrate the utility of the tool on two distinct biological problems of different scales. First, we analyze a medium scale dataset that looks at perturbation effects of the pluripotency regulator Nanog in murine embryonic stem cells. Dynamic cluster analysis suggests alternative indirect links between Nanog and other proteins in the core stem cell network. Moreover, recurrent correlations from the epigenetic to the translational level are identified. Second, we investigate a large scale dataset consisting of genome-wide knockdown screens for human genes essential in the mitotic process. Here, a potential new role for the gene lsm14a in cytokinesis is suggested. We also show how phenotypic patterning allows for extensive

  11. Application of Lidar Data and 3D-City Models in Visual Impact Simulations of Tall Buildings

    NASA Astrophysics Data System (ADS)

    Czynska, K.

    2015-04-01

    The paper examines the possibilities and limitations of applying Lidar data and digital 3D-city models to specialist urban analyses of tall buildings. The location and height of tall buildings is a subject of discussion, conflict and controversy in many cities. The most important aspect is the visual influence of tall buildings on the city landscape, significant panoramas and other strategic city views. It is a pressing issue in contemporary town planning worldwide: over 50% of high-rise buildings on Earth were built in the last 15 years. Tall buildings may be a threat especially for historically developed cities, which are typical for Europe. Contemporary Earth observation, increasingly available Lidar scanning and 3D city models provide a new tool for more accurate urban analysis of the impact of tall buildings. The article presents appropriate simulation techniques and the general assumptions of the geometric and computational algorithms: available methodologies as well as individual methods developed by the author. The goal is to develop geometric computation methods for a GIS representation of the visual impact of a selected tall building on the structure of a large city. In reference to this, the article introduces a Visual Impact Size method (VIS). The presented analyses were developed by application of an airborne Lidar / DSM model and more processed models (such as CityGML) containing the geometry and its semantics. The included simulations were carried out on the example of the agglomeration of Berlin.
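    The core geometric computation behind such visibility analyses can be illustrated with a simple line-of-sight test over a DSM grid. This is only a minimal sketch of the general idea, not the author's VIS algorithm; the toy grid, building coordinates, and all function names are illustrative assumptions.

```python
import numpy as np

def is_visible(dsm, observer, target, eye_height=1.6, samples=50):
    """Return True if target (row, col, height) can be seen from observer
    (row, col) across the digital surface model dsm."""
    r0, c0 = observer
    r1, c1, h1 = target
    z0 = dsm[r0, c0] + eye_height
    # Sample the sight line between observer and target (endpoints excluded).
    for t in np.linspace(0.0, 1.0, samples)[1:-1]:
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        sight_line = z0 + t * (h1 - z0)
        if dsm[r, c] > sight_line:   # terrain blocks the ray
            return False
    return True

# Flat toy terrain with a 30 m obstacle between the observer and a tower.
dsm = np.zeros((10, 10))
dsm[5, 5] = 30.0
visible_from_corner = is_visible(dsm, (0, 0), (9, 9, 100.0))  # tower top
low_visible = is_visible(dsm, (0, 0), (9, 9, 20.0))           # low point behind obstacle
```

    Aggregating such per-cell tests over a city-scale DSM would yield a visibility map from which an impact measure in the spirit of VIS could be derived.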

  12. Fast 3D visualization of endogenous brain signals with high-sensitivity laser scanning photothermal microscopy

    PubMed Central

    Miyazaki, Jun; Iida, Tadatsune; Tanaka, Shinji; Hayashi-Takagi, Akiko; Kasai, Haruo; Okabe, Shigeo; Kobayashi, Takayoshi

    2016-01-01

    A fast, high-sensitivity photothermal microscope was developed by implementing a spatially segmented balanced detection scheme into a laser scanning microscope. We confirmed a 4.9 times improvement in signal-to-noise ratio in the spatially segmented balanced detection compared with that of conventional detection. The system demonstrated simultaneous bi-modal photothermal and confocal fluorescence imaging of transgenic mouse brain tissue with a pixel dwell time of 20 μs. The fluorescence image visualized neurons expressing yellow fluorescence proteins, while the photothermal signal detected endogenous chromophores in the mouse brain, allowing 3D visualization of the distribution of various features such as blood cells and fine structures probably due to lipids. This imaging modality was constructed using compact and cost-effective laser diodes, and will thus be widely useful in the life and medical sciences. PMID:27231615

  13. Impacts of a CAREER Award on Advancing 3D Visualization in Geology Education

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2011-12-01

    CAREER awards provide a unique opportunity to develop educational activities as an integrated part of one's research activities. This CAREER award focused on developing interactive 3D visualization tools to aid geology students in improving their 3D visualization skills. Not only is this a key skill for field geologists who need to visualize unseen subsurface structures, but it is also an important aspect of geodynamic research into the processes, such as faulting and viscous flow, that occur during subduction. Working with an undergraduate student researcher and using the KeckCAVES developed volume visualization code 3DVisualizer, we have developed interactive visualization laboratory exercises (e.g., Discovering the Rule of Vs) and a suite of mini-exercises using illustrative 3D geologic structures (e.g., syncline, thrust fault) that students can explore (e.g., rotate, slice, cut-away) to understand how exposure of these structures at the surface can provide insight into the subsurface structure. These exercises have been integrated into the structural geology curriculum and made available on the web through the KeckCAVES Education website as both data-and-code downloads and pre-made movies. One of the main challenges of implementing research and education activities through the award is that progress must be made on both throughout the award period. Therefore, while our original intent was to use subduction model output as the structures in the educational models, delays in the research results required that we develop these models using other simpler input data sets. These delays occurred because one of the other goals of the CAREER grant is to allow the faculty to take their research in a new direction, which may certainly lead to transformative science, but can also lead to more false-starts as the challenges of doing the new science are overcome. However, having created the infrastructure for the educational components, use of the model results in future

  14. A method for the evaluation of thousands of automated 3D stem cell segmentations.

    PubMed

    Bajcsy, P; Simon, M; Florczyk, S J; Simon, C G; Juba, D; Brady, M C

    2015-12-01

    There is no segmentation method that performs perfectly with any dataset in comparison to human segmentation. Evaluation procedures for segmentation algorithms become critical for their selection. The problems associated with segmentation performance evaluations and visual verification of segmentation results are exaggerated when dealing with thousands of three-dimensional (3D) image volumes because of the amount of computation and manual inputs needed. We address the problem of evaluating 3D segmentation performance when segmentation is applied to thousands of confocal microscopy images (z-stacks). Our approach is to incorporate experimental imaging and geometrical criteria, and map them into computationally efficient segmentation algorithms that can be applied to a very large number of z-stacks. This is an alternative approach to considering existing segmentation methods and evaluating most state-of-the-art algorithms. We designed a methodology for 3D segmentation performance characterization that consists of design, evaluation and verification steps. The characterization integrates manual inputs from projected surrogate 'ground truth' of statistically representative samples and from visual inspection into the evaluation. The novelty of the methodology lies in (1) designing candidate segmentation algorithms by mapping imaging and geometrical criteria into algorithmic steps, and constructing plausible segmentation algorithms with respect to the order of algorithmic steps and their parameters, (2) evaluating segmentation accuracy using samples drawn from probability distribution estimates of candidate segmentations and (3) minimizing human labour needed to create surrogate 'truth' by approximating z-stack segmentations with 2D contours from three orthogonal z-stack projections and by developing visual verification tools. We demonstrate the methodology by applying it to a dataset of 1253 mesenchymal stem cells. The cells reside on 10 different types of biomaterial
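    The idea of approximating z-stack segmentations with 2D contours from three orthogonal projections can be illustrated by back-projecting the three binary masks and intersecting them (a visual-hull-style approximation). A minimal sketch on assumed toy data, not the paper's actual implementation:

```python
import numpy as np

def hull_from_projections(xy, xz, yz):
    """Intersect back-projections of three orthogonal binary masks:
    xy has shape (X, Y), xz has (X, Z), yz has (Y, Z)."""
    return xy[:, :, None] & xz[:, None, :] & yz[None, :, :]

# Toy "cell": a cube, which its projection hull recovers exactly.
cell = np.zeros((8, 8, 8), dtype=bool)
cell[2:6, 2:6, 2:6] = True
hull = hull_from_projections(cell.any(axis=2),  # XY projection
                             cell.any(axis=1),  # XZ projection
                             cell.any(axis=0))  # YZ projection
exact = bool((hull == cell).all())
```

    For non-convex shapes the intersection overestimates the object, which is consistent with the paper treating such projections as surrogate rather than true ground truth.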

  15. In vivo 3D visualization of peripheral circulatory system using linear optoacoustic array

    NASA Astrophysics Data System (ADS)

    Ermilov, Sergey A.; Brecht, Hans-Peter; Fronheiser, Matthew P.; Nadvoretsky, Vyacheslav; Su, Richard; Conjusteau, Andre; Oraevsky, Alexander A.

    2010-02-01

    In this work we modified the light illumination of the laser optoacoustic (OA) imaging system to improve the 3D visualization of human forearm vasculature. Computer modeling demonstrated that the new illumination design, which features laser beams converging on the surface of the skin in the imaging plane of the probe, provides superior OA images in comparison to those generated by illumination with parallel laser beams. We also developed a procedure for vein/artery differentiation based on OA imaging with 690 nm and 1080 nm laser wavelengths. The procedure includes statistical analysis of the intensities of OA images of neighboring blood vessels. Analysis of the OA images generated by computer simulation of a human forearm illuminated at 690 nm and 1080 nm resulted in successful differentiation of veins and arteries. In vivo scanning of a human forearm provided a high-contrast 3D OA image of the forearm skin and a superficial blood vessel. The blood vessel image contrast was further enhanced after the vessel was automatically traced using the developed software. The software also allowed evaluation of the effective blood vessel diameter at each step of the scan. We propose that the developed 3D OA imaging system can be used during preoperative mapping of forearm vessels, which is essential for hemodialysis treatment.
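    The dual-wavelength differentiation step could, in spirit, look like the following sketch: deoxygenated (venous) blood absorbs relatively more at 690 nm than oxygenated (arterial) blood, so comparing the mean 690/1080 nm intensity ratios of two neighboring vessels suggests which is which. This is an illustrative assumption, not the authors' statistical procedure; the data and names are made up.

```python
import statistics

def classify_pair(vessel_a, vessel_b):
    """Each vessel holds OA pixel intensities at the two wavelengths.
    The vessel with the larger mean 690/1080 ratio is labeled the vein."""
    def ratio(v):
        return statistics.mean(v["i690"]) / statistics.mean(v["i1080"])
    if ratio(vessel_a) > ratio(vessel_b):
        return ("vein", "artery")
    return ("artery", "vein")

# Synthetic neighboring vessels: the vein is relatively brighter at 690 nm.
vessel_1 = {"i690": [0.9, 1.0, 1.1], "i1080": [0.50, 0.60, 0.55]}
vessel_2 = {"i690": [0.5, 0.55, 0.6], "i1080": [0.70, 0.75, 0.80]}
labels = classify_pair(vessel_1, vessel_2)   # -> ("vein", "artery")
```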

  16. Standardization based on human factors for 3D display: performance characteristics and measurement methods

    NASA Astrophysics Data System (ADS)

    Uehara, Shin-ichi; Ujike, Hiroyasu; Hamagishi, Goro; Taira, Kazuki; Koike, Takafumi; Kato, Chiaki; Nomura, Toshio; Horikoshi, Tsutomu; Mashitani, Ken; Yuuki, Akimasa; Izumi, Kuniaki; Hisatake, Yuzo; Watanabe, Naoko; Umezu, Naoaki; Nakano, Yoshihiko

    2010-02-01

    We are engaged in international standardization activities for 3D displays. We consider that, for sound development of the 3D display market, standards should be based not only on the mechanisms of 3D displays but also on human factors for stereopsis. However, there is no common understanding of what a 3D display should be, and this situation makes developing standards difficult. In this paper, to understand the mechanism and human factors, we focus on the double image, which occurs under some conditions on an autostereoscopic display. Although the double image is generally considered an unwanted effect, we consider that whether a double image is unwanted or not depends on the situation, and that some double images are allowable. We tried to classify double images into the unwanted and the allowable in terms of the display mechanism and visual ergonomics for stereopsis. The issues associated with the double image are closely related to performance characteristics of the autostereoscopic display. We also propose performance characteristics and measurement and analysis methods to represent interocular crosstalk and motion parallax.
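    One commonly used definition of interocular crosstalk (given here as an assumption; it is not necessarily the metric this group standardized) is the luminance leaking into one eye from the other eye's image, normalized by the intended signal, with the display's black level subtracted from both:

```python
def crosstalk(l_leak, l_signal, l_black):
    """Fraction of the unintended image's luminance leaking into one eye."""
    return (l_leak - l_black) / (l_signal - l_black)

# 2 cd/m^2 measured leakage over a 0.5 cd/m^2 black level, against a
# 100 cd/m^2 intended white signal: about 1.5% crosstalk.
ct = crosstalk(2.0, 100.0, 0.5)
```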

  17. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, scales gracefully with the size of the data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/.

  19. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, scales gracefully with the size of the data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  20. McIDAS-V: Advanced Visualization for 3D Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2010-12-01

    McIDAS-V is a Java-based, open-source, freely available software package for the analysis and visualization of geophysical data. Its advanced capabilities provide highly interactive 4-D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis and visualization are being applied to hyper-spectral sounding retrievals, e.g. AIRS and IASI, of moisture and cloud density, to interrogate and analyze their 3D structure as well as to validate them against instruments such as CALIPSO, CloudSat and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. Community-defined CF conventions for gridded data are understood by the software, so such data can be immediately imported into the application. This presentation will show examples of how McIDAS-V is used in 3-dimensional data analysis, display and evaluation.

  1. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but it comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field of view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for displaying the virtual environment and one 640 by 480 display for a user control interface. This machine is also the first desktop package with built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.

  2. 3D histogram visualization in different color spaces with application in color clustering classification

    NASA Astrophysics Data System (ADS)

    Marcu, Gabriel G.; Abe, Satoshi

    1995-04-01

    The paper presents a dynamic visualization procedure for the 3D histogram of color images. The procedure runs for the RGB, YMC, HSV and HSL device-dependent color spaces and for the Lab and Luv device-independent color spaces, and it is easily extendable to other color spaces if the analytical form of the color transformations is available. Each histogram value is represented in the color space as a colored ball, in a position corresponding to the place of that color in the color space. The paper presents the procedures for nonlinear ball normalization, ordering of drawing, space edge drawing, and translation, scaling and rotation of the histogram. The 3D histogram visualization procedure can be used in different applications, described in the second part of the paper. It enables a clear representation of the range of colors of an image; the derivation and comparison of the efficiency of different clustering procedures for color classification; the comparative display of the gamuts of different color devices; the selection of the color space for an optimal mapping procedure for out-of-gamut colors that minimizes the hue error; and the detection of misalignment in RGB planes for a sequential process.
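    A minimal sketch of the histogram construction together with one plausible nonlinear ball normalization: a cube-root mapping, so that ball volume is proportional to bin count and rare colors remain visible. The paper's exact normalization may differ; all names here are illustrative.

```python
import numpy as np

def histogram3d(pixels, bins=8):
    """pixels: (N, 3) uint8 RGB array -> (bins, bins, bins) count array."""
    q = pixels.astype(np.uint16) * bins // 256      # quantize each channel
    hist = np.zeros((bins, bins, bins), dtype=np.int64)
    np.add.at(hist, (q[:, 0], q[:, 1], q[:, 2]), 1)  # unbuffered accumulation
    return hist

def ball_radii(hist, r_max=1.0):
    """Cube-root normalization: ball volume proportional to bin count."""
    return r_max * np.cbrt(hist / hist.max())

# 27 pure-red pixels and one pure-green pixel.
pixels = np.array([[255, 0, 0]] * 27 + [[0, 255, 0]], dtype=np.uint8)
hist = histogram3d(pixels)
radii = ball_radii(hist)   # red ball radius 1.0, green ball radius 1/3
```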

  3. Visualizing Earthquakes in '3D' using the IRIS Earthquake Browser (IEB) Website

    NASA Astrophysics Data System (ADS)

    Welti, R.; McQuillan, P. J.; Weertman, B. R.

    2012-12-01

    The distribution of earthquakes is often easier to interpret in 3D, but most 3D visualization tools require the installation of specialized software and some practice in their use. To reduce this barrier for students and the general public, a pseudo-3D seismicity viewer has been developed which runs in a web browser as part of the IRIS Earthquake Browser (IEB). IEB is an interactive map for viewing earthquake epicenters all over the world, and is composed of a Google map, HTML, JavaScript and a fast earthquake hypocenter web service. The web service accesses seismic data at IRIS from the early 1960s until the present. Users can change the region, the number of events, and the depth and magnitude ranges to display. Earthquakes may also be viewed as a table, or exported to various formats. Predefined regions can be selected and zoomed to, and bookmarks generally preserve whatever region and settings are in effect when bookmarked, allowing the easy sharing of particular "scenarios" with other users. Plate boundaries can be added to the display. The 3DV viewer displays events for the currently selected IEB region in a separate window. They can be rotated and zoomed, with a fast response for plots of up to several thousand events. Rotation can be done manually by dragging or automatically at a set rate, and tectonic plate boundaries can be turned on or off. 3DV uses a geographical projection algorithm provided by Gary Pavils and collaborators. It is written in HTML5, and is based on CanvasMol by Branislav Ulicny. [Figures: a region SE of Fiji, selected in the IRIS Earthquake Browser; the same region as viewed in the 3D Viewer.]
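    The pseudo-3D effect of such a viewer can be sketched as rotating hypocenters about a vertical axis and orthographically projecting them onto the screen, with depth plotted downward. This is only an illustration of the general approach, not the geographical projection algorithm actually used by 3DV.

```python
import math

def project(points, azimuth_deg):
    """points: (x, y, depth_km) triples -> (screen_x, screen_y) pairs."""
    a = math.radians(azimuth_deg)
    out = []
    for x, y, depth in points:
        xr = x * math.cos(a) - y * math.sin(a)   # rotate about the vertical axis
        out.append((xr, -depth))                 # depth plotted downward
    return out

# A shallow and a deep event, viewed head-on and after a quarter turn.
quakes = [(10.0, 0.0, 33.0), (0.0, 10.0, 600.0)]
front = project(quakes, 0.0)
side = project(quakes, 90.0)
```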

  4. Validation of computational fluid dynamics methods with anatomically exact, 3D printed MRI phantoms and 4D pcMRI.

    PubMed

    Anderson, Jeff R; Diaz, Orlando; Klucznik, Richard; Zhang, Y Jonathan; Britz, Gavin W; Grossman, Robert G; Lv, Nan; Huang, Qinghai; Karmonik, Christof

    2014-01-01

    A new concept of rapid 3D prototyping was implemented using cost-effective 3D printing to create anatomically correct replicas of cerebral aneurysms. With a dedicated flow loop set-up in a full-body human MRI scanner, flow measurements were performed using 4D phase-contrast magnetic resonance imaging to visualize and quantify intra-aneurysmal flow patterns. Ultrashort-TE sequences were employed to obtain high-resolution 3D image data to visualize the lumen inside the plastic replica. In-vitro results were compared with retrospectively obtained in-vivo data and with results from computational fluid dynamics (CFD) simulations. Rapid prototyping of anatomically realistic 3D models may have future impact on treatment planning, on the design of image acquisition methods for MRI and angiographic systems, and on the design and testing of advanced image post-processing technologies.

  5. Object-oriented urban 3D spatial data model organization method

    NASA Astrophysics Data System (ADS)

    Li, Jing-wen; Li, Wen-qing; Lv, Nan; Su, Tao

    2015-12-01

    This paper combines a 3D data model with an object-oriented organization method and puts forward an object-oriented 3D data model. The model enables rapid construction of the logical semantics and geometry of a city 3D model, and solves the representation problem of the same location carrying multiple properties and the same property spanning multiple locations. The spatial object structures of point, line, polygon and body are designed for a city 3D spatial database, providing a new approach to city 3D GIS modeling and organization management.
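    The organization described above might be sketched as a many-to-many link between spatial objects and property names, so that one location can carry several properties and one property can span several locations. All class and field names are illustrative assumptions, not the paper's design.

```python
from dataclasses import dataclass, field

@dataclass
class SpatialObject:
    oid: str
    geometry_type: str            # "point" | "line" | "polygon" | "body"
    coords: list
    properties: set = field(default_factory=set)

class CityModel:
    def __init__(self):
        self.objects = {}         # oid -> SpatialObject
        self.property_index = {}  # property name -> set of oids

    def add(self, obj):
        self.objects[obj.oid] = obj
        for p in obj.properties:
            self.property_index.setdefault(p, set()).add(obj.oid)

model = CityModel()
model.add(SpatialObject("b1", "body", [(0, 0, 0), (1, 1, 30)],
                        {"residential", "landmark"}))
model.add(SpatialObject("b2", "body", [(5, 5, 0), (6, 6, 20)],
                        {"residential"}))
residential_ids = model.property_index["residential"]   # {"b1", "b2"}
```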

  6. Enhanced Rgb-D Mapping Method for Detailed 3d Modeling of Large Indoor Environments

    NASA Astrophysics Data System (ADS)

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-06-01

    RGB-D sensors are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping of indoor environments. First, they only allow a measurement range with a limited distance (e.g., within 3 m) and a limited field of view. Second, the error of the depth measurement increases with increasing distance to the sensor. In this paper, we propose an enhanced RGB-D mapping method for detailed 3D modeling of large indoor environments by combining RGB image-based modeling and depth-based modeling. The scale-ambiguity problem during pose estimation with RGB image sequences can be resolved by integrating the depth information with the visual information provided by the proposed system. A robust rigid-transformation recovery method is developed to register the RGB image-based and depth-based 3D models together. The proposed method is examined with two datasets collected in indoor environments, for which the experimental results demonstrate the feasibility and robustness of the proposed method.
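    A standard way to recover a rigid transformation between matched 3D point sets is the Kabsch/Procrustes solution via SVD; the paper develops its own robust variant, so the following is only a baseline sketch of the underlying computation.

```python
import numpy as np

def rigid_transform(src, dst):
    """Kabsch: return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 30-degree rotation plus translation from noiseless points.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
a = np.radians(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(src, dst)
err = float(np.abs(R - R_true).max())
```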

  7. Remote web-based 3D visualization of hydrological forecasting datasets.

    NASA Astrophysics Data System (ADS)

    van Meersbergen, Maarten; Drost, Niels; Blower, Jon; Griffiths, Guy; Hut, Rolf; van de Giesen, Nick

    2015-04-01

    As the possibilities for larger and more detailed simulations of geoscientific data expand, the need for smart solutions in data visualization grows as well. Large volumes of data should be quickly accessible from anywhere in the world without the need for transferring the simulation results. We aim to provide tools for both the processing and handling of these large datasets. As an example, the eWaterCycle project (www.ewatercycle.org) aims to provide a running 14-day ensemble forecast to predict water-related stress around the globe. The large volumes of simulation results with uncertainty data that are generated through ensemble hydrological predictions provide a challenge for existing visualization solutions. One possible solution for this challenge lies in the use of web-enabled technology for visualization and analysis of these datasets. Web-based visualization provides an additional benefit in that it eliminates the need for any software installation and configuration and allows for the easy communication of research results between collaborating research parties. Providing interactive tools for the exploration of these datasets will not only help in the analysis of the data by researchers, it can also aid in the dissemination of the research results to the general public. In Vienna, we will present a working open-source solution for remote visualization of large volumes of global geospatial data based on the proven open-source 3D web visualization software package Cesium (cesiumjs.org), the ncWMS software package provided by the Reading e-Science Centre, and the WebGL and NetCDF standards.

  8. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high-resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high-resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization packages ParaView and Vapor, which allow us to read and handle such large data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  9. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    ERIC Educational Resources Information Center

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including redesign things, creativity thinking and the ability to…

  10. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details Which are previously not contained in the surface scans, are incorporated through using image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. 
The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a
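
    The error minimization described above can be illustrated with a deliberately tiny sketch: fusing a noisy 1-D depth profile (the large-scale laser scan) with fine-scale gradients (as would come from photometric surface normals) by gradient descent on a two-term analogue of the paper's energy. The intensity term, BRDF estimation, and interreflection compensation are omitted, and all weights and data are illustrative assumptions, not the authors' implementation.

```python
# Toy 1-D sketch: fuse a coarse, noisy depth profile z0 with gradient data g
# by minimizing E(z) = alpha*sum((z-z0)^2) + sum(((z[i+1]-z[i]) - g[i])^2).
# alpha, lr, and the data below are illustrative assumptions.

def fuse_depth_with_gradients(z0, g, alpha=0.1, lr=0.1, iters=4000):
    """Gradient descent on the two-term fusion energy."""
    z = list(z0)
    n = len(z)
    for _ in range(iters):
        grad = [2.0 * alpha * (z[i] - z0[i]) for i in range(n)]
        for i in range(n - 1):
            r = (z[i + 1] - z[i]) - g[i]   # residual of the gradient constraint
            grad[i] -= 2.0 * r
            grad[i + 1] += 2.0 * r
        for i in range(n):
            z[i] -= lr * grad[i]
    return z

# Coarse ramp with noise; the "photometric" gradients say the true slope is 1.
z0 = [0.0, 1.1, 1.9, 3.2, 4.0]
g = [1.0, 1.0, 1.0, 1.0]
z = fuse_depth_with_gradients(z0, g)
```

    After convergence the refined profile honors the fine-scale gradients while the mean (large-scale shape) of the input scan is preserved, mirroring the paper's "noise-reduced, large scale shape preserved" behavior.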

  11. Parallel 3-D method of characteristics in MPACT

    SciTech Connect

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-07-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k_eff differs from the benchmark results for rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time for the 500 processor case was 231 seconds and 19 seconds for the case with 15625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)
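
    A quick sanity check one can run on the wall times quoted above is the relative parallel efficiency between the 500- and 15625-processor runs. This is plain arithmetic on the reported numbers, not the authors' performance model; their 60% worst-case figure refers to their own baseline decomposition.

```python
# Relative parallel efficiency: speedup divided by the processor-count ratio.
def relative_efficiency(t_base, p_base, t_new, p_new):
    speedup = t_base / t_new
    return speedup / (p_new / p_base)

# Abstract's numbers: 231 s on 500 processors vs. 19 s on 15625 processors.
eff = relative_efficiency(231.0, 500, 19.0, 15625)   # roughly 0.39
```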

  12. An automated 3D reconstruction method of UAV images

    NASA Astrophysics Data System (ADS)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced based on the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
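
    The image-topology idea above can be sketched minimally: use the UAV flight-control positions to restrict feature matching to image pairs whose camera centers are close enough to overlap, instead of matching all pairs. The function name, the positions, and the distance threshold are illustrative assumptions.

```python
# Build candidate matching pairs from camera positions (an "image topology
# map"): only images within max_dist of each other are considered.
import math

def candidate_pairs(positions, max_dist):
    """Return index pairs of images whose camera centers lie within max_dist."""
    pairs = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            if math.hypot(x2 - x1, y2 - y1) <= max_dist:
                pairs.append((i, j))
    return pairs

# Four camera centers along a flight line, 30 m apart; 50 m overlap radius.
pos = [(0, 0), (30, 0), (60, 0), (90, 0)]
pairs = candidate_pairs(pos, 50.0)   # only neighboring images get matched
```

    Here 3 of the 6 possible pairs survive; on a real flight with hundreds of images this pruning is what cuts the feature-matching time.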

  13. Noninvasive CT to Iso-C3D registration for improved intraoperative visualization in computer assisted orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Rudolph, Tobias; Ebert, Lars; Kowal, Jens

    2006-03-01

    Supporting surgeons in performing minimally invasive surgeries can be considered as one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieve this aim. The Siremobil Iso-C 3D has become a widely used imaging device, which, in combination with a navigation system, enables the surgeon to directly navigate within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan and the volume size (approx. 12 cm³) limits its application. A regularly used alternative in computer assisted orthopedic surgery is the use of a preoperatively acquired CT scan to visualize the operating field. But the additional registration step necessary to use CT stacks for navigation is quite invasive. Therefore the objective of this work is to develop a noninvasive registration technique. In this article, a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing the mutual information, an algorithm that has already been applied to similar registration problems and demonstrated good results. Furthermore, the accuracy of such a registration method was investigated in a clinical setup, integrating a navigated Iso-C 3D in combination with a tracking system. Initial tests based on cadaveric animal bone resulted in an accuracy ranging from 0.63 mm to 1.55 mm mean error.
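
    The registration criterion named above, mutual information, can be sketched in a few lines: build a joint intensity histogram of the two volumes and sum p·log(p/(p₁·p₂)). The binning and the toy intensity lists are illustrative assumptions; a real registration would evaluate this inside an optimizer over rigid transforms.

```python
# Mutual information (in nats) between two equally sized intensity lists,
# computed from a joint histogram. Bin count and data are illustrative.
import math

def mutual_information(a, b, bins=4, lo=0.0, hi=1.0):
    n = len(a)
    width = (hi - lo) / bins
    idx = lambda v: min(bins - 1, int((v - lo) / width))
    joint = [[0] * bins for _ in range(bins)]
    for va, vb in zip(a, b):
        joint[idx(va)][idx(vb)] += 1
    mi = 0.0
    for i in range(bins):
        for j in range(bins):
            pij = joint[i][j] / n
            if pij > 0:
                pi = sum(joint[i]) / n
                pj = sum(joint[r][j] for r in range(bins)) / n
                mi += pij * math.log(pij / (pi * pj))
    return mi

a = [0.1, 0.3, 0.6, 0.9, 0.2, 0.8]
aligned = mutual_information(a, a)          # identical volumes: maximal MI
shuffled = mutual_information(a, a[::-1])   # misaligned: lower MI
```

    The optimizer's job is then simply to find the transform that maximizes this score, which is what the paper reports doing between the CT and Iso-C 3D volumes.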

  14. GIS-based 3D visualization of the Mw 7.7, 2007, Tocopilla aftershocks

    NASA Astrophysics Data System (ADS)

    Eggert, S.; Sobiesiak, M.; Altenbrunn, K.

    2009-12-01

    The November 14, 2007 Mw 7.7 earthquake nucleated on the west coast of northern Chile about 40 km east of the city of Tocopilla. It took place in the southern part of a large seismic gap, the Iquique subduction zone segment, which is supposed to be at the end of its seismic cycle. The Tocopilla fault plane appears to be the northern continuation of the Mw 8.0, 1995 Antofagasta earthquake. We present a complex 3D model of the rupture area including first hypocenter localizations of aftershocks following the event. The data was recorded during a mission of the German Task Force for Earthquakes after the 2007 Tocopilla earthquake. The seismic stations were recording the aftershocks from November 2007 until May 2008. In general, subduction zones have a complex structure where most of the volumes examined are characterized by strong variations in physical and material parameters. Therefore, 3D representation of the geophysical and geological conditions to be found is of great importance for understanding such a subduction environment. We start with a two-dimensional visualization of the geological and geophysical setting. In a second step, we use GIS as a three-dimensional modeling tool which gives us the possibility to visualize the complex geophysical processes. One can easily add and delete data and focus on the information one needs. This allows us to investigate the aftershock distribution along the subducting slab and identify clear structures and clusters within the data set. Furthermore, we combine the 2007 Tocopilla data set with the 1995 Antofagasta aftershocks, which provides a new, three-dimensional insight into the segment boundary of these two events. Analyzing the aftershock sequence with a GIS-based model will not only help to visualize the setting but also be the basis for various calculations and further explorations of the complex structures. Aftershocks following the 1995 Antofagasta earthquake and the 2007 Tocopilla earthquake

  15. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when should we show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environments. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well-established if we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  16. 3D Visualization of Hydrological Model Outputs For a Better Understanding of Multi-Scale Phenomena

    NASA Astrophysics Data System (ADS)

    Richard, J.; Schertzer, D. J. M.; Tchiguirinskaia, I.

    2014-12-01

    During the last decades, many hydrological models have been created to simulate extreme events or scenarios on catchments. The classical outputs of these models are 2D maps, time series or graphs, which are easily understood by scientists, but not so much by many stakeholders, e.g. mayors or local authorities, and the general public. One goal of the Blue Green Dream project is to create outputs that are adequate for them. To reach this goal, we decided to convert most of the model outputs into a unique 3D visualization interface that combines all of them. This conversion has to be performed with hydrological thinking to keep the information consistent with the context and the raw outputs. We focus our work on the conversion of the outputs of the Multi-Hydro (MH) model, which is physically based, fully distributed and with a GIS data interface. MH splits the urban water cycle into 4 components: the rainfall, the surface runoff, the infiltration and the drainage. To each of them corresponds a modeling module with specific inputs and outputs. The superimposition of all this information will highlight the model outputs and help to verify the quality of the raw input data. For example, the spatial and temporal variability of the rain generated by the rainfall module will be directly visible in 4D (3D + time) before running a full simulation. It is the same with the runoff module: because the result quality depends on the resolution of the rasterized land use, it will confirm (or not) the choice of the cell size. As most of the inputs and outputs are GIS files, two main conversions will be applied to display the results in 3D. First, a conversion from vector files to 3D objects. For example, buildings are defined in 2D inside a GIS vector file. Each polygon can be extruded with a height to create volumes. The principle is the same for the roads, but an intrusion, instead of an extrusion, is done inside the topography file. The second main conversion is the raster
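
    The first conversion described above, extruding a 2-D building footprint into a 3-D volume, can be sketched directly. The vertex and face layout below is an illustrative assumption, not the Multi-Hydro implementation.

```python
# Extrude a 2-D GIS polygon footprint into 3-D vertices plus one side quad
# per edge. Footprint and height are illustrative.

def extrude_footprint(footprint, height):
    """Turn a 2-D polygon [(x, y), ...] into 3-D vertices and side faces."""
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, height) for x, y in footprint]
    vertices = bottom + top
    # Quad per edge: bottom i, bottom i+1, top i+1, top i (vertex indices).
    sides = [(i, (i + 1) % n, (i + 1) % n + n, i + n) for i in range(n)]
    return vertices, sides

# A 10 m x 5 m rectangular building extruded to 12 m.
verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 5), (0, 5)], 12.0)
```

    A road "intrusion" would be the same idea with a negative offset applied into the topography surface instead of upward.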

  17. Using 3D visual tools with LiDAR for environmental outreach

    NASA Astrophysics Data System (ADS)

    Glenn, N. F.; Mannel, S.; Ehinger, S.; Moore, C.

    2009-12-01

    The project objective is to develop visualizations using light detection and ranging (LiDAR) data and other data sources to increase community understanding of remote sensing data for earth science. These data are visualized using Google Earth and other visualization methods. Final products are delivered to K-12, state, and federal agencies to share with their students and community constituents. Once our partner agencies were identified, we utilized a survey method to better understand their technological abilities and use of visualization products. The final multimedia products include a visualization of LiDAR and well data for water quality mapping in a southeastern Idaho watershed; a tour of hydrologic points of interest in southeastern Idaho visited by thousands of people each year; and post-earthquake features near Borah Peak, Idaho. In addition to the customized multimedia materials, we developed tutorials to encourage our partners to utilize these tools with their own LiDAR and other scientific data.
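
    Delivering such points of interest to Google Earth typically means writing a KML file, which can be done with the standard library alone. The helper name and the coordinates below are illustrative assumptions (the Borah Peak coordinates are approximate).

```python
# Build a minimal KML document for a list of points of interest.
import xml.etree.ElementTree as ET

def poi_kml(points):
    """points: list of (name, lon, lat) -> KML document string."""
    kml = ET.Element("kml", xmlns="http://www.opengis.net/kml/2.2")
    doc = ET.SubElement(kml, "Document")
    for name, lon, lat in points:
        pm = ET.SubElement(doc, "Placemark")
        ET.SubElement(pm, "name").text = name
        point = ET.SubElement(pm, "Point")
        # KML coordinates are lon,lat,alt.
        ET.SubElement(point, "coordinates").text = f"{lon},{lat},0"
    return ET.tostring(kml, encoding="unicode")

kml_text = poi_kml([("Borah Peak", -113.781, 44.137)])
```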

  18. The RNA 3D Motif Atlas: Computational methods for extraction, organization and evaluation of RNA motifs.

    PubMed

    Parlea, Lorena G; Sweeney, Blake A; Hosseini-Asanjan, Maryam; Zirbel, Craig L; Leontis, Neocles B

    2016-07-01

    RNA 3D motifs occupy places in structured RNA molecules that correspond to the hairpin, internal and multi-helix junction "loops" of their secondary structure representations. As many as 40% of the nucleotides of an RNA molecule can belong to these structural elements, which are distinct from the regular double helical regions formed by contiguous AU, GC, and GU Watson-Crick basepairs. With the large number of atomic- or near atomic-resolution 3D structures appearing in a steady stream in the PDB/NDB structure databases, the automated identification, extraction, comparison, clustering and visualization of these structural elements presents an opportunity to enhance RNA science. Three broad applications are: (1) identification of modular, autonomous structural units for RNA nanotechnology, nanobiology and synthetic biology applications; (2) bioinformatic analysis to improve RNA 3D structure prediction from sequence; and (3) creation of searchable databases for exploring the binding specificities, structural flexibility, and dynamics of these RNA elements. In this contribution, we review methods developed for computational extraction of hairpin and internal loop motifs from a non-redundant set of high-quality RNA 3D structures. We provide a statistical summary of the extracted hairpin and internal loop motifs in the most recent version of the RNA 3D Motif Atlas. We also explore the reliability and accuracy of the extraction process by examining its performance in clustering recurrent motifs from homologous ribosomal RNA (rRNA) structures. We conclude with a summary of remaining challenges, especially with regard to extraction of multi-helix junction motifs. PMID:27125735
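
    The clustering of recurrent motifs mentioned above can be sketched abstractly: treat each motif instance as a node and merge instances whose pairwise geometric discrepancy falls below a cutoff (single linkage via union-find). The discrepancy values and the cutoff below are made-up assumptions, not Motif Atlas data.

```python
# Single-linkage grouping of motif instances from a sparse discrepancy dict.

def cluster_motifs(n, discrepancy, cutoff):
    """discrepancy: dict {(i, j): value}; returns a cluster label per motif."""
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for (i, j), d in discrepancy.items():
        if d < cutoff:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Motifs 0 and 1 are recurrent (low discrepancy); motif 2 is distinct.
labels = cluster_motifs(3, {(0, 1): 0.2, (0, 2): 1.5, (1, 2): 1.4}, cutoff=0.5)
```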

  19. A MATLAB function for 3-D and 4-D topographical visualization in geosciences

    NASA Astrophysics Data System (ADS)

    Zekollari, Harry

    2016-04-01

    Combining topographical information and spatially varying variables in visualizations is often crucial and inherent to geoscientific problems. Despite this, it is often an impossible or a very time-consuming and difficult task to create such figures by using classic software packages. This is also the case in the widely used numerical computing environment MATLAB. Here a MATLAB function is introduced for plotting a variety of natural environments with a pronounced topography, such as glaciers, volcanoes and lakes in mountainous regions. Landscapes can be visualized in 3-D, with a single colour defining a featured surface type (e.g. ice, snow, water, lava), or with a colour scale defining the magnitude of a variable (e.g. ice thickness, snow depth, water depth, surface velocity, gradient, elevation). As an input only the elevation of the subsurface (typically the bedrock) and the surface are needed, which can be complemented by various input parameters in order to adapt the figure to specific needs. The figures are particularly suited to make time-evolving animations of natural processes, such as a glacier retreat or a lake drainage event. Several visualization examples will be provided along with animations. The function, which is freely available for download, only requires the basic package of MATLAB and can be run on any standard stationary or portable personal computer.
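
    The core input handling described above (two elevation grids in, a color-coded variable out) can be sketched language-agnostically; here in Python rather than MATLAB, with illustrative grid values. Deriving ice thickness as surface minus bedrock, with ice-free cells masked out, is the kind of per-cell variable such a plot would color by.

```python
# Derive an ice-thickness map from bedrock and surface elevation grids;
# None marks ice-free cells (to be left uncolored in a plot).

def ice_thickness(bedrock, surface, eps=1e-9):
    thick = []
    for brow, srow in zip(bedrock, surface):
        thick.append([s - b if s - b > eps else None
                      for b, s in zip(brow, srow)])
    return thick

bed = [[100.0, 120.0], [110.0, 150.0]]
surf = [[130.0, 120.0], [140.0, 151.0]]
thickness = ice_thickness(bed, surf)   # None where surface meets bedrock
```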

  20. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.
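
    The particle abstraction described above can be sketched with a simple Euler advection step: each oil particle moves with the ocean current plus a small wind-drift term. The 3% wind factor and the uniform fields are illustrative assumptions (real spill models use spatially varying fields and turbulent diffusion).

```python
# Advect oil particles with current velocity plus a wind-drift fraction.

def advect(particles, current, wind, dt, wind_factor=0.03, steps=1):
    """Euler steps: x += (current + wind_factor * wind) * dt."""
    px = [list(p) for p in particles]
    for _ in range(steps):
        for p in px:
            p[0] += (current[0] + wind_factor * wind[0]) * dt
            p[1] += (current[1] + wind_factor * wind[1]) * dt
    return px

# One particle, 0.5 m/s eastward current, 10 m/s northward wind, 100 s step.
out = advect([(0.0, 0.0)], current=(0.5, 0.0), wind=(0.0, 10.0), dt=100.0)
```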

  1. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-01-01

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios. PMID:23986539
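
    One of the new measures named above, saccadic step size, can be sketched as the mean Euclidean distance between consecutive fixation positions. The fixation list is illustrative; the paper computes this in the horizontal-vertical plane of the display.

```python
# Mean saccadic step size over a scanpath of (x, y) fixations.
import math

def saccadic_step_size(fixations):
    steps = [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]
    return sum(steps) / len(steps)

fix = [(0, 0), (3, 4), (3, 4), (6, 8)]
mean_step = saccadic_step_size(fix)   # (5 + 0 + 5) / 3
```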

  2. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-08-28

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.

  3. ROI-preserving 3D video compression method utilizing depth information

    NASA Astrophysics Data System (ADS)

    Ti, Chunli; Xu, Guodong; Guan, Yudong; Teng, Yidan

    2015-09-01

    Efficiently transmitting the extra information of three-dimensional (3D) video is becoming a key issue in the development of 3DTV. The 2D-plus-depth format not only occupies less bandwidth and remains compatible with transmission over existing channels, but can also, to some extent, provide technical support for advanced 3D video compression. This paper proposes an ROI-preserving compression scheme to further improve the visual quality at a limited bit rate. According to the connection between the focus of the Human Visual System (HVS) and depth information, a region of interest (ROI) can be automatically selected via depth map processing. The main improvement over common methods is that a mean-shift-based segmentation is applied to the depth map before foreground ROI selection to keep the integrity of the scene. Besides, the sensitive areas along the edges are also protected. Spatio-temporal filtering adapted to H.264 is applied to the non-ROI regions of both the 2D video and the depth map before compression. Experiments indicate that the ROI extracted by this method is more intact and more consistent with subjective perception, and that the proposed method keeps the key high-frequency information more effectively while the bit rate is reduced.
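
    The depth-driven ROI idea can be sketched minimally: smooth the depth map, then mark pixels nearer than a threshold as foreground ROI. A 3x3 box filter stands in for the paper's mean-shift segmentation here, and the depth values and threshold are illustrative assumptions.

```python
# Foreground ROI mask from a depth map (larger value = nearer to the viewer).

def foreground_roi(depth, threshold):
    h, w = len(depth), len(depth[0])
    mask = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighborhood mean, clamped at the borders.
            vals = [depth[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            if sum(vals) / len(vals) > threshold:
                mask[y][x] = 1
    return mask

depth = [[9, 9, 1], [9, 9, 1], [1, 1, 1]]
roi = foreground_roi(depth, threshold=4.0)
```

    The non-ROI cells (mask value 0) are the ones the scheme would filter more aggressively before H.264 encoding.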

  4. The 3D Visualization of Slope Terrain in Sun Moon Lake.

    NASA Astrophysics Data System (ADS)

    Deng, F.; Gwo-shyn, S.; Pei-Kun, L.

    2015-12-01

    side-slope using the multi-beam sounder below the water surface. Finally, the image of the side-scan sonar is taken and merged with contour lines produced from underwater topographic DTM data. Combining those data, our purpose is to create different 3D images that provide good visualization for checking whether the side-slope DTM survey data are well quality-controlled.

  5. Full 3-D transverse oscillations: a method for tissue motion estimation.

    PubMed

    Salles, Sebastien; Liebgott, Hervé; Garcia, Damien; Vray, Didier

    2015-08-01

    We present a new method to estimate 4-D (3-D + time) tissue motion. The method combines 3-D phase-based motion estimation with an unconventional beamforming strategy. The beamforming technique allows us to obtain full 3-D RF volumes with axial, lateral, and elevation modulations. Based on these images, we propose a method to estimate 3-D motion that uses phase images instead of amplitude images. First, volumes featuring 3-D oscillations are created using only a single apodization function, and the 3-D displacement between two consecutive volumes is estimated simultaneously by applying this 3-D phase-based estimation. The validity of the method is investigated by conducting simulations and phantom experiments. The results are compared with those obtained with two other conventional estimation methods: block matching and optical flow. The results show that the proposed method outperforms the conventional methods, especially in the transverse directions.
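
    The phase-based estimation principle can be illustrated in 1-D: demodulate an oscillating signal at its known modulation frequency and read the displacement off the phase of the result. The frequency and shift below are illustrative; the paper applies this simultaneously in the axial, lateral, and elevation directions of the 3-D RF volumes.

```python
# Estimate a sub-sample shift between two modulated 1-D signals from the
# phase difference of their demodulated sums.
import cmath
import math

def phase_shift_displacement(s1, s2, xs, freq):
    """Shift of s2 relative to s1, in the same units as xs."""
    demod = lambda s: sum(v * cmath.exp(-2j * math.pi * freq * x)
                          for v, x in zip(s, xs))
    dphi = cmath.phase(demod(s2)) - cmath.phase(demod(s1))
    return -dphi / (2 * math.pi * freq)

freq, shift = 0.1, 0.7                      # cycles/sample, samples
xs = [float(i) for i in range(100)]          # ten full oscillation periods
s1 = [math.cos(2 * math.pi * freq * x) for x in xs]
s2 = [math.cos(2 * math.pi * freq * (x - shift)) for x in xs]
est = phase_shift_displacement(s1, s2, xs, freq)   # ~0.7
```

    Because the estimate comes from phase rather than amplitude correlation, sub-sample displacements are recovered without interpolation, which is the appeal of transverse-oscillation methods.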

  6. A method of multi-view intraoral 3D measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Lv, Peijun; Sun, Yunchun

    2015-02-01

    In dental restoration, it is important to achieve a high-accuracy digital impression. Most of the existing intraoral measurement systems can only measure a tooth from a single view. Therefore, to acquire the whole data of a tooth, scans of the tooth from multiple directions and data stitching based on the features of the surface are needed, which increases the measurement duration and influences the measurement accuracy. In this paper, we introduce a fringe-projection-based multi-view intraoral measurement system. It can acquire 3D data of the occlusal surface, the buccal surface and the lingual surface of a tooth synchronously, by using a sensor with three mirrors, which aim at the three surfaces respectively and thus expand the measuring area. The constant relationship of the three mirrors is calibrated before measurement and helps stitch the data clouds acquired through different mirrors accurately. Therefore the system can obtain the 3D data of a tooth without the need to measure it from different directions multiple times. Experiments verified the feasibility and reliability of this miniaturized measurement system.
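
    The stitching step described above amounts to applying each mirror's pre-calibrated rigid transform (rotation R, translation t) to its point cloud to bring it into the common frame. The 90-degree example transform below is an illustrative assumption, not a calibrated value from the paper.

```python
# Apply a rigid transform p' = R @ p + t to a list of 3-D points.

def transform_points(points, R, t):
    out = []
    for p in points:
        out.append(tuple(sum(R[r][c] * p[c] for c in range(3)) + t[r]
                         for r in range(3)))
    return out

# Mirror view rotated 90 degrees about z and offset 5 mm along x.
R = [[0, -1, 0], [1, 0, 0], [0, 0, 1]]
t = [5.0, 0.0, 0.0]
stitched = transform_points([(1.0, 0.0, 0.0)], R, t)   # -> [(5.0, 1.0, 0.0)]
```

    Because R and t are fixed by the mirror geometry and calibrated once, no feature-based alignment between the three views is needed at measurement time.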

  7. CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    PubMed Central

    2012-01-01

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which helps the chemist better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis. PMID:22424447

  8. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool.

    PubMed

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2009-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
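
    The "common, defined coordinate frame" requirement above boils down to mapping each modality's voxel indices into world (scanner) coordinates, typically with a 4x4 affine as in NIfTI-style volumes. The 2 mm isotropic affine below is an illustrative assumption, not DV3D code.

```python
# Map a voxel index (i, j, k) to world coordinates via a 4x4 affine.

def voxel_to_world(affine, ijk):
    """Apply a 4x4 affine to the homogeneous voxel index (i, j, k, 1)."""
    v = (*ijk, 1.0)
    return tuple(sum(affine[r][c] * v[c] for c in range(4)) for r in range(3))

affine = [[2.0, 0.0, 0.0, -90.0],   # 2 mm voxels with an origin offset
          [0.0, 2.0, 0.0, -126.0],
          [0.0, 0.0, 2.0, -72.0],
          [0.0, 0.0, 0.0, 1.0]]
xyz = voxel_to_world(affine, (45, 63, 36))   # -> (0.0, 0.0, 0.0)
```

    Once every modality supplies such an affine, overlays from MRI, MEG, and EEG land in the same space, which is what makes the simultaneous display possible.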

  9. Using Computer-Aided Design Software and 3D Printers to Improve Spatial Visualization

    ERIC Educational Resources Information Center

    Katsio-Loudis, Petros; Jones, Millie

    2015-01-01

    Many articles have been published on the use of 3D printing technology. From prefabricated homes and outdoor structures to human organs, 3D printing technology has found a niche in many fields, but especially education. With the introduction of AutoCAD technical drawing programs and now 3D printing, learners can use 3D printed models to develop…

  10. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  11. Effect of space balance 3D training using visual feedback on balance and mobility in acute stroke patients

    PubMed Central

    Ko, YoungJun; Ha, HyunGeun; Bae, Young-Hyeon; Lee, WanHee

    2015-01-01

    [Purpose] The purpose of the study was to determine the effects of balance training with Space Balance 3D, which is a computerized measurement and visual feedback balance assessment system, on balance and mobility in acute stroke patients. [Subjects and Methods] This was a randomized controlled trial in which 52 subjects were assigned randomly into either an experimental group or a control group. The experimental group, which contained 26 subjects, received balance training with a Space Balance 3D exercise program and conventional physical therapy interventions 5 times per week during 3 weeks. Outcome measures were examined before and after the 3-week interventions using the Berg Balance Scale (BBS), Timed Up and Go (TUG) test, and Postural Assessment Scale for Stroke Patients (PASS). The data were analyzed by a two-way repeated measures ANOVA using SPSS 19.0. [Results] The results revealed a nonsignificant interaction effect between group and time period for both groups before and after the interventions in the BBS score, TUG score, and PASS score. In addition, the experimental group showed more improvement than the control group in the BBS, TUG and PASS scores, but the differences were not significant. In the comparisons within the groups by time, both groups showed significant improvement in BBS, TUG, and PASS scores. [Conclusion] The Space Balance 3D training with conventional physical therapy intervention is recommended for improvement of balance and mobility in acute stroke patients. PMID:26157270

  12. Effect of space balance 3D training using visual feedback on balance and mobility in acute stroke patients.

    PubMed

    Ko, YoungJun; Ha, HyunGeun; Bae, Young-Hyeon; Lee, WanHee

    2015-05-01

    [Purpose] The purpose of the study was to determine the effects of balance training with Space Balance 3D, which is a computerized measurement and visual feedback balance assessment system, on balance and mobility in acute stroke patients. [Subjects and Methods] This was a randomized controlled trial in which 52 subjects were assigned randomly into either an experimental group or a control group. The experimental group, which contained 26 subjects, received balance training with a Space Balance 3D exercise program and conventional physical therapy interventions 5 times per week for 3 weeks. Outcome measures were examined before and after the 3-week interventions using the Berg Balance Scale (BBS), Timed Up and Go (TUG) test, and Postural Assessment Scale for Stroke Patients (PASS). The data were analyzed by a two-way repeated measures ANOVA using SPSS 19.0. [Results] The results revealed a nonsignificant interaction effect between group and time period for both groups before and after the interventions in the BBS, TUG, and PASS scores. In addition, the experimental group showed more improvement than the control group in the BBS, TUG, and PASS scores, but the differences were not significant. In within-group comparisons over time, both groups showed significant improvement in BBS, TUG, and PASS scores. [Conclusion] Space Balance 3D training with conventional physical therapy intervention is recommended for improvement of balance and mobility in acute stroke patients.

  13. 3D visualization of the lumbar facet joint after degeneration using propagation phase contrast micro-tomography

    PubMed Central

    Cao, Yong; Zhang, Yi; Yin, Xianzheng; Lu, Hongbin; Hu, Jianzhong; Duan, Chunyue

    2016-01-01

    Lumbar facet joint (LFJ) degeneration is believed to be an important cause of low back pain (LBP). Identifying the morphological changes of the LFJ during degeneration at high resolution could improve our understanding of the possible mechanisms underlying this process. In the present study, we determined the 3D morphology of the LFJ using propagation phase contrast micro-tomography (PPCT) in rats to assess the subtle changes that occur during the degeneration process. PPCT provides vivid 3D images of micromorphological changes in the LFJ during degeneration, and changes in the subchondral bone occurred earlier than in the cartilage during the early stage of LFJ degeneration. The delineation of this alteration was similar to that obtained with histological methods. Our findings demonstrated that PPCT could serve as a valuable tool for 3D visualization of LFJ morphology by providing comprehensive information about the cartilage and the underlying subchondral bone and their changes during degeneration. It might also have great potential as an effective diagnostic tool to track changes in the cartilage and to evaluate the effects of therapeutic interventions for LFJ degeneration in preclinical studies. PMID:26907889

  14. Earth Science Research Discovery, Integration, 3D Visualization and Analysis using NASA World Wind

    NASA Astrophysics Data System (ADS)

    Alameh, N.; Hogan, P.

    2008-12-01

    NASA plays a leadership role in the world of Advanced Information Technologies. Part of our mission is to leverage those technologies to increase the usability of the growing amount of Earth observation data produced by the science community. NASA World Wind open source technology provides a complete 3D visualization platform that is being continually advanced by NASA, its partners, and the open source community. The technology makes scientific data and observations more accessible to Earth scientists and offers them a standards-based, extensible platform to manipulate and analyze that data. The API-centric architecture of World Wind's SDK allows others to readily extend or embed this technology (including in web pages). Such multiple approaches to using the technology accelerate opportunities for the research community to provide "advances in fundamental understanding of the Earth system and increased application of this understanding to serve the nation and the people of the world" (NRC Decadal Survey). The opportunities to advance this NASA Open Source Agreement (NOSA) technology by leveraging advances in web services, interoperability, data discovery mechanisms, and Sensor Web are unencumbered by proprietary constraints and therefore provide the basis for an evolving platform that can reliably serve the needs of the Earth Science, Sensor Web, and GEOSS communities. The ability of these communities not only to use this technology in an unrestricted manner but also to participate in advancing it leads to accelerated innovation and maximum exchange of information. Three characteristics enable World Wind to push the frontier in Advanced Information Systems: (1) World Wind provides a unifying information browser to enable a variety of 3D geospatial applications. World Wind consists of a coherent suite of modular components to be used selectively or in concert with any number of programs. (2) World Wind technology can be embedded as part of any application and hence makes it

  15. Interactive 3D Visualization: An Important Element in Dealing with Increasing Data Volumes and Decreasing Resources

    NASA Astrophysics Data System (ADS)

    Gee, L.; Reed, B.; Mayer, L.

    2002-12-01

    Recent years have seen remarkable advances in sonar technology, positioning capabilities, and computer processing power that have revolutionized the way we image the seafloor. The US Naval Oceanographic Office (NAVOCEANO) has updated its survey vessels and launches to the latest generation of technology and now possesses a tremendous ocean observing and mapping capability. However, these systems produce massive amounts of data that must be validated prior to inclusion in various bathymetry, hydrography, and imagery products. The key to meeting the challenge of the massive data volumes was to change the approach that required every data point to be viewed. This was achieved by replacing the traditional line-by-line editing approach with an automated cleaning module and an area-based editor. The approach includes a unique data structure that enables direct access to the full-resolution data from the area-based view, including a direct interface to target files and imagery snippets from mosaic and full-resolution imagery. The increased data volumes also offered tremendous opportunities in terms of visualization and analysis, and interactive 3D presentation of the complex multi-attribute data provided a natural complement to the area-based processing. If properly geo-referenced and treated, the complex data sets can be presented in a natural and intuitive manner that allows the integration of multiple components, each at its inherent level of resolution and without compromising the quantitative nature of the data. Artificial sun-illumination, shading, and 3-D rendering are used with digital bathymetric data to form natural-looking and easily interpretable, yet quantitative, landscapes that allow the user to rapidly identify the data requiring further processing or analysis. Color can be used to represent depth or other parameters (like backscatter, quality factors or sediment properties), which can be draped over the DTM, or high resolution
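
    The artificial sun-illumination mentioned above is commonly computed as a Lambertian hillshade of the depth grid. The sketch below illustrates the idea in Python; the function name, light angles, and cell size are assumptions for illustration, not NAVOCEANO's implementation:

```python
import numpy as np

def hillshade(dtm, cellsize=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    """Shade a depth/elevation grid with a directional light source."""
    az = np.radians(azimuth_deg)
    alt = np.radians(altitude_deg)
    # Surface gradients (d/drow, d/dcol) from the grid
    dzdy, dzdx = np.gradient(dtm, cellsize)
    slope = np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(-dzdx, dzdy)
    # Lambertian illumination: cosine of angle between normal and light
    shade = (np.sin(alt) * np.cos(slope)
             + np.cos(alt) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shade, 0.0, 1.0)

# A flat surface lit from 45 degrees altitude shades uniformly:
flat = np.zeros((4, 4))
print(hillshade(flat))
```

The shaded grid can then be multiplied with a colour ramp (depth, backscatter, or another draped parameter) to produce the quantitative landscape views described in the abstract.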

  16. MOM3D method of moments code theory manual

    NASA Technical Reports Server (NTRS)

    Shaeffer, John F.

    1992-01-01

    MOM3D is a FORTRAN algorithm that solves Maxwell's equations, as expressed via the electric field integral equation, for the electromagnetic response of open or closed three-dimensional surfaces modeled with triangle patches. Two joined triangles (couples) form the vector current unknowns for the surface. Boundary conditions are for perfectly conducting or resistive surfaces. The impedance matrix represents the fundamental electromagnetic interaction of the body with itself. A variety of electromagnetic analysis options are possible once the impedance matrix is computed, including backscatter radar cross section (RCS), bistatic RCS, antenna pattern prediction for user-specified body voltage excitation ports, RCS image projection showing RCS scattering center locations, surface currents excited on the body as induced by specified plane wave excitation, and near-field computation of the electric field on or near the body.
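
    Once the impedance matrix is filled, each analysis option above reduces to solving a linear system for the surface-current coefficients. A minimal numerical sketch (toy matrix entries, not the MOM3D FORTRAN code):

```python
import numpy as np

def solve_currents(Z, V):
    """Solve the method-of-moments system Z I = V for current coefficients."""
    return np.linalg.solve(Z, V)

# Toy 2x2 system with made-up complex entries (units and physics omitted):
Z = np.array([[2.0 + 1.0j, 0.5 + 0.0j],
              [0.5 + 0.0j, 1.0 + 0.5j]])   # impedance matrix
V = np.array([1.0 + 0.0j, 0.0 + 0.0j])     # excitation vector (plane wave or port)
I = solve_currents(Z, V)
print(I)  # current coefficients on the triangle couples
```

From the solved currents, quantities such as backscatter RCS or near fields follow by re-radiating the currents; that post-processing is omitted here.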

  17. 3D sensitivity of 6-electrode Focused Impedance Method (FIM)

    NASA Astrophysics Data System (ADS)

    Masum Iquebal, A. H.; Siddique-e Rabbani, K.

    2010-04-01

    The present work was undertaken to gain an understanding of the depth sensitivity of the six-electrode FIM developed earlier in our laboratory, so that it may be applied judiciously to the measurement of organs in 3D with electrodes on the skin surface. For a fixed electrode geometry, sensitivity is expected to depend on the depth, size and conductivity of the target object. With current electrodes 18 cm apart and potential electrodes 5 cm apart, the depth sensitivity for spherical conductors, insulators, and pieces of potato of different diameters was measured. The sensitivity dropped sharply with depth, gradually leveling off to the background, and objects could be sensed down to a depth of about twice their diameter. The sensitivity at a given depth increases almost linearly with volume for objects of the same conductivity. These results thus increase confidence in the use of FIM for studying organs at depth within the body.

  18. 3D and Education

    NASA Astrophysics Data System (ADS)

    Meulien Ohlmann, Odile

    2013-02-01

    Today the industry offers a chain of 3D products. Learning to "read" and to "create in 3D" becomes an educational issue of primary importance. Drawing on 25 years of professional experience in France, the United States and Germany, Odile Meulien has developed a personal method of initiation to 3D creation based on the spatial/temporal experience of the holographic visual. She will present some of the tools and techniques used for this learning, their advantages and disadvantages, programs and issues of educational policy, and the constraints and expectations related to the development of new techniques for 3D imaging. Although the creation of display holograms is much reduced compared with the 1990s, the holographic concept is spreading through all scientific, social, and artistic activities of our time. She will also raise many questions: What does 3D mean? Is it communication? Is it perception? How do seeing and not seeing interfere? What else must be taken into consideration to communicate in 3D? How should we handle the non-visible relations of moving objects with subjects? Does this transform our model of exchange with others? What kind of interaction does this have with our everyday life? Then come more practical questions: How does one learn to create 3D visualizations, to learn 3D grammar, 3D language, 3D thinking? For what purpose? At what level? In which subjects? For whom?

  19. A direct multi-volume rendering method aiming at comparisons of 3-D images and models.

    PubMed

    Jacq, J J; Roux, C J

    1997-03-01

    We present a new method for direct volume rendering of multiple three-dimensional (3-D) functions using a density emitter model. This work aims at obtaining a visual assessment of the results of a 3-D image registration algorithm which operates on anisotropic and non-segmented medical data. We first discuss the fundamentals associated with direct, simultaneous rendering of such datasets. Then, we recall fuzzy classification and fuzzy surface rendering theory within the density emitter model terminology, and propose an extension of standard direct volume rendering that can handle the rendering of two or more 3-D functions; this consists of the definition of merging rules that are applied to emitter clouds. The included rendering applications are related, on the one hand, to volume-to-volume registration and, on the other, to surface-to-volume registration: the first case is concerned with global elastic registration of CT data, and the second presents the fitting of an implicit surface over a CT data subset. In these two medical imaging application cases, our rendering scheme offers a comprehensive appreciation of the relative position of structural information.
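
    One way to picture the merging of emitter clouds is a single front-to-back compositing pass in which both functions contribute emission and opacity at every ray sample. The sketch below uses simple additive emission as an assumed merging rule for illustration, not the paper's exact formulation:

```python
def composite_ray(samples_a, samples_b):
    """Composite two density-emitter fields along one ray, front to back.

    samples_*: list of (emitted intensity, opacity) pairs at matched depths.
    """
    colour, transmittance = 0.0, 1.0
    for (ca, aa), (cb, ab) in zip(samples_a, samples_b):
        merged_c = ca * aa + cb * ab          # merged (premultiplied) emission
        merged_a = 1 - (1 - aa) * (1 - ab)    # combined opacity of both fields
        colour += transmittance * merged_c
        transmittance *= 1 - merged_a
    return colour

# A fully opaque front sample from volume A dominates the result:
print(composite_ray([(1.0, 1.0)], [(0.5, 0.0)]))  # -> 1.0
```

In practice each sample's emission and opacity would come from the fuzzy classification transfer functions of the respective dataset; a colour vector replaces the scalar intensity used here.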

  20. Dynamic 3D-visualization of merged geophysical and geological data sets from the Arctic

    NASA Astrophysics Data System (ADS)

    Jakobsson, M. E.

    2002-12-01

    Bringing together geophysical and geological data sets in a dynamic 3D environment can greatly enhance our ability to comprehend earth processes. The relationship between, for example, seafloor topography and measured gravity anomalies can easily be visualized, as can the distribution of magnetic anomalies in oceanic crust and their varying offset due to seafloor spreading. In this presentation the gravity derived from ERS-1 satellite altimetry by Laxon and McAdoo (1994) and the magnetic compilation by Verhoef et al. (1996) of the Arctic Ocean are co-registered with the International Bathymetric Chart of the Arctic Ocean (IBCAO) bathymetry and brought into a dynamic 3D environment for visualization and analysis. This exercise provides information of great value when we address the geologic origin of the Arctic Ocean physiographic provinces. Furthermore, since the ERS-1 gravity and IBCAO bathymetry are two entirely unrelated datasets, the gravity may also be used to validate seafloor features seen in the IBCAO compilation that are based on sparse data. For instance, at the easternmost end of the Gakkel Ridge Axial Valley the IBCAO bathymetry is based on digitized contour information from a Russian bathymetric map published in 1999 by the Russian Federation's Head Department of Navigation and Oceanography (HDNO) with no available trackline sources. In the bathymetry, the Axial Valley is clearly seen to continue towards the continental slope of the Laptev Sea, and this continuation is supported by the ERS-1 gravity. Another example of bringing together geological and geophysical data sets is from northern Russia, where huge ice lakes were dammed by the Early Weichselian ice sheet about 90,000 years ago (Mangerud et al., 2001). The damming resulted from the blocking of the north-flowing Russian rivers, which supply most of the fresh water to the Arctic Ocean, by the ice sheet margin. These proglacial lakes are reconstructed in our dynamic 3D environment based on field

  1. 3D Visualization of Sheath Folds in Roman Marble from Ephesus, Turkey

    NASA Astrophysics Data System (ADS)

    Wex, Sebastian; Passchier, Cornelis W.; de Kemp, Eric A.; Ilhan, Sinan

    2013-04-01

    Excavation of a palatial 2nd century AD house (Terrace House Two) in the ancient city of Ephesus, Turkey in the 1970s produced 10,313 pieces of colored, folded marble which belonged to 54 marble plates of 1.6 cm thickness that originally covered the walls of the banquet hall of the house. The marble plates were completely reassembled and restored by a team of workers over the last six years. The plates were recognized as having been sawn from two separate large blocks of "Cipollino verde", a green mylonitized marble from Karystos on the island of Euboea, Greece. After restoration, it became clear that all slabs had been placed on the wall in approximately the sequence in which they had been cut off by a Roman stone saw. As a result, the marble plates give full 3D insight into the folded internal structure of a 1 m³ block of mylonite. The restoration of the slabs was recognized as a first, unique opportunity for detailed reconstruction of the 3D geometry of m-scale folds in mylonitized marble. Photographs were taken of each slab and used to reconstruct their exact arrangement within the originally quarried blocks. Outlines of layers were digitized and a full 3D reconstruction of the internal structure of the block was created using ArcMap and GOCAD. Fold structures in the block include curtain folds and multilayered sheath folds. Several different layers showing these structures were digitized on the photographs of the slab surfaces and virtually mounted back together within the model of the marble block. Due to the serial sectioning into slabs, with cm-scale spacing, the visualization of the 3D geometry of sheath folds was accomplished with a resolution better than 4 cm. Final assembled 3D images reveal how sheath folds emerge from continuous layers and show their overall consistency as well as a constant hinge-line orientation of the fold structures. Observations suggest that a single deformation phase was responsible for the evolution of the "Cipollino verde" structures.
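
    The serial-sectioning reconstruction can be sketched as follows: outlines digitized on each slab photograph acquire a third coordinate from the slab's position in the sawing sequence. The function and the slab spacing below are hypothetical illustrations, not the ArcMap/GOCAD workflow used in the study:

```python
import numpy as np

def stack_sections(outlines, spacing_cm=1.6):
    """Turn per-slab 2-D digitized outlines into one 3-D point set.

    outlines: list of (N_i, 2) arrays, one outline per slab, in cutting order.
    """
    layers = []
    for i, xy in enumerate(outlines):
        z = np.full((len(xy), 1), i * spacing_cm)  # slab position along the cut
        layers.append(np.hstack([xy, z]))
    return np.vstack(layers)

# Two toy slabs with digitized layer-boundary points:
outlines = [np.array([[0.0, 0.0], [1.0, 0.0]]), np.array([[0.0, 0.1]])]
pts3d = stack_sections(outlines)
print(pts3d.shape)  # -> (3, 3)
```

Interpolating surfaces through such stacked outlines is what yields the continuous 3D fold geometry at a resolution set by the slab spacing.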

  2. Earthscape, a Multi-Purpose Interactive 3d Globe Viewer for Hybrid Data Visualization and Analysis

    NASA Astrophysics Data System (ADS)

    Sarthou, A.; Mas, S.; Jacquin, M.; Moreno, N.; Salamon, A.

    2015-08-01

    The hybrid visualization and interaction tool EarthScape is presented here. The software is able to display simultaneously LiDAR point clouds, draped videos with moving footprints, volumetric scientific data (using volume rendering, isosurfaces and slice planes), raster data such as still satellite images, vector data, and 3D models such as buildings or vehicles. The application runs on touch-screen devices such as tablets. The software is based on open source libraries such as OpenSceneGraph, osgEarth and OpenCV, and shader programming is used to implement volume rendering of scientific data. The next goal of EarthScape is to perform data analysis using ENVI Services Engine, a cloud data analysis solution. EarthScape is also designed to be a client of Jagwire, which provides multisource geo-referenced video streams. When all these components are included, EarthScape will be a multi-purpose platform providing data analysis, hybrid visualization, and complex interaction at the same time. The software is available on demand for free at france@exelisvis.com.

  3. In situ visualization of magma deformation at high temperature using time-lapse 3D tomography

    NASA Astrophysics Data System (ADS)

    Godinho, jose; Lee, Peter; Lavallee, Yan; Kendrick, Jackie; Von-Aulock, Felix

    2016-04-01

    We use synchrotron-based x-ray computed micro-tomography (sCT) to visualize, in situ, the microstructural evolution of magma samples of 3 mm diameter, with a resolution of 3 μm, during heating and uniaxial compression at temperatures up to 1040 °C. The interaction between crystals, melt and gas bubbles is analysed in 4D (3D + time) during sample deformation. The ability to observe the changes of the microstructure as a function of time allows us to: a) study the effect of temperature on the ability of magma to fracture or deform; b) quantify bubble nucleation and growth rates during heating; c) study the relation between crystal displacement and volatile exsolution. We will show unique videos of how bubbles grow and coalesce, and how samples and the crystals within them fracture, heal and deform. Our study establishes in situ sCT as a powerful tool to quantify and visualize, with micro-scale resolution, fast processes taking place in magma that are essential for understanding ascent in a volcanic conduit and for validating existing models for determining the explosivity of volcanic eruptions. Tracking the time and spatial changes of magma microstructures simultaneously proves essential for studying disequilibrium processes between crystals, melt and gas phases.

  4. 3D imaging of microbial biofilms: integration of synchrotron imaging and an interactive visualization interface.

    PubMed

    Thomas, Mathew; Marshall, Matthew J; Miller, Erin A; Kuprat, Andrew P; Kleese-van Dam, Kerstin; Carson, James P

    2014-01-01

    Understanding the structure of microbial biofilms and other complex microbial communities is now possible through x-ray microtomography imaging. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilm biomass in the datasets. These datasets are very large, and segmentation often requires manual intervention due to low contrast between objects and high noise levels. New software is required for the effective interpretation and analysis of such data. This work describes the development of software to analyze and visualize high-resolution x-ray microtomography datasets. Major functionalities include reading/writing multiple popular file formats, down-sampling large datasets to generate quick-views on low-power computers, image processing, and generating high-quality output images and videos. These capabilities have been wrapped into a new interactive software toolkit, BiofilmViewer. A major focus of our work is to facilitate data transfer and to utilize the capabilities of existing powerful visualization and analytical tools including MATLAB, ImageJ, Paraview, Chimera, Vaa3D, Cell Profiler, Icy, BioImageXD, and Drishti.
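
    The down-sampling used for quick-views of large volumes can be as simple as strided voxel selection. A minimal sketch (the function name and strategy are illustrative, not BiofilmViewer's API):

```python
import numpy as np

def downsample(volume, factor):
    """Reduce each axis of a 3-D array by keeping every `factor`-th voxel."""
    return volume[::factor, ::factor, ::factor]

# An 8x8x8 toy volume reduced by a factor of 2 per axis (8x less data):
vol = np.arange(8 * 8 * 8).reshape(8, 8, 8)
quick = downsample(vol, 2)
print(quick.shape)  # -> (4, 4, 4)
```

Block averaging instead of striding would suppress noise at the cost of extra computation; either way the quick-view is only a preview, with analysis done on the full-resolution data.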

  5. Web-based Visualization and Query of semantically segmented multiresolution 3D Models in the Field of Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Auer, M.; Agugiaro, G.; Billen, N.; Loos, L.; Zipf, A.

    2014-05-01

    Many important Cultural Heritage sites have been studied over long periods of time using different technical equipment, methods and intentions by different researchers. This has led to huge amounts of heterogeneous "traditional" datasets and formats. The rising popularity of 3D models in the field of Cultural Heritage in recent years has brought additional data formats and makes it even more necessary to find solutions to manage, publish and study these data in an integrated way. The MayaArch3D project aims to realize such an integrative approach by establishing a web-based research platform that brings spatial and non-spatial databases together and provides visualization and analysis tools. The 3D components of the platform, in particular, use hierarchical segmentation concepts to structure the data and to perform queries on semantic entities. This paper presents a database schema that organizes not only segmented models but also different levels of detail and other representations of the same entity. It is implemented in a spatial database, which allows the storage of georeferenced 3D data. This enables organization and queries by semantic, geometric and spatial properties. As a service for the delivery of the segmented models, a standardization candidate of the Open Geospatial Consortium (OGC), the Web 3D Service (W3DS), has been extended to cope with the new database schema and deliver a web-friendly format for WebGL rendering. Finally, a generic user interface is presented which uses the segments as a navigation metaphor to browse and query the semantic segmentation levels and retrieve information from an external database of the German Archaeological Institute (DAI).

  6. Delaunay algorithm and principal component analysis for 3D visualization of mitochondrial DNA nucleoids by Biplane FPALM/dSTORM.

    PubMed

    Alán, Lukáš; Špaček, Tomáš; Ježek, Petr

    2016-07-01

    Data segmentation and object rendering are required for localization super-resolution microscopy, fluorescence photoactivation localization microscopy (FPALM), and direct stochastic optical reconstruction microscopy (dSTORM). We developed and validated methods for segmenting objects based on Delaunay triangulation in 3D space, followed by facet culling. We applied them to visualize mitochondrial nucleoids, which confine DNA in complexes with mitochondrial (mt) transcription factor A (TFAM) and gene-expression machinery proteins, such as mt single-stranded-DNA-binding protein (mtSSB). Eos2-conjugated TFAM visualized nucleoids in HepG2 cells, which was compared with dSTORM 3D immunocytochemistry of TFAM, mtSSB, or DNA. The localized fluorophores of the FPALM/dSTORM data were segmented using Delaunay triangulation into polyhedron models and, by principal component analysis (PCA), into general PCA ellipsoids. The PCA ellipsoids were normalized to the smoothed volume of the polyhedrons or by the net unsmoothed Delaunay volume and remodeled into rotational ellipsoids to obtain models, termed DVRE. The most frequent size of the ellipsoid nucleoid model imaged via TFAM was 35 × 45 × 95 nm, or 35 × 45 × 75 nm for mtDNA cores, and 25 × 45 × 100 nm for nucleoids imaged via mtSSB. Nucleoids encompassed different point densities and wide size ranges, speculatively due to different activity stemming from different TFAM/mtDNA stoichiometry/density. Considering the twofold lower axial vs. lateral resolution, only bulky DVRE models with an aspect ratio >3 and tilted toward the xy-plane were considered to be two proximal nucleoids, suspected to occur after division following mtDNA replication. The existence of proximal nucleoids in dSTORM 3D images of mtDNA "doubling" supported possible direct observation of mt nucleoid division after mtDNA replication.
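
    The PCA step can be sketched in a few lines: the eigenvectors of a localization cloud's covariance give the ellipsoid axes, and the square roots of the eigenvalues give semi-axis scales. This is a hedged illustration of the general technique, not the authors' exact normalization:

```python
import numpy as np

def pca_ellipsoid(points):
    """Return centre, axis directions, and std-dev semi-axes of a point cloud."""
    centre = points.mean(axis=0)
    cov = np.cov((points - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    return centre, eigvecs, np.sqrt(eigvals)

# Elongated synthetic cloud: the longest semi-axis follows the long dimension
rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3)) * [10.0, 2.0, 1.0]
centre, axes, semi = pca_ellipsoid(pts)
print(semi)  # semi-axes, smallest to largest
```

In the paper these raw PCA ellipsoids are subsequently rescaled against the Delaunay polyhedron volume and remodeled into rotational ellipsoids (DVRE); that normalization is omitted here.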

  7. Development of 3-D Ice Accretion Measurement Method

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Broeren, Andy P.; Addy, Harold E., Jr.; Sills, Robert; Pifer, Ellen M.

    2012-01-01

    A research plan is currently being implemented by NASA to develop and validate the use of a commercial laser scanner to record and archive fully three-dimensional (3-D) ice shapes from an icing wind tunnel. The plan focused specifically upon measuring ice accreted in the NASA Icing Research Tunnel (IRT). The plan was divided into two phases. The first phase was the identification and selection of the laser scanning system and the post-processing software to purchase and develop further. The second phase was the implementation and validation of the selected system through a series of icing and aerodynamic tests. Phase I of the research plan has been completed. It consisted of evaluating several scanning hardware and software systems against established selection criteria through demonstrations in the IRT. The results of Phase I showed that all of the scanning systems evaluated were equally capable of scanning ice shapes. The factors that differentiated the scanners were ease of use and the ability to operate in a wide range of IRT environmental conditions.

  8. Building a 3D Virtual Liver: Methods for Simulating Blood Flow and Hepatic Clearance on 3D Structures.

    PubMed

    White, Diana; Coombe, Dennis; Rezania, Vahid; Tuszynski, Jack

    2016-01-01

    In this paper, we develop a spatio-temporal modeling approach to describe blood and drug flow, as well as drug uptake and elimination, on an approximation of the liver. Extending on previously developed computational approaches, we generate an approximation of a liver, which consists of a portal and hepatic vein vasculature structure, embedded in the surrounding liver tissue. The vasculature is generated via constrained constructive optimization, and then converted to a spatial grid of a selected grid size. Estimates for surrounding upscaled lobule tissue properties are then presented appropriate to the same grid size. Simulation of fluid flow and drug metabolism (hepatic clearance) are completed using discretized forms of the relevant convective-diffusive-reactive partial differential equations for these processes. This results in a single stage, uniformly consistent method to simulate equations for blood and drug flow, as well as drug metabolism, on a 3D structure representative of a liver. PMID:27649537
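
    The discretized convective-diffusive-reactive update can be illustrated with a 1-D analogue; the grid, coefficients, and explicit scheme below are assumptions for the sketch, not the authors' 3-D solver:

```python
import numpy as np

def step(c, v, D, k, dx, dt):
    """One explicit time step of convection-diffusion-reaction on a periodic grid."""
    adv = -v * (c - np.roll(c, 1)) / dx                            # upwind convection
    dif = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2     # central diffusion
    rxn = -k * c                                                   # first-order clearance
    return c + dt * (adv + dif + rxn)

c = np.zeros(50)
c[0] = 1.0  # bolus of drug entering the domain
for _ in range(100):
    c = step(c, v=1.0, D=0.1, k=0.05, dx=1.0, dt=0.1)
print(c.sum())  # total mass decays through the clearance term only
```

On the periodic toy grid, convection and diffusion conserve mass exactly, so the total decays by the factor (1 - k·dt) per step; in the liver model the same operators act on the 3-D vascular/tissue grid with spatially varying properties.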

  9. Building a 3D Virtual Liver: Methods for Simulating Blood Flow and Hepatic Clearance on 3D Structures

    PubMed Central

    Rezania, Vahid; Tuszynski, Jack

    2016-01-01

    In this paper, we develop a spatio-temporal modeling approach to describe blood and drug flow, as well as drug uptake and elimination, on an approximation of the liver. Extending on previously developed computational approaches, we generate an approximation of a liver, which consists of a portal and hepatic vein vasculature structure, embedded in the surrounding liver tissue. The vasculature is generated via constrained constructive optimization, and then converted to a spatial grid of a selected grid size. Estimates for surrounding upscaled lobule tissue properties are then presented appropriate to the same grid size. Simulation of fluid flow and drug metabolism (hepatic clearance) are completed using discretized forms of the relevant convective-diffusive-reactive partial differential equations for these processes. This results in a single stage, uniformly consistent method to simulate equations for blood and drug flow, as well as drug metabolism, on a 3D structure representative of a liver. PMID:27649537

  10. Attribute-based point cloud visualization in support of 3-D classification

    NASA Astrophysics Data System (ADS)

    Zlinszky, András; Otepka, Johannes; Kania, Adam

    2016-04-01

    Despite the rich information available in LIDAR point attributes through full-waveform recording, radiometric calibration, and advanced texture metrics, LIDAR-based classification is mostly done in the raster domain. Point-based analyses such as noise removal or terrain filtering are often carried out without visual investigation of the point cloud attributes used. This is because point cloud visualization software usually handles only a limited number of pre-defined point attributes and only allows colorizing the point cloud with one of these at a time. Meanwhile, point cloud classification is rapidly evolving, and uses not only individual attributes but combinations of them. In order to understand input data and output results better, more advanced methods for visualization are needed. Here we propose an algorithm of the OPALS software package that handles visualization of the point cloud together with its attributes. The algorithm is based on the .odm (OPALS data manager) file format, which efficiently handles a large number of pre-defined point attributes and also allows the user to generate new ones. Attributes of interest can be visualized individually by applying predefined or user-generated palettes in a simple .xml format. The colours of the palette are assigned by setting the Red, Green and Blue attributes of each point to the colour pre-defined by the palette for the corresponding attribute value. The algorithm handles scaling and histogram equalization based on the distribution of the point attribute to be considered. Additionally, combinations of attributes can be visualized based on RGB colour mixing. The output dataset can be in any standard format where RGB attributes are supported and visualized with conventional point cloud viewing software. Viewing the point cloud together with its attributes allows efficient selection of filter settings and classification parameters. For already classified point clouds, a large
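
    The palette-based colorization can be sketched compactly: histogram-equalize the attribute, then index a palette to set each point's RGB. The function name and palette format below are illustrative, not the OPALS .xml interface:

```python
import numpy as np

def colorize(attr, palette):
    """Map each attribute value to an RGB triple via histogram equalization."""
    ranks = np.argsort(np.argsort(attr))              # rank transform = equalization
    idx = ranks * (len(palette) - 1) // max(len(attr) - 1, 1)
    return palette[idx]                               # per-point RGB rows

palette = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]])  # blue -> green -> red
attr = np.array([0.1, 5.0, 2.5])                      # e.g. echo amplitude per point
rgb = colorize(attr, palette)
print(rgb)
```

Rank-based indexing spreads the palette evenly over the attribute's empirical distribution; a linear min-max scaling would be the non-equalized alternative.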

  11. a New Idea of Bim System for Visualization, Web Sharing and Using Huge Complex 3d Models for Facility Management.

    NASA Astrophysics Data System (ADS)

    Fassi, F.; Achille, C.; Mandelli, A.; Rechichi, F.; Parri, S.

    2015-02-01

    The work is the final part of a multi-year research project on Milan Cathedral, which focused on the complete survey and three-dimensional modeling of the Great Spire (Fassi et al., 2011) and the two altars in the transept. The main purpose of the job was to prepare support data for the maintenance operations involving the cathedral since 2009 and still in progress. The research began by identifying which methods would allow an expeditious but comprehensive measurement of a complex architectural structure as a whole (Achille et al., 2012). Subsequent research focused mainly on finding an efficient method to visualize, use and share the resulting 3D model.

  12. 3D documentation and visualization of external injury findings by integration of simple photography in CT/MRI data sets (IprojeCT).

    PubMed

    Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula

    2016-05-01

    This study evaluated the feasibility of documenting patterned injury using three dimensions and true colour photography without complex 3D surface documentation methods. This method is based on a generated 3D surface model using radiologic slice images (CT) while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of the deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (Image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography in CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitution for photogrammetry and surface scanning, especially when the entire bodily surface is to be recorded in three dimensions including all external findings, and when precise data is required for comparing highly detailed injury features with the injury-inflicting tool.
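The core of the image-projection step is sampling photograph colours for the vertices of the CT-derived surface model through a calibrated camera. A minimal pinhole-camera sketch is given below; the function name and interface are hypothetical, and the marker-based estimation of the camera pose (R, t) is assumed to have been done beforehand.

```python
import numpy as np

def project_colors(vertices, image, K, R, t):
    """Sample photo colours for 3-D surface vertices (pinhole model).

    vertices : (N, 3) vertex positions in the CT frame.
    image    : (H, W, 3) photograph.
    K        : (3, 3) camera intrinsic matrix.
    R, t     : rotation (3, 3) and translation (3,) from the CT frame
               to the camera frame, e.g. estimated from radiographic markers.
    Returns (N, 3) colours; vertices projecting outside the image or
    behind the camera get black.
    """
    cam = vertices @ R.T + t          # CT frame -> camera frame
    pix = cam @ K.T                   # perspective projection (homogeneous)
    uv = pix[:, :2] / pix[:, 2:3]     # dehomogenize to pixel coordinates
    h, w = image.shape[:2]
    u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
    inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
              (uv[:, 1] >= 0) & (uv[:, 1] < h) & (cam[:, 2] > 0))
    colors = np.zeros((len(vertices), 3), dtype=image.dtype)
    colors[inside] = image[v[inside], u[inside]]
    return colors
```

A production system would additionally need occlusion handling (only front-facing, visible vertices should receive colour), which is omitted here.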

  14. 3D Visual Data-Driven Spatiotemporal Deformations for Non-Rigid Object Grasping Using Robot Hands.

    PubMed

    Mateo, Carlos M; Gil, Pablo; Torres, Fernando

    2016-05-05

    Sensing techniques are important for solving problems of uncertainty inherent to intelligent grasping tasks. The main goal here is to present a visual sensing system based on range imaging technology for robot manipulation of non-rigid objects. Our proposal provides a suitable visual perception system for complex grasping tasks to support a robot controller when other sensor systems, such as tactile and force, are not able to obtain useful data relevant to the grasping manipulation task. In particular, a new visual approach based on RGBD data was implemented to help a robot controller carry out intelligent manipulation tasks with flexible objects. The proposed method supervises the interaction between the grasped object and the robot hand in order to avoid poor contact between the fingertips and an object when there is neither force nor pressure data. This new approach is also used to measure changes to the shape of an object's surfaces and so allows us to find deformations caused by inappropriate pressure being applied by the hand's fingers. Tests were carried out for grasping tasks involving several flexible household objects with a multi-fingered robot hand working in real time. Our approach generates pulses from the deformation detection method and sends an event message to the robot controller when surface deformation is detected. In comparison with other methods, the obtained results reveal that our visual pipeline does not require deformation models of objects and materials, and the approach works well for both planar and 3D household objects in real time. In addition, our method does not depend on the pose of the robot hand because the location of the reference system is computed from a recognition process of a pattern located at the robot forearm. The presented experiments demonstrate that the proposed method accomplishes good monitoring of grasping tasks with several objects and different grasping configurations in indoor environments.
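The event-generation idea of the abstract, comparing the observed surface against a reference and signalling the controller when it deviates, can be illustrated with a simple depth-map comparison. This is a hedged sketch, not the authors' pipeline (which works on RGBD surface reconstructions rather than raw depth differences); the function name and thresholds are assumptions.

```python
import numpy as np

def detect_deformation(depth_ref, depth_cur, threshold_m=0.005, min_pixels=50):
    """Flag a surface deformation between two depth frames.

    Compares a reference depth map of the grasped object with the
    current frame; if enough pixels deviate by more than `threshold_m`
    (metres), an event should be sent to the robot controller.
    Pixels with no depth reading (value 0) in either frame are ignored.
    """
    ref = np.asarray(depth_ref, dtype=float)
    cur = np.asarray(depth_cur, dtype=float)
    valid = (ref > 0) & (cur > 0)
    deviating = valid & (np.abs(ref - cur) > threshold_m)
    return int(deviating.sum()) >= min_pixels
```

In a control loop, a True return would trigger the event message to the controller to reduce finger pressure.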

  16. CMAS 3D, a new program to visualize and project major elements compositions in the CMAS system

    NASA Astrophysics Data System (ADS)

    France, L.; Ouillon, N.; Chazot, G.; Kornprobst, J.; Boivin, P.

    2009-06-01

    CMAS 3D, developed in MATLAB®, is a program to support visualization of major element chemical data in three dimensions. Such projections are used to discuss correlations, metamorphic reactions and the chemical evolution of rocks, melts or minerals. It can also project data into 2D plots. The CMAS 3D interface makes it easy to use, and does not require any knowledge of MATLAB® programming. CMAS 3D uses data compiled in a Microsoft Excel™ spreadsheet. Although useful for scientific research, the program is also a powerful tool for teaching.

  17. Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT

    NASA Astrophysics Data System (ADS)

    Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake

    2015-03-01

    Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlight and mobile portal. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real-time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures. The proposed idea was fully implemented as independent planning software and additional
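Steps (4) and (5) amount to weighting voxel opacity by a label-to-importance lookup, so that critical structures dominate the rendering. A minimal sketch under assumed names is given below; the real system applies a full colour/opacity transfer function rather than the alpha-only weighting shown here.

```python
import numpy as np

def highlight_volume(ct, labels, importance, base_alpha=0.05):
    """Scale per-voxel opacity by surgical importance.

    ct         : 3-D CT volume (used only for its shape here).
    labels     : 3-D integer array of structure labels from the
                 atlas-based segmentation.
    importance : dict mapping label -> importance in [0, 1]
                 (the 'surgical dictionary').
    Returns a per-voxel alpha channel: critical structures stay
    opaque while unimportant tissue becomes nearly transparent.
    """
    alpha = np.full(ct.shape, base_alpha)
    for label, w in importance.items():
        alpha[labels == label] = base_alpha + (1.0 - base_alpha) * w
    return alpha
```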

  18. Use and Evaluation of 3D GeoWall Visualizations in Undergraduate Space Science Classes

    NASA Astrophysics Data System (ADS)

    Turner, N. E.; Hamed, K. M.; Lopez, R. E.; Mitchell, E. J.; Gray, C. L.; Corralez, D. S.; Robinson, C. A.; Soderlund, K. M.

    2005-12-01

    One persistent difficulty many astronomy students face is the lack of a 3-dimensional mental model of the systems being studied, in particular the Sun-Earth-Moon system. Students without such a mental model can have a very hard time conceptualizing the geometric relationships that cause, for example, the cycle of lunar phases or the pattern of seasons. The GeoWall is a recently developed and affordable projection mechanism for three-dimensional stereo visualization which is becoming a popular tool in classrooms and research labs for use in geology classes, but as yet very little work has been done involving the GeoWall for astronomy classes. We present results from a large study involving over 1000 students of varied backgrounds: some students were tested at the University of Texas at El Paso, a large public university on the US-Mexico border, and other students were from the Florida Institute of Technology, a small, private, technical school in Melbourne, Florida. We wrote a lecture tutorial-style lab to go along with a GeoWall 3D visual of the Earth-Moon system and tested the students before and after with several diagnostics. Students were given pre and post tests using the Lunar Phase Concept Inventory (LPCI) as well as a separate evaluation written specifically for this project. We found the lab useful for both populations of students, but not equally effective for all. We discuss reactions from the students and their improvement, as well as whether the students are able to correctly assess the usefulness of the project for their own learning.

  19. A novel alternative method for 3D visualisation in Parasitology: the construction of a 3D model of a parasite from 2D illustrations.

    PubMed

    Teo, B G; Sarinder, K K S; Lim, L H S

    2010-08-01

    Three-dimensional (3D) models of the marginal hooks, dorsal and ventral anchors, bars and haptoral reservoirs of a parasite, Sundatrema langkawiense Lim & Gibson, 2009 (Monogenea) were developed using the polygonal modelling method in Autodesk 3ds Max (Version 9) based on two-dimensional (2D) illustrations. Maxscripts were written to rotate the modelled 3D structures. Appropriately orientated 3D haptoral hard-parts were then selected and positioned within the transparent 3D outline of the haptor and grouped together to form a complete 3D haptoral entity. This technique is an inexpensive tool for constructing 3D models from 2D illustrations for 3D visualisation of the spatial relationships between the different structural parts within organisms. PMID:20962723

  20. A 3D Earth orbit model; visualization and analysis of Milankovitch cycles and insolation

    NASA Astrophysics Data System (ADS)

    Gilb, R. D.; Kostadinov, T. S.

    2012-12-01

    An astronomically precise and accurate Earth orbit graphical model, Earth orbit v2.0, is presented. The model offers 3D visualizations of Earth's orbital geometry, Milankovitch parameters and the ensuing insolation forcings. Prevalent paleoclimatic theories invoke Milankovitch cycles as a major forcing mechanism capable of shifting Earth's climate regimes on time scales of tens to hundreds of thousands of years. Variability of eccentricity (ellipticity of orbit), precession (longitude of perihelion) and obliquity (Earth's axial tilt) changes parameters such as amplitude of seasonal insolation, timing of seasons with respect to perihelion, and total annual insolation. Hays et al. (1976) demonstrated a strong link between Milankovitch cycles and paleoclimatological records, which has been confirmed and expanded many times since (e.g. Berger et al., 1994; Berger et al., 2010). The complex interplay of several orbital parameters on various time scales makes assessment and visualization of Earth's orbit and spatio-temporal insolation variability challenging. It is difficult to appreciate the pivotal importance of Kepler's laws of planetary motion in controlling the effects of Milankovitch cycles on insolation patterns on various spatio-temporal scales. These factors also make Milankovitch theory difficult to teach effectively. The model allows substantial user control in a robust, yet intuitive and user-friendly graphical user interface (GUI) developed in Matlab. We present the user with a choice between Berger et al. (1978) and Laskar et al. (2004) astronomical solutions for eccentricity, obliquity and precession. Berger solutions span from -1 Myr to +1 Myr, while Laskar provides solutions from -101 Myr to +21 Myr since J2000. Users can also choose a "demo" mode which allows the three Milankovitch parameters to be varied independently of each other, so the user can isolate the effects of each on orbital geometry and insolation. 
For example, extreme eccentricity can be
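The model's insolation calculations rest on Kepler's laws: the Sun-Earth distance at any point in the orbit follows from the eccentricity and the true anomaly, which in turn requires solving Kepler's equation. A minimal sketch of that computation is given below; it is an illustrative standalone calculation, not code from Earth orbit v2.0 (which is written in Matlab).

```python
import math

def true_anomaly(mean_anomaly, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric
    anomaly E by Newton iteration, then convert E to the true anomaly."""
    E = mean_anomaly if e < 0.8 else math.pi   # standard starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - mean_anomaly) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))

def sun_earth_distance(a_au, e, nu):
    """Heliocentric distance from the orbit equation
    r = a(1 - e^2) / (1 + e*cos(nu))."""
    return a_au * (1 - e * e) / (1 + e * math.cos(nu))
```

Since top-of-atmosphere insolation scales as 1/r², even the small present-day eccentricity (about 0.0167) produces a several-percent difference between perihelion and aphelion, which is exactly the kind of effect the model lets users isolate in its "demo" mode.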

  1. 3D Visualization of Monte-Carlo Simulation's of HZE Track Structure and Initial Chemical Species

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2009-01-01

    Heavy ion biophysics is important for space radiation risk assessment [1] and hadron-therapy [2]. The characteristics of heavy ion tracks include a very high energy deposition region close to the track (<20 nm), denoted as the track core, and an outer penumbra region consisting of individual secondary electrons (δ-rays). A still open question is the radiobiological effect of δ-rays relative to the track core. Of importance is the induction of double-strand breaks (DSB) [3] and oxidative damage to the biomolecules and the tissue matrix, considered the most important lesions for acute and long term effects of radiation. In this work, we have simulated a 56Fe26+ ion track of 1 GeV/amu with our Monte-Carlo code RITRACKS [4]. The simulation results have been used to calculate the energy deposition and initial chemical species in a "voxelized" space, which is then visualized in 3D. Several voxels with dose >1000 Gy are found in the penumbra, some located 0.1 mm from the track core. In computational models, the DSB induction probability is calculated with radial dose [6], which may not take into account the higher RBE of electron track ends for DSB induction. Therefore, these simulations should help improve models of DSB induction and our understanding of heavy ion biophysics.
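The "voxelized" dose calculation amounts to binning the Monte-Carlo energy-deposition events into a 3D grid and dividing by voxel mass. The sketch below illustrates that conversion under assumed names and a unit-density water medium; it is not the RITRACKS implementation.

```python
import numpy as np

EV_TO_J = 1.602176634e-19   # electron-volt in joules

def voxel_dose(positions_nm, edep_eV, voxel_nm=20.0, extent_nm=1000.0,
               density_g_cm3=1.0):
    """Bin point energy depositions into cubic voxels and convert to dose (Gy).

    positions_nm : (N, 3) deposition coordinates in nanometres.
    edep_eV      : (N,) energy deposited at each point, in eV.
    """
    n = int(extent_nm / voxel_nm)
    idx = np.clip((np.asarray(positions_nm) / voxel_nm).astype(int), 0, n - 1)
    grid = np.zeros((n, n, n))
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), edep_eV)
    # voxel mass in kg: volume (cm^3) * density (g/cm^3) / 1000
    voxel_cm = voxel_nm * 1e-7
    mass_kg = voxel_cm ** 3 * density_g_cm3 / 1000.0
    return grid * EV_TO_J / mass_kg     # dose in Gy = J / kg
```

The tiny voxel mass (about 8e-21 kg for a 20 nm voxel of water) is why individual δ-ray track ends can produce local doses above 1000 Gy even far from the core.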

  2. 3D visualization of sheath folds in Ancient Roman marble wall coverings from Ephesos, Turkey

    NASA Astrophysics Data System (ADS)

    Wex, Sebastian; Passchier, Cees W.; de Kemp, Eric A.; İlhan, Sinan

    2014-10-01

    Archaeological excavations and restoration of a palatial Roman housing complex in Ephesos, Turkey yielded 40 wall-decorating plates of folded mylonitic marble (Cipollino verde), derived from the internal Hellenides near Karystos, Greece. Cipollino verde was commonly used for decoration purposes in Roman buildings. The plates were serial-sectioned from a single quarried block of 1.25 m³ and provided a research opportunity for detailed reconstruction of the 3D geometry of meter-scale folds in mylonitized marble. A GOCAD model is used to visualize the internal fold structures of the marble, comprising curtain folds and multilayered sheath folds. The sheath folds are unusual in that they have their intermediate axis normal to the parent layering. This agrees with regional tectonic studies, which suggest that Cipollino verde structures formed by local constrictional non-coaxial flow. Sheath fold cross-section geometry, exposed on the surface of a plate or outcrop, is found to be independent of the intersection angle of the fold structure with the studied plane. Consequently, a single surface cannot be used as an indicator of the three-dimensional geometry of transected sheath folds.

  3. Shifting Sands and Turning Tides: Using 3D Visualization Technology to Shape the Environment for Undergraduate Students

    NASA Astrophysics Data System (ADS)

    Jenkins, H. S.; Gant, R.; Hopkins, D.

    2014-12-01

    Teaching natural science in a technologically advancing world requires that our methods reach beyond the traditional computer interface. Innovative 3D visualization techniques and real-time augmented user interfaces enable students to create realistic environments to understand the world around them. Here, we present a series of laboratory activities that utilize an Augmented Reality Sandbox to teach basic concepts of hydrology, geology, and geography to undergraduates at Harvard University and the University of Redlands. The Augmented Reality (AR) Sandbox utilizes a real sandbox that is overlain by a digital projection of topography and a color elevation map. A Microsoft Kinect 3D camera feeds altimetry data into a software program that maps this information onto the sand surface using a digital projector. Students can then manipulate the sand and observe as the Sandbox augments their manipulations with projections of contour lines, an elevation color map, and a simulation of water. The idea for the AR Sandbox was conceived at MIT by the Tangible Media Group in 2002 and the simulation software used here was written and developed by Dr. Oliver Kreylos of the University of California - Davis as part of the NSF funded LakeViz3D project. Between 2013 and 2014, we installed AR Sandboxes at Harvard and the University of Redlands, respectively, and developed laboratory exercises to teach flooding hazard, erosion and watershed development in undergraduate earth and environmental science courses. In 2013, we introduced a series of AR Sandbox laboratories in Introductory Geology, Hydrology, and Natural Disasters courses. We found laboratories that utilized the AR Sandbox at both universities allowed students to become quickly immersed in the learning process, enabling a more intuitive understanding of the processes that govern the natural world. 
The physical interface of the AR Sandbox reduces barriers to learning, can be used to rapidly illustrate basic concepts of geology

  4. 3-D visualization and non-linear tissue classification of breast tumors using ultrasound elastography in vivo.

    PubMed

    Sayed, Ahmed; Layne, Ginger; Abraham, Jame; Mukdadi, Osama M

    2014-07-01

    The goal of the study described here was to introduce new methods for the classification and visualization of human breast tumors using 3-D ultrasound elastography. A tumor's type, shape and size are key features that can help the physician to decide the sort and extent of necessary treatment. In this work, tumor type, being either benign or malignant, was classified non-invasively for nine volunteer patients. The classification was based on estimating four parameters that reflect the tumor's non-linear biomechanical behavior, under multi-compression levels. Tumor prognosis using non-linear elastography was confirmed with biopsy as a gold standard. Three tissue classification parameters were found to be statistically significant with a p-value < 0.05, whereas the fourth non-linear parameter was highly significant, having a p-value < 0.001. Furthermore, each breast tumor's shape and size were estimated in vivo using 3-D elastography, and were enhanced using interactive segmentation. Segmentation with level sets was used to isolate the stiff tumor from the surrounding soft tissue. Segmentation also provided a reliable means to estimate tumor volumes. Four volumetric strains were investigated: the traditional normal axial strain, the first principal strain, von Mises strain and maximum shear strain. It was noted that these strains can provide varying degrees of boundary enhancement to the stiff tumor in the constructed elastograms. The enhanced boundary improved the performance of the segmentation process. In summary, the proposed methods can be employed as a 3-D non-invasive tool for characterization of breast tumors, and may provide early prognosis with minimal pain, as well as diminish the risk of late-stage breast cancer.
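The four strain measures named in the abstract are all scalar reductions of the local strain tensor. A self-contained sketch of how they would be computed from a 3x3 symmetric strain tensor is given below; the function name is an assumption and the study's strain-estimation pipeline from ultrasound RF data is not reproduced.

```python
import numpy as np

def strain_measures(E):
    """Scalar strain measures from a 3x3 symmetric strain tensor E.

    Returns (axial, first principal, von Mises, max shear) -- the four
    volumetric strains compared in the study.
    """
    E = np.asarray(E, dtype=float)
    axial = E[2, 2]                       # normal strain along the beam axis
    p = np.sort(np.linalg.eigvalsh(E))    # principal strains, ascending
    first_principal = p[-1]
    dev = E - np.trace(E) / 3.0 * np.eye(3)          # deviatoric part
    von_mises = np.sqrt(2.0 / 3.0 * np.sum(dev * dev))
    max_shear = (p[-1] - p[0]) / 2.0
    return axial, first_principal, von_mises, max_shear
```

Because von Mises and maximum shear strain combine all tensor components, they tend to delineate a stiff inclusion's boundary differently than the axial strain alone, consistent with the varying boundary enhancement the authors report.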

  5. 3D visualization and quantification of bone and teeth mineralization for the study of osteo/dentinogenesis in mice models

    NASA Astrophysics Data System (ADS)

    Marchadier, A.; Vidal, C.; Ordureau, S.; Lédée, R.; Léger, C.; Young, M.; Goldberg, M.

    2011-03-01

    Research on bone and teeth mineralization in animal models is critical for understanding human pathologies. Genetically modified mice represent highly valuable models for the study of osteo/dentinogenesis defects and osteoporosis. Current investigations of mouse dental and skeletal phenotypes use destructive and time-consuming methods such as histology and scanning microscopy. Micro-CT imaging is quicker and provides high resolution qualitative phenotypic description. However, reliable quantification of mineralization processes in mouse bone and teeth is still lacking. We have established novel CT imaging-based software for accurate qualitative and quantitative analysis of mouse mandibular bone and molars. Data were obtained from mandibles of mice lacking the Fibromodulin gene, which is involved in mineralization processes. Mandibles were imaged with a micro-CT originally devoted to industrial applications (Viscom, X8060 NDT). 3D advanced visualization was performed using the VoxBox software (UsefulProgress) with ray casting algorithms. Comparison between control and defective mice mandibles was made by applying the same transfer function to each 3D data set, thus allowing detection of shape, colour and density discrepancies. The 2D images of transverse slices of mandible and teeth were similar and even more accurate than those obtained with scanning electron microscopy. Image processing of the molars allowed the 3D reconstruction of the pulp chamber, providing a unique tool for the quantitative evaluation of dentinogenesis. This new method is highly powerful for the study of oro-facial mineralization defects in mice models, complementary and even competitive to current histological and scanning microscopy approaches.
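The comparison strategy, applying one identical transfer function to both scans so that density differences appear directly as colour differences, can be illustrated with a piecewise-linear density-to-RGBA lookup. This is a generic sketch with assumed names, not the VoxBox ray-casting implementation.

```python
import numpy as np

def apply_transfer_function(volume, nodes):
    """Map CT voxel densities to RGBA via a piecewise-linear transfer function.

    volume : 3-D array of voxel densities.
    nodes  : list of (density, (r, g, b, a)) control points, sorted by
             density, each channel in [0, 1].
    Using the identical `nodes` for the control and mutant scans makes
    density discrepancies show up directly as colour discrepancies.
    """
    d = np.array([n[0] for n in nodes], dtype=float)
    rgba = np.array([n[1] for n in nodes], dtype=float)
    v = np.asarray(volume, dtype=float).ravel()
    out = np.empty((v.size, 4))
    for c in range(4):
        # np.interp clamps values outside the node range to the end colours.
        out[:, c] = np.interp(v, d, rgba[:, c])
    return out.reshape(volume.shape + (4,))
```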

  6. Designing patient-specific 3D printed craniofacial implants using a novel topology optimization method.

    PubMed

    Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H

    2016-07-01

    Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve the task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once the design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition which mimics the masticatory activity. The full-field strain result through 3D image correlation and the finite element analysis implies that the solution can survive the maximum mastication of 120 lb. Also, the designs have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework in designing the bone replacement shapes would deliver surgeons new alternatives for rather complicated mid-face reconstruction. PMID:26660897

  8. Visualizing the 3D Architecture of Multiple Erythrocytes Infected with Plasmodium at Nanoscale by Focused Ion Beam-Scanning Electron Microscopy

    PubMed Central

    Soares Medeiros, Lia Carolina; De Souza, Wanderley; Jiao, Chengge; Barrabin, Hector; Miranda, Kildare

    2012-01-01

    Different methods for three-dimensional visualization of biological structures have been developed and extensively applied by different research groups. In the field of electron microscopy, a new technique that has emerged is the use of a focused ion beam and scanning electron microscopy for 3D reconstruction at nanoscale resolution. The larger volume that can be reconstructed with this instrument represents one of the main benefits of this technique, which can provide statistically relevant 3D morphometrical data. As the life cycle of Plasmodium species is a process that involves several structurally complex developmental stages that are responsible for a series of modifications in the erythrocyte surface and cytoplasm, a high number of features within the parasites and the host cells has to be sampled for the correct interpretation of their 3D organization. Here, we used FIB-SEM to visualize the 3D architecture of multiple erythrocytes infected with Plasmodium chabaudi and analyzed their morphometrical parameters in a 3D space. We analyzed and quantified alterations on the host cells, such as the variety of shapes and sizes of their membrane profiles, and parasite internal structures such as a polymorphic organization of hemoglobin-filled tubules. The results show the complex 3D organization of Plasmodium and the infected erythrocyte, and demonstrate the contribution of FIB-SEM to obtaining statistical data for an accurate interpretation of complex biological structures. PMID:22432024

  9. 3D Visualization of Solar Data: Preparing for Solar Orbiter and Solar Probe Plus

    NASA Astrophysics Data System (ADS)

    Mueller, D.; Felix, S.; Meier, S.; Csillaghy, A.; Nicula, B.; Verstringe, F.; Bourgoignie, B.; Berghmans, D.; Jiggens, P.

    2014-12-01

    The next generation of ESA/NASA heliophysics missions, Solar Orbiter and Solar Probe Plus, will focus on exploring the linkage between the Sun and the heliosphere. These new missions will collect unique data that will allow us to study, e.g., the coupling between macroscopic physical processes to those on kinetic scales, the generation of solar energetic particles and their propagation into the heliosphere and the origin and acceleration of solar wind plasma. Since 2010, NASA's Solar Dynamics Observatory returns 1.4 TB/day of high-resolution solar images, magnetograms and EUV irradiance data. Within a few years, the scientific community will thus have access to petabytes of multidimensional remote-sensing and complex in-situ observations from different vantage points, complemented by petabytes of simulation data. Answering overarching science questions like "How do solar transients drive heliospheric variability and space weather?" will only be possible if the community has the necessary tools at hand. As of today, there is an obvious lack of capability to both visualize these data and assimilate them into sophisticated models to advance our knowledge. A key piece needed to bridge the gap between observables, derived quantities like magnetic field extrapolations and model output is a tool to routinely and intuitively visualize large heterogeneous, multidimensional, time-dependent data sets. As of today, the space science community is lacking the means to do this (i) on a routine basis, (ii) for complex multidimensional data sets from various instruments and vantage points and (iii) in an extensible and modular way that is open for future improvements and interdisciplinary usage. In this contribution, we will present recent progress in visualizing the Sun and its magnetic field in 3D using the open-source JHelioviewer framework, which is part of the ESA/NASA Helioviewer Project. Among other features, JHelioviewer offers efficient region-of-interest-based data

  10. A 3D Visualization and Analysis Model of the Earth Orbit, Milankovitch Cycles and Insolation.

    NASA Astrophysics Data System (ADS)

    Kostadinov, Tihomir; Gilb, Roy

    2013-04-01

    Milankovitch theory postulates that periodic variability of Earth's orbital elements is a major climate forcing mechanism. Although controversies remain, ample geologic evidence supports the major role of the Milankovitch cycles in climate, e.g. glacial-interglacial cycles. There are three Milankovitch orbital parameters: orbital eccentricity (main periodicities of ~100,000 and ~400,000 years), precession (quantified as the longitude of perihelion, main periodicities 19,000-24,000 years) and obliquity of the ecliptic (Earth's axial tilt, main periodicity 41,000 years). The combination of these parameters controls the spatio-temporal patterns of incoming solar radiation (insolation) and the timing of the seasons with respect to perihelion, as well as season duration. The complex interplay of the Milankovitch orbital parameters on various time scales makes assessment and visualization of Earth's orbit and insolation variability challenging. It is difficult to appreciate the pivotal importance of Kepler's laws of planetary motion in controlling the effects of Milankovitch cycles on insolation patterns. These factors also make Earth-Sun geometry and Milankovitch theory difficult to teach effectively. Here, an astronomically precise and accurate Earth orbit visualization model is presented. The model offers 3D visualizations of Earth's orbital geometry, Milankovitch parameters and the ensuing insolation forcings. Both research and educational uses are envisioned for the model, which is developed in Matlab® as a user-friendly graphical user interface (GUI). We present the user with a choice between the Berger et al. (1978) and Laskar et al. (2004) astronomical solutions for eccentricity, obliquity and precession. A "demo" mode is also available, which allows the three Milankovitch parameters to be varied independently of each other (and over much larger ranges than the naturally occurring ones), so the user can isolate the effects of each parameter on orbital geometry
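    The insolation forcing that the model visualizes follows from standard Earth-Sun geometry. As a minimal illustration (not the GUI's actual Matlab code), the daily-mean top-of-atmosphere insolation for a given eccentricity, obliquity and longitude of perihelion can be sketched in Python; the function name and default orbital values below are our own choices:

```python
import numpy as np

S0 = 1361.0  # solar constant, W/m^2

def daily_insolation(lat_deg, solar_lon_deg, ecc, obliq_deg, lon_perih_deg):
    """Daily-mean top-of-atmosphere insolation (W/m^2) from the standard
    Milankovitch geometry (declination from obliquity, distance from
    Kepler's orbit, day length from the sunrise hour angle)."""
    phi = np.radians(lat_deg)
    lam = np.radians(solar_lon_deg)       # true solar longitude
    eps = np.radians(obliq_deg)
    varpi = np.radians(lon_perih_deg)     # longitude of perihelion

    delta = np.arcsin(np.sin(eps) * np.sin(lam))           # solar declination
    # Sun-Earth distance factor (a/r)^2 from the orbit equation
    dist2 = ((1 + ecc * np.cos(lam - varpi)) / (1 - ecc**2)) ** 2
    # sunrise/sunset hour angle, clipped to handle polar day and night
    cos_h0 = np.clip(-np.tan(phi) * np.tan(delta), -1.0, 1.0)
    h0 = np.arccos(cos_h0)
    return (S0 / np.pi) * dist2 * (
        h0 * np.sin(phi) * np.sin(delta)
        + np.cos(phi) * np.cos(delta) * np.sin(h0))

# 65 N at the June solstice (solar longitude 90 deg), modern orbital values
q = daily_insolation(65.0, 90.0, 0.0167, 23.44, 283.0)
```

    Sweeping the three orbital parameters through an astronomical solution such as Berger et al. (1978) turns this scalar function into the kind of insolation surface the model renders.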

  11. Method for 3D fibre reconstruction on a microrobotic platform.

    PubMed

    Hirvonen, J; Myllys, M; Kallio, P

    2016-07-01

    Automated handling of a natural fibrous object requires a method for acquiring the three-dimensional geometry of the object, because its dimensions cannot be known beforehand. This paper presents a method for calculating the three-dimensional reconstruction of a paper fibre on a microrobotic platform that contains two microscope cameras. The method is based on detecting curvature changes in the fibre centreline, and using them as the corresponding points between the different views of the images. We test the developed method with four fibre samples and compare the results with the references measured with an X-ray microtomography device. We rotate the samples through 16 different orientations on the platform and calculate the three-dimensional reconstruction to test the repeatability of the algorithm and its sensitivity to the orientation of the sample. We also test the noise sensitivity of the algorithm, and record the mismatch rate of the correspondences provided. We use the iterative closest point algorithm to align the measured three-dimensional reconstructions with the references. The average point-to-point distances between the reconstructed fibre centrelines and the references are 20-30 μm, and the mismatch rate is low. Given the manipulation tolerance, this shows that the method is well suited to automated fibre grasping. This has also been demonstrated with actual grasping experiments. PMID:26695385
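    The core idea of using curvature changes as correspondence points between views can be sketched with a simple turning-angle detector on a 2D centreline polyline. This is an illustrative simplification of the paper's detector, with a function name and threshold of our choosing:

```python
import numpy as np

def curvature_keypoints(centreline, angle_thresh_deg=20.0):
    """Return indices of vertices where the centreline direction changes
    sharply; such points can serve as correspondences between camera views."""
    pts = np.asarray(centreline, dtype=float)
    v = np.diff(pts, axis=0)                          # segment vectors
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    # turning angle at each interior vertex
    cosang = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    ang = np.degrees(np.arccos(cosang))
    idx = np.where(ang > angle_thresh_deg)[0] + 1     # shift to vertex index
    return idx, ang

# an L-shaped centreline: a single sharp bend at the corner vertex
line = [(x, 0.0) for x in range(5)] + [(4.0, y) for y in range(1, 5)]
idx, ang = curvature_keypoints(line)
```

    Matching such keypoints across the two microscope views yields the corresponding points needed for triangulating the 3D centreline.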

  12. A 3D shape retrieval method for orthogonal fringe projection based on a combination of variational image decomposition and variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Biyuan; Tang, Chen; Zhu, Xinjun; Chen, Xia; Su, Yonggang; Cai, Yuanxue

    2016-11-01

    The orthogonal fringe projection technique has wide practical application nowadays. In this paper, we propose a 3D shape retrieval method for orthogonal composite fringe projection based on a combination of variational image decomposition (VID) and variational mode decomposition (VMD). We propose a new image decomposition model to extract the orthogonal fringe. We then introduce the VMD method to separate the horizontal and vertical fringes from the orthogonal fringe. Lastly, the 3D shape information is obtained by the differential 3D shape retrieval method (D3D). We test the proposed method on a simulated pattern and two actual objects with edges or abrupt changes in height, and compare it with the recent, related and advanced differential 3D shape retrieval method (D3D) in terms of both quantitative evaluation and visual quality. The experimental results demonstrate the validity of the proposed method.
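    Separating the horizontal and vertical components of an orthogonal composite fringe can be illustrated with a simple Fourier half-plane mask; this is only a stand-in for the VID/VMD decomposition the paper actually uses, since the two fringe families occupy different regions of the spectrum:

```python
import numpy as np

def split_orthogonal_fringe(img):
    """Separate horizontal and vertical fringe components of an orthogonal
    composite by masking the 2D spectrum: vertical fringes carry energy
    along the fx axis, horizontal fringes along the fy axis."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    fy = np.abs(np.arange(h) - h // 2)[:, None]
    fx = np.abs(np.arange(w) - w // 2)[None, :]
    vert = np.real(np.fft.ifft2(np.fft.ifftshift(F * (fx > fy))))
    horiz = np.real(np.fft.ifft2(np.fft.ifftshift(F * (fy > fx))))
    return horiz, vert

# composite of one horizontal and one vertical cosine fringe
y, x = np.mgrid[0:128, 0:128]
composite = np.cos(2 * np.pi * 8 * y / 128) + np.cos(2 * np.pi * 16 * x / 128)
horiz, vert = split_orthogonal_fringe(composite)
```

    On real captures, where background and noise overlap the carriers, this naive masking breaks down, which is precisely the situation the variational decompositions are designed to handle.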

  13. Boundary estimation method for ultrasonic 3D imaging

    NASA Astrophysics Data System (ADS)

    Ohashi, Gosuke; Ohya, Akihisa; Natori, Michiya; Nakajima, Masato

    1993-09-01

    The authors developed a new method for automatically and efficiently estimating the boundaries of soft tissue and amniotic fluid, and for obtaining a fine three dimensional image of the fetus, from information given by ultrasonic echo images. The aim of this boundary estimation is to provide clear three dimensional images by shading the surface of the fetus and uterine wall using the Lambert shading method. Normally a random granular pattern called 'speckle' appears on an ultrasonic echo image, so it is difficult to estimate the soft tissue boundary satisfactorily via a simple method such as threshold value processing. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid and boundary. Classification was judged from the shape of the grey-level histogram of the region surrounding each voxel. Application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three dimensional structure to be observed.
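    The Lambert shading step maps each surface normal to an intensity proportional to the cosine of the angle between the normal and the light direction. A minimal sketch (function and parameter names are ours, not from the paper):

```python
import numpy as np

def lambert_shade(normals, light_dir, albedo=1.0):
    """Lambertian shading: intensity = albedo * max(0, N . L), applied
    per surface normal, with both N and L normalized to unit length."""
    L = np.asarray(light_dir, float)
    L = L / np.linalg.norm(L)
    n = np.asarray(normals, float)
    n = n / np.linalg.norm(n, axis=-1, keepdims=True)
    return albedo * np.clip(n @ L, 0.0, None)

# a patch facing the light gets full intensity; one tilted 60 degrees, half
shade = lambert_shade([[0, 0, 1],
                       [0, np.sin(np.pi / 3), np.cos(np.pi / 3)]], [0, 0, 1])
```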

  14. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods.

    PubMed

    van Velden, Floris H P; Kloet, Reina W; van Berckel, Bart N M; Wolfensberger, Saskia P A; Lammertsma, Adriaan A; Boellaard, Ronald

    2008-06-21

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed a better standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.

  15. Comparison of 3D-OP-OSEM and 3D-FBP reconstruction algorithms for High-Resolution Research Tomograph studies: effects of randoms estimation methods

    NASA Astrophysics Data System (ADS)

    van Velden, Floris H. P.; Kloet, Reina W.; van Berckel, Bart N. M.; Wolfensberger, Saskia P. A.; Lammertsma, Adriaan A.; Boellaard, Ronald

    2008-06-01

    The High-Resolution Research Tomograph (HRRT) is a dedicated human brain positron emission tomography (PET) scanner. Recently, a 3D filtered backprojection (3D-FBP) reconstruction method has been implemented to reduce bias in short duration frames, currently observed in 3D ordinary Poisson OSEM (3D-OP-OSEM) reconstructions. Further improvements might be expected using a new method of variance reduction on randoms (VRR) based on coincidence histograms instead of using the delayed window technique (DW) to estimate randoms. The goal of this study was to evaluate VRR in combination with 3D-OP-OSEM and 3D-FBP reconstruction techniques. To this end, several phantom studies and a human brain study were performed. For most phantom studies, 3D-OP-OSEM showed higher accuracy of observed activity concentrations with VRR than with DW. However, both positive and negative deviations in reconstructed activity concentrations and large biases of grey to white matter contrast ratio (up to 88%) were still observed as a function of scan statistics. Moreover 3D-OP-OSEM+VRR also showed bias up to 64% in clinical data, i.e. in some pharmacokinetic parameters as compared with those obtained with 3D-FBP+VRR. In the case of 3D-FBP, VRR showed similar results as DW for both phantom and clinical data, except that VRR showed a better standard deviation of 6-10%. Therefore, VRR should be used to correct for randoms in HRRT PET studies.
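    For context, the simplest randoms estimate that both the delayed-window technique and VRR refine is the singles-based rate R = 2·tau·S1·S2 for a coincidence window tau. A sketch with illustrative numbers (not the HRRT's actual implementation):

```python
def randoms_from_singles(s1, s2, tau):
    """Expected randoms coincidence rate (counts/s) between two detectors
    with singles rates s1, s2 and coincidence window tau: R = 2*tau*s1*s2."""
    return 2.0 * tau * s1 * s2

# two detectors at 1e5 singles/s each, 6 ns coincidence window
r = randoms_from_singles(1e5, 1e5, 6e-9)  # counts/s
```

    VRR improves on such estimates by smoothing the delayed-coincidence histograms over many detector pairs, reducing the variance added by the randoms correction.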

  16. Microscopic spin Hamiltonian approaches for 3d8 and 3d2 ions in a trigonal crystal field - perturbation theory methods versus complete diagonalization methods

    NASA Astrophysics Data System (ADS)

    Rudowicz, Czeslaw; Yeung, Yau-yuen; Yang, Zi-Yuan; Qin, Jian

    2002-06-01

    In this paper, we critically review the existing microscopic spin Hamiltonian (MSH) approaches, namely the complete diagonalization method (CDM) and the perturbation theory method (PTM), for 3d8(3d2) ions in a trigonal (C3v, D3, D3d) symmetry crystal field (CF). A new CDM is presented and a CFA/MSH computer package based on our crystal-field analysis (CFA) package for 3dN ions is developed for numerical calculations. Our method takes into account the contribution to the SH parameters (D, g∥ and g⊥) from all 45 CF states for 3d8(3d2) ions and is based on the complete diagonalization of the Hamiltonian including the electrostatic interactions, the CF terms (in the intermediate CF scheme) and the spin-orbit coupling. The CFA/MSH package enables us to study not only the CF energy levels and wavefunctions but also the SH parameters as functions of the CF parameters (B20, B40 and B43 or alternatively Dq, v and v') for 3d8(3d2) ions in trigonal symmetry. Extensive comparative studies of other MSH approaches are carried out using the CFA/MSH package. First, we check the accuracy of the approximate PTM based on the `quasi-fourth-order' perturbation formulae developed by Petrosyan and Mirzakhanyan (PM). The present investigations indicate that the PM formulae for the g-factors (g∥ and g⊥) indeed work well, especially for the cases of small v and v' and large Dq, whereas the PM formula for the zero-field splitting (ZFS) exhibits serious shortcomings. Earlier criticism of the PM approach by Zhou et al (Zhou K W, Zhao S B, Wu P F and Xie J K 1990 Phys. Status Solidi b 162 193) is then revisited. Second, we carry out an extensive comparison of the results of the present CFA/MSH package and those of other CDMs based on the strong- and weak-CF schemes. The CF energy levels and the SH parameters for 3d2 and 3d8 ions at C3v symmetry sites in several crystals are calculated and analysed. Our investigations reveal serious inconsistencies in the CDM results of Zhou et al and Li

  17. GPU Accelerated Spectral Element Methods: 3D Euler equations

    NASA Astrophysics Data System (ADS)

    Abdi, D. S.; Wilcox, L.; Giraldo, F.; Warburton, T.

    2015-12-01

    A GPU-accelerated nodal discontinuous Galerkin method for the solution of the three dimensional Euler equations is presented. The Euler equations are nonlinear hyperbolic equations that are widely used in Numerical Weather Prediction (NWP). Acceleration of the method therefore plays an important practical role, not only in producing daily forecasts faster but also in obtaining more accurate (high resolution) results. The equation sets used in our atmospheric model NUMA (non-hydrostatic unified model of the atmosphere) take into consideration non-hydrostatic effects that become more important with high resolution. We use algorithms suitable for the single instruction multiple thread (SIMT) architecture of GPUs to accelerate the solution by an order of magnitude (20x) relative to the CPU implementation. For portability to heterogeneous computing environments, we use a new programming language, OCCA, which can be cross-compiled to OpenCL, CUDA or OpenMP at runtime. Finally, the accuracy and performance of our GPU implementations are verified using several benchmark problems representative of different scales of atmospheric dynamics.

  18. Multi-crosswell profile 3D imaging and method

    DOEpatents

    Washbourne, John K.; Rector, III, James W.; Bube, Kenneth P.

    2002-01-01

    Characterizing the value of a particular property, for example, seismic velocity, of a subsurface region of ground is described. In one aspect, the value of the particular property is represented using at least one continuous analytic function such as a Chebychev polynomial. The seismic data may include data derived from at least one crosswell dataset for the subsurface region of interest and may also include other data. In either instance, data may simultaneously be used from a first crosswell dataset in conjunction with one or more other crosswell datasets and/or with the other data. In another aspect, the value of the property is characterized in three dimensions throughout the region of interest using crosswell and/or other data. In still another aspect, crosswell datasets for highly deviated or horizontal boreholes are inherently useful. The method is performed, in part, by fitting a set of vertically spaced layer boundaries, represented by an analytic function such as a Chebychev polynomial, within and across the region encompassing the boreholes such that a series of layers is defined between the layer boundaries. Initial values of the particular property are then established between the layer boundaries and across the subterranean region using a series of continuous analytic functions. The continuous analytic functions are then adjusted to more closely match the value of the particular property across the subterranean region of ground to determine the value of the particular property for any selected point within the region.
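    The use of a continuous analytic function such as a Chebychev polynomial to represent a property can be sketched in one dimension with NumPy's Chebyshev utilities. The velocity profile below is invented for illustration; the patent parameterizes the property in three dimensions between fitted layer boundaries:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# synthetic 1D velocity-depth profile (a smooth trend plus a gentle wiggle)
depth = np.linspace(0.0, 2000.0, 50)                    # m
velocity = 1500.0 + 0.6 * depth + 50.0 * np.sin(depth / 300.0)  # m/s

# represent the property as a low-order Chebyshev polynomial of depth
coeffs = C.chebfit(depth, velocity, deg=5)
model = C.chebval(depth, coeffs)
rms = np.sqrt(np.mean((model - velocity) ** 2))
```

    Because the representation is a handful of coefficients rather than a dense grid, it can be adjusted smoothly during inversion to match the crosswell data, which is the point of the analytic parameterization.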

  19. Inhomogeneous Media 3D EM Modeling with Integral Equation Method

    NASA Astrophysics Data System (ADS)

    di, Q.; Wang, R.; An, Z.; Fu, C.; Xu, C.

    2010-12-01

    In general, only the half space of the earth is considered in electromagnetic exploration. However, for a long bipole source, because its length is close to the height of the ionosphere and most offsets between source and receivers are equal to or larger than the height of the ionosphere, the effect of the ionosphere on the electromagnetic (EM) field should be considered when observations are made very far (several thousand kilometers) from the source. The problem then becomes one that must include the ionosphere, atmosphere and earth, i.e. the “earth-ionosphere” case. Few studies have reported electromagnetic field results that include the ionosphere, atmosphere and earth media at the same time. We first calculate the electromagnetic fields for a three-layer earth-ionosphere model with the traditional controlled-source (CSEM) configuration using the integral equation (IE) method. The modeling results agree well with the half-space analytical results because the effect of the ionosphere is negligible for this small-scale bipole source. The comparison of small-scale three-layer earth-ionosphere modeling with the half-space analytical solution shows that the IE method can be used to model the EM fields for a long-bipole, large-offset configuration. To discuss the characteristics of the EM fields for complex earth-ionosphere media excited by a long bipole source in the far-field and wave-guide zones, we first modeled the decay characteristics of the electromagnetic fields for a three-layer earth-ionosphere model. Because of the effect of the ionosphere, the decay curves of the earth-ionosphere electromagnetic fields at a given frequency show that there should be an extra wave-guide zone for a long-bipole artificial source, with many characteristics that differ from the far-field zone: 1) the amplitudes of the EM fields decay much more slowly; 2) the polarization patterns change; 3) the positions better to measure Zxy and

  20. 3D Visualization of "Frozen" Dynamic Magma Chambers in the Duluth Complex, Northeastern Minnesota

    NASA Astrophysics Data System (ADS)

    Peterson, D. M.; Hauck, S. A.

    2005-12-01

    The Mesoproterozoic Duluth Complex and associated intrusions of the Midcontinent Rift in northeastern Minnesota constitute one of the largest, semi-continuous, mafic intrusive complexes in the world, second only to the Bushveld Complex of South Africa. These rocks cover an arcuate area of over 5,000 square kilometers and give rise to two strong gravity anomalies (+50 & +70 mgal) that imply intrusive roots to more than 13 km depth. The geometry of three large mafic intrusions within the Duluth Complex has been modeled by the integration of field mapping and drill hole data with maps of gravity and magnetic anomalies. The igneous bodies include the South Kawishiwi, Partridge River, and Bald Eagle intrusions that collectively outcrop over an area of > 800 square kilometers. The South Kawishiwi and Partridge River intrusions host several billion tons of low-grade Cu-Ni-PGE mineralization near their base, while the geophysical expressions of the Bald Eagle intrusion have the same shape and dimensions as the "bulls eye" pattern of low-velocity seismic reflection anomalies along the East Pacific Rise. These anomalies are interpreted to define regions of melt concentration, i.e., active magma chambers. This suggests that the funnel-shaped Bald Eagle intrusion could be an example of a "frozen" dynamic magma chamber. In support of this analogy we note that the magmatic systems of intracontinental rifts, mid-ocean ridges, extensional regimes in back-arc environments, and ophiolites have a common characteristic: the emplacement of magma in extensional environments; the common products in all four are varieties of layered intrusions, dikes and sills, and overlying volcanic rocks. 3D visualization of these intrusions is integral to the understanding of the Duluth Complex magmatic system and associated mineralization, and can be used as a proxy for the study of similar systems worldwide, such as the Antarctic Ferrar dolerites.

  1. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114
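    A plain fixed-bandwidth 3D kernel density estimate conveys the basic idea of a volumetric home range; note that the paper's estimators are movement-based (they condition on the trajectory between telemetry fixes), which this simplified sketch omits:

```python
import numpy as np

def kde3d(points, grid, bandwidth):
    """Fixed-bandwidth 3D Gaussian kernel density estimate: average of an
    isotropic Gaussian kernel centred on each telemetry fix, evaluated at
    the requested grid locations."""
    pts = np.asarray(points, float)        # (n, 3) telemetry fixes
    g = np.asarray(grid, float)            # (m, 3) evaluation locations
    d2 = ((g[:, None, :] - pts[None, :, :]) ** 2).sum(axis=2)
    k = np.exp(-0.5 * d2 / bandwidth**2)
    norm = (2 * np.pi) ** 1.5 * bandwidth**3 * len(pts)
    return k.sum(axis=1) / norm

rng = np.random.default_rng(0)
fixes = rng.normal(0.0, 1.0, size=(200, 3))     # synthetic x, y, z fixes
dens_centre = kde3d(fixes, [[0, 0, 0]], bandwidth=0.5)
dens_far = kde3d(fixes, [[5, 5, 5]], bandwidth=0.5)
```

    Evaluating the estimate on a regular 3D grid and extracting an isosurface at, say, the 95% probability level gives the kind of volumetric home range the paper visualizes.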

  2. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.

  3. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation.

    PubMed

    Tracey, Jeff A; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R; Fisher, Robert N

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species--giant panda, dugong, and California condor--to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  4. Audio-Visual Perception of 3D Cinematography: An fMRI Study Using Condition-Based and Computation-Based Analyses

    PubMed Central

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli

  5. Audio-visual perception of 3D cinematography: an fMRI study using condition-based and computation-based analyses.

    PubMed

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard "condition-based" designs, as well as "computational" methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. PMID

  6. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  7. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
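    The kind of surface described can be generated from the exact charge balance of a strong acid-strong base titration; the sketch below (our own function and numbers) covers only this simplest case, whereas the article's "topos" also show buffer plateaus from weak-acid systems:

```python
import numpy as np

def titration_pH(ca, va, cb, vb, vw=0.0):
    """pH of a strong acid (conc ca, volume va) titrated with strong base
    (cb, vb) plus vw of diluting water, from the exact charge balance with
    Kw = 1e-14, so the curve stays smooth through the equivalence point."""
    kw = 1e-14
    c = (cb * vb - ca * va) / (va + vb + vw)    # net base excess after mixing
    h = (-c + np.sqrt(c * c + 4 * kw)) / 2.0    # [H+] from h^2 + c*h - kw = 0
    return -np.log10(h)

# pH surface: titrant volume on one axis, dilution water on the other
vb, vw = np.meshgrid(np.linspace(0, 50, 101), np.linspace(0, 100, 51))
surface = titration_pH(0.1, 25.0, 0.1, vb, vw)
```

    Plotting `surface` over the (volume, dilution) grid reproduces the equivalence-point cliff at 25 mL and the dilution ramps that the article describes.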

  8. Extending a teleradiology system by tools for 3D-visualization and volumetric analysis through a plug-in mechanism.

    PubMed

    Evers, H; Mayer, A; Engelmann, U; Schröter, A; Baur, U; Wolsiffer, K; Meinzer, H P

    1998-01-01

    This paper describes ongoing research on interactive volume visualization coupled with tools for volumetric analysis. To establish an easy-to-use application, the 3D-visualization has been embedded in a state-of-the-art teleradiology system, where additional functionality beyond basic image transfer and management is often desired. The tools cover major clinical requirements for deriving spatial measures, in order to support extended diagnosis and therapy planning. Introducing a general plug-in mechanism, this work demonstrates the useful extension of an approved application. Interactive visualization was achieved by a hybrid approach taking advantage of both the precise volume visualization of the Heidelberg Raytracing Model and the graphics acceleration of modern workstations. Several tools for volumetric analysis extend the 3D-viewing. They offer 3D-pointing devices to select locations in the data volume, measure anatomical structures or control segmentation processes. A haptic interface provides realistic perception while navigating within the 3D-reconstruction. The work is closely related to research in heart, liver and head surgery. In cooperation with our medical partners, the tools presented here advance the integration of image analysis into clinical routine. PMID:10384617

  9. 3D Simulation Technology as an Effective Instructional Tool for Enhancing Spatial Visualization Skills in Apparel Design

    ERIC Educational Resources Information Center

    Park, Juyeon; Kim, Dong-Eun; Sohn, MyungHee

    2011-01-01

    The purpose of this study is to explore the effectiveness of 3D simulation technology for enhancing spatial visualization skills in apparel design education and further to suggest an innovative teaching approach using the technology. Apparel design majors in an introductory patternmaking course, at a large Midwestern University in the United…

  10. Services Oriented Smart City Platform Based On 3d City Model Visualization

    NASA Astrophysics Data System (ADS)

    Prandi, F.; Soave, M.; Devigili, F.; Andreolli, M.; De Amicis, R.

    2014-04-01

    The rapid technological evolution characterizing all the disciplines involved in the wide concept of smart cities is becoming a key factor in triggering true user-driven innovation. However, to extend the Smart City concept to a wide geographical target, an infrastructure is required that allows the integration of heterogeneous geographical information and sensor networks into a common technological ground. In this context 3D city models will play an increasingly important role in our daily lives and become an essential part of the modern city information infrastructure (Spatial Data Infrastructure). The work presented in this paper describes an innovative Services Oriented Architecture software platform aimed at providing smart-city services on top of 3D urban models. 3D city models are the basis of many applications and can become the platform for integrating city information within the Smart-Cities context. In particular, the paper investigates how the efficient visualisation of 3D city models using different levels of detail (LODs) is one of the pivotal technological challenges in supporting Smart-Cities applications. The goal is to provide the final user with realistic and abstract 3D representations of the urban environment and the possibility to interact with the massive amount of semantic information contained in the geospatial 3D city model. The proposed solution, using OGC standards and a custom service to provide 3D city models, lets users consume the services and interact with the 3D model via the Web in a more effective way.

  11. Openwebglobe 2: Visualization of Complex 3D-GEODATA in the (mobile) Webbrowser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast amount of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used both for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. The paper shows the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2", which displays 3D geodata on nearly every device.

  12. From digital mapping to GIS-based 3D visualization of geological maps: example from the Western Alps geological units

    NASA Astrophysics Data System (ADS)

    Balestro, Gianni; Cassulo, Roberto; Festa, Andrea; Fioraso, Gianfranco; Nicolò, Gabriele; Perotti, Luigi

    2015-04-01

    Collection of field geological data and sharing of geological maps are nowadays greatly enhanced by digital tools and IT (Information Technology) applications. Portable hardware allows accurate GPS localization of data and homogeneous storage of information in field databases, whereas GIS (Geographic Information Systems) applications enable generalization of field data and the creation of geological map databases. A further step in the digital processing of geological map information consists of building virtual visualizations by means of GIS-based 3D viewers, which allow projection and draping of significant geological features over photo-realistic terrain models. The digital fieldwork activities carried out by the authors in the Western Alps, together with the building of geological map databases and related 3D visualizations, are an example of the application of the digital technologies described above. Digital geological mapping was performed with GIS mobile software loaded on a rugged handheld device, and lithological, structural and geomorphological features with their attributes were stored in the different layers that form the field database. The latter was then generalized through the usual map-processing steps, such as outcrop interpolation, characterization of geological boundaries and selection of meaningful point observations. This map database was then used to build virtual visualizations through a GIS-based 3D viewer that loaded a detailed DTM (resolution of 5 meters) and aerial images. The 3D visualizations focused on the projection and draping of significant stratigraphic contacts (e.g. contacts that separate different Quaternary deposits) and tectonic contacts (i.e. exhumation-related contacts that dismembered the original ophiolite sequences). In our experience, digital geological mapping and the related databases ensured homogeneous data storage and effective sharing of information, and allowed the subsequent building of GIS-based 3D visualizations.

  13. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  14. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning1

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  15. A method for 3D reconstruction of coronary arteries using biplane angiography and intravascular ultrasound images.

    PubMed

    Bourantas, Christos V; Kourtis, Iraklis C; Plissiti, Marina E; Fotiadis, Dimitrios I; Katsouras, Christos S; Papafaklis, Michail I; Michalis, Lampros K

    2005-12-01

    The aim of this study is to describe a new method for the three-dimensional reconstruction of coronary arteries and to validate it quantitatively. Our approach is based on the fusion of the data provided by intravascular ultrasound (IVUS) images and biplane angiographies. A specific segmentation algorithm is used for the detection of the regions of interest in the IVUS images. A new methodology is also introduced for the accurate extraction of the catheter path. In detail, a cubic B-spline is used to approximate the catheter path in each biplane projection. Each B-spline curve is swept along the normal direction of its X-ray angiographic plane, forming a surface. The intersection of the two surfaces is a 3D curve, which represents the reconstructed path. The detected regions of interest in the IVUS images are placed perpendicularly onto the path and their relative axial twist is computed using the sequential triangulation algorithm. An efficient algorithm is then applied to estimate the absolute orientation of the first IVUS frame. To obtain a 3D visualization, the commercial package Geomagic Studio 4.0 is used. The performance of the proposed method is assessed using a validation methodology that addresses each step of the coronary reconstruction separately. The performance of the segmentation algorithm was examined in 80 IVUS images. The reliability of the path-extraction method was studied in vitro using a metal wire model and in vivo in a dataset of 11 patients. The performance of the sequential triangulation algorithm was tested in two gutter models and in the coronary arteries (marked with metal clips) of six cadaveric sheep hearts. Finally, the accuracy of the estimation of the absolute orientation of the first IVUS frame was examined in the same set of cadaveric sheep hearts. The obtained results demonstrate that the proposed reconstruction method is reliable and capable of depicting the morphology of
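The catheter-path approximation step can be sketched with a uniform cubic B-spline in matrix form; the control points below are hypothetical digitized points from one angiographic projection, not data from the study:

```python
import numpy as np

def cubic_bspline(control_pts, t):
    """Evaluate one uniform cubic B-spline segment at t in [0, 1]
    from four consecutive control points (matrix form of the basis)."""
    M = np.array([[-1,  3, -3, 1],
                  [ 3, -6,  3, 0],
                  [-3,  0,  3, 0],
                  [ 1,  4,  1, 0]]) / 6.0
    T = np.array([t**3, t**2, t, 1.0])
    return T @ M @ control_pts   # control_pts: shape (4, dim)

# Hypothetical 2-D points along a catheter in one biplane projection.
pts = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 2.5], [3.0, 1.0]])
mid = cubic_bspline(pts, 0.5)
```

Sweeping such a curve along the projection normal yields the surface whose intersection with its counterpart from the second plane gives the 3D path.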

  16. A method for simultaneously delineating multiple targets in 3D-FISH using limited channels, lasers, and fluorochromes.

    PubMed

    Zhao, F Y; Yang, X; Chen, D Y; Ma, W Y; Zheng, J G; Zhang, X M

    2014-01-01

    Many studies have suggested a link between the spatial organization of genomes and fundamental biological processes such as genome reprogramming, gene expression, and differentiation. Multicolor fluorescence in situ hybridization on three-dimensionally preserved nuclei (3D-FISH), in combination with confocal microscopy, has become an effective technique for analyzing 3D genome structure and spatial patterns of defined nucleus targets including entire chromosome territories and single gene loci. This technique usually requires the simultaneous visualization of numerous targets labeled with different colored fluorochromes. Thus, the number of channels and lasers must be sufficient for the commonly used labeling scheme of 3D-FISH, "one probe-one target". However, these channels and lasers are usually restricted by a given microscope system. This paper presents a method for simultaneously delineating multiple targets in 3D-FISH using limited channels, lasers, and fluorochromes. In contrast to other labeling schemes, this method is convenient and simple for multicolor 3D-FISH studies, which may result in widespread adoption of the technique. Lastly, as an application of the method, the nucleus locations of chromosome territory 18/21 and centromere 18/21/13 in normal human lymphocytes were analyzed, which might present evidence of a radial higher order chromatin arrangement.

  17. Do you see what I hear: experiments in multi-channel sound and 3D visualization for network monitoring?

    NASA Astrophysics Data System (ADS)

    Ballora, Mark; Hall, David L.

    2010-04-01

    Detection of intrusions is a continuing problem in network security. Due to the large volumes of data recorded in Web server logs, analysis is typically forensic, taking place only after a problem has occurred. This paper describes a novel method of representing Web log information through multi-channel sound, while simultaneously visualizing network activity using a 3-D immersive environment. We are exploring the detection of intrusion signatures and patterns, utilizing human aural and visual pattern recognition ability to detect intrusions as they occur. IP addresses and return codes are mapped to an informative and unobtrusive listening environment to act as a situational sound track of Web traffic. Web log data is parsed and formatted using Python, then read as a data array by the synthesis language SuperCollider [1], which renders it as a sonification. This can be done either for the study of pre-existing data sets or in monitoring Web traffic in real time. Components rendered aurally include IP address, geographical information, and server Return Codes. Users can interact with the data, speeding or slowing the speed of representation (for pre-existing data sets) or "mixing" sound components to optimize intelligibility for tracking suspicious activity.
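A toy version of the log-to-sound mapping described above might look as follows; the field choices and scalings are illustrative, not the authors' actual SuperCollider patch:

```python
# Hypothetical sketch: each Web-log request becomes a (pan, pitch) pair
# that a synthesis engine such as SuperCollider could render.
def map_request(ip, return_code):
    """Map an IPv4 address and HTTP return code to sound parameters."""
    octets = [int(o) for o in ip.split(".")]
    pan = (octets[0] / 255.0) * 2.0 - 1.0      # first octet -> stereo position
    midi_note = 36 + (return_code % 100) % 48  # within-family code -> pitch
    family = return_code // 100                # 2xx, 4xx, 5xx ...
    return {"pan": pan, "midi_note": midi_note, "family": family}

event = map_request("192.168.0.1", 404)
```

In the real system the parsed Python output is read as a data array by SuperCollider, which renders it either from stored logs or from live traffic.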

  18. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications. PMID:19147888

  19. Three-dimensional (3D) visualization of reflow porosity and modeling of deformation in Pb-free solder joints

    SciTech Connect

    Dudek, M.A.; Hunter, L.; Kranz, S.; Williams, J.J.; Lau, S.H.; Chawla, N.

    2010-04-15

    The volume, size, and dispersion of porosity in solder joints are known to affect mechanical performance and reliability. Most of the techniques used to characterize the three-dimensional (3D) nature of these defects are destructive. With the enhancements in high-resolution computed tomography (CT), the detection limits of intrinsic microstructures have been significantly improved. Furthermore, the 3D microstructure of the material can be used in finite element models to understand its effect on microscopic deformation. In this paper we describe a technique utilizing high-resolution (<1 μm) X-ray tomography for the three-dimensional (3D) visualization of pores in Sn-3.9Ag-0.7Cu/Cu joints. The characteristics of reflow porosity, including volume fraction and distribution, were investigated for two reflow profiles. The size and distribution of porosity were visualized in 3D for four different solder joints. In addition, the 3D virtual microstructure was incorporated into a finite element model to quantify the effect of voids on the lap shear behavior of a solder joint. The presence, size, and location of voids significantly increased the severity of strain localization at the solder/copper interface.
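The porosity characterization step, extracting a void volume fraction from a thresholded CT volume, can be sketched as follows on a synthetic volume rather than the authors' tomography data:

```python
import numpy as np

# Illustrative sketch (not the authors' pipeline): segment pores in a
# reconstructed CT volume by grey-level thresholding and report the
# void volume fraction.
def porosity_stats(volume, threshold):
    voids = volume < threshold   # pores appear as low-attenuation voxels
    return voids, voids.mean()   # boolean mask and void volume fraction

rng = np.random.default_rng(0)
vol = rng.random((32, 32, 32))         # stand-in for a reconstructed volume
voids, frac = porosity_stats(vol, 0.1)
```

A real workflow would follow this with connected-component labelling to obtain per-pore sizes and positions before meshing the voids for the finite element model.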

  20. An interactive 3D visualization and manipulation tool for effective assessment of angiogenesis and arteriogenesis using computed tomographic angiography

    NASA Astrophysics Data System (ADS)

    Shen, Li; Gao, Ling; Zhuang, Zhenwu; DeMuinck, Ebo; Huang, Heng; Makedon, Fillia; Pearlman, Justin

    2005-04-01

    This paper presents IVM, an Interactive Vessel Manipulation tool that can help make effective and efficient assessment of angiogenesis and arteriogenesis in computed tomographic angiography (CTA) studies. IVM consists of three fundamental components: (1) a visualization component, (2) a tracing component, and (3) a measurement component. Given a user-specified threshold, IVM can create a 3D surface visualization based on it. Since vessels are thin and tubular structures, using standard isosurface extraction techniques usually cannot yield satisfactory reconstructions. Instead, IVM directly renders the surface of a derived binary 3D image. The image volumes collected in CTA studies often have a relatively high resolution. Thus, compared with more complicated vessel extraction and visualization techniques, rendering the binary image surface has the advantages of being effective, simple and fast. IVM employs a semi-automatic approach to determine the threshold: a user can adjust the threshold by checking the corresponding 3D surface reconstruction and make the choice. Typical tracing software often defines ROIs on 3D image volumes using three orthogonal views. The tracing component in IVM takes one step further: it can perform tracing not only on image slices but also in a 3D view. We observe that directly operating on a 3D view can help a tracer identify ROIs more easily. After setting a threshold and tracing an ROI, a user can use IVM's measurement component to estimate the volume and other parameters of vessels in the ROI. The effectiveness of the IVM tool is demonstrated on rat vessel/bone images collected in a previous CTA study.

  1. 3D visualization of strain in abdominal aortic aneurysms based on navigated ultrasound imaging

    NASA Astrophysics Data System (ADS)

    Brekken, Reidar; Kaspersen, Jon Harald; Tangen, Geir Arne; Dahl, Torbjørn; Hernes, Toril A. N.; Myhre, Hans Olav

    2007-03-01

    The criterion for recommending treatment of an abdominal aortic aneurysm is that the diameter exceeds 50-55 mm or shows a rapid increase. Our hypothesis is that a more accurate prediction of aneurysm rupture is obtained by estimating arterial wall strain from patient specific measurements. Measuring strain in specific parts of the aneurysm reveals differences in load or tissue properties. We have previously presented a method for in vivo estimation of circumferential strain by ultrasound. In the present work, a position sensor attached to the ultrasound probe was used for combining several 2D ultrasound sectors into a 3D model. The ultrasound was registered to a computed-tomography scan (CT), and the strain values were mapped onto a model segmented from these CT data. This gave an intuitive coupling between anatomy and strain, which may benefit both data acquisition and the interpretation of strain. In addition to potentially provide information relevant for assessing the rupture risk of the aneurysm in itself, this model could be used for validating simulations of fluid-structure interactions. Further, the measurements could be integrated with the simulations in order to increase the amount of patient specific information, thus producing a more reliable and accurate model of the biomechanics of the individual aneurysm. This approach makes it possible to extract several parameters potentially relevant for predicting rupture risk, and may therefore extend the basis for clinical decision making.
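In its simplest form, the circumferential strain estimate underlying this work reduces to the relative change of the tracked wall circumference from a reference frame; the numbers below are hypothetical:

```python
import numpy as np

# Minimal sketch: engineering circumferential strain of the aneurysm wall,
# relative to a diastolic reference circumference.
def circumferential_strain(circumference, reference):
    return (np.asarray(circumference) - reference) / reference

c = [150.0, 153.0, 156.0, 152.0]        # mm, hypothetical tracked values
strain = circumferential_strain(c, c[0])
```

Mapping such per-sector values onto the CT-segmented surface gives the color-coded anatomy-plus-strain model described in the abstract.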

  2. Visual navigation of the UAVs on the basis of 3D natural landmarks

    NASA Astrophysics Data System (ADS)

    Karpenko, Simon; Konovalenko, Ivan; Miller, Alexander; Miller, Boris; Nikolaev, Dmitry

    2015-12-01

    This work considers the tracking of a UAV (unmanned aerial vehicle) on the basis of onboard observations of natural landmarks, including azimuth and elevation angles. It is assumed that the UAV's cameras are able to capture the angular position of reference points and to measure the angles of the sight line. Such measurements involve the real position of the UAV in implicit form, and therefore a nonlinear filter such as the Extended Kalman Filter (EKF) must be used in order to exploit these measurements for UAV control. Recently it was shown that a modified pseudomeasurement method may be used to control a UAV on the basis of the observation of reference points assigned along the UAV path in advance. However, the use of such a set of points requires a cumbersome recognition procedure and a huge volume of on-board memory. Natural landmarks serving as reference points, which may be determined on-line, can significantly reduce the on-board memory and the computational difficulties. The principal difference of this work is the use of 3D reference-point coordinates, which makes it possible to determine the position of the UAV more precisely and thereby to guide it along the path with higher accuracy, which is extremely important for the successful performance of autonomous missions. The article suggests the new RANSAC for ISOMETRY algorithm and the use of recently developed estimation and control algorithms for tracking a given reference path under external perturbations and noisy angular measurements.
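The azimuth/elevation measurement model that the filter must invert can be sketched directly; this uses world-frame geometry only, ignoring camera attitude for brevity, and the coordinates are illustrative:

```python
import numpy as np

# Sketch of the bearing measurement model: the onboard camera reports the
# azimuth and elevation of a known 3D landmark relative to the UAV position.
def bearing_angles(uav_pos, landmark):
    d = np.asarray(landmark, float) - np.asarray(uav_pos, float)
    azimuth = np.arctan2(d[1], d[0])
    elevation = np.arctan2(d[2], np.hypot(d[0], d[1]))
    return azimuth, elevation

# UAV at 100 m altitude sighting a ground landmark.
az, el = bearing_angles([0.0, 0.0, 100.0], [100.0, 100.0, 0.0])
```

Because the UAV position enters these angles only implicitly, an EKF or pseudomeasurement filter linearizes this model around the current state estimate.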

  3. Interactive 3D Visualization of the Great Lakes of the World (GLOW) as a Tool to Facilitate Informal Science Education

    NASA Astrophysics Data System (ADS)

    Yikilmaz, M.; Harwood, C. L.; Hsi, S.; Kellogg, L. H.; Kreylos, O.; McDermott, J.; Pellett, B.; Schladow, G.; Segale, H. M.; Yalowitz, S.

    2013-12-01

    Three-dimensional (3D) visualization is a powerful research tool that has been used to investigate complex scientific problems in various fields. It allows researchers to explore and understand processes and features that are not directly observable and helps with the building of new models. It has been shown that 3D visualization creates a more engaging environment for public audiences, and interactive 3D visualization can allow individuals to explore scientific concepts on their own. We present an NSF-funded project developed in collaboration with UC Davis KeckCAVES, the UC Davis Tahoe Environmental Research Center, the ECHO Lake Aquarium & Science Center, and the Lawrence Hall of Science. The Great Lakes of the World (GLOW) project aims to build interactive 3D visualizations of some of the major lakes and reservoirs of the world to enhance public awareness and increase understanding and stewardship of freshwater lake ecosystems, habitats, and earth science processes. The project includes a collection of publicly available satellite imagery and digital elevation models at various resolutions for 20 major lakes of the world, as well as bathymetry data for 12 of the lakes. It also includes the vector-based Global Lakes and Wetlands Database (GLWD), produced by the World Wildlife Fund (WWF) and the Center for Environmental Systems Research, University of Kassel, Germany, and the CIA World DataBank II data sets to show wetlands and water reservoirs at global scale. We use a custom virtual globe (Crusta) developed at the UC Davis KeckCAVES. Crusta is designed specifically to allow visualization and mapping of features in very high spatial resolution (<1 m) and large-extent (thousands of km2) raster imagery and topographic data. In addition to imagery, a set of pins, labels and billboards is used to provide textual information about these lakes. Users can interactively learn about lake and watershed processes as well as geologic processes (e.g. faulting, landslide, glacial, volcanic

  4. Direct in vitro comparison of six 3D positive contrast methods for susceptibility marker imaging

    PubMed Central

    Vonken, Evert-jan P. A.; Schär, Michael; Yu, Jing; Bakker, Chris J. G.; Stuber, Matthias

    2012-01-01

    Purpose: To compare different techniques for positive-contrast imaging of susceptibility markers with MRI for 3D visualization. As several different techniques have been reported, the choice of a suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. Materials and methods: Six different positive-contrast techniques were investigated for their ability to image a single susceptibility marker in vitro at 3T: the white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency-selective excitation (FSX), fast low-flip-angle positive-contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL). Results: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and the strengths and weaknesses of the different approaches are discussed. Conclusion: The appropriate choice of positive-contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available. PMID:23281151

  5. Towards Perceptual Interface for Visualization Navigation of Large Data Sets Using Gesture Recognition with Bezier Curves and Registered 3-D Data

    SciTech Connect

    Shin, M C; Tsap, L V; Goldgof, D B

    2003-03-20

    This paper presents a gesture recognition system for visualization navigation. Scientists are interested in developing interactive settings for exploring large data sets in an intuitive environment. The input consists of registered 3-D data. A geometric method using Bezier curves is used for the trajectory analysis and classification of gestures. The hand gesture speed is incorporated into the algorithm to enable correct recognition from trajectories with variations in hand speed. The method is robust and reliable: correct hand identification rate is 99.9% (from 1641 frames), modes of hand movements are correct 95.6% of the time, recognition rate (given the right mode) is 97.9%. An application to gesture-controlled visualization of 3D bioinformatics data is also presented.
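The cubic Bezier trajectory model used for gesture classification can be sketched as a direct evaluation of the Bernstein form; the control points below are illustrative, not gesture data from the study:

```python
import numpy as np

# Sketch of the trajectory model: a hand-trajectory segment is represented
# by a cubic Bezier curve; classification then works on the fitted control
# points rather than the raw, speed-varying samples.
def bezier3(p0, p1, p2, p3, t):
    t = np.asarray(t)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

pts = bezier3(np.array([0.0, 0.0]), np.array([0.0, 1.0]),
              np.array([1.0, 1.0]), np.array([1.0, 0.0]),
              np.linspace(0.0, 1.0, 5))
```

Normalizing the parameter t against measured hand speed is one way such a representation can stay invariant to variations in gesture tempo.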

  6. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    PubMed Central

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-01-01

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements. PMID:27657066
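The core of fast Gaussian gridding, spreading each nonuniform sample onto a fine uniform grid with a truncated Gaussian kernel before an ordinary FFT, can be sketched in 1-D; the parameters are toy values, not the paper's implementation:

```python
import numpy as np

# 1-D Gaussian gridding sketch: spread nonuniform samples c at positions
# x in [0, 2*pi) onto a uniform grid of n_grid points, truncating the
# kernel to +/- `spread` grid cells around each sample.
def gaussian_grid(x, c, n_grid, spread=6, tau=None):
    if tau is None:
        tau = (spread / 2) * (2 * np.pi / n_grid) ** 2  # kernel width
    grid = np.zeros(n_grid, dtype=complex)
    for xj, cj in zip(x, c):
        m0 = int(round(xj / (2 * np.pi) * n_grid))
        for m in range(m0 - spread, m0 + spread + 1):
            xm = 2 * np.pi * m / n_grid
            grid[m % n_grid] += cj * np.exp(-((xj - xm) ** 2) / (4 * tau))
    return grid

g = gaussian_grid(np.array([0.1, 3.0]), np.array([1.0, 2.0]), 64)
```

A full NUFFT then applies a standard FFT to this grid and divides out the Gaussian's transform (deconvolution), which is what replaces Stolt interpolation in the FGG-NUFFT pipeline.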

  7. 3D visualization of TiO2 nanocrystals in mesoporous nanocomposite using energy filtered transmission electron microscopy tomography.

    PubMed

    Gondo, Takashi; Kasama, Takeshi; Kaneko, Kenji

    2014-11-01

    Introduction: Mesoporous silica, SBA-15, is one of the best candidates for the supporting material of catalytic nanoparticles because of its relatively large and controllable pore size and large specific surface area [1]. So far, various nanoparticles, such as Au, Pt and Pd, have been introduced into the pores for catalytic applications [2]. The size of nanoparticles supported inside SBA-15 is restricted by that of the pores, usually ranging from 2 nm to 50 nm. It is necessary to anchor the nanoparticles within the pores to avoid their segregation and sintering. However, it is difficult to anchor them within the pores when the deposition-precipitation method is used, owing to the extremely low iso-electric point (IEP) of silica (∼2). Therefore, TiO2 nanocrystals (IEP 6-8) were introduced to anchor the AuNPs [3]. In this study, EFTEM tomography was applied to examine the effectiveness of TiO2 for anchoring AuNPs. Materials and methods: Au/TiO2-SBA-15 was embedded in epoxy resin for electron microscopy and microtomed to a thickness of about 30 nm. EFTEM tomography was operated at 120 kV using the Ti-L ionization edge via the three-window method. Prior to EFTEM, STEM-HAADF tomography was also carried out for visualizing the AuNPs and for comparison. Results and discussion: Figure 1 shows the 3D volume of AuNPs and TiO2 nanocrystals from EFTEM tomography. The TiO2 nanocrystals in the porous material were successfully visualized, and the local relationship between AuNPs and TiO2 nanocrystals was revealed. A large number of TiO2 nanocrystals were randomly distributed in the SBA-15, and most AuNPs sat directly on exposed TiO2 nanocrystals. This implies that the TiO2 nanocrystals were exposed on the pore surfaces and anchored the AuNPs inside the pores. (Fig. 1: 3D volume of AuNPs and TiO2 nanocrystals.)
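The three-window background subtraction mentioned above fits a power-law background A*E^(-r) through two pre-edge energy windows and subtracts its extrapolation from the post-edge window; the energies and intensities below are synthetic, not measured values:

```python
import numpy as np

# Three-window elemental-mapping sketch: fit A*E**(-r) through two
# pre-edge windows (e1, i1) and (e2, i2), extrapolate the background to
# the post-edge window e3, and return the background-subtracted signal.
def three_window(e1, i1, e2, i2, e3, i3):
    r = np.log(i1 / i2) / np.log(e2 / e1)  # power-law exponent
    a = i1 * e1 ** r                       # amplitude from first window
    background = a * e3 ** (-r)            # extrapolated background
    return i3 - background

# Hypothetical windows around a Ti-L edge (~456 eV); units are eV / counts.
signal = three_window(400.0, 1000.0, 430.0, 800.0, 470.0, 900.0)
```

Performing this per pixel at every tilt angle yields the elemental maps that are then reconstructed into the 3D volume.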

  8. Investigation on reconstruction methods applied to 3D terahertz computed tomography.

    PubMed

    Recur, B; Younus, A; Salort, S; Mounaix, P; Chassagne, B; Desbarats, P; Caumes, J-P; Abraham, E

    2011-03-14

    3D terahertz computed tomography has been performed using a monochromatic millimeter-wave imaging system coupled with an infrared temperature sensor. Three different reconstruction methods (a standard back-projection algorithm and two iterative analyses) have been compared in order to reconstruct large 3D objects. The quality (intensity, contrast and geometric preservation) of the reconstructed cross-sectional images is discussed together with the optimization of the number of projections. A final demonstration on real-life 3D objects illustrates the potential of the reconstruction methods for applied terahertz tomography.
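The standard back-projection baseline referred to above can be sketched for a parallel-beam geometry; this is the unfiltered variant with nearest-neighbour interpolation, and a toy sinogram stands in for the terahertz measurements:

```python
import numpy as np

# Unfiltered parallel-beam back-projection: smear each projection back
# across the image along its acquisition angle and average over angles.
def backproject(sinogram, angles, size):
    """sinogram: (n_angles, n_detectors); returns a size x size image."""
    recon = np.zeros((size, size))
    xs = np.arange(size) - size / 2 + 0.5
    X, Y = np.meshgrid(xs, xs)
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, angles):
        # detector coordinate of each pixel for this viewing angle
        s = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        idx = np.clip(np.round(s).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon / len(angles)

angles = np.linspace(0.0, np.pi, 60, endpoint=False)
sino = np.ones((60, 32))            # toy sinogram of a uniform field
img = backproject(sino, angles, 32)
```

Filtered back-projection adds a ramp filter to each projection first; the iterative methods compared in the paper instead refine an estimate by repeatedly projecting and correcting it.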

  9. BRAGI: linking and visualization of database information in a 3D viewer and modeling tool.

    PubMed

    Reichelt, Joachim; Dieterich, Guido; Kvesic, Marsel; Schomburg, Dietmar; Heinz, Dirk W

    2005-04-01

    BRAGI is a well-established package for viewing and modeling of three-dimensional (3D) structures of biological macromolecules. A new version of BRAGI has been developed that is supported on Windows, Linux and SGI. The user interface has been rewritten to give the standard 'look and feel' of the chosen operating system and to provide a more intuitive, easier usage. A large number of new features have been added. Information from public databases such as SWISS-PROT, InterPro, DALI and OMIM can be displayed in the 3D viewer. Structures can be searched for homologous sequences using the NCBI BLAST server.

  10. Application of 3D WebGIS and real-time technique in earthquake information publishing and visualization

    NASA Astrophysics Data System (ADS)

    Li, Boren; Wu, Jianping; Pan, Mao; Huang, Jing

    2015-06-01

    In hazard management, earthquake researchers have utilized GIS to ease the process of managing disasters, and WebGIS to assess hazards and seismic risk. Although such systems provide a visual analysis platform based on GIS technology, they lack a general approach to extending WebGIS to process dynamic data, especially real-time data. In this paper, we propose a novel real-time 3D visual earthquake-information publishing model based on WebGIS and a digital globe to improve the ability of WebGIS-based systems to process real-time data. On the basis of the model, we implement a real-time 3D earthquake-information publishing system, EqMap3D. The system can not only publish real-time earthquake information but also display these data and their background geoscience information in a 3D scene. It provides a powerful tool for display, analysis, and decision-making for researchers and administrators, and facilitates better communication between geoscientists and the interested public.

  11. Development of a compact 3D shape measurement unit using the light-source-stepping method

    NASA Astrophysics Data System (ADS)

    Fujigaki, Motoharu; Sakaguchi, Toshimasa; Murata, Yorinobu

    2016-10-01

    A compact 3D shape measurement unit that uses the light-source-stepping method (LSSM) is developed. The LSSM, proposed by the authors, is a phase-shifting fringe-projection method for shape measurement. The authors have also developed a linear LED device for high-speed shape measurement using the LSSM, with which a compact, high-speed 3D shape measurement unit can be realized. However, the LSSM is difficult to use because the phase-shifting amount is not uniform: it depends on the distance from the grating plate. It is therefore necessary to consider carefully the locations of the linear LED device and the grating plate. In this paper, the design method for a 3D shape measurement unit that uses the LSSM is shown, and a prototype of a compact 3D shape measurement unit with a linear LED device is developed.
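For context, the classic four-step phase recovery that phase-shifting fringe projection relies on (uniform pi/2 shifts, which the LSSM only approximates away from the reference plane) is phi = atan2(I4 - I2, I1 - I3); a scalar sketch with a synthetic fringe signal:

```python
import numpy as np

# Four-step phase-shifting recovery: from four intensity samples of
# I(d) = A + B*cos(phi + d) at shifts d = 0, pi/2, pi, 3*pi/2,
# the fringe phase phi is recovered independently of A and B.
def four_step_phase(i1, i2, i3, i4):
    return np.arctan2(i4 - i2, i1 - i3)

phi_true = 0.7
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [5.0 + 2.0 * np.cos(phi_true + d) for d in shifts]
phi = four_step_phase(*frames)
```

The recovered phase, after unwrapping, maps to height via the projector-camera geometry; the paper's design problem is arranging the LED positions so the effective shifts stay close to these uniform values.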

  12. Observed Human Errors in Interpreting 3D visualizations: implications for Teaching Students how to Comprehend Geological Block Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Pirl, E.; Chiang, J.; Tremaine, M.

    2009-12-01

    Block diagrams are commonly used to communicate three-dimensional geological structures and other phenomena relevant to geological science (e.g., water bodies in the ocean). However, several recent studies have suggested that these 3D visualizations create difficulties for individuals with low to moderate spatial abilities. We have therefore initiated a series of studies to understand what it is about the 3D structures that makes them so difficult for some people, and to determine whether we can improve people's understanding of these structures through web-based training not tied to geology or other domain knowledge. Our first study examined the mistakes subjects made with a set of 3D block diagrams designed to represent progressively more difficult internal structures. Each block was shown bisected by a plane either perpendicular or at an angle to the block sides. Five low- to medium-spatial subjects were asked to draw the features that would appear on the bisecting plane and to talk aloud as they solved the problem. Each session was videotaped. Using the time it took subjects to solve the problems, their verbalizations, and the drawings found to be in error, we identified common patterns in the difficulties subjects had with the diagrams. From these patterns we derived the strategies subjects used in solving the problems, and from these strategies we are developing teaching methods. One problem found in earlier work on geological structures was not observed in our study: subjects failing to recognize the 2D representation of the block as 3D and drawing the cross-section as a combined version of the visible faces of the object. We attribute this to our experiment introduction, suggesting that even this simple training needs to be carried out with students encountering 3D block diagrams. Other problems subjects had included difficulties in perceptually…

  13. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary-grade children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures, and abstract top-view numeric…
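
    One plausible reading of top-view numeric coding can be sketched as follows (a hypothetical encoding for illustration; the article's exact convention may differ): each cell of a 2-D grid records the height of the cube column at that floor position.

```python
def top_view_code(structure):
    """Encode a 3-D cube structure as a top-view grid of column heights.

    `structure` is a set of (x, y, z) unit-cube coordinates with z >= 0.
    Each cell of the returned grid holds the number of cube layers
    covering that (x, y) position -- one plausible reading of
    'top-view numeric coding', assumed here for illustration.
    """
    if not structure:
        return []
    xs = [x for x, _, _ in structure]
    ys = [y for _, y, _ in structure]
    w, h = max(xs) + 1, max(ys) + 1
    grid = [[0] * w for _ in range(h)]
    for x, y, z in structure:
        # the tallest cube at (x, y) determines the recorded height
        grid[y][x] = max(grid[y][x], z + 1)
    return grid
```

    For example, two cubes stacked at the origin next to a single cube would be coded as the grid `[[2, 1]]`.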

  14. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    NASA Astrophysics Data System (ADS)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful for preserving information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research aims to valorize, and make more accessible, the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results will be used for the renewal of the current exhibition at the Archaeological Museum in Milan, making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and comprehension of the public through interactivity. Four important artefacts were considered for this purpose: two ushabtis, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and by implementing dedicated software in Unity. The 3D models were augmented with responsive points of interest placed on important symbols or features of each artefact. This allows single parts of the artefact to be highlighted in order to better identify the hieroglyphs and provide their translation. The paper describes the process of optimizing the 3D models, the implementation of the interactive scenario, and the results of tests carried out in the lab.

  15. A novel 3D stitching method for WLI based large range surface topography measurement

    NASA Astrophysics Data System (ADS)

    Lei, Zili; Liu, Xiaojun; Zhao, Li; Chen, Liangzhou; Li, Qian; Yuan, Tengfei; Lu, Wenlong

    2016-01-01

    3D image stitching is an important technique for large-range surface topography measurement in white-light interferometry (WLI). However, stitching accuracy is inevitably influenced by noise. To solve this problem, a novel method for 3D image stitching is proposed in this paper. Based on an analysis of noise mechanisms in WLI measurement, a new definition of noise in 3D images is given through an evaluation model of the difference between the practical WLI interference signal and the ideal signal. With this definition, actual noise in the 3D image is identified while genuine singular heights on the surface are not wrongly attributed to noise, and a binary noise-mark matrix corresponding to the 3D image is obtained. This matrix then serves as a key component in a series of new algorithms that suppress the adverse effects of noise in each step of the proposed stitching method. The method substantially reduces the influence of noise on stitching and improves stitching accuracy. Its effectiveness is verified through 3D image stitching experiments with noise in WLI.
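
    The role of a binary noise-mark matrix in stitching can be illustrated with a minimal sketch (assumed behavior, not the paper's actual algorithms): registered, overlapping height maps are merged while noise-marked pixels are excluded from the blend.

```python
import numpy as np

def stitch_height_maps(h1, mask1, h2, mask2):
    """Merge two registered WLI height maps using noise masks.

    mask1/mask2 are binary matrices (True = reliable pixel), in the
    spirit of the paper's noise-mark matrix. This sketch averages
    pixels where both maps are reliable, falls back to whichever
    single map is valid, and marks the rest NaN (no data).
    """
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    m1 = np.asarray(mask1, bool)
    m2 = np.asarray(mask2, bool)
    out = np.full(h1.shape, np.nan)
    both = m1 & m2
    out[both] = 0.5 * (h1[both] + h2[both])      # blend where both are clean
    out[m1 & ~m2] = h1[m1 & ~m2]                 # keep the only clean source
    out[m2 & ~m1] = h2[m2 & ~m1]
    return out
```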

  16. 3D reconstruction and quantitative assessment method of mitral eccentric regurgitation from color Doppler echocardiography

    NASA Astrophysics Data System (ADS)

    Liu, Qi; Ge, Yi Nan; Wang, Tian Fu; Zheng, Chang Qiong; Zheng, Yi

    2005-10-01

    Based on two-dimensional color Doppler images, a multiplane transesophageal rotational scanning method is used to acquire original Doppler echocardiography while the electrocardiogram is recorded synchronously. After filtering and interpolation, surface rendering and volume rendering are performed. By analyzing the color-bar information and the superposition principle of the color Doppler flow image, the grayscale mitral anatomical structure and the color-coded regurgitation velocity parameter were separated from the color Doppler flow images. Three-dimensional reconstruction of the mitral structure and of the regurgitation velocity distribution was implemented separately, and fused visualization of the reconstructed regurgitation velocity distribution with its corresponding 3D mitral anatomy was realized. This can be used to observe the position, phase and direction of the mitral regurgitation and to measure its jet length, area, volume, spatial distribution and severity level. In addition, in patients with eccentric mitral regurgitation, this new modality overcomes the inherent limitations of two-dimensional color Doppler flow imaging by depicting the full extent of the jet trajectory: the area of eccentric regurgitation in the three-dimensional image was much larger than in the two-dimensional image, and the area and volume variation tendencies of the regurgitation were shown at different angles and different systolic phases. The study shows that three-dimensional color Doppler provides quantitative measurements of eccentric mitral regurgitation that are more accurate and reproducible than conventional color Doppler.
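
    The separation of the color-coded velocity parameter can be sketched as a nearest-color lookup against the displayed color bar (an assumed, generic procedure; the paper does not publish its exact separation method, and all array shapes here are illustrative):

```python
import numpy as np

def decode_velocity(pixels, colorbar_rgb, colorbar_velocities):
    """Map color-Doppler pixels to velocities via the color bar.

    Nearest-neighbour lookup of each pixel's RGB value against the
    color-bar entries, a common way to recover the velocity encoded
    in color Doppler images. pixels: (N, 3) RGB; colorbar_rgb: (M, 3);
    colorbar_velocities: (M,). Returns (N,) velocities.
    """
    px = np.asarray(pixels, float)[:, None, :]        # (N, 1, 3)
    cb = np.asarray(colorbar_rgb, float)[None, :, :]  # (1, M, 3)
    nearest = ((px - cb) ** 2).sum(axis=2).argmin(axis=1)
    return np.asarray(colorbar_velocities, float)[nearest]
```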

  17. 3D X-ray imaging methods in support of catheter ablations of cardiac arrhythmias.

    PubMed

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Over the past 20 years, catheter ablation has become an effective and curative treatment method. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) is a modern method enabling the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow the creation and segmentation of 3D models of all cardiac cavities. Recent research has demonstrated the use of 3DRA to create 3D models of other cardiac structures (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). These models can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, fused directly with 3D electroanatomic systems and/or with fluoroscopy. The creation and use of such models has developed intensively over the past years, and they are now routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development in both the creation and use of these models may be anticipated.

  18. 3D modeling method for computer animation based on a modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models play an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In computer animation, however, such optical measurement devices are too expensive to be widely adopted, while precision is a less critical factor. In this paper, a new low-cost 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method. In the modified system, reference planes are unnecessary and the size range of the scanned objects is widely expanded. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm merges the point clouds from different viewpoints into a full description of the object, and after a series of operations a NURBS surface model is generated. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.
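
    The point-cloud merging step can be illustrated with a minimal single-stage point-to-point ICP (a generic sketch of the technique only; the paper's two-stage variant is not reproduced here):

```python
import numpy as np

def icp_align(src, dst, iters=20):
    """Minimal point-to-point ICP: align `src` (N,3) onto `dst` (M,3).

    Each iteration matches every source point to its nearest target
    point (brute force), then solves the best rigid transform with
    the SVD-based Kabsch method and applies it. Returns the aligned
    points plus the accumulated rotation and translation.
    """
    src = np.asarray(src, float).copy()
    dst = np.asarray(dst, float)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # nearest-neighbour correspondences
        d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
        matched = dst[d2.argmin(axis=1)]
        # Kabsch: optimal rotation between centred point sets
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                 # proper rotation (det = +1)
        t = cm - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return src, R_total, t_total
```

    With reasonable overlap and an initial misalignment smaller than the typical point spacing, the nearest-neighbour matches are correct and the iteration converges quickly; real scanners use spatial indexing instead of the brute-force distance matrix.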

  19. Novel methods for estimating 3D distributions of radioactive isotopes in materials

    NASA Astrophysics Data System (ADS)

    Iwamoto, Y.; Kataoka, J.; Kishimoto, A.; Nishiyama, T.; Taya, T.; Okochi, H.; Ogata, H.; Yamamoto, S.

    2016-09-01

    In recent years, various gamma-ray visualization techniques, or gamma cameras, have been proposed. These techniques are extremely effective for identifying "hot spots" or regions where radioactive isotopes are accumulated. Examples of such would be nuclear-disaster-affected areas such as Fukushima or the vicinity of nuclear reactors. However, the images acquired with a gamma camera do not include distance information between radioactive isotopes and the camera, and hence are "degenerated" in the direction of the isotopes. Moreover, depth information in the images is lost when the isotopes are embedded in materials, such as water, sand, and concrete. Here, we propose two methods of obtaining depth information of radioactive isotopes embedded in materials by comparing (1) their spectra and (2) images of incident gamma rays scattered by the materials and direct gamma rays. In the first method, the spectra of radioactive isotopes and the ratios of scattered to direct gamma rays are obtained. We verify experimentally that the ratio increases with increasing depth, as predicted by simulations. Although the method using energy spectra has been studied for a long time, an advantage of our method is the use of low-energy (50-150 keV) photons as scattered gamma rays. In the second method, the spatial extent of images obtained for direct and scattered gamma rays is compared. By performing detailed Monte Carlo simulations using Geant4, we verify that the spatial extent of the position where gamma rays are scattered increases with increasing depth. To demonstrate this, we are developing various gamma cameras to compare low-energy (scattered) gamma-ray images with fully photo-absorbed gamma-ray images. We also demonstrate that the 3D reconstruction of isotopes/hotspots is possible with our proposed methods. These methods have potential applications in the medical fields, and in severe environments such as the nuclear-disaster-affected areas in Fukushima.
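
    The first method's depth estimate can be sketched as interpolation on a monotonic calibration curve of the scattered-to-direct ratio (the curve values below are purely illustrative; in practice they would come from the paper's simulations or phantom measurements):

```python
import numpy as np

def depth_from_ratio(ratio, cal_depths, cal_ratios):
    """Estimate source depth from the scattered/direct gamma-ray ratio.

    The paper verifies that this ratio grows monotonically with the
    depth of the isotope in the material, so a measured ratio can be
    inverted by interpolating a calibration curve (cal_ratios must be
    increasing, paired with cal_depths).
    """
    return float(np.interp(ratio, cal_ratios, cal_depths))
```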

  20. A Novel 3D Building Damage Detection Method Using Multiple Overlapping UAV Images

    NASA Astrophysics Data System (ADS)

    Sui, H.; Tu, J.; Song, Z.; Chen, G.; Li, Q.

    2014-09-01

    In this paper, a novel approach is presented that applies multiple overlapping UAV images to building damage detection. Traditional building damage detection methods focus on 2D change detection (i.e., changes in image appearance only), but the 2D information delivered by the images is often insufficient and inaccurate for building damage detection. Detecting building damage from the 3D features of a scene is therefore desirable. The key idea of 3D building damage detection is 3D change detection using 3D point clouds obtained from aerial images through structure-from-motion (SfM) techniques. The approach discussed in this paper uses not only the height changes of the 3D scene but also the images' shape and texture features, and thus fully combines 2D and 3D information about the real world to detect building damage. The results, tested through a field study, demonstrate that the method is feasible and effective for building damage detection. The proposed method is easily applicable and well suited for rapid damage assessment after natural disasters.
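
    The height-change cue can be sketched as a simple difference of before/after digital surface models rasterized from the SfM point clouds (an assumed simplification; the paper additionally fuses 2D shape and texture features, and the threshold below is illustrative):

```python
import numpy as np

def detect_height_damage(dsm_before, dsm_after, drop_threshold=2.0):
    """Flag candidate building-damage cells from two digital surface
    models (e.g. gridded from SfM point clouds of UAV images).

    A cell is flagged where the surface height has dropped by more
    than `drop_threshold` metres between the two epochs. This covers
    only the height-change cue of the full method.
    """
    diff = np.asarray(dsm_before, float) - np.asarray(dsm_after, float)
    return diff > drop_threshold
```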

  1. Integrated 3-D quality control of geological interpretation through the use of simple methods and programs

    SciTech Connect

    Chatellier, J.Y.; Gustavo, F.; Magaly, Q.

    1996-12-31

    Integrating different petroleum geology disciplines gives insight and helps in analyzing data and in checking the quality of different interpretations. Simple approaches and affordable programs allow rapid visualization of data in 3-D. Displaying geological data from stratigraphy, diagenesis and structural geology together allows the identification of anomalies (i.e., development targets) and often gives clues to the controlling processes. Four case studies from world-class fields are used to illustrate the vital need to integrate quality control of interpretation across disciplines. The distribution of diagenetic alterations is revealed by visualizing diagenetic and petrographic data against faults in a 3-D statistical program; faults are transferred from 3-D seismic into such a program and then analyzed against other data, and wrongly correlated fault intersections are easily picked out. Other powerful tools include a modified use of Bischke plots, which allow the identification of missing sections previously interpreted as fault cut-outs. The quality of interpretation has sometimes been assessed from the presence of stacked anomalies of various expression. In other cases, repeated unexpected isopach trends revealed subtle faults, such as Riedel shears sealing and compartmentalizing the reservoirs. Occasionally the timing of fault reactivation was assessed precisely where all other techniques failed even to identify these hidden features. Unrecognized porosity-depth trends were identified after filtering data by stratigraphy or sedimentology and studying them in their geographical and tectonic context. Three-dimensional visualization was needed in cases of quartz overgrowth, where grain size, depth, stratigraphy and location with respect to faults were all important.

  3. Accurate compressed look up table method for CGH in 3D holographic display.

    PubMed

    Gao, Chuan; Liu, Juan; Li, Xin; Xue, Gaolei; Jia, Jia; Wang, Yongtian

    2015-12-28

    A computer-generated hologram (CGH) should be obtained with both high accuracy and high speed for 3D holographic display, yet most research focuses only on speed. In this paper, a simple and effective computation method for CGH is proposed based on Fresnel diffraction theory and a look-up table. Numerical simulations and optical experiments are performed to demonstrate its feasibility. The proposed method obtains more accurate reconstructed images with lower memory usage than the split look-up table and compressed look-up table methods, without sacrificing computational speed in hologram generation; it is therefore called the accurate compressed look-up table (AC-LUT) method. The AC-LUT method is an effective way to calculate the CGH of 3D objects for real-time 3D holographic display, where huge amounts of data must be handled, and it could provide fast and accurate digital transmission in various dynamic optical fields in the future.
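
    A generic look-up-table acceleration of the Fresnel point-source sum can be sketched as follows (illustrative parameters and a simple per-depth radial LUT only; the AC-LUT method of the paper adds compression and accuracy refinements not reproduced here):

```python
import numpy as np

def fresnel_cgh_lut(points, grid_n=128, pitch=8e-6, wavelength=532e-9,
                    lut_bins=4096):
    """Point-source Fresnel CGH with a radial-distance look-up table.

    For each object point (x, y, z, amplitude), the paraxial Fresnel
    phase exp(i*k*r^2 / (2z)) is read from a table sampled over the
    squared transverse radius instead of being recomputed per pixel.
    All parameters are illustrative defaults.
    """
    k = 2 * np.pi / wavelength
    ax = (np.arange(grid_n) - grid_n / 2) * pitch
    X, Y = np.meshgrid(ax, ax)
    field = np.zeros((grid_n, grid_n), complex)
    r2_max = 2 * (grid_n * pitch) ** 2           # covers the whole plane
    r2_axis = np.linspace(0.0, r2_max, lut_bins)
    for x0, y0, z0, amp in points:
        lut = np.exp(1j * k * r2_axis / (2 * z0))   # phase vs squared radius
        r2 = (X - x0) ** 2 + (Y - y0) ** 2
        idx = np.minimum((r2 / r2_max * (lut_bins - 1)).astype(int),
                         lut_bins - 1)
        field += amp * lut[idx] * np.exp(1j * k * z0)
    return field
```

    Split-LUT-style methods exploit the fact that the table depends only on depth, so one table per object plane serves every point in that plane.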

  5. Novel accurate and scalable 3-D MT forward solver based on a contracting integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, M.; Geraskin, A.; Kuvshinov, A.

    2016-11-01

    We present a novel, open source 3-D MT forward solver based on a method of integral equations (IE) with contracting kernel. Special attention in the solver is paid to accurate calculations of Green's functions and their integrals which are cornerstones of any IE solution. The solver supports massive parallelization and is able to deal with highly detailed and contrasting models. We report results of a 3-D numerical experiment aimed at analyzing the accuracy and scalability of the code.
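
    The contraction property that underpins the IE formulation can be illustrated with a toy fixed-point iteration on a dense discretized kernel (a conceptual sketch only, not the solver's actual Green's-function machinery):

```python
import numpy as np

def contracting_iteration(K, b, tol=1e-12, max_iter=1000):
    """Solve x = b + K x by fixed-point iteration.

    The iteration converges whenever the discretized kernel K is a
    contraction (spectral norm < 1) -- the property a contracting-
    kernel IE formulation guarantees by construction. K is a dense
    stand-in for the integral operator.
    """
    x = np.asarray(b, float).copy()
    for _ in range(max_iter):
        x_new = b + K @ x
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```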

  6. A multimaterial bioink method for 3D printing tunable, cell-compatible hydrogels.

    PubMed

    Rutz, Alexandra L; Hyland, Kelly E; Jakus, Adam E; Burghardt, Wesley R; Shah, Ramille N

    2015-03-01

    A multimaterial bio-ink method using polyethylene glycol crosslinking is presented for expanding the biomaterial palette required for 3D bioprinting of more mimetic and customizable tissue and organ constructs. Lightly crosslinked, soft hydrogels are produced from precursor solutions of various materials and 3D printed. Rheological and biological characterizations are presented, and the promise of this new bio-ink synthesis strategy is discussed.

  7. Managing Construction Operations Visually: 3-D Techniques for Complex Topography and Restricted Visibility

    ERIC Educational Resources Information Center

    Rodriguez, Walter; Opdenbosh, Augusto; Santamaria, Juan Carlos

    2006-01-01

    Visual information is vital in planning and managing construction operations, particularly where there is complex terrain topography and salvage operations with limited accessibility and visibility. From visually assessing site operations and preventing equipment collisions to simulating material handling activities and supervising remote sites…

  8. 3D visualization of XFEL beam focusing properties using LiF crystal X-ray detector.

    PubMed

    Pikuz, Tatiana; Faenov, Anatoly; Matsuoka, Takeshi; Matsuyama, Satoshi; Yamauchi, Kazuto; Ozaki, Norimasa; Albertazzi, Bruno; Inubushi, Yuichi; Yabashi, Makina; Tono, Kensuke; Sato, Yuya; Yumoto, Hirokatsu; Ohashi, Haruhiko; Pikuz, Sergei; Grum-Grzhimailo, Alexei N; Nishikino, Masaharu; Kawachi, Tetsuya; Ishikawa, Tetsuya; Kodama, Ryosuke

    2015-01-01

    Here we report that, by means of direct irradiation of a lithium fluoride (LiF) crystal, in situ 3D visualization of the SACLA XFEL focused beam profile along the propagation direction is realized, including propagation inside the photoluminescent solid. The high sensitivity and large dynamic range of the LiF crystal detector allowed measurements of the intensity distribution of the beam both far from and near the best focus, as well as evaluation of the XFEL source size and beam quality factor M(2). Our measurements also support the theoretical prediction that for X-ray photons with energies of ~10 keV the radius of the generated photoelectron cloud within the LiF crystal reaches about 600 nm before thermalization. The proposed method has a spatial resolution of ~0.4-2.0 μm for photons with energies of 6-14 keV and could potentially be used in single-shot mode for the optimization of focusing systems developed at XFEL and synchrotron facilities. PMID:26634431

  10. A novel method for fabricating curved frequency selective surface via 3D printing technology

    NASA Astrophysics Data System (ADS)

    Liang, Fengchao; Gao, Jinsong

    2014-12-01

    A novel method for fabricating curved frequency selective surfaces (FSS) with undevelopable curved shapes using 3D printing technology is proposed in this paper. First, an FSS composed of Y-slotted elements suited to incidence angles of 0° to 70° was designed. A 3D model of the curved FSS was then created in 3D modeling software, exported to an STL file and input to a stereolithography 3D printer, which fabricated the prototype of the curved FSS layer by layer. Finally, a 10 μm thick aluminum film was coated onto the outer surface of the prototype with a vacuum coater. The transmission performance of the metallised curved FSS was tested using the free-space method and met the requirements of the simulation design: the pass-band was in the Ku-band and the transmission rate at the center frequency was 63% for the nose-cone incidence direction. This method provides a new way to apply FSS to arbitrarily curved electromagnetic windows.

  11. 3D interferometric microscope: color visualization of engineered surfaces for industrial applications

    NASA Astrophysics Data System (ADS)

    Schmit, Joanna; Novak, Matt; Bui, Son

    2015-09-01

    3D microscopes based on white light interference (WLI) provide precise measurement for the topography of engineering surfaces. However, the display of an object in its true colors as observed under white illumination is often desired; this traditionally has presented a challenge for WLI-based microscopes. Such 3D color display is appealing to the eye and great for presentations, and also provides fast evaluation of certain characteristics like defects, delamination, or deposition of different materials. Determination of color as observed by interferometric objectives is not straightforward; we will present how color imaging capabilities similar to an ordinary microscope can be obtained in interference microscopes based on WLI and we will give measurement and imaging examples of a few industrial samples.

  12. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    PubMed Central

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this reg…