Sample records for X3D operational visualization

  1. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional, a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor, peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but is instead compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
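    As a concrete illustration of the pathway's entry point, the sketch below serializes a handful of invented data points into a minimal X3D scene using only the Python standard library. The node names (X3D, Scene, Transform, Shape, Sphere, Appearance, Material) follow the X3D standard; the helper function and data values are hypothetical, not code from the article.

```python
# Minimal sketch: turning a 3D scatter of data points into an X3D document.
# Node names follow the ISO X3D standard; the data values are invented.
import xml.etree.ElementTree as ET

def points_to_x3d(points, radius=0.05):
    """Return an X3D document (as a string) with one sphere per data point."""
    x3d = ET.Element("X3D", version="3.3", profile="Interchange")
    scene = ET.SubElement(x3d, "Scene")
    for x, y, z in points:
        t = ET.SubElement(scene, "Transform", translation=f"{x} {y} {z}")
        shape = ET.SubElement(t, "Shape")
        ET.SubElement(shape, "Sphere", radius=str(radius))
        app = ET.SubElement(shape, "Appearance")
        ET.SubElement(app, "Material", diffuseColor="0.8 0.2 0.2")
    return ET.tostring(x3d, encoding="unicode")

doc = points_to_x3d([(0, 0, 0), (1, 0.5, -0.3)])
```

    A file written this way can then be embedded in an interactive HTML page, 3D-printed, or rendered, which is the branching the article's product tree describes.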

  2. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  3. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckle and low signal-to-noise ratio. The degrading effect of speckle is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckle in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation, combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing, yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing.
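    The two-stage filtering described above can be sketched as follows. This is a numpy-only illustration with an invented random volume and a naive separable box filter; it is not the authors' implementation, and the kernel widths (7 and 3) are simply the example sizes quoted in the abstract.

```python
import numpy as np

def box_filter(vol, size):
    """Naive separable box (mean) filter of width `size` along each axis."""
    out = vol.astype(float)
    k = np.ones(size) / size
    for axis in range(out.ndim):
        out = np.apply_along_axis(
            lambda m: np.convolve(m, k, mode="same"), axis, out)
    return out

rng = np.random.default_rng(0)
vol = rng.random((16, 16, 16))            # stand-in for a speckled volume

# Stage 1: heavier smoothing -> stable gradients for shading.
vol_for_gradient = box_filter(vol, 7)
gz, gy, gx = np.gradient(vol_for_gradient)

# Stage 2: light smoothing -> less compositing noise, resolution preserved.
vol_for_composit = box_filter(vol, 3)
```

    Keeping the two smoothed copies separate is what gives the clinician independent control over shading and compositing, at the cost of filtering the volume twice.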

  4. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    PubMed

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations on pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and obtain better soft-tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Owing to manifold difficulties, 3D visualization of these MRI data had not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified, powerful raycasting-based 3D volume-rendering software package (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain in information, the presented system is now an established part of routine surgical planning.

  5. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on the visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radioactive or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind. Existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the "latest" available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interactions.

  6. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly attenuating structures such as the thoracic spine and ribs. These projections do not, however, contain information about soft-tissue anatomy, and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
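    A drastically simplified, translation-only toy version of this DRR-based registration loop might look as follows. Real systems use hardware-accelerated perspective ray-casting and a six-degree-of-freedom rigid-body optimizer; the parallel projection, in-plane grid search, and normalized cross-correlation metric here are illustrative stand-ins.

```python
import numpy as np

def drr(volume):
    """Parallel-beam 'DRR': integrate attenuation along the viewing axis."""
    return volume.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two same-sized images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return (a * b).mean()

rng = np.random.default_rng(1)
vol = rng.random((8, 32, 32))                       # stand-in for an MSCT volume
# Simulated intra-operative "X-ray": the DRR at an unknown in-plane offset.
target = np.roll(drr(vol), shift=(3, -2), axis=(0, 1))

# Optimization loop (here: exhaustive grid search over 2D shifts).
best = max(
    ((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
    key=lambda s: ncc(np.roll(drr(vol), shift=s, axis=(0, 1)), target),
)
```

    The search recovers the simulated offset; in the real method the same compare-and-update loop runs over all six rigid-body pose parameters, with the DRR recomputed on graphics hardware at each iterate.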

  7. Disentangling the intragroup HI in Compact Groups of galaxies by means of X3D visualization

    NASA Astrophysics Data System (ADS)

    Verdes-Montenegro, Lourdes; Vogt, Frederic; Aubery, Claire; Duret, Laetitie; Garrido, Julián; Sánchez, Susana; Yun, Min S.; Borthakur, Sanchayeeta; Hess, Kelley; Cluver, Michelle; Del Olmo, Ascensión; Perea, Jaime

    2017-03-01

    As an extreme kind of environment, Hickson Compact Groups (HCGs) have been shown to be very complex systems. HI VLA observations revealed an intricate network of HI tails and bridges, tracing pre-processing through extreme tidal interactions. We found HCGs to show a large HI deficiency, supporting an evolutionary sequence in which gas-rich groups transform via tidal interactions and ISM (interstellar medium) stripping into gas-poor systems. We also detected a diffuse HI component in the groups, increasing with evolutionary phase, although with uncertain distribution. The complex net of detected HI as observed with the VLA hence seems as puzzling as the missing one. In this talk we revisit the existing VLA information on the HI distribution and kinematics of HCGs by means of X3D visualization. X3D constitutes a powerful tool to extract the most from HI data cubes and a means of simplifying and easing access to data visualization and publication via three-dimensional (3D) diagrams.

  8. Java 3D Interactive Visualization for Astrophysics

    NASA Astrophysics Data System (ADS)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  9. Modeling Airport Ground Operations using Discrete Event Simulation (DES) and X3D Visualization

    DTIC Science & Technology

    2008-03-01

    …scenes. It is written in open-source Java and XML using the NetBeans platform, which makes it suitable both as a standalone application…and as a plug-in module for the NetBeans integrated development environment (IDE). X3D Graphics is the tool used for the creation of…process is shown in Figure 2. To create a new event graph in Viskit, the Viskit tool must first be launched via NetBeans or from the executable

  10. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor space 3D visual reconstruction has many applications and, once done accurately, enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor-space 3D visual reconstruction system, which can be operated in any given environment without GPS, has been developed using a human-operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  11. Development of a 3-D X-ray system

    NASA Astrophysics Data System (ADS)

    Evans, James Paul Owain

    The interpretation of standard two-dimensional x-ray images by humans is often very difficult. This is due to the lack of visual cues to depth in an image which has been produced by transmitted radiation. The solution put forward in this research is to introduce binocular parallax, a powerful physiological depth cue, into the resultant shadowgraph x-ray image. This has been achieved by developing a binocular stereoscopic x-ray imaging technique, which can be used both for visual inspection by human observers and for the extraction of three-dimensional co-ordinate information. The technique is implemented in the design and development of two experimental x-ray systems and in the development of measurement algorithms. The first experimental machine is based on standard linear x-ray detector arrays and was designed as an optimum configuration for visual inspection by human observers. However, it was felt that a combination of the 3-D visual inspection capability together with a measurement facility would enhance the usefulness of the technique. Therefore, both a theoretical and an empirical analysis of the co-ordinate measurement capability of the machine has been carried out. The measurement is based on close-range photogrammetric techniques. The accuracy of the measurement has been found to be of the order of 4 mm in x, 3 mm in y and 6 mm in z. A second experimental machine was developed, based on the same technique as the first. However, a major departure has been the introduction of a dual-energy linear x-ray detector array which will allow, in general, discrimination between organic and inorganic substances. The second design is a compromise between ease of visual inspection for human observers and optimum three-dimensional co-ordinate measurement capability. The system is part of an ongoing research programme into the possibility of introducing psychological depth cues into the resultant x-ray images. 
The research presented in

  12. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In the past years, the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We are dealing with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and airborne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing, interactive exploration, and visualization using Virtual Reality (VR) technology. We have had great success with research studies on extreme weather situations. In this paper we will elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We will explain how important it is to control the degrees of freedom given to the users (forecasters/scientists) during interaction; 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We will present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacle, and the time it usually takes, of setting up the visualization parameters and an appropriate camera view on a given atmospheric phenomenon. We found our inspiration in the way our operational forecasters work in the weather room. We decided to build a bridge between 2D visualization images and interactive 3D exploration. Our method combines web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we would like to present the first user experiences with this approach.

  13. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment producing solid object prototype models of computer-generated structures is directly applicable to visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved, and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model can be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization using computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original object or scaled up (or down) to match the viewer's reference frame more closely. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.

  14. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  16. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation result indicates the appropriate match. 
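    The depth-scaled template-matching step can be sketched roughly as below. The nearest-neighbour rescaling and exhaustive normalized-cross-correlation search are illustrative simplifications written for this summary, not the flight code.

```python
import numpy as np

def scale_template(tmpl, factor):
    """Nearest-neighbour rescale of a 2D template by `factor`
    (factor = original feature depth / new pixel depth)."""
    h, w = tmpl.shape
    nh = max(1, int(round(h * factor)))
    nw = max(1, int(round(w * factor)))
    ys = (np.arange(nh) / factor).astype(int).clip(0, h - 1)
    xs = (np.arange(nw) / factor).astype(int).clip(0, w - 1)
    return tmpl[np.ix_(ys, xs)]

def ncc_match(image, tmpl):
    """Best (row, col) of `tmpl` in `image` by normalized cross-correlation."""
    th, tw = tmpl.shape
    t = (tmpl - tmpl.mean()) / (tmpl.std() + 1e-9)
    best, pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            score = np.mean(t * (w - w.mean()) / (w.std() + 1e-9))
            if score > best:
                best, pos = score, (r, c)
    return pos

rng = np.random.default_rng(2)
img = rng.random((40, 40))
tmpl = img[10:18, 20:28].copy()               # feature seen at the original depth
pos = ncc_match(img, scale_template(tmpl, 1.0))   # same depth: exact relocation
bigger = scale_template(tmpl, 2.0)            # feature at half the depth: 2x larger
```

    The depth ratio comes from the stereo triangulation described above; scaling the template before correlating is what keeps the match valid as the rover closes on the target.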
The program could be a core for building application programs for systems

  17. EarthServer - 3D Visualization on the Web

    NASA Astrophysics Data System (ADS)

    Wagner, Sebastian; Herzig, Pasquale; Bockholt, Ulrich; Jung, Yvonne; Behr, Johannes

    2013-04-01

    EarthServer (www.earthserver.eu), funded by the European Commission under its Seventh Framework Programme, is a project to enable the management, access and exploration of massive, multi-dimensional datasets using Open Geospatial Consortium (OGC) query and processing language standards like WCS 2.0 and WCPS. To this end, a server/client architecture designed to handle Petabyte/Exabyte volumes of multi-dimensional data is being developed and deployed. As an important part of the EarthServer project, six Lighthouse Applications, major scientific data exploitation initiatives, are being established to make cross-domain, Earth Sciences related data repositories available in an open and unified manner, as service endpoints based on solutions and infrastructure developed within the project. Client technology developed and deployed in EarthServer ranges from mobile and web clients to immersive virtual reality systems, all designed to interact with a physically and logically distributed server infrastructure using exclusively OGC standards. In this contribution, we would like to present our work on a web-based 3D visualization and interaction client for Earth Sciences data using only technology found in standard web browsers, without requiring the user to install plugins or addons. Additionally, we are able to run the earth data visualization client on a wide range of platforms with very different software and hardware capabilities, such as smart phones (e.g. iOS, Android), different desktop systems, etc. High-quality, hardware-accelerated visualization of 3D and 4D content in standard web browsers can now be realized, and we believe it will become more and more common to use this fast, lightweight and ubiquitous platform to provide insights into big datasets without requiring the user to set up a specialized client first. With that in mind, we will also point out some of the limitations we encountered using current web technologies. 
Underlying the EarthServer web client

  18. Gamma/x-ray linear pushbroom stereo for 3D cargo inspection

    NASA Astrophysics Data System (ADS)

    Zhu, Zhigang; Hu, Yu-Chi

    2006-05-01

    For evaluating the contents of trucks, containers, cargo, and passenger vehicles with a non-intrusive gamma-ray or X-ray imaging system to determine the possible presence of contraband, three-dimensional (3D) measurements can provide more information than 2D measurements. In this paper, a linear pushbroom scanning model is built for such a commonly used gamma-ray or X-ray cargo inspection system. Accurate 3D measurements of the objects inside a cargo container can be obtained by using two such scanning systems with different scanning angles to construct a pushbroom stereo system. A simple but robust calibration method is proposed to find the important parameters of the linear pushbroom sensors. Then, a fast and automated stereo matching algorithm based on free-form deformable registration is developed to obtain 3D measurements of the objects under inspection. A user interface is designed for 3D visualization of the objects of interest. Experimental results of sensor calibration, stereo matching, and 3D measurement and visualization of a cargo container and the objects inside are presented.
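    Under a simplified parallel-scan geometry, the height recovered by such a dual-view pushbroom system follows directly from the disparity between the two scans and the two beam angles. The formula and numbers below are an illustrative sketch under that assumption, not the calibrated model from the paper.

```python
import math

def height_from_disparity(disparity_mm, theta1_deg, theta2_deg):
    """Height of a point above the reference plane for a dual-view linear
    pushbroom scanner with beam angles theta1 and theta2 (measured from
    vertical), assuming a simplified parallel-scan geometry: a point at
    height h is displaced along the scan direction by h * tan(theta) in
    each view, so the disparity is h * (tan(theta1) - tan(theta2))."""
    t1 = math.tan(math.radians(theta1_deg))
    t2 = math.tan(math.radians(theta2_deg))
    return disparity_mm / (t1 - t2)

# Example: 50 mm of disparity between +15 and -15 degree scans.
h = height_from_disparity(50.0, 15.0, -15.0)
```

    The real system additionally needs the calibration step the abstract mentions, since the effective angles and scales of the two linear sensors are not known a priori.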

  19. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

    Originally, the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three-dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, they are still being viewed on 2D screens. However, in this way valuable depth information is lost, and some interactions become unnecessarily complex or even infeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for medical diagnostics as well as for (bio)medical research.

  20. 3D PATTERN OF BRAIN ABNORMALITIES IN FRAGILE X SYNDROME VISUALIZED USING TENSOR-BASED MORPHOMETRY

    PubMed Central

    Lee, Agatha D.; Leow, Alex D.; Lu, Allen; Reiss, Allan L.; Hall, Scott; Chiang, Ming-Chang; Toga, Arthur W.; Thompson, Paul M.

    2007-01-01

    Fragile X syndrome (FraX), a genetic neurodevelopmental disorder, results in impaired cognition with particular deficits in executive function and visuo-spatial skills. Here we report the first detailed 3D maps of the effects of the Fragile X mutation on brain structure, using tensor-based morphometry. TBM visualizes structural brain deficits automatically, without time-consuming specification of regions-of-interest. We compared 36 subjects with FraX (age: 14.66 ± 1.58 SD, 18 females/18 males) and 33 age-matched healthy controls (age: 14.67 ± 2.2 SD, 17 females/16 males), using high-dimensional elastic image registration. All 69 subjects' 3D T1-weighted brain MRIs were spatially deformed to match a high-resolution single-subject average MRI scan in ICBM space, whose geometry was optimized to produce a minimal deformation target. Maps of the local Jacobian determinant (expansion factor) were computed from the deformation fields. Statistical maps showed increased caudate (10% higher; p=0.001) and lateral ventricle volumes (19% higher; p=0.003), and trend-level parietal and temporal white matter excesses (10% higher locally; p=0.04). In affected females, volume abnormalities correlated with reduction in systemically measured levels of the fragile X mental retardation protein (FMRP; Spearman's r < −0.5 locally). Decreased FMRP correlated with ventricular expansion (p=0.042; permutation test) and anterior cingulate tissue reductions (p=0.0026; permutation test), supporting theories that FMRP is required for normal dendritic pruning in fronto-striatal-limbic pathways. No sex differences were found; findings were confirmed using traditional volumetric measures in regions of interest. Deficit patterns were replicated using Lie group statistics optimized for tensor-valued data. Investigation of how these anomalies emerge over time will accelerate our understanding of FraX and its treatment. PMID:17161622
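    Computing the local Jacobian-determinant (expansion-factor) map from a deformation field, as in TBM, can be sketched with numpy. The finite-difference scheme and the toy uniform-dilation field below are illustrative; the study itself uses high-dimensional elastic registration to obtain the deformation fields.

```python
import numpy as np

def jacobian_determinant(def_field):
    """Local volume-change map from a 3D deformation field.

    def_field: array of shape (3, Z, Y, X) giving the deformed position of
    each voxel.  det J > 1 means local expansion, det J < 1 contraction."""
    J = np.empty(def_field.shape[1:] + (3, 3))
    for i in range(3):                        # component of the mapping
        grads = np.gradient(def_field[i])    # d(phi_i)/dz, /dy, /dx
        for j in range(3):
            J[..., i, j] = grads[j]
    return np.linalg.det(J)

# Toy field: a uniform 10% dilation of the identity mapping, so every voxel
# should report det J = 1.1**3 = 1.331 (a 33.1% local volume increase).
z, y, x = np.meshgrid(np.arange(8.0), np.arange(8.0), np.arange(8.0),
                      indexing="ij")
det = jacobian_determinant(1.1 * np.stack([z, y, x]))
```

    In the study, statistics on exactly this kind of map (e.g., a 19% ventricular expansion) are what localize the group differences.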

  1. Interactive 3D Mars Visualization

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface-vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis, instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The interface currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and the positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and WorldWide Telescope.

  2. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials that aims to bring together partners with 3D-4D visualization infrastructure and 3D-4D modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus will be on linking research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. Through network collaboration we aim to improve the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials, but also external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  3. Occupational Survey Report AFSC 3E6X1; Operations Management

    DTIC Science & Technology

    2004-02-01

    Lt Bryan Pickett, Feb 04. Occupational Survey Report, AFSC 3E6X1 Operations Management. Surveyed locations included Nellis AFB NV (5), Fairchild AFB WA (5), Hurlburt Field FL (6), Eglin AFB FL (4), and Ramstein AB (5). Operations Management 3E6X1, February 2004 (Approved

  4. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  5. Atlas and feature based 3D pathway visualization enhancement for skull base pre-operative fast planning from head CT

    NASA Astrophysics Data System (ADS)

    Aghdasi, Nava; Li, Yangming; Berens, Angelique; Moe, Kris S.; Bly, Randall A.; Hannaford, Blake

    2015-03-01

    Minimally invasive neuroendoscopic surgery provides an alternative to open craniotomy for many skull base lesions. These techniques provide great benefit to the patient through shorter ICU stays, decreased post-operative pain and quicker return to baseline function. However, the density of critical neurovascular structures at the skull base makes planning for these procedures highly complex. Furthermore, additional surgical portals are often used to improve visualization and instrument access, which adds to the complexity of pre-operative planning. Surgical approach planning is currently limited and typically involves review of 2D axial, coronal, and sagittal CT and MRI images. In addition, skull base surgeons manually change the visualization effect to review all possible approaches to the target lesion and achieve an optimal surgical plan. This cumbersome process relies heavily on surgeon experience and does not allow for 3D visualization. In this paper, we describe a rapid pre-operative planning system for skull base surgery using the following two novel concepts: importance-based highlighting and mobile portals. With this innovation, critical areas in the 3D CT model are highlighted based on segmentation results. Mobile portals allow surgeons to review multiple potential entry portals in real-time with improved visualization of critical structures located inside the pathway. To achieve this we used the following methods: (1) novel bone-only atlases were manually generated, (2) the orbits and the center of the skull serve as features to quickly pre-align the patient's scan with the atlas, (3) a deformable registration technique was used for fine alignment, (4) surgical importance was assigned to each voxel according to a surgical dictionary, and (5) a pre-defined transfer function was applied to the processed data to highlight important structures. The proposed idea was fully implemented as independent planning software and additional
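    The importance-based highlighting step reduces to a per-voxel lookup: each voxel's code from the surgical dictionary indexes a pre-defined RGBA transfer function. A sketch in NumPy; the label codes and colors below are invented for illustration and are not the paper's actual dictionary:

```python
import numpy as np

# Hypothetical importance codes (assumption: 0 = background,
# 1 = bone, 2 = critical neurovascular structure).
transfer_function = np.array([
    [0.0, 0.0, 0.0, 0.0],   # background: fully transparent
    [0.9, 0.9, 0.8, 0.2],   # bone: faint, mostly transparent
    [1.0, 0.1, 0.1, 1.0],   # critical structure: opaque red highlight
])

def apply_transfer_function(importance_volume):
    """Map per-voxel importance codes to RGBA for volume rendering."""
    return transfer_function[importance_volume]

labels = np.zeros((4, 4, 4), dtype=int)
labels[1:3, 1:3, 1:3] = 2          # a critical structure in the middle
rgba = apply_transfer_function(labels)   # shape (4, 4, 4, 4)
```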

  6. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    NASA Astrophysics Data System (ADS)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  7. Real-Time 3D Visualization

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  8. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools based on Java 3D(TM) are compatible across different systems and can be run remotely through web browsers. However, Java 3D has some performance issues when used for visualization. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and unable to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  9. Managing Construction Operations Visually: 3-D Techniques for Complex Topography and Restricted Visibility

    ERIC Educational Resources Information Center

    Rodriguez, Walter; Opdenbosh, Augusto; Santamaria, Juan Carlos

    2006-01-01

    Visual information is vital in planning and managing construction operations, particularly where there is complex terrain topography and salvage operations with limited accessibility and visibility. From visually assessing site operations and preventing equipment collisions to simulating material handling activities to supervising remote sites…

  10. Real-time 3D visualization of volumetric video motion sensor data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlson, J.; Stansfield, S.; Shawver, D.

    1996-11-01

    This paper addresses the problem of improving detection, assessment, and response capabilities of security systems. Our approach combines two state-of-the-art technologies: volumetric video motion detection (VVMD) and virtual reality (VR). This work capitalizes on the ability of VVMD technology to provide three-dimensional (3D) information about the position, shape, and size of intruders within a protected volume. The 3D information is obtained by fusing motion detection data from multiple video sensors. The second component involves the application of VR technology to display information relating to the sensors and the sensor environment. VR technology enables an operator, or security guard, to be immersed in a 3D graphical representation of the remote site. VVMD data is transmitted from the remote site via ordinary telephone lines. There are several benefits to displaying VVMD information in this way. Because the VVMD system provides 3D information and because the sensor environment is a physical 3D space, it seems natural to display this information in 3D. Also, the 3D graphical representation depicts essential details within and around the protected volume in a natural way for human perception. Sensor information can also be more easily interpreted when the operator can "move" through the virtual environment and explore the relationships between the sensor data, objects and other visual cues present in the virtual environment. By exploiting the powerful ability of humans to understand and interpret 3D information, we expect to improve the means for visualizing and interpreting sensor information, allow a human operator to assess a potential threat more quickly and accurately, and enable a more effective response. This paper will detail both the VVMD and VR technologies and will discuss a prototype system based upon their integration.
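    The fusion of motion detections from multiple sensors into 3D position and shape information can be illustrated with a visual-hull style intersection: each sensor's 2D motion silhouette is extruded along its viewing direction, and a voxel is kept only if every extrusion covers it. A toy NumPy sketch assuming two orthographic sensors; the actual VVMD fusion is more elaborate:

```python
import numpy as np

def fuse_silhouettes(sil_xy, sil_xz):
    """Visual-hull style fusion of 2D motion silhouettes (assumption:
    two orthographic sensors looking along the z and y axes).

    A voxel is flagged occupied only when every sensor reports motion
    along its line of sight, giving the intruder's 3D extent.
    """
    nx, ny = sil_xy.shape
    _, nz = sil_xz.shape
    # Extrude each silhouette along its viewing axis, then intersect.
    vol_from_xy = np.repeat(sil_xy[:, :, None], nz, axis=2)
    vol_from_xz = np.repeat(sil_xz[:, None, :], ny, axis=1)
    return vol_from_xy & vol_from_xz

sil_xy = np.zeros((8, 8), dtype=bool); sil_xy[2:5, 3:6] = True
sil_xz = np.zeros((8, 8), dtype=bool); sil_xz[2:5, 1:4] = True
occupancy = fuse_silhouettes(sil_xy, sil_xz)   # shape (8, 8, 8)
```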

  11. 3D Visualization for Planetary Missions

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium JavaScript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  12. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UVCDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  13. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  14. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063

  15. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  16. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
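    The edge-enhancement component of such a renderer can be sketched as depth-discontinuity detection: Sobel-filtering the rendered depth buffer yields a mask of silhouette edges to stroke. A minimal NumPy stand-in for what the paper implements as a real-time multi-pass GPU technique:

```python
import numpy as np

def edge_mask(depth, threshold=0.5):
    """Detect depth discontinuities with Sobel filters, so building
    outlines can be drawn as dark strokes. Assumes `depth` is the
    rendered depth buffer as a 2D float array.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(depth, 1, mode='edge')   # replicate border values
    h, w = depth.shape
    gx = np.zeros_like(depth)
    gy = np.zeros_like(depth)
    # Correlate the two Sobel kernels with the padded depth buffer.
    for i in range(3):
        for j in range(3):
            patch = pad[i:i + h, j:j + w]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy) > threshold

depth = np.zeros((6, 6)); depth[:, 3:] = 1.0   # a sharp facade edge
mask = edge_mask(depth)                         # True along the edge
```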

  17. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  18. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of the fracture network system in 3D. The developed software modules for the analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, have been developed using Microsoft Visual Basic.NET and the open-source Visualization Toolkit (VTK) library. Two case studies revealed that each module plays a role in construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps and management of borehole data, respectively. The developed software for analysis and visualization of the 3D fractured rock mass can be used to tackle the geomechanical problems related to strength, deformability and hydraulic behaviors of the fractured rock masses.

  19. Human microbiome visualization using 3D technology.

    PubMed

    Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C

    2011-01-01

    High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.

  20. PACS-based interface for 3D anatomical structure visualization and surgical planning

    NASA Astrophysics Data System (ADS)

    Koehl, Christophe; Soler, Luc; Marescaux, Jacques

    2002-05-01

    The interpretation of radiological images is routine but remains a rather difficult task for physicians. It requires complex mental processes that permit translation from 2D slices into 3D localization and volume determination of visible diseases. Easier and more extensive visualization and exploitation of medical images can be reached through the use of computer-based systems that provide real help from patient admission to post-operative follow-up. To this end, we have developed a 3D visualization interface linked to a PACS database that allows manipulation of and interaction with virtual organs delineated from CT scans or MRI. This software provides real-time 3D surface rendering of anatomical structures, accurate evaluation of volumes and distances, and improvement of radiological image analysis and exam annotation through a negatoscope tool. It also provides a tool for surgical planning, allowing the positioning of an interactive laparoscopic instrument and organ resection. The software system could revolutionize the field of computerized imaging technology. Indeed, it provides a handy and portable tool for pre-operative and intra-operative analysis of anatomy and pathology in various medical fields. This constitutes the first step of the future development of augmented reality and surgical simulation systems.
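    The volume and distance evaluations mentioned above reduce to simple operations on the delineated voxel masks. A hedged NumPy sketch (the voxel size and the two structures are illustrative, not the system's actual data model):

```python
import numpy as np

def organ_volume_ml(mask, voxel_mm=(1.0, 1.0, 1.0)):
    """Volume of a delineated structure from its binary voxel mask."""
    return mask.sum() * np.prod(voxel_mm) / 1000.0   # mm^3 -> ml

def centroid_distance_mm(mask_a, mask_b, voxel_mm=(1.0, 1.0, 1.0)):
    """Distance between the centroids of two delineated structures."""
    ca = np.argwhere(mask_a).mean(axis=0) * voxel_mm
    cb = np.argwhere(mask_b).mean(axis=0) * voxel_mm
    return float(np.linalg.norm(ca - cb))

# Two toy 10x10x10-voxel structures in a 20^3 volume of 1 mm^3 voxels.
a = np.zeros((20, 20, 20), dtype=bool); a[0:10, 0:10, 0:10] = True
b = np.zeros((20, 20, 20), dtype=bool); b[10:20, 0:10, 0:10] = True
vol_ml = organ_volume_ml(a)            # 1000 voxels -> 1.0 ml
dist_mm = centroid_distance_mm(a, b)   # centroids 10 mm apart
```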

  1. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  2. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from traditional map-making to modern technology where information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web servers, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate ways of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is presented.

  3. Fusion of CTA and XA data using 3D centerline registration for plaque visualization during coronary intervention

    NASA Astrophysics Data System (ADS)

    Kaila, Gaurav; Kitslaar, Pieter; Tu, Shengxian; Penicka, Martin; Dijkstra, Jouke; Lelieveldt, Boudewijn

    2016-03-01

    Coronary Artery Disease (CAD) results in the buildup of plaque below the intima layer inside the vessel wall of the coronary arteries, causing narrowing of the vessel and obstructing blood flow. Percutaneous coronary intervention (PCI) is usually done to enlarge the vessel lumen and restore normal blood flow to the heart. During PCI, X-ray imaging is done to assist guide wire movement through the vessels to the area of stenosis. While X-ray imaging allows for good lumen visualization, information on plaque type is unavailable. Also, due to the projection nature of X-ray imaging, additional drawbacks such as foreshortening and overlap of vessels limit the efficacy of the cardiac intervention. Reconstruction of 3D vessel geometry from biplane X-ray acquisitions helps to overcome some of these projection drawbacks. However, the plaque type information remains an issue. In contrast, imaging using computed tomography angiography (CTA) can provide us with information on both lumen and plaque type and allows us to generate a complete 3D coronary vessel tree unaffected by the foreshortening and overlap problems of X-ray imaging. In this paper, we combine X-ray biplane images with CT angiography to visualize three plaque types (dense calcium, fibrous fatty and necrotic core) on X-ray images. 3D registration using three different registration methods is done between coronary centerlines available from X-ray images and from the CTA volume, along with 3D plaque information available from CTA. We compare the different registration methods and evaluate their performance based on 3D root mean squared errors. Two methods are used to project this 3D information onto the 2D plane of the X-ray biplane images. Validation of our approach is performed using artificial biplane X-ray datasets.
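    Registration between two corresponding centerline point sets, evaluated by RMS error as above, can be illustrated with least-squares rigid alignment (the Kabsch algorithm). This generic sketch is not one of the three specific methods the paper compares:

```python
import numpy as np

def rigid_register(moving, fixed):
    """Least-squares rigid alignment (Kabsch) of corresponding 3D
    centerline points. Returns the rotation R, translation t, and the
    post-alignment RMS error.
    """
    mu_m, mu_f = moving.mean(axis=0), fixed.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (moving - mu_m).T @ (fixed - mu_f)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_f - R @ mu_m
    aligned = moving @ R.T + t
    rmse = float(np.sqrt(((aligned - fixed) ** 2).sum(axis=1).mean()))
    return R, t, rmse

# Synthetic test: a known rotation + translation should be recovered.
rng = np.random.default_rng(0)
centerline = rng.standard_normal((50, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
fixed = centerline @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t, rmse = rigid_register(centerline, fixed)
```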

  4. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    A new software (FROMS3D) is presented to visualize fracture network system in 3-D. The software consists of several modules that play roles in management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results have suggested that the developed software is effective in visualizing 3-D fracture network system, and can provide useful information to tackle the engineering geological problems related to strength, deformability and hydraulic behaviors of the fractured rock masses.

  5. Micro-CT images reconstruction and 3D visualization for small animal studying

    NASA Astrophysics Data System (ADS)

    Gong, Hui; Liu, Qian; Zhong, Aijun; Ju, Shan; Fang, Quan; Fang, Zheng

    2005-01-01

    A small-animal X-ray micro computed tomography (micro-CT) system has been constructed to screen laboratory small animals and organs. The micro-CT system consists of dual fiber-optic taper-coupled CCD detectors with a field of view of 25x50 mm2, a microfocus X-ray source, and a rotational subject holder. For accurate localization of the rotation center, the coincidence between the axis of rotation and the image center was calibrated with a polymethylmethacrylate cylinder. Feldkamp's filtered back-projection cone-beam algorithm is adopted for three-dimensional reconstruction because the effective cone-beam angle of the micro-CT system is 5.67°. A 200x1024x1024 micro-CT data matrix is obtained at a magnification of 1.77 and a pixel size of 31x31 μm2. In our reconstruction software, the output image size of the micro-CT slice data, the magnification factor and the rotation step can be adjusted to balance computational efficiency against the reconstruction region. The reconstructed image data are processed and visualized with the Visualization Toolkit (VTK). VTK's data parallelism is exploited for surface rendering of the reconstructed data to improve computing speed: processing a 512x512x512 dataset takes about 1/20 of the serial program's time when 30 CPUs are used. The voxel size is 54x54x108 μm3. Reconstruction and 3-D visualization images of a laboratory rat ear are presented.
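The record above uses Feldkamp's cone-beam filtered back-projection. Setting the cone-beam weighting aside, the underlying filter-then-backproject idea can be sketched in the simpler 2D parallel-beam geometry (a hypothetical illustration, not the authors' code):

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles):
    """Simple forward projector: rotate the image and sum columns."""
    return np.array([rotate(image, -ang, reshape=False, order=1).sum(axis=0)
                     for ang in angles])

def fbp(sinogram, angles):
    """Filtered back-projection for 2D parallel-beam geometry.
    sinogram: (n_angles, n_det) array; angles in degrees."""
    n_det = sinogram.shape[1]
    # Ramp (Ram-Lak) filter applied to each projection in the Fourier domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n_det, n_det))
    for proj, ang in zip(filtered, angles):
        # Smear the filtered projection across the image, then rotate back.
        smear = np.tile(proj, (n_det, 1))
        recon += rotate(smear, ang, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles))

# Disk phantom: the reconstruction should be bright inside, dark outside.
n = 64
y, x = np.mgrid[:n, :n] - n / 2
phantom = (x**2 + y**2 < (n // 4)**2).astype(float)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
rec = fbp(radon(phantom, angles), angles)
print(rec[n // 2, n // 2] > rec[2, 2])  # True
```

The Feldkamp algorithm adds per-ray cosine weighting and a row-wise backprojection along diverging rays, but the filter/backproject structure is the same.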

  6. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
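Streamline tracing of the kind described above reduces to integrating seed points through the velocity field; a minimal RK4 integrator (illustrative only; the vector field software used by the authors is not specified here) could be:

```python
import numpy as np

def streamline(velocity, seed, dt=0.01, steps=1000):
    """Trace a streamline through a steady velocity field with RK4 steps."""
    p = np.asarray(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        k1 = velocity(p)
        k2 = velocity(p + 0.5 * dt * k1)
        k3 = velocity(p + 0.5 * dt * k2)
        k4 = velocity(p + dt * k3)
        p = p + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        path.append(p.copy())
    return np.array(path)

# Idealized gyre: pure rotation about the z-axis; streamlines are circles,
# so the distance from the axis should stay constant along the path.
rot = lambda p: np.array([-p[1], p[0], 0.0])
path = streamline(rot, [1.0, 0.0, 0.0])
radii = np.hypot(path[:, 0], path[:, 1])
print(radii.max() - radii.min() < 1e-6)  # True: RK4 preserves the circle well
```

In practice the field is gridded model output, so `velocity` would interpolate rather than evaluate analytically, and integration would stop at domain boundaries.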

  7. Image segmentation and 3D visualization for MRI mammography

    NASA Astrophysics Data System (ADS)

    Li, Lihua; Chu, Yong; Salem, Angela F.; Clark, Robert A.

    2002-05-01

    MRI mammography has a number of advantages, including the tomographic, and therefore three-dimensional (3-D), nature of the images, which allows its application to breasts with dense tissue, post-operative scarring, and silicone implants. However, due to the vast quantity of images and the subtlety of differences between MR sequences, reliable computer-aided diagnosis is needed to reduce the radiologist's workload. The purpose of this work was to develop automatic breast/tissue segmentation and visualization algorithms to aid physicians in detecting and observing abnormalities in the breast. Two segmentation algorithms were developed: one for breast segmentation, the other for glandular tissue segmentation. In breast segmentation, the MRI image is first segmented using an adaptive growing clustering method; two tracing algorithms were then developed to refine the breast-air and chest-wall boundaries. Glandular tissue segmentation was performed using an adaptive thresholding method, in which the threshold value varies spatially over a sliding window. The 3D visualization of the segmented 2D MRI slices was implemented in the IDL environment, with rendering, slicing and animation of the breast and glandular tissue.
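The glandular-tissue step above thresholds each pixel against a spatially varying value computed over a sliding window. A minimal sketch of that idea, using a uniform-filter local mean and a synthetic slice (an illustration, not the authors' IDL code):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_threshold(image, window=15, offset=0.0):
    """Threshold each pixel against the local mean in a sliding window."""
    local_mean = uniform_filter(image.astype(float), size=window)
    return image > local_mean + offset

# Synthetic slice: a bright blob sitting on a smooth illumination gradient
# that would defeat any single global threshold.
y, x = np.mgrid[:100, :100]
gradient = x / 100.0                                  # uneven background
blob = np.exp(-((x - 30) ** 2 + (y - 40) ** 2) / 50.0)
mask = adaptive_threshold(gradient + blob, window=25, offset=0.05)
print(bool(mask[40, 30]), bool(mask[80, 70]))  # True False
```

The blob center exceeds its local mean and is detected, while interior background pixels (bright or dark) track their local mean and are rejected; the `offset` margin suppresses noise-level detections.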

  8. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB uses commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo, creating more immersive and spatially realistic presentations of the proposed designs. This paper presents the basic display tools and applications, and the 3D modeling techniques PB uses to produce interactive stereoscopic content, and discusses several architectural and engineering design visualizations we have produced.

  9. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

    contrasts no shadows, drop shadows and drop lines. 15. SUBJECT TERMS: 3D displays, 2.5D displays, abstract network visualizations, depth perception, human… altitude perception in airspace management and airspace route planning: simulated reality visualizations that employ altitude and heading as well as… cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract

  10. Virtual reality and 3D visualizations in heart surgery education.

    PubMed

    Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver

    2002-01-01

    Computer-assisted teaching plays an increasing role in surgical education. This paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting, and their prototypical implementation into a database-driven, internet-based educational system in heart surgery. A multimedia storyboard was written and digital video was encoded. Because comprehension of these videos was not always satisfactory, additional 3D and VR visualizations were modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process, integrating and orchestrating different multimedia components into educational units, has been started. A virtual model of the heart has been designed. It is highly interactive: the user can rotate it, move it, zoom in for details or even fly through it. It can be explored during the cardiac cycle, and a transparency mode demonstrates the coronary arteries, the movement of the heart valves, and simultaneous blood flow. Myocardial ischemia and the effect of an IMA graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass grafts can be added interactively. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed, and different visualizations have been prototypically implemented into a teaching application about operative techniques. Interactive VR and 3D teaching applications can be used and distributed via the World Wide Web, and have the power to describe surgical anatomy and the principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.

  11. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.

  12. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
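The anaglyph technique described above needs only two slightly displaced renderings and a per-channel merge; a minimal sketch (an illustration, not AViz itself) is:

```python
import numpy as np

def anaglyph(left, right):
    """Combine two RGB views into a red-cyan anaglyph:
    red channel from the left eye's image, green and blue from the right."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]    # red   <- left view
    out[..., 1] = right[..., 1]   # green <- right view
    out[..., 2] = right[..., 2]   # blue  <- right view
    return out

# Two slightly displaced renderings of the same scene (here: a shifted square).
left = np.zeros((64, 64, 3), dtype=np.uint8)
right = np.zeros_like(left)
left[20:40, 20:40] = 255
right[20:40, 23:43] = 255        # horizontal parallax of 3 pixels
img = anaglyph(left, right)
# Where only the left view shows the square, the anaglyph is pure red.
print(img[30, 21].tolist())  # [255, 0, 0]
```

Viewed through red-cyan glasses, each eye sees only its own channel(s), and the horizontal parallax between the two renderings produces the depth impression.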

  13. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  14. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without a continuous level-of-detail rendering technique.
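Continuous level-of-detail schemes like the one above typically pick a mesh resolution per tile so the projected geometric error stays below a screen-space tolerance. A simplified distance-based selector (hypothetical thresholds, not DspaceOgreTerrain's actual heuristic):

```python
def lod_level(distance, base_error=1.0, screen_tolerance=1.0, max_level=8):
    """Pick a terrain level of detail so the projected geometric error stays
    under a screen-space tolerance: each coarser level roughly doubles the
    geometric error, and the projected error falls off roughly as 1/distance."""
    level = 0
    while (level < max_level
           and (base_error * 2 ** (level + 1)) / distance <= screen_tolerance):
        level += 1
    return level  # 0 = finest mesh

# Nearby terrain tiles get the finest mesh; distant ones coarsen smoothly.
print([lod_level(d) for d in (1.0, 10.0, 100.0, 1000.0)])  # [0, 3, 6, 8]
```

Real terrain engines refine per quadtree node and add hysteresis and geomorphing so levels do not pop as the camera moves, but the error-versus-distance test is the core of the selection.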

  15. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space was actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation, and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.

  16. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using Visualizer, software specifically designed to create an effective and intuitive environment for interactive scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the user can interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that help the user more quickly understand geometric relationships within the data. This platform portability lets the user more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  17. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussos, Constantinos C.; Swingler, Jonathan

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) in the two conductors which make up the contact system. Analyzing them requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which leads to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure into voxels, the X-ray computed tomography (CT) method is used to collect data from a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation.

  18. A 3D contact analysis approach for the visualization of the electrical contact asperities

    PubMed Central

    Swingler, Jonathan

    2017-01-01

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) in the two conductors which make up the contact system. Analyzing them requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which leads to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure into voxels, the X-ray computed tomography (CT) method is used to collect data from a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation. PMID:28105383

  19. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE PAGES

    Roussos, Constantinos C.; Swingler, Jonathan

    2017-01-11

    The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in the design of devices. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) in the two conductors which make up the contact system. Analyzing them requires the discretization of the 3D microstructure of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which leads to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure into voxels, the X-ray computed tomography (CT) method is used to collect data from a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system under investigation.
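After CT voxelization, the contact spots that define such a 3D contact map can be found where voxels of the two conductors meet across the interface. A toy sketch of that idea on synthetic conductors (not the paper's data or actual pipeline):

```python
import numpy as np

def contact_map(conductor_a, conductor_b):
    """Contact spots: voxels of conductor A whose face-neighbour one step
    down the stacking axis belongs to conductor B."""
    shifted_b = np.roll(conductor_b, 1, axis=0)
    shifted_b[0] = False          # nothing wraps around the volume edge
    return conductor_a & shifted_b

# Synthetic contact system: conductor B fills the volume below a rough
# interface, conductor A fills everything above it.
rng = np.random.default_rng(0)
n = 16
height = rng.integers(6, 10, size=(n, n))     # interface height per column
z = np.arange(n)[:, None, None]
lower = z < height[None, :, :]                # conductor B
upper = ~lower                                # conductor A
spots = contact_map(upper, lower)
print(int(spots.sum()))  # 256: one contact voxel per column (16 x 16)
```

The asperity structures above and below the map would then be traced by following connected voxels of each conductor away from these contact spots.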

  20. Microtomographic images of rat's lumbar vertebra microstructure using 30 keV synchrotron X-rays: an analysis in terms of 3D visualization

    NASA Astrophysics Data System (ADS)

    Rao, D. V.; Takeda, T.; Kawakami, T.; Uesugi, K.; Tsuchiya, Y.; Wu, J.; Lwin, T. T.; Itai, Y.; Zeniya, T.; Yuasa, T.; Akatsuka, T.

    2004-05-01

    Microtomographic images of rat lumbar vertebrae from three age groups (8, 56 and 78 weeks) were obtained at 30 keV using synchrotron X-rays with a spatial resolution of 12 μm. The images are analyzed in terms of 3D visualization and micro-architecture. The density histogram of the lumbar vertebra is compared with test phantoms, and lumbar and phantom volumes are studied at different hydroxyapatite concentrations as a function of slice number. From the 2D slices, 3D images are reconstructed in order to follow the evolution and decline of bone microstructure with aging. Cross-sectional μ-CT images show that the bone of the young rat has a fine trabecular microstructure, while that of the old rat has a coarse, large-meshed structure.

  1. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also implement 3D model analysis and visualization within the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area and price) are stored and processed separately. A Map-Reduce method handles the more complex 3D geometry data, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation structure for 3D cities, CityTree, is implemented within the framework to support dynamic LODs based on the user's viewpoint. The proposed framework is also easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil both the analysis and the visualization requirements.

  2. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information is becoming more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display that lets surgeons observe the surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point-cloud-selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, the 3D visualization of the medical image corresponding to the observer's viewing direction is updated automatically using a mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38 ± 0.92 mm (n=5). The system can be utilized in telemedicine, surgical education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.
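A warm-start ICP of the kind mentioned above builds on the basic ICP loop: match each point to its nearest neighbour, solve the best rigid transform, repeat. A minimal point-to-point version (an illustration; the paper's point-cloud selection and warm-start steps are omitted):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, solve the best rigid transform (Kabsch), repeat."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        nn = target[tree.query(src)[1]]          # closest correspondences
        sc, nc = src.mean(0), nn.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (nn - nc))
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        src = (src - sc) @ R.T + nc              # apply rotation + translation
    return src

# Recover a small misalignment between two copies of the same surface patch.
rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, size=(300, 3))
source = target + np.array([0.05, -0.03, 0.02])  # slightly translated copy
aligned = icp(source, target)
err = np.linalg.norm(aligned - target, axis=1).mean()
print(err < 1e-2)  # True: the translation is recovered
```

ICP only converges to the correct alignment when the initial pose is close; that is precisely why a warm start (e.g., from a previous registration) matters in the intraoperative setting described above.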

  3. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because our left and right eyes view them from slightly different positions; the two slightly different images give us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, in the cinema, etc. are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images, but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distributions. In advance of STEREO, we test the methods with data from SOHO, which provides different viewpoints through solar rotation. This restricts the analysis to structures which remain stationary for several days; real STEREO data, however, will not be affected by these limitations.

  4. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets evidences the need for new versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and do a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.

  5. Thoracic cavity definition for 3D PET/CT analysis and visualization.

    PubMed

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W; Higgins, William E

    2015-07-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical details on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage=99.2% and leakage=0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. Copyright © 2015 Elsevier Ltd. All rights reserved.
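Morphological pipelines like the one above typically include a step that keeps only the largest connected component of a candidate mask, discarding small spurious regions. A sketch of that single step (a scipy-based illustration, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def largest_component(mask):
    """Keep only the largest connected component of a binary mask."""
    labels, n = ndimage.label(mask)
    if n == 0:
        return mask
    # Component sizes for labels 1..n, then keep the biggest.
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))

# A big "thorax" blob plus small spurious islands (noise, table, cables).
mask = np.zeros((50, 50), dtype=bool)
mask[10:40, 10:40] = True        # main region: 900 pixels
mask[2:4, 45:48] = True          # small island: 6 pixels
mask[45:47, 2:3] = True          # small island: 2 pixels
clean = largest_component(mask)
print(int(clean.sum()))  # 900: only the main region survives
```

In the 3D case the same calls apply to a voxel volume; the paper's full pipeline additionally uses active contours and organ landmarks to shape the cavity boundary.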

  6. Thoracic Cavity Definition for 3D PET/CT Analysis and Visualization

    PubMed Central

    Cheirsilp, Ronnarit; Bascom, Rebecca; Allen, Thomas W.; Higgins, William E.

    2015-01-01

    X-ray computed tomography (CT) and positron emission tomography (PET) serve as the standard imaging modalities for lung-cancer management. CT gives anatomical detail on diagnostic regions of interest (ROIs), while PET gives highly specific functional information. During the lung-cancer management process, a patient receives a co-registered whole-body PET/CT scan pair and a dedicated high-resolution chest CT scan. With these data, multimodal PET/CT ROI information can be gleaned to facilitate disease management. Effective image segmentation of the thoracic cavity, however, is needed to focus attention on the central chest. We present an automatic method for thoracic cavity segmentation from 3D CT scans. We then demonstrate how the method facilitates 3D ROI localization and visualization in patient multimodal imaging studies. Our segmentation method draws upon digital topological and morphological operations, active-contour analysis, and key organ landmarks. Using a large patient database, the method showed high agreement to ground-truth regions, with a mean coverage = 99.2% and leakage = 0.52%. Furthermore, it enabled extremely fast computation. For PET/CT lesion analysis, the segmentation method reduced ROI search space by 97.7% for a whole-body scan, or nearly 3 times greater than that achieved by a lung mask. Despite this reduction, we achieved 100% true-positive ROI detection, while also reducing the false-positive (FP) detection rate by >5 times over that achieved with a lung mask. Finally, the method greatly improved PET/CT visualization by eliminating false PET-avid obscurations arising from the heart, bones, and liver. In particular, PET MIP views and fused PET/CT renderings depicted unprecedented clarity of the lesions and neighboring anatomical structures truly relevant to lung-cancer assessment. PMID:25957746

  7. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  8. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    Ministry of Municipal and Rural Affairs (MOMRA), aims to establish solid infrastructure required for 3D city modelling, for decision making to set a mark in urban development. MOMRA is responsible for the large scale mapping 1:1,000; 1:2,500; 1:10,000 and 1:20,000 scales for 10cm, 20cm and 40 GSD with Aerial Triangulation data. As 3D city models are increasingly used for the presentation exploration, and evaluation of urban and architectural designs. Visualization capabilities and animations support of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, first of all 3D city model has to be created for which MOMRA uses the Aerial Triangulation data and aerial imagery. The main concise for 3D city modelling in the Kingdom of Saudi Arabia exists due to uneven surface and undulations. Thus real time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders such as decision maker, architects, urban planners, authorities, citizens or investors with a three - dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities to deal with exotic conditions by better and advanced viewing technological infrastructure. Riyadh on one side is 5700m above sea level and on the other hand Abha city is 2300m, this uneven terrain represents a drastic change of surface in the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. 
In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distance), combined with aerial triangulation, is examined for 3D visualization in different regions of the Kingdom, to check which scale yields better results while remaining cost-manageable, with GSD (7.5 cm, 10 cm, 20 cm and 40 cm
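
    The GSD figures quoted above follow the standard photogrammetric relation between sensor pixel size, lens focal length, and flying height. A minimal sketch in Python; the parameter values below are illustrative, not taken from the paper:

```python
def ground_sample_distance(pixel_size_um, focal_length_mm, flying_height_m):
    """GSD = pixel size * flying height / focal length.
    Returns the ground sample distance in centimetres."""
    pixel_size_m = pixel_size_um * 1e-6
    focal_length_m = focal_length_mm * 1e-3
    return pixel_size_m * flying_height_m / focal_length_m * 100.0

# A hypothetical 6 um pixel and 120 mm lens flown at 2000 m gives a 10 cm GSD;
# doubling the flying height doubles the GSD (and halves the mapping scale).
print(ground_sample_distance(6.0, 120.0, 2000.0))  # → 10.0
```

    The same relation explains why a coarser (larger) GSD is cheaper: the aircraft can fly higher and cover more ground per image.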

  9. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444

  10. "Building" 3D visualization skills in mineralogy

    NASA Astrophysics Data System (ADS)

    Gaudio, S. J.; Ajoku, C. N.; McCarthy, B. S.; Lambart, S.

    2016-12-01

    Studying mineralogy is fundamental for understanding the composition and physical behavior of natural materials in terrestrial and extraterrestrial environments. However, some students struggle and ultimately get discouraged with mineralogy course material because they lack well-developed spatial visualization skills that are needed to deal with three-dimensional (3D) objects, such as crystal forms or atomic-scale structures, typically represented in two-dimensional (2D) space. Fortunately, spatial visualization can improve with practice. Our presentation demonstrates a set of experiential learning activities designed to support the development and improvement of spatial visualization skills in mineralogy using commercially available magnetic building tiles, rods, and spheres. These instructional support activities guide students in the creation of 3D models that replicate macroscopic crystal forms and atomic-scale structures in a low-pressure learning environment and at low cost. Students physically manipulate square and triangularly shaped magnetic tiles to build 3D open and closed crystal forms (platonic solids, prisms, pyramids and pinacoids). Prismatic shapes with different closing forms are used to demonstrate the relationship between crystal faces and Miller Indices. Silica tetrahedra and octahedra are constructed out of magnetic rods (bonds) and spheres (oxygen atoms) to illustrate polymerization, connectivity, and the consequences for mineral formulae. In another activity, students practice the identification of symmetry elements and plane lattice types by laying magnetic rods and spheres over wallpaper patterns. The spatial visualization skills developed and improved through our experiential learning activities are critical to the study of mineralogy and many other geology sub-disciplines. We will also present pre- and post- activity assessments that are aligned with explicit learning outcomes.

  11. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

    We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360-degree spherical panoramas is reviewed. The 3D software package Blender, coupled with Python and the Google Spatial Media module, is used to create the final data products. Data can be viewed interactively on a mobile phone or tablet, or in a web browser. The technique applies to different kinds of astronomical data, including 3D stellar and galaxy catalogs, images, and planetary maps.
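
    At the core of any 360-degree spherical panorama is the equirectangular projection, which maps a 3D viewing direction to a pixel in the panorama image. A minimal sketch (the function name and image dimensions are illustrative, not from the paper):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D viewing direction to (column, row) pixel coordinates in an
    equirectangular panorama. Longitude spans [-pi, pi] across the width,
    latitude [-pi/2, pi/2] across the height."""
    lon = math.atan2(y, x)                                # azimuth
    lat = math.asin(z / math.sqrt(x * x + y * y + z * z))  # elevation
    col = (lon + math.pi) / (2 * math.pi) * (width - 1)
    row = (math.pi / 2 - lat) / math.pi * (height - 1)
    return col, row

# Looking straight up (+z) lands on the top row of a 4096x2048 panorama.
print(direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048))  # → (2047.5, 0.0)
```

    Viewers on phones and in browsers invert this mapping per screen pixel to render the interactive view.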

  12. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  13. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted growing interest in the GIS, computer modeling, simulation, and visualization fields. Such a system not only effectively supports geological interpretation and analysis work, but also helps raise the level of professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing a web-based 3D geological system. First, the borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component was utilized to provide the capability of accessing the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the test results show that the methods of this paper have practical application value.
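
    The spreadsheet-to-database-to-query data flow described above can be sketched independently of the paper's Java/SQL Server stack. A minimal stand-in using Python's built-in sqlite3 (the schema and borehole values are hypothetical, chosen only to illustrate the flow):

```python
import sqlite3

# Rows as they might be extracted from the Excel spreadsheets:
# (borehole_id, depth_m, lithology)
rows = [
    ("BH-01", 5.0, "clay"),
    ("BH-01", 12.5, "sand"),
    ("BH-02", 8.0, "limestone"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE borehole (id TEXT, depth_m REAL, lithology TEXT)")
conn.executemany("INSERT INTO borehole VALUES (?, ?, ?)", rows)

# The 3D viewer issues a query like this when the user picks a borehole.
picked = conn.execute(
    "SELECT depth_m, lithology FROM borehole WHERE id = ? ORDER BY depth_m",
    ("BH-01",),
).fetchall()
print(picked)  # → [(5.0, 'clay'), (12.5, 'sand')]
```

    The paper's JDBC component plays the same role as the `conn.execute` calls here, marshalling query results back to the applet for 3D display.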

  14. 3D investigation of inclusions in diamonds using X-ray micro-tomography

    NASA Astrophysics Data System (ADS)

    Parisatto, M.; Nestola, F.; Artioli, G.; Nimis, P.; Harris, J. W.; Kopylova, M.; Pearson, G. D.

    2012-04-01

    The study of mineral inclusions in diamonds is providing invaluable insights into the geochemistry, geodynamics and geophysics of the Earth's mantle. Over the last two decades, the identification of different inclusion assemblages has made it possible to recognize diamonds deriving from the deep upper mantle, the transition zone and even the lower mantle. In this research field the in-situ investigation of inclusions using non-destructive techniques is often essential but still remains a challenging task. In particular, conventional 2D imaging techniques (e.g. SEM) are limited to the investigation of surfaces, and the lack of access to the third dimension represents a major limitation when trying to extract quantitative information. Another critical aspect is sample preparation (cutting, polishing), which is typically very invasive. Nowadays, X-ray computed micro-tomography (X-μCT) overcomes such limitations, enabling the internal microstructure of totally undisturbed samples to be visualized in a three-dimensional (3D) manner at the sub-micrometric scale. The final output of a micro-tomography experiment is a grey-value 3D map of the variations of the X-ray attenuation coefficient (µ) within the studied object. The high X-ray absorption contrast between diamond (almost transparent to X-rays) and the typical inclusion-forming minerals (olivines, garnets, pyroxenes, oxides and sulphides) makes X-μCT a straightforward method for the 3D visualization of inclusions and for the study of their spatial relationships with the diamond host. In this work we applied microfocus X-μCT to investigate silicate inclusions still trapped in diamonds, in order to obtain in-situ information on their exact position, crystal size, shape and X-ray absorption coefficient (which is related to their composition). We selected diamond samples from different deposits containing mainly olivine and garnet inclusions. The investigated samples derived from the Udachnaya pipe (Siberia
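
    The absorption contrast described above makes the simplest possible segmentation, thresholding the attenuation map, already useful for isolating inclusions from the diamond host. A toy sketch on a tiny nested-list "volume" (the µ values and threshold are illustrative, not calibrated to any real scan):

```python
# Toy 3D attenuation volume: diamond is nearly transparent to X-rays
# (low mu), silicate inclusions absorb strongly (high mu).
volume = [[[0.1, 0.1], [0.1, 0.9]],
          [[0.1, 0.8], [0.1, 0.1]]]

def segment_inclusions(vol, threshold):
    """Return (i, j, k) indices of voxels whose attenuation exceeds the
    threshold -- the simplest form of absorption-contrast segmentation."""
    return [(i, j, k)
            for i, plane in enumerate(vol)
            for j, row in enumerate(plane)
            for k, mu in enumerate(row)
            if mu > threshold]

print(segment_inclusions(volume, 0.5))  # → [(0, 1, 1), (1, 0, 1)]
```

    Real X-μCT workflows refine this with connected-component labelling to measure each inclusion's position, size, and shape.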

  15. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores the graphical design and spatial alignment of visual information and graphical elements in stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or that have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge their effect and comfort, and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges of integrating visual information elements into 3D-TV content. This work should further help improve current editing systems and identifies a need for future editing systems for 3D-TV, e.g., live editing and real-time alignment of visual information in 3D footage.

  16. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach offers a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of the tremendous amount of ray data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
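
    A standard way to represent ray data, used widely in ray-based/light-field work (though the paper may use its own variant), is the two-plane parameterization: a ray is stored as its intersections with two parallel reference planes. A minimal sketch with the planes at z = 0 and z = 1:

```python
def two_plane_coords(origin, direction):
    """Parameterize a light ray by its intersection points (u, v) and
    (s, t) with the planes z = 0 and z = 1 (the "light slab").
    Assumes the ray is not parallel to the planes (direction z != 0)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    t0 = (0.0 - oz) / dz            # ray parameter at the z = 0 plane
    t1 = (1.0 - oz) / dz            # ray parameter at the z = 1 plane
    return (ox + t0 * dx, oy + t0 * dy), (ox + t1 * dx, oy + t1 * dy)

# A ray from (0, 0, -1) heading along +z with unit slope in x:
print(two_plane_coords((0.0, 0.0, -1.0), (1.0, 0.0, 1.0)))
# → ((1.0, 0.0), (2.0, 0.0))
```

    Storing rays as 4-tuples (u, v, s, t) turns "a set of light rays" into a regular 4D data structure that rendering systems can sample and compress.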

  17. 3D Visualization of Cooperative Trajectories

    NASA Technical Reports Server (NTRS)

    Schaefer, John A.

    2014-01-01

    Aerodynamicists and biologists have long recognized the benefits of formation flight. When birds or aircraft fly in the upwash region of the vortex generated by leaders in a formation, induced drag is reduced for the trailing bird or aircraft, and efficiency improves. The major consequence of this is that fuel consumption can be greatly reduced. When two aircraft are separated by a large enough longitudinal distance, the aircraft are said to be flying in a cooperative trajectory. A simulation has been developed to model autonomous cooperative trajectories of aircraft; however, it does not provide any 3D representation of the multi-body system dynamics. The topic of this research is the development of an accurate visualization of the multi-body system observable in a 3D environment. This visualization includes two aircraft (lead and trail), a landscape for a static reference, and simplified models of the vortex dynamics and trajectories at several locations between the aircraft.

  18. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in the 3D data space, using the Post-it metaphor. This embedding allows context-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  19. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how users gaze at 3D locations in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision® for use with desktop stereoscopic displays. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
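
    The basic geometry behind any such method is the intersection of the two eyes' gaze rays through their on-screen gaze points. A simplified 1D (horizontal) sketch, not the paper's optimized method; the interpupillary distance and viewing distance below are illustrative defaults:

```python
def gaze_point_3d(x_left, x_right, ipd_cm=6.5, screen_dist_cm=60.0):
    """Intersect the gaze rays from the left eye through x_left and the
    right eye through x_right (both on-screen x coordinates, in cm) to
    recover the virtual point's horizontal position and depth.
    Eyes sit at x = -ipd/2 and x = +ipd/2, the screen at z = screen_dist."""
    disparity = x_right - x_left
    z = ipd_cm * screen_dist_cm / (ipd_cm - disparity)
    x = -ipd_cm / 2 + (x_left + ipd_cm / 2) * z / screen_dist_cm
    return x, z

# Zero disparity: the virtual point lies exactly on the screen plane.
print(gaze_point_3d(2.0, 2.0))  # → (2.0, 60.0)
```

    Crossed disparity (x_right < x_left) yields z below the screen distance, i.e. a point perceived in front of the screen; uncrossed disparity pushes it behind.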

  20. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 3D data such as

  1. Characterization of separability and entanglement in (2xD)- and (3xD)-dimensional systems by single-qubit and single-qutrit unitary transformations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giampaolo, Salvatore M.; CNR-INFM Coherentia, Naples; CNISM Unita di Salerno and INFN Sezione di Napoli, Gruppo collegato di Salerno, Baronissi

    2007-10-15

    We investigate the geometric characterization of pure state bipartite entanglement of (2xD)- and (3xD)-dimensional composite quantum systems. To this aim, we analyze the relationship between states and their images under the action of particular classes of local unitary operations. We find that invariance of states under the action of single-qubit and single-qutrit transformations is a necessary and sufficient condition for separability. We demonstrate that in the (2xD)-dimensional case the von Neumann entropy of entanglement is a monotonic function of the minimum squared Euclidean distance between states and their images over the set of single-qubit unitary transformations. Moreover, both in the (2xD)- and in the (3xD)-dimensional cases the minimum squared Euclidean distance exactly coincides with the linear entropy [and thus as well with the tangle measure of entanglement in the (2xD)-dimensional case]. These results provide a geometric characterization of entanglement measures originally established in informational frameworks. Consequences and applications of the formalism to quantum critical phenomena in spin systems are discussed.

  2. Open source 3D visualization and interaction dedicated to hydrological models

    NASA Astrophysics Data System (ADS)

    Richard, Julien; Giangola-Murzyn, Agathe; Gires, Auguste; Tchiguirinskaia, Ioulia; Schertzer, Daniel

    2014-05-01

    Climate change and surface urbanization strongly modify the hydrological cycle in urban areas, increasing the consequences of extreme events such as floods or droughts. These issues led to the development of the Multi-Hydro model at the Ecole des Ponts ParisTech (A. Giangola-Murzyn et al., 2012). This fully distributed model computes the hydrological response of urban and peri-urban areas. Unfortunately, such models are seldom user friendly: generating the inputs before launching a new simulation is usually a tricky task, and understanding and interpreting the outputs remains a specialist task not accessible to the wider public. The MH-AssimTool was developed to overcome these issues. To enable an easier and improved understanding of the model outputs, we decided to convert the raw output data (grid files in ASCII format) to a 3D display. Some commercial models provide 3D visualization, but because of the cost of their licenses, such tools may not be accessible to the most concerned stakeholders. We are therefore developing a new tool based on C++ for the computation, Qt for the graphical user interface, QGIS for the geographical side, and OpenGL for the 3D display. All these languages and libraries are open source and multi-platform. We will discuss some preprocessing issues in the data conversion from 2.5D to 3D. Indeed, GIS data are considered 2.5D (i.e., 2D polygons plus one height), and transforming them for 3D display involves many algorithms. For example, to visualize a building in 3D, each point needs coordinates and an elevation according to the topography, and new points have to be created to represent the walls. Finally, we will discuss the interactions between the model and stakeholders through this new interface, and how it helps convert a research tool into an efficient operational decision tool. 
This ongoing research on the improvement of the visualization methods is supported by the
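
    The 2.5D-to-3D building conversion sketched in the abstract above (a 2D footprint plus one height, with new points created for the walls) can be illustrated as a simple polygon extrusion; the function name and values are hypothetical:

```python
def extrude_footprint(footprint, base_z, height):
    """Turn a 2.5D GIS building (2D footprint polygon + one height) into
    3D wall quads for display. Each wall is a list of four (x, y, z)
    vertices. Simplified sketch: a real tool would also follow the
    terrain for the base and triangulate the roof."""
    walls = []
    n = len(footprint)
    for i in range(n):
        (x0, y0), (x1, y1) = footprint[i], footprint[(i + 1) % n]
        walls.append([(x0, y0, base_z), (x1, y1, base_z),
                      (x1, y1, base_z + height), (x0, y0, base_z + height)])
    return walls

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
walls = extrude_footprint(square, base_z=35.0, height=12.0)
print(len(walls))  # → 4 wall quads for a rectangular footprint
```

    Each quad can then be sent to OpenGL directly, which is exactly the kind of new geometry ("new points to represent the walls") the preprocessing step must generate.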

  3. Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer

    PubMed Central

    Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki

    2007-01-01

    A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere, then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources. PMID:18974802
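
    The idea of embedding graph vertices on spheres can be illustrated with the classic Fibonacci-spiral layout, which spreads n items quasi-uniformly over a unit sphere. This is only a stand-in for the spherical embedding (SE) algorithm above, which additionally optimizes positions from the bipartite occurrence graph:

```python
import math

def sphere_layout(n):
    """Place n items quasi-uniformly on the unit sphere using the
    Fibonacci-spiral rule: even spacing in z, golden-angle steps in
    azimuth. Returns a list of (x, y, z) points."""
    golden = math.pi * (3.0 - math.sqrt(5.0))   # golden angle in radians
    points = []
    for i in range(n):
        z = 1.0 - 2.0 * (i + 0.5) / n           # even spacing in z
        r = math.sqrt(1.0 - z * z)              # radius of the z-slice
        theta = golden * i
        points.append((r * math.cos(theta), r * math.sin(theta), z))
    return points

pts = sphere_layout(100)
# Every generated point lies on the unit sphere.
print(all(abs(x*x + y*y + z*z - 1.0) < 1e-9 for x, y, z in pts))  # → True
```

    An SE-style viewer would start from such an initial layout and then pull related items (e.g. a keyword and the documents containing it) toward each other across the two spheres.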

  4. Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer.

    PubMed

    Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki

    2007-01-01

    A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere, then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources.

  5. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Clearly, working only with 2D measurements seriously limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with CT scout data. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.
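
    The second method's final step, turning detected boundary points into a 2D convex hull per vertebra, is a standard computational-geometry routine. A self-contained sketch using Andrew's monotone chain (the sample points are illustrative):

```python
def convex_hull(points):
    """Andrew's monotone-chain 2D convex hull. Returns the hull
    vertices in counter-clockwise order, starting from the
    lexicographically smallest point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

    lower, upper = [], []
    for p in pts:                       # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is discarded; only the four corners remain.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# → [(0, 0), (2, 0), (2, 2), (0, 2)]
```

    Stacking such per-vertebra hulls along the scout direction yields the low-dose 3D spine model the method aims for.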

  6. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    PubMed

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.

  7. 3D coherent X-ray diffractive imaging of an Individual colloidal crystal grain

    NASA Astrophysics Data System (ADS)

    Shabalin, A.; Meijer, J.-M.; Sprung, M.; Petukhov, A. V.; Vartanyants, I. A.

    Self-assembled colloidal crystals represent an important model system for studying nucleation phenomena and solid-solid phase transitions, and they are attractive for applications in photonics and sensorics. We present results of a coherent x-ray diffractive imaging experiment performed on a single colloidal crystal grain. The full three-dimensional (3D) reciprocal-space map, measured by an azimuthal rotational scan, contained several orders of Bragg reflections together with the coherent interference signal between them. Applying the iterative phase retrieval approach, the 3D structure of the crystal grain was reconstructed and the positions of individual colloidal particles were resolved. We identified an exact stacking sequence of hexagonal close-packed layers, including planar and linear defects. Our results open the way to applications of coherent x-ray diffraction for visualization of the inner 3D structure of different mesoscopic materials, such as photonic crystals.

  8. 3D X-Ray Luggage-Screening System

    NASA Technical Reports Server (NTRS)

    Fernandez, Kenneth

    2006-01-01

    A three-dimensional (3D) x-ray luggage-screening system has been proposed to reduce the fatigue experienced by human inspectors and increase their ability to detect weapons and other contraband. The system and variants thereof could supplant thousands of x-ray scanners now in use at hundreds of airports in the United States and other countries. The device would be applicable to any security-checkpoint application where current two-dimensional scanners are in use. A conventional x-ray luggage scanner generates a single two-dimensional (2D) image that conveys no depth information. Therefore, a human inspector must scrutinize the image in an effort to understand ambiguous-appearing objects as they pass by at high speed on a conveyor belt. Such a high level of concentration can induce fatigue, causing the inspector to lose concentration and vigilance. In addition, because of the lack of depth information, contraband objects could be made more difficult to detect by positioning them near other objects so as to create x-ray images that confuse inspectors. The proposed system would make it unnecessary for a human inspector to interpret 2D images, which show objects at different depths as superimposed. Instead, the system would take advantage of the natural human ability to infer 3D information from stereographic or stereoscopic images. The inspector would perceive two objects at different depths, in a more nearly natural manner, as distinct 3D objects lying at different depths. Hence, the inspector could recognize objects with greater accuracy and less effort. The major components of the proposed system would be similar to those of x-ray luggage scanners now in use. As in a conventional x-ray scanner, there would be an x-ray source. Unlike in a conventional scanner, there would be two x-ray image sensors, denoted the left and right sensors, located at positions along the conveyor that are upstream and downstream, respectively (see figure). 
X-ray illumination

  9. Non Destructive 3D X-Ray Imaging of Nano Structures & Composites at Sub-30 NM Resolution, With a Novel Lab Based X-Ray Microscope

    DTIC Science & Technology

    2006-11-01

    Lau, S. H., et al.

    In this article we describe a 3D x-ray microscope based on a laboratory x-ray source operating at 2.7, 5.4 or 8.0 keV hard x-ray energies. X-ray computed tomography (XCT) is used to obtain detailed 3D structural information inside optically opaque materials with sub-30 nm resolution. Applications include

  10. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings; they can simply use the Web in their own homes to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows viewers to effectively understand the fort's social system, habits, and historical events.

  11. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and the level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  12. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
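The abstract describes movement-based kernel density estimators over x, y, z telemetry fixes. As an illustration of the underlying idea only, here is a minimal fixed-bandwidth 3D Gaussian KDE; the paper's movement-based estimator additionally conditions on the animal's movement path, which this sketch omits:

```python
import numpy as np

# Minimal fixed-bandwidth 3D Gaussian kernel density estimator -- a sketch
# of the plain-KDE baseline, not the paper's movement-based estimator.

def kde3d(points, query, bandwidth=1.0):
    """Evaluate a 3D Gaussian KDE built from (n, 3) telemetry fixes
    at (m, 3) query locations."""
    points = np.asarray(points, dtype=float)
    query = np.asarray(query, dtype=float)
    # Squared distances between every query point and every fix: (m, n)
    d2 = ((query[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    norm = (2 * np.pi * bandwidth ** 2) ** 1.5
    return np.exp(-d2 / (2 * bandwidth ** 2)).sum(axis=1) / (len(points) * norm)

# Density is highest near the cluster of fixes and negligible far away:
fixes = [[0, 0, 0], [0.5, 0, 0], [0, 0.5, 0.2]]
dens = kde3d(fixes, [[0.2, 0.2, 0.1], [10, 10, 10]])
```

Thresholding such a density surface at, say, its 95% isopleth gives the volumetric home range that the visualization tools then render.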

  13. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  14. Glnemo2: Interactive Visualization 3D Program

    NASA Astrophysics Data System (ADS)

    Lambert, Jean-Charles

    2011-10-01

    Glnemo2 is an interactive 3D visualization program developed in C++ using the OpenGL library and the Nokia Qt 4.x API. It displays in 3D the particle positions of the different components of an N-body snapshot. It quickly gives a lot of information about the data (shape, density areas, formation of structures such as spirals, bars, or peanuts). It allows in/out zooms, rotations, changes of scale, translations, selection of different groups of particles, and plots in different blending colors. It can color particles according to their density or temperature, play with the density threshold, trace orbits, display different time steps, take automatic screenshots to make movies, select particles using the mouse, and fly over a simulation using a given camera path. All these features are accessible from a very intuitive graphical user interface. Glnemo2 supports a wide range of input file formats (Nemo, Gadget 1 and 2, phiGrape, Ramses, list of files, real-time gyrfalcON simulation) which are automatically detected at loading time without user intervention. Glnemo2 uses a plugin mechanism to load the data, so it is easy to add a new file reader. It is powered by a 3D engine which uses the latest OpenGL technology, such as shaders (GLSL), vertex buffer objects, and frame buffer objects, and takes into account the power of the graphics card in order to accelerate the rendering. With a fast GPU, millions of particles can be rendered in real time. Glnemo2 runs on Linux, Windows (using the MinGW compiler), and Mac OS X, thanks to the Qt 4 API.

  15. An object oriented fully 3D tomography visual toolkit.

    PubMed

    Agostinelli, S; Paoli, G

    2001-04-01

    In this paper we present a modern object-oriented Component Object Model (COM) C++ toolkit dedicated to fully 3D cone-beam tomography. The toolkit allows the display and visual manipulation of analytical phantoms, projection sets, and volumetric data through a standard Windows graphical user interface. Data input/output is performed using proprietary file formats, but import/export of industry-standard file formats, including raw binary, Windows bitmap and AVI, ACR/NEMA DICOM 3, and NCSA HDF, is available. At the time of writing, built-in data manipulators include a basic phantom ray-tracer and a Matrox Genesis frame-grabbing facility. A COM plug-in interface is provided for user-defined custom backprojector algorithms: a simple Feldkamp ActiveX control, including source code, is provided as an example; our fast Feldkamp plug-in is also available.

  16. The advantage of CT scans and 3D visualizations in the analysis of three child mummies from the Graeco-Roman Period.

    PubMed

    Villa, Chiara; Davey, Janet; Craig, Pamela J G; Drummer, Olaf H; Lynnerup, Niels

    2015-01-01

    Three child mummies from the Graeco-Roman Period (332 BCE - c. 395 CE) were examined using CT scans and 3D visualizations generated with Vitrea 2 and MIMICS graphic workstations, with the aim of comparing the results with previous X-ray examinations performed by Dawson and Gray in 1968. Although the previous analyses reported that the children had been excerebrated and eviscerated, no evidence of incisions or breaches of the cranial cavity was found; 3D visualizations were generated showing the brain and the internal organs to be in situ. A larger number of post-mortem skeletal damages were identified, such as dislocation of the mandible, ribs, and vertebrae, probably suffered at the time of the embalming procedure. Radio-opaque granular particles of different kinds were observed throughout the bodies (internally and externally) and could be explained as the presence of natron, used as an external desiccating agent by the embalmers, or as adipocerous alteration, a natural alteration of body fat. Age-at-death was estimated using the 3D visualization of the teeth, the state of fusion of the vertebrae, and the presence of the secondary ossification centers of the long bones: two of the mummies died at the age of 4 years ± 12 months, the third at the age of 6 years ± 24 months. Hyperdontia (polydontia), a dental anomaly, could also be identified in one child using 3D visualizations of the teeth: two supernumerary teeth were found behind the maxillary permanent central incisors, which had not been noticed in Dawson and Gray's X-ray analysis. In conclusion, CT-scan investigations, and especially 3D visualizations, are important tools in the non-invasive analysis of mummies and, in this case, provided revised and additional information compared to X-ray examination alone.

  17. Quantitative 3D imaging of yeast by hard X-ray tomography.

    PubMed

    Zheng, Ting; Li, Wenjie; Guan, Yong; Song, Xiangxia; Xiong, Ying; Liu, Gang; Tian, Yangchao

    2012-05-01

    Full-field hard X-ray tomography can be used to obtain three-dimensional (3D) nanoscale structures of biological samples. The fission yeast, Schizosaccharomyces pombe, was clearly visualized using a Zernike phase contrast imaging technique and a heavy metal staining method, at a spatial resolution better than 50 nm at an energy of 8 keV. The distributions and shapes of the organelles during the cell cycle were clearly visualized and two types of organelle were distinguished. The results for cells in various phases were compared, and the ratios of organelle volume to cell volume were analyzed quantitatively. The ratios remained constant between the growth and division phases and increased strongly in the stationary phase, along with changes in the shape and size of the two types of organelles. Our results demonstrate that hard X-ray microscopy is a complementary method for imaging and revealing structural information for biological samples. Copyright © 2011 Wiley Periodicals, Inc.

  18. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study sought to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers underwent accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed again for accommodation and vergence parameters, directed to watch a 5-min 3-D video program, and assessed once more. The results support that 3-D animations cause visual fatigue characteristics similar in some specific aspects to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion in both the ciliary and extra-ocular muscles, and these differential effects were more evident under high demand for near-vision work. The current results indicate that a dedicated set of indices may be proposed for the design of 3-D displays or equipment.

  19. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  20. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

    Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods in tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike with a manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were subsequently examined, and a region of interest (ROI) was selected in each frame where a tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent color-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and the categorization of breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.
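The pipeline above maps per-frame 2-D parametric maps into a 3-D Cartesian space and interpolates between them. A toy version of that stacking-and-filling step, assuming nearest-frame filling as a stand-in for the paper's unspecified interpolation scheme:

```python
import numpy as np

# Sketch of mapping per-frame 2-D parametric maps into a 3-D volume.
# Nearest-frame filling is an assumption standing in for whatever
# interpolation the actual study used.

def stack_to_volume(frames, frame_positions, n_slices):
    """frames: list of 2-D parameter maps; frame_positions: the slice
    index at which each frame was acquired; returns a dense 3-D volume."""
    frames = [np.asarray(f, dtype=float) for f in frames]
    positions = np.asarray(frame_positions)
    vol = np.empty((n_slices,) + frames[0].shape)
    for z in range(n_slices):
        nearest = int(np.argmin(np.abs(positions - z)))  # closest acquired frame
        vol[z] = frames[nearest]
    return vol

# Two acquired frames at slice 0 and slice 9 fill a 10-slice volume:
vol = stack_to_volume([np.zeros((4, 4)), np.ones((4, 4))], [0, 9], n_slices=10)
```

The resulting dense volume is what a renderer can then display as a transparent, color-coded tumour map.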

  1. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years, and Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, such cancer visualization has so far been limited to superimposing the 3D imaging data. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. The framework consists of four phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D modeling reconstruction technique; next, in visualization processing, the virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom; and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance toward accurate biopsy targets.

  2. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  3. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data sets over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to an HTML5-capable web browser on the client side. Compared to traditional local visualization solutions, our solution does not require users to install extra software or download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever internet access is available.

  4. Employing Virtual Humans for Education and Training in X3D/VRML Worlds

    ERIC Educational Resources Information Center

    Ieronutti, Lucio; Chittaro, Luca

    2007-01-01

    Web-based education and training provides a new paradigm for imparting knowledge; students can access the learning material anytime by operating remotely from any location. Web3D open standards, such as X3D and VRML, support Web-based delivery of Educational Virtual Environments (EVEs). EVEs have a great potential for learning and training…

  5. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, it is useful for showing a more global representation of the results, whereas traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study's findings.

  6. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), producing several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, it is useful for showing a more global representation of the results, whereas traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  7. Visualizing planetary data by using 3D engines

    NASA Astrophysics Data System (ADS)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

    We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications which will make use of virtual reality (VR) equipment.

  8. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING Fraunhofer ITWM (Kaiserslautern, Germany) in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany) developed a commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization experts can more easily transport their findings to non-professional audiences. In STRING pathlets moving with the flow provide an intuition of velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow an advanced method for intelligent, time-dependent seeding of the pathlets is implemented based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool
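The pathlet concept above amounts to advecting particles along pathlines in the Lagrangian view. A minimal sketch (explicit Euler integration through a steady 2-D field; STRING's FPM-based seeding and removal logic is not reproduced here):

```python
# Minimal Lagrangian pathlet advection: particles are stepped forward
# along a velocity field with explicit Euler integration. This is an
# illustration of the concept only, not STRING's implementation.

def advect(positions, velocity, dt, steps):
    """Advance 2-D pathlet positions through a steady velocity field.
    velocity(x, y) -> (vx, vy)."""
    paths = [list(p) for p in positions]
    for _ in range(steps):
        for p in paths:
            vx, vy = velocity(p[0], p[1])
            p[0] += vx * dt
            p[1] += vy * dt
    return paths

# Uniform rightward flow: every pathlet drifts in +x at unit speed.
paths = advect([(0.0, 0.0), (1.0, 2.0)], lambda x, y: (1.0, 0.0),
               dt=0.1, steps=10)
```

In a transient field the velocity function would also take time as an argument, and pathlets would be seeded and removed adaptively to avoid visual cluttering.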

  9. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

    In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example creates basic geometries. The second example creates cylinders with a defined start point and end point. The third example processes data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
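The second example described above (cylinders between a defined start and end point) can be sketched as generating node strings for the scene graph. This is a hypothetical illustration in Python rather than the paper's Java/JSP code, and it omits the rotation needed to align the cylinder with the segment direction:

```python
import math

# Hypothetical sketch: turn a database row (start point, end point) into
# an X3D/VRML-style Transform/Cylinder node string. Field names and the
# node layout are illustrative; a complete version would also emit a
# rotation aligning the cylinder axis with the start-end segment.

def cylinder_node(start, end, radius=0.1):
    (x1, y1, z1), (x2, y2, z2) = start, end
    height = math.dist(start, end)               # segment length
    cx, cy, cz = (x1 + x2) / 2, (y1 + y2) / 2, (z1 + z2) / 2  # midpoint
    return (f'<Transform translation="{cx} {cy} {cz}">'
            f'<Shape><Cylinder radius="{radius}" height="{height:.3f}"/></Shape>'
            f'</Transform>')

node = cylinder_node((0, 0, 0), (0, 3, 0))
```

Iterating such a generator over every row of a geometry table yields the scene file that the browser-side viewer then renders.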

  10. 3D Geo-Structures Visualization Education Project (3dgeostructuresvis.ucdavis.edu)

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2014-12-01

    Students of field-based geology must master a suite of challenging skills, from recognizing rocks, to measuring orientations of features in the field, to finding oneself (and the outcrop) on a map and placing structural information on maps. Students must then synthesize this information to derive meaning from the observations and ultimately to determine the three-dimensional (3D) shape of the deformed structures and their kinematic history. Synthesizing this kind of information requires sophisticated visualization skills in order to extrapolate observations into the subsurface or missing (eroded) material. The good news is that students can learn 3D visualization skills through practice, and virtual tools can help provide some of that practice. Here I present a suite of learning modules focused on developing students' ability to imagine (visualize) complex 3D structures and their exposure through digital topographic surfaces. Using the software 3DVisualizer, developed by KeckCAVES (keckcaves.org), we have developed visualizations of common geologic structures (e.g., syncline, dipping fold) in which the rock is represented by originally flat-lying layers of sediment, each with a different color, which have been subsequently deformed. The exercises build up in complexity, first focusing on understanding the structure in 3D (penetrative understanding), and then moving to the exposure of the structure at a topographic surface. Individual layers can be rendered as transparent to explore how a layer extends above and below the topographic surface (e.g., to follow an eroded fold limb across a valley). The exercises are provided either as movies of the visualization (which can also be used as examples during lectures), or the data and software can be downloaded to allow for more self-driven exploration and learning. These virtual field models and exercises can be used as "practice runs" before going into the field, as make-up assignments, as a field

  11. Touch Interaction with 3D Geographical Visualization on Web: Selected Technological and User Issues

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.; Stuchlík, R.; Hladík, J.; Kubíček, P.

    2016-10-01

    The use of both 3D visualization and devices with touch displays is increasing. In this paper, we focus on Web technologies for 3D visualization of spatial data and interaction with it via touch-screen gestures. In the first stage, we compared the support for touch interaction in selected JavaScript libraries on different hardware (desktop PCs with touch screens, tablets, and smartphones) and software platforms. Afterward, we conducted a simple empirical test (within-subject design, 6 participants, 2 simple tasks, an Acer LCD touch monitor, and digital terrain models as stimuli) focusing on the ability of users to solve simple spatial tasks via touch screens. An in-house web-based testing tool was developed using JavaScript, PHP, and X3DOM, together with the Hammer.js library. The correctness of answers, the speed of users' performance, the gestures used, and a simple gesture metric were recorded and analysed. Preliminary results revealed that the pan gesture is most frequently used by test participants and is also supported by the majority of 3D libraries. Possible gesture metrics and future developments, including interpersonal differences, are discussed in the conclusion.

  12. 3D localization of electrophysiology catheters from a single x-ray cone-beam projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robert, Normand, E-mail: normand.robert@sri.utoronto.ca; Polack, George G.; Sethi, Benu

    2015-10-15

    Purpose: X-ray images allow the visualization of percutaneous devices such as catheters in real time but inherently lack depth information. The provision of 3D localization of these devices from cone-beam x-ray projections would be advantageous for interventions such as electrophysiology (EP), whereby the operator needs to return a device to the same anatomical locations during the procedure. A method to achieve real-time 3D single view localization (SVL) of an object of known geometry from a single x-ray image is presented. SVL exploits the change in the magnification of an object as its distance from the x-ray source is varied. The x-ray projection of an object of interest is compared to a synthetic x-ray projection of a model of said object as its pose is varied. Methods: SVL was tested with a 3 mm spherical marker and an electrophysiology catheter. The effect of x-ray acquisition parameters on SVL was investigated. An independent reference localization method was developed to compare results when imaging a catheter translated via a computer-controlled three-axis stage. SVL was also performed on clinical fluoroscopy image sequences. A commercial navigation system was used in some clinical image sequences for comparison. Results: SVL estimates exhibited little change as x-ray acquisition parameters were varied. The reproducibility of catheter position estimates in phantoms was characterized by standard deviations (σx, σy, σz) = (0.099 mm, 0.093 mm, 2.2 mm), where x and y are parallel to the detector plane and z is the distance from the x-ray source. Position estimates (x, y, z) exhibited a 4% systematic error (underestimation) when compared to the reference method. The authors demonstrated that EP catheters can be tracked in clinical fluoroscopic images. Conclusions: It has been shown that EP catheters can be localized in real time in phantoms and clinical images at fluoroscopic exposure rates. Further work is required to
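The magnification principle SVL exploits can be illustrated with the simplest possible case: a marker of known size imaged by a point source. For a source-to-detector distance D and source-to-object distance z, the projected size is s·D/z, so z can be recovered from a single view (illustrative geometry only; the actual method fits the full pose of a modeled object, not just depth):

```python
# Illustration of the magnification cue behind single view localization:
# a point x-ray source magnifies an object by D/z, where D is the
# source-to-detector distance and z the source-to-object distance.
# Parameter values below are made up for the example.

def depth_from_magnification(true_size_mm, projected_size_mm, source_detector_mm):
    """Source-to-object distance of a known-size marker from one projection."""
    return true_size_mm * source_detector_mm / projected_size_mm

# A 3 mm marker imaged at 4.5 mm on a detector 1200 mm from the source:
z = depth_from_magnification(3.0, 4.5, 1200.0)
```

Note the asymmetry in the reported precision (sub-0.1 mm in-plane versus 2.2 mm in z): depth enters only through this weak magnification cue, so it is inherently the least constrained coordinate.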

  13. Amazing Space: Explanations, Investigations, & 3D Visualizations

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  14. 3D Boolean operations in virtual surgical planning.

    PubMed

    Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun

    2017-10-01

    Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations and correctly reported impossible configurations of the input. In contrast, the VTK implementations were unstable, did not handle singular edges or coplanar collisions, and produced several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.
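The three Boolean operations at issue can be illustrated on 1-D intervals, where the set semantics are easy to verify. This is emphatically not the BORES algorithm: real surgical-planning code operates on closed triangle shells and must additionally handle the singular edges and coplanar collisions the abstract mentions.

```python
# Minimal illustration of Boolean set operations (intersection, union,
# subtraction) on axis-aligned 1-D intervals. All names are illustrative.

def intersection(a, b):
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo < hi else None

def union(a, b):
    # Merge overlapping intervals; keep both pieces if disjoint.
    if intersection(a, b) is None:
        return [a, b]
    return [(min(a[0], b[0]), max(a[1], b[1]))]

def subtraction(a, b):
    # Parts of a not covered by b (up to two pieces).
    pieces = []
    if a[0] < b[0]:
        pieces.append((a[0], min(a[1], b[0])))
    if a[1] > b[1]:
        pieces.append((max(a[0], b[1]), a[1]))
    return pieces

patient, implant = (0.0, 10.0), (4.0, 6.0)
assert intersection(patient, implant) == (4.0, 6.0)
assert union(patient, implant) == [(0.0, 10.0)]
assert subtraction(patient, implant) == [(0.0, 4.0), (6.0, 10.0)]
```

In 3D the same operations must be resolved along the intersection curves of two colliding shells, which is where robustness problems such as coplanar faces arise.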

  15. Projection-slice theorem based 2D-3D registration

    NASA Astrophysics Data System (ADS)

    van der Bom, M. J.; Pluim, J. P. W.; Homan, R.; Timmer, J.; Bartels, L. W.

    2007-03-01

    In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's specific anatomy and the projection images acquired during the procedure by a rotational X-ray source. Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along the projection axis. It would be highly beneficial to provide the surgeon or interventionalist with 3D insight into the patient's anatomy that is directly linked to the X-ray images acquired during the procedure. In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This theorem gives us a relation between the pre-operative 3D data set and the interventional projection images. Registration is performed by minimizing a translation-invariant similarity measure that is applied to the Fourier transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the test projections to the ones that corresponded to the minimal value of the similarity measure. The Projection-Slice Theorem based method was shown to be very effective and robust, and provides capture ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when translations are applied to the projection images.
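Both ingredients of the method can be checked numerically in a few lines: the projection-slice theorem itself, and the translation invariance obtained by working with Fourier magnitudes. This sketch uses a parallel (not cone-beam) projection of a random 2-D image, so it illustrates the theorem rather than reproducing the paper's registration pipeline.

```python
import numpy as np

# Numerical check of the projection-slice theorem in 2D: the 1-D Fourier
# transform of an image's projection equals the central slice of the image's
# 2-D Fourier transform. Comparing Fourier magnitude spectra then yields a
# similarity measure that is invariant to in-plane translation.

rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)            # parallel projection onto the x-axis
ft_projection = np.fft.fft(projection)    # 1-D FT of the projection
central_slice = np.fft.fft2(image)[0, :]  # k_y = 0 row of the 2-D FT

assert np.allclose(ft_projection, central_slice)

# Translation invariance: a (circular) shift of the image only changes the
# Fourier phase, so the magnitude spectrum is unchanged.
shifted = np.roll(image, 7, axis=1)
assert np.allclose(np.abs(np.fft.fft2(image)), np.abs(np.fft.fft2(shifted)))
```

This is why a similarity measure built on Fourier magnitudes can search over rotations without first solving for the unknown translation.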

  16. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes Hi-C or ChIA-PET contact matrices as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus with its annotation, we hypothesized a plausible transitory interaction pattern in the locus; a literature survey found experimental evidence supporting this hypothesis. This serves as an example of intuitive hypothesis generation with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.

  17. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  18. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control on an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and suitability for use with 3-D astronomical data.

  19. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than instantaneous streamlines do. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several million grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
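The distinction between an instantaneous streamline (integrated through one frozen time step) and a streakline (particles released from a fixed point and advected through all time steps) can be made concrete with a toy unsteady flow. This is a minimal sketch under an assumed velocity field, not NASA's system; all names are illustrative and simple Euler integration stands in for the production particle tracer.

```python
import numpy as np

def velocity(p, t):
    # Toy unsteady field: a uniform flow whose direction rotates with time.
    return np.array([np.cos(t), np.sin(t)])

def streamline(seed, t, dt=0.01, steps=200):
    # Integrate through ONE frozen time step t (instantaneous streamline).
    p, pts = np.array(seed, float), []
    for _ in range(steps):
        p = p + dt * velocity(p, t)
        pts.append(p.copy())
    return np.array(pts)

def streakline(seed, t_end, dt=0.01):
    # Release a particle from `seed` at every step and advect ALL released
    # particles through the time-varying field up to t_end.
    particles, t = [], 0.0
    while t < t_end:
        particles.append(np.array(seed, float))
        particles = [p + dt * velocity(p, t) for p in particles]
        t += dt
    return np.array(particles)

sl = streamline((0.0, 0.0), t=0.0)      # straight line along +x at t = 0
sk = streakline((0.0, 0.0), t_end=2.0)  # curved: particles saw different fields
```

In this flow the instantaneous streamline at t = 0 is a straight line, while the streakline curves because each released particle has experienced a different history of the field, which is exactly the extra information the abstract says streaklines reveal.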

  20. First principle study of AlX (X=3d, 4d, 5d elements and Lu) dimer.

    PubMed

    Ouyang, Yifang; Wang, Jianchuan; Hou, Yuhua; Zhong, Xiaping; Du, Yong; Feng, Yuanping

    2008-02-21

    The ground state equilibrium bond length, harmonic vibrational frequency, and dissociation energy of AlX (X=3d,4d,5d elements and Lu) dimers are investigated by density functional method B3LYP. The present results are in good agreement with the available experimental and other theoretical values except the dissociation energy of AlCr. The present calculations show that the late transition metal can combine strongly with aluminum compared with the former transition metal. The present calculation also indicates that it is more reasonable to replace La with Lu in the Periodic Table and that the bonding strengths of zinc, cadmium, and mercury with aluminum are very weak.

  1. Volumetric visualization of 3D data

    NASA Technical Reports Server (NTRS)

    Russell, Gregory; Miles, Richard

    1989-01-01

    In recent years, there has been a rapid growth in the ability to obtain detailed data on large complex structures in three dimensions. This development occurred first in the medical field, with CAT (computer aided tomography) scans and now magnetic resonance imaging, and in seismological exploration. With the advances in supercomputing and computational fluid dynamics, and in experimental techniques in fluid dynamics, there is now the ability to produce similar large data fields representing 3D structures and phenomena in these disciplines. These developments have produced a situation in which currently there is access to data which is too complex to be understood using the tools available for data reduction and presentation. Researchers in these areas are becoming limited by their ability to visualize and comprehend the 3D systems they are measuring and simulating.

  2. 3D GIS spatial operation based on extended Euler operators

    NASA Astrophysics Data System (ADS)

    Xu, Hongbo; Lu, Guonian; Sheng, Yehua; Zhou, Liangchen; Guo, Fei; Shang, Zuoyan; Wang, Jing

    2008-10-01

    At present, implementations of 3D spatial operations are tied to particular data structures, lack universality, and cannot handle non-manifold cases. The ISO/DIS 19107 standard presents only the definitions of Boolean operators and set operators for topological relationship queries, and OGC GeoXACML gives formal definitions for several set functions without implementation detail. To address these problems, this paper builds on the mathematical foundation of cell complex theory, is supported by a non-manifold data structure, and draws on relevant research in non-manifold geometric modeling. First, following the non-manifold Euler-Poincaré formula, we construct six extended Euler operators and their inverse operators for creating, updating, and deleting 3D spatial elements, together with several pairs of supplementary Euler operators that ease the implementation of advanced functions. Second, we recast the topological element operation sequences of Boolean operations and set operations, as well as the set functions defined in GeoXACML, as combinations of extended Euler operators, which decouples the upper-level functions from the underlying data structure. Finally, we develop an underground 3D GIS prototype system that validates the practicability and reliability of the extended Euler operators for 3D GIS presented in this paper.
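The appeal of Euler operators as modelling primitives is that each one changes the element counts in a way that preserves the Euler-Poincaré invariant. The sketch below shows this for two classic manifold operators on a cube's counts; it is illustrative only and does not implement the paper's extended non-manifold operators, whose names and formula terms go beyond V, E, F.

```python
# For a closed manifold surface the Euler-Poincaré formula reads
# chi = V - E + F = 2(S - G), with S shells and G genus. Each Euler
# operator changes (V, E, F) so that chi is preserved, which is why
# models stay topologically valid under any sequence of operators.

def chi(v, e, f):
    return v - e + f

def mev(v, e, f):
    # "Make Edge and Vertex": split an edge, adding one vertex and one edge.
    return v + 1, e + 1, f

def mef(v, e, f):
    # "Make Edge and Face": split a face, adding one edge and one face.
    return v, e + 1, f + 1

cube = (8, 12, 6)                 # one shell, genus 0, so chi = 2
assert chi(*cube) == 2
assert chi(*mev(*cube)) == 2      # invariant preserved by each operator
assert chi(*mef(*cube)) == 2
assert chi(*mef(*mev(*cube))) == 2
```

The paper's contribution is the analogous construction for the non-manifold case, where the formula acquires extra terms and six extended operators (plus inverses) replace the manifold set.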

  3. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
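The region-covariance idea of fusing 2D and 3D cues can be sketched with synthetic data. This is a hedged stand-in for the paper's method: the 4-dimensional feature vector, the Frobenius-norm covariance distance, and the "mean distance to other regions" saliency score are all simplifying assumptions, not the authors' exact descriptor or metric.

```python
import numpy as np

# Each pixel gets a feature vector mixing 2D cues (e.g. intensity, x, y)
# and a 3D cue (depth/disparity). A region is described by the covariance
# of its feature vectors, which captures nonlinear inter-feature
# correlations; a region whose covariance differs most from the others'
# is scored as most salient.

def region_covariance(features):
    # features: (n_pixels, d) array of per-pixel feature vectors.
    return np.cov(features, rowvar=False)

def saliency(covs, i):
    # Mean covariance distance from region i to all other regions
    # (Frobenius norm as a simple stand-in for a proper metric).
    dists = [np.linalg.norm(covs[i] - c, "fro")
             for j, c in enumerate(covs) if j != i]
    return float(np.mean(dists))

rng = np.random.default_rng(1)
# Three synthetic regions: two similar background patches and one region
# that "pops out" through an exaggerated depth cue in feature column 3.
bg1 = rng.normal(0.0, 1.0, (200, 4))
bg2 = rng.normal(0.0, 1.0, (200, 4))
pop = rng.normal(0.0, 1.0, (200, 4))
pop[:, 3] *= 5.0

covs = [region_covariance(r) for r in (bg1, bg2, pop)]
scores = [saliency(covs, i) for i in range(3)]
assert scores[2] == max(scores)   # the depth-distinct region wins
```

The point of the covariance descriptor is that the depth cue influences the score jointly with the 2D features, rather than being added in as a separate linear term.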

  4. 3D Planetary Data Visualization with CesiumJS

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; DeWolfe, A. W.; Nguyen, D.; Sanchez, F.; Lindholm, D. M.

    2017-12-01

    Complex spacecraft orbits and multi-instrument observations can be challenging to visualize with traditional 2D plots. To facilitate the exploration of planetary science data, we have developed a set of web-based interactive 3D visualizations for the MAVEN and MMS missions using the free CesiumJS library. The Mars Atmospheric and Volatile Evolution (MAVEN) mission has been collecting data at Mars since September 2014. The MAVEN3D project allows playback of one day's orbit at a time, displaying the spacecraft's position and orientation. Selected science data sets can be overplotted on the orbit track, including vectors for magnetic field and ion flow velocities. We also provide an overlay of the M-GITM model on the planet itself. MAVEN3D is available at the MAVEN public website at: https://lasp.colorado.edu/maven/sdc/public/pages/maven3d/ The Magnetospheric MultiScale Mission (MMS) consists of one hundred instruments on four spacecraft flying in formation around Earth, investigating the interactions between the solar wind and Earth's magnetic field. While the highest temporal resolution data isn't received and processed until later, continuous daily observations of the particle and field environments are made available as soon as they are received. Traditional `quick-look' static plots have long been the first interaction with data from a mission of this nature. Our new 3D Quicklook viewer allows data from all four spacecraft to be viewed in an interactive web application as soon as the data is ingested into the MMS Science Data Center, less than one day after collection, in order to better help identify scientifically interesting data.

  5. 3D visualization software to analyze topological outcomes of topoisomerase reactions

    PubMed Central

    Darcy, I. K.; Scharein, R. G.; Stasiak, A.

    2008-01-01

    The action of various DNA topoisomerases frequently results in characteristic changes in DNA topology. Important information for understanding mechanistic details of action of these topoisomerases can be provided by investigating the knot types resulting from topoisomerase action on circular DNA forming a particular knot type. Depending on the topological bias of a given topoisomerase reaction, one observes different subsets of knotted products. To establish the character of topological bias, one needs to be aware of all possible topological outcomes of intersegmental passages occurring within a given knot type. However, it is not trivial to systematically enumerate topological outcomes of strand passage from a given knot type. We present here a 3D visualization software (TopoICE-X in KnotPlot) that incorporates topological analysis methods in order to visualize, for example, knots that can be obtained from a given knot by one intersegmental passage. The software has several other options for the topological analysis of mechanisms of action of various topoisomerases. PMID:18440983

  6. A Real-time 3D Visualization of Global MHD Simulation for Space Weather Forecasting

    NASA Astrophysics Data System (ADS)

    Murata, K.; Matsuoka, D.; Kubo, T.; Shimazu, H.; Tanaka, T.; Fujita, S.; Watari, S.; Miyachi, H.; Yamamoto, K.; Kimura, E.; Ishikura, S.

    2006-12-01

    Recently, many satellites for communication networks and scientific observation have been launched into the vicinity of the Earth (geo-space). The electromagnetic (EM) environment around these spacecraft is constantly influenced by the solar wind blowing from the Sun and by induced electromagnetic fields, which occasionally cause trouble or damage, such as charging and interference, to the spacecraft. Forecasting the geo-space EM environment is therefore as important as forecasting the weather on the ground. Owing to recent remarkable progress in supercomputer technologies, numerical simulations have become powerful research methods in solar-terrestrial physics. For space weather forecasting, NICT (National Institute of Information and Communications Technology) has developed a real-time global MHD simulation system of solar wind-magnetosphere-ionosphere coupling, which runs on an SX-6 supercomputer. Real-time solar wind parameters from the ACE spacecraft, sampled every minute, are adopted as boundary conditions for the simulation. Simulation results (2-D plots) are updated every minute on a NICT website. However, 3D visualization of the simulation results is indispensable for forecasting space weather more accurately. In the present study, we develop a real-time 3D website for the global MHD simulations. Three-dimensional visualizations of the simulation results are updated every 20 minutes in the following three formats: (1) streamlines of magnetic field lines, (2) isosurfaces of temperature in the magnetosphere, and (3) isolines of conductivity and an orthogonal plane of potential in the ionosphere. We also implemented a 3-D viewer application for the Internet Explorer browser (ActiveX), developed on AVS/Express. Numerical data are saved in HDF5 format data files every minute.
Users can easily search, retrieve and plot past simulation results (3D visualization data and numerical data) by using

  7. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The change of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with humans is not yet clear. Previously, it was found that any increased load on the visual system can create visual fatigue, as in prolonged TV watching, computer work, or video gaming. Watching S3D can cause visual fatigue of a different nature, however, since all S3D technologies create the illusion of the third dimension based on the characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content. This work shows the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and gave their answers through subjective evaluation. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  8. 3D imaging of a rice pollen grain using transmission X-ray microscopy.

    PubMed

    Wang, Shengxiang; Wang, Dajiang; Wu, Qiao; Gao, Kun; Wang, Zhili; Wu, Ziyu

    2015-07-01

    For the first time, the three-dimensional (3D) ultrastructure of an intact rice pollen cell has been obtained using a full-field transmission hard X-ray microscope operated in Zernike phase contrast mode. After reconstruction and segmentation from a series of projection images, complete 3D structural information of a 35 µm rice pollen grain is presented at a resolution of ∼100 nm. The reconstruction allows a clear differentiation of various subcellular structures within the rice pollen grain, including aperture, lipid body, mitochondrion, nucleus and vacuole. Furthermore, quantitative information was obtained about the distribution of cytoplasmic organelles and the volume percentage of each kind of organelle. These results demonstrate that transmission X-ray microscopy can be quite powerful for non-destructive investigation of 3D structures of whole eukaryotic cells.

  9. Web-based interactive 2D/3D medical image processing and visualization software.

    PubMed

    Mahmoudi, Seyyed Ehsan; Akhondi-Asl, Alireza; Rahmani, Roohollah; Faghih-Roohi, Shahrooz; Taimouri, Vahid; Sabouri, Ahmad; Soltanian-Zadeh, Hamid

    2010-05-01

    There are many medical image processing software tools available for research and diagnosis purposes. However, most of these tools are available only as local applications. This limits the accessibility of the software to a specific machine, and thus the data and processing power of that application are not available to other workstations. Further, there are operating system and processing power limitations which prevent such applications from running on every type of workstation. By developing web-based tools, it is possible for users to access the medical image processing functionalities wherever the internet is available. In this paper, we introduce a pure web-based, interactive, extendable, 2D and 3D medical image processing and visualization application that requires no client installation. Our software uses a four-layered design consisting of an algorithm layer, web-user-interface layer, server communication layer, and wrapper layer. To match the extendibility of current local medical image processing software, each layer is highly independent of the other layers. A wide range of medical image preprocessing, registration, and segmentation methods are implemented using open source libraries. Desktop-like user interaction is provided by using AJAX technology in the web-user-interface. For the visualization functionality of the software, the VRML standard is used to provide 3D features over the web. Integration of these technologies has allowed implementation of our purely web-based software with high functionality without requiring powerful computational resources on the client side. The user-interface is designed such that the users can select appropriate parameters for practical research and clinical studies. Copyright (c) 2009 Elsevier Ireland Ltd. All rights reserved.

  10. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. (Figure captions: schematic illustrating the concept of a plenoptic camera, where each pixel represents both the position and angle of light rays entering the camera, information that can be used to computationally refocus an image after it has been acquired; instantaneous 3D velocity field of a turbulent boundary layer determined using light-field data captured by a plenoptic camera.)
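The multiplicative MART (Multiplicative Algebraic Reconstruction Technique) update mentioned above can be sketched on a toy system. This is a minimal sketch under assumed weights and sizes; real tomographic particle reconstruction uses millions of voxels, line-of-sight weighting, and careful relaxation, none of which appear here.

```python
import numpy as np

# Toy MART: each voxel intensity is corrected multiplicatively by the ratio
# of a measured pixel value to the current re-projection, exponentiated by
# the weight coupling that voxel to that pixel. Starting from a positive
# guess, voxels unseen by a ray (weight 0) are left untouched.

def mart(W, I, n_iter=200, mu=1.0):
    # W: (n_pixels, n_voxels) projection weights; I: measured pixel values.
    E = np.ones(W.shape[1])                  # positive initial guess
    for _ in range(n_iter):
        for i in range(len(I)):
            proj = W[i] @ E                  # re-project current estimate
            if proj > 0 and I[i] > 0:
                E = E * (I[i] / proj) ** (mu * W[i])
    return E

# Two voxels seen by three rays; true intensities (2, 5).
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
E_true = np.array([2.0, 5.0])
E = mart(W, W @ E_true)                      # reconstruct from projections
assert np.allclose(E, E_true, atol=1e-3)
```

The multiplicative form keeps the reconstruction non-negative, which suits particle fields where most voxels should stay empty.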

  11. Effectiveness of the 3D Monitor System for Medical Education During Neurosurgical Operation.

    PubMed

    Wanibuchi, Masahiko; Komatsu, Katsuya; Akiyama, Yukinori; Mikami, Takeshi; Mikuni, Nobuhiro

    2018-01-01

    Three-dimensional (3D) graphics are used in the medical field, especially during surgery. Although 3D monitoring is useful for medical education, its effectiveness needs to be objectively evaluated. The aim of this study was to investigate the efficacy of 3D monitoring in the surgical education of medical students. A questionnaire on high-definition 3D monitoring was given to fifth-year medical students in a 6-year program. Sixty-four students wore polarized glasses and observed a microsurgical operation through a 3D monitor. The questionnaire contained questions on stereopsis, neurosurgical interest, visual impact, comprehension of surgical anatomy and procedures, optical sharpness, active learning enhancement, and eye exhaustion. These parameters were evaluated on a 5-point scale that spanned negative and positive scores. The average score of each parameter ranged from 3.13 to 3.78, except for eye exhaustion, which was 0.88. The items for which the students reported positive perceptions (scores of 4 or 5) were stereopsis (67.2% of students), neurosurgical interest (62.5%), visual impact and optical sharpness (60.9% for both), active learning enhancement (57.8%), and comprehension of surgical anatomy (50.0%) and procedures (42.2%). By contrast, only eye exhaustion was evaluated negatively (26.6%). The use of 3D monitoring systems in medical education offers the advantage of stereopsis and contributes to surgical training. However, improvements are required to decrease eye exhaustion. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  13. Visualization of Hyperconjugation and Subsequent Structural Distortions through 3D Printing of Crystal Structures.

    PubMed

    Mithila, Farha J; Oyola-Reynoso, Stephanie; Thuo, Martin M; Atkinson, Manza Bj

    2016-01-01

    Structural distortions due to hyperconjugation in organic molecules, such as norbornenes, are well captured in X-ray crystallographic data but can be difficult to visualize, especially for those who apply chemical knowledge but are not chemists. Crystal structures from the Cambridge database were downloaded and converted to .stl format. The structures were then printed at the desired scale using a 3D printer. Replicas of the crystal structures were accurately reproduced to scale, and any resulting distortions were clearly visible in the macroscale models. Through-space interactions, or the effect of through-space hyperconjugation, were illustrated through loss of symmetry or distortions thereof. The norbornene structures exhibit distortions that cannot be observed with conventional ball-and-stick modelling kits. We show that 3D-printed models derived from crystallographic data capture even subtle distortions in molecules, and we translate such crystallographic data into scaled-up models through 3D printing.

  14. Double valley Dirac fermions for 3D and 2D Hg1-x Cd x Te with strong asymmetry

    NASA Astrophysics Data System (ADS)

    Marchewka, M.

    2017-04-01

    In this paper the possibility of realizing double-valley Dirac fermions in certain quantum structures is predicted. These quantum structures are: a strained 3D Hg1-x Cd x Te topological insulator (TI) with strong interface inversion asymmetry, and asymmetric Hg1-x Cd x Te double quantum wells (DQW). Numerical analysis of the dispersion relation for the 3D TI Hg1-x Cd x Te, for the proper Cd content (x) of the Hg1-x Cd x Te compound, clearly shows that inversion symmetry breaking together with uniaxial tensile strain splits each of the Dirac nodes (two, belonging to the two interfaces) into two in the proximity of the Γ-point. Similar effects can be obtained for an asymmetric Hg1-x Cd x Te DQW with the proper Cd content and quantum well width. The aim of this work is to explore inversion symmetry breaking in 3D TI and 2D DQW mixed HgCdTe systems. It is shown that this symmetry breaking leads to a dependence of carrier energy on quasi-momentum similar to that of Weyl fermions.

  15. Interactive client side data visualization with d3.js

    NASA Astrophysics Data System (ADS)

    Rodzianko, A.; Versteeg, R.; Johnson, D. V.; Soltanian, M. R.; Versteeg, O. J.; Girouard, M.

    2015-12-01

    Geoscience data associated with near-surface research and operational sites are increasingly voluminous and heterogeneous, both in terms of providers and of data types (e.g., geochemical, hydrological, geophysical, and modeling data of varying spatiotemporal characteristics). Such data allow scientists to investigate fundamental hydrological and geochemical processes relevant to agriculture, water resources, and climate change. For scientists to easily share, model, and interpret such data requires novel tools with capabilities for interactive data visualization. Under sponsorship of the US Department of Energy, Subsurface Insights is developing the Predictive Assimilative Framework (PAF): a cloud-based subsurface monitoring platform that can manage, process, and visualize large heterogeneous datasets. Over the last year we transitioned our visualization method from a server-side approach (in which images and animations were generated using JFreeChart and VisIt) to a client-side one that utilizes the D3 JavaScript library. Datasets are retrieved using web service calls to the server, returned as JSON objects, and visualized within the browser. Users can interactively explore primary and secondary datasets from various field locations. Our current capabilities include interactive data contouring and heterogeneous time-series data visualization. While this approach is powerful and not necessarily unique, special attention needs to be paid to latency and responsiveness, as well as to issues such as cross-browser code compatibility, so that users have an identical, fluid, and frustration-free experience across different computational platforms. We gratefully acknowledge support from the US Department of Energy under SBIR Award DOE DE-SC0009732, the use of data from the Lawrence Berkeley National Laboratory (LBNL) Sustainable Systems SFA Rifle field site, and collaboration with LBNL SFA scientists.
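
    The client-side pattern described in the abstract (fetch a JSON time series from a web service, then map data values to screen coordinates in the browser) can be illustrated with a short sketch. It is written in Python for illustration; PAF itself uses the D3 JavaScript library, and the payload shape below is a hypothetical example, not PAF's actual web-service format.

```python
import json

def linear_scale(domain, range_):
    """The core of D3's scaleLinear: map [d0, d1] onto [r0, r1]."""
    (d0, d1), (r0, r1) = domain, range_
    return lambda v: r0 + (v - d0) / (d1 - d0) * (r1 - r0)

# A JSON payload as a web-service call might return it (illustrative shape).
payload = json.loads('{"series": [{"t": 0, "v": 1.2}, {"t": 1, "v": 3.4}, {"t": 2, "v": 2.1}]}')
points = [(p["t"], p["v"]) for p in payload["series"]]

# Map data coordinates into an 800x400-pixel plotting area (y inverted, as on screen).
x = linear_scale((0, 2), (0, 800))
y = linear_scale((1.2, 3.4), (400, 0))
pixels = [(x(t), y(v)) for t, v in points]
```

    In the browser, the same two scales would position SVG elements; because all rendering happens client-side, only the JSON data crosses the network.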

  16. 3D deblending of simultaneous source data based on 3D multi-scale shaping operator

    NASA Astrophysics Data System (ADS)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Gong, Fei; Huang, Weilin

    2018-04-01

    We propose an iterative three-dimensional (3D) deblending scheme using a 3D multi-scale shaping operator to separate 3D simultaneous source data. The proposed scheme is based on the property that the signal is coherent whereas the interference is incoherent in some domains, e.g., the common receiver domain and the common midpoint domain. In a two-dimensional (2D) blended record, the coherency difference between signal and interference exists in only one spatial direction. Compared with 2D deblending, 3D deblending can take more sparse constraints into consideration to obtain better performance; e.g., in a 3D common receiver gather, the coherency difference exists in two spatial directions. Furthermore, owing to their different levels of coherency, signal and interference are distributed across different curvelet scales. In both 2D and 3D blended records, most of the coherent signal is located in the coarse-scale curvelet domain, while most of the incoherent interference is distributed in the fine-scale curvelet domain. The scale difference is larger in 3D deblending; thus, we apply the multi-scale shaping scheme to further improve 3D deblending performance. We evaluate the performance of 3D and 2D deblending with the multi-scale and global shaping operators, respectively. One synthetic and one field data example demonstrate the advantage of 3D deblending with the 3D multi-scale shaping operator.
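
    The principle behind shaping-based deblending (coherent signal concentrates in strong transform-domain coefficients, while incoherent interference spreads across weak ones) can be shown in a deliberately simplified one-dimensional sketch. Here the curvelet transform is replaced by a plain Fourier transform and the blended record is synthetic, so this illustrates only the iterative thresholding idea, not the authors' 3D multi-scale operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
signal = np.sin(2 * np.pi * 4 * t / n)        # coherent signal: sparse in the Fourier domain
interference = 0.5 * rng.standard_normal(n)   # incoherent interference: spread across coefficients
blended = signal + interference

estimate = np.zeros(n)
for k in range(20):
    # gradient-style update toward the blended record
    update = estimate + 0.5 * (blended - estimate)
    coeffs = np.fft.fft(update)
    # shaping: keep only strong (coherent) coefficients; the threshold
    # shrinks over iterations, loosely mimicking a scale-dependent scheme
    thresh = 0.5 * np.max(np.abs(coeffs)) * 0.9 ** k
    coeffs[np.abs(coeffs) < thresh] = 0.0
    estimate = np.real(np.fft.ifft(coeffs))

err_before = np.linalg.norm(blended - signal)   # interference energy in the raw record
err_after = np.linalg.norm(estimate - signal)   # residual after iterative shaping
```

    In the actual scheme the thresholding is applied scale-by-scale in the curvelet domain, in a domain where the interference is incoherent (e.g., common receiver gathers).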

  17. 3D printing of preclinical X-ray computed tomographic data sets.

    PubMed

    Doney, Evan; Krumdick, Lauren A; Diener, Justin M; Wathen, Connor A; Chapman, Sarah E; Stamile, Brian; Scott, Jeremiah E; Ravosa, Matthew J; Van Avermaete, Tony; Leevy, W Matthew

    2013-03-22

    Three-dimensional printing allows for the production of highly detailed objects through a process known as additive manufacturing. Traditional mold-injection methods to create models or parts have several limitations, the most important of which is the difficulty of making highly complex products in a timely, cost-effective manner (1). However, gradual improvements in three-dimensional printing technology have resulted in both high-end and economy instruments that are now available for the facile production of customized models (2). These printers have the ability to extrude high-resolution objects with enough detail to accurately represent in vivo images generated from a preclinical X-ray CT scanner. With proper data collection, surface rendering, and stereolithographic editing, it is now possible and inexpensive to rapidly produce detailed skeletal and soft tissue structures from X-ray CT data. Even in the early stages of development, the anatomical models produced by three-dimensional printing appeal to both educators and researchers who can utilize the technology to improve visualization proficiency (3, 4). The real benefits of this method result from the tangible experience a researcher can have with data that cannot be adequately conveyed through a computer screen. The translation of preclinical 3D data to a physical object that is an exact copy of the test subject is a powerful tool for visualization and communication, especially for relating imaging research to students, or those in other fields. Here, we provide a detailed method for printing plastic models of bone and organ structures derived from X-ray CT scans utilizing an Albira X-ray CT system in conjunction with PMOD, ImageJ, MeshLab, Netfabb, and ReplicatorG software packages.

  18. Mental practice with interactive 3D visual aids enhances surgical performance.

    PubMed

    Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Mon-Williams, Mark; Jayne, David; Miskovic, Danilo

    2017-10-01

    Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may depend on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session: one with interactive 3D visual aids depicting the relevant surgical anatomy (3D-MP group, n = 5) and one without (MP-Only group, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The control group took longer to complete the procedure relative to the 3D-MP condition (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D-MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D-MP and MP-Only conditions (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that 3D interactive visual aids during MP could enhance performance beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.

  19. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing … scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual … and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user …

  20. A generalized 3D framework for visualization of planetary data.

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.

    2016-12-01

    As the volume and variety of data returned from planetary exploration missions continue to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a JavaScript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time 'QuickLook' interactive 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data are ingested into the MMS Science Data Center, less than one day after collection.

  1. Comparative case study between D3 and highcharts on lustre data visualization

    NASA Astrophysics Data System (ADS)

    ElTayeby, Omar; John, Dwayne; Patel, Pragnesh; Simmerman, Scott

    2013-12-01

    One of the challenging tasks in visual analytics is to target clustered time-series data sets, since it is important for data analysts to discover patterns changing over time while keeping their focus on particular subsets. In order to leverage the human ability to quickly perceive these patterns visually, multivariate features should be implemented according to the attributes available. A comparative case study has been conducted using two JavaScript libraries to demonstrate the differences in their capabilities. A web-based application that monitors the Lustre file system for systems administrators and operations teams has been developed using D3 and Highcharts. Lustre file systems are responsible for managing Remote Procedure Calls (RPCs), which include input/output (I/O) requests between clients and Object Storage Targets (OSTs). The objective of this application is to provide time-series visuals of these calls and of the storage patterns of users on Kraken, a University of Tennessee High Performance Computing (HPC) resource at Oak Ridge National Laboratory (ORNL).

  2. Interactive 3D visualization speeds well, reservoir planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petzet, G.A.

    1997-11-24

    Texaco Exploration and Production has begun making expeditious analyses and drilling decisions that result from interactive, large-screen visualization of seismic and other three-dimensional data. A pumpkin-shaped room, or pod, inside a 3,500 sq ft, state-of-the-art facility in southwest Houston houses a supercomputer and projection equipment that Texaco said will help its people sharply reduce 3D seismic project cycle time, boost production from existing fields, and find more reserves. Oil and gas related applications of the visualization center include reservoir engineering, plant walkthrough simulation for facilities/piping design, and new field exploration. The center houses a Silicon Graphics Onyx2 InfiniteReality supercomputer configured with 8 processors, 3 graphics pipelines, and 6 gigabytes of main memory.

  3. 3D Visualization as a Communicative Aid in Pharmaceutical Advice-Giving over Distance

    PubMed Central

    Dahlbäck, Nils; Petersson, Göran Ingemar

    2011-01-01

    Background Medication misuse results in considerable problems for both patient and society. It is a complex problem with many contributing factors, including timely access to product information. Objective To investigate the value of 3-dimensional (3D) visualization paired with video conferencing as a tool for pharmaceutical advice over distance in terms of accessibility and ease of use for the advice seeker. Methods We created a Web-based communication service called AssistancePlus that allows an advisor to demonstrate the physical handling of a complex pharmaceutical product to an advice seeker with the aid of 3D visualization and audio/video conferencing. AssistancePlus was tested in 2 separate user studies performed in a usability lab, under realistic settings and emulating a real usage situation. In the first study, 10 pharmacy students were assisted by 2 advisors from the Swedish National Co-operation of Pharmacies’ call centre on the use of an asthma inhaler. The student-advisor interview sessions were filmed on video to qualitatively explore their experience of giving and receiving advice with the aid of 3D visualization. In the second study, 3 advisors from the same call centre instructed 23 participants recruited from the general public on the use of 2 products: (1) an insulin injection pen, and (2) a growth hormone injection syringe. First, participants received advice on one product in an audio-recorded telephone call and for the other product in a video-recorded AssistancePlus session (product order balanced). In conjunction with the AssistancePlus session, participants answered a questionnaire regarding accessibility, perceived expressiveness, and general usefulness of 3D visualization for advice-giving over distance compared with the telephone and were given a short interview focusing on their experience of the 3D features. Results In both studies, participants found the AssistancePlus service helpful in providing clear and exact instructions. In

  4. Vibrational and rotational excitation effects of the N(2D) + D2(X1Σg+) → ND(X3Σ+) + D(2S) reaction

    NASA Astrophysics Data System (ADS)

    Zhu, Ziliang; Wang, Haijie; Wang, Xiquan; Shi, Yanying

    2018-05-01

    The effects of rovibrational excitation of the reactants in the N(2D) + D2(X1Σg+) → ND(X3Σ+) + D(2S) reaction are calculated over a collision energy range from the threshold to 1.0 eV using the time-dependent wave packet approach with a second-order split operator. The reaction probability, integral cross-section, differential cross-section, and rate constant of the title reaction are calculated. The integral cross-section and rate constant for the initial states v = 0, j = 0, 1 are in good agreement with experimental data available in the literature. Rotational excitation of the D2 molecule has little effect on the reaction probability, integral cross-section, and rate constant, but it increases the sideways and forward scattering signals. Vibrational excitation of the D2 molecule reduces the threshold and breaks the forward-backward symmetry of the differential cross-section; it also increases the forward scattering signals. This may be because vibrational excitation of the D2 molecule reduces the lifetime of the intermediate complex.

  5. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    PubMed

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. 

  6. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial

  7. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  8. Visualization of 3-D tensor fields

    NASA Technical Reports Server (NTRS)

    Hesselink, L.

    1996-01-01

    Second-order tensor fields have applications in many different areas of physics, such as general relativity and fluid mechanics. The wealth of multivariate information in tensor fields makes them more complex and abstract than scalar and vector fields. Visualization is a good technique for scientists to gain new insights from them. Visualizing a 3-D continuous tensor field is equivalent to simultaneously visualizing its three eigenvector fields. In the past, research has been conducted in the area of two-dimensional tensor fields. It was shown that degenerate points, defined as points where eigenvalues are equal to each other, are the basic singularities underlying the topology of tensor fields. Moreover, it was shown that eigenvectors never cross each other except at degenerate points. Since we live in a three-dimensional world, it is important for us to understand the underlying physics of this world. In this report, we describe a new method for locating degenerate points along with the conditions for classifying them in three-dimensional space. Finally, we discuss some topological features of three-dimensional tensor fields, and interpret topological patterns in terms of physical properties.
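
    The degeneracy condition described above (eigenvalues equal to each other) is straightforward to test numerically. The following is a minimal sketch, illustrative rather than the authors' actual location algorithm, that flags a symmetric 3-D tensor as degenerate when two sorted eigenvalues fall within a tolerance.

```python
import numpy as np

def is_degenerate(tensor, tol=1e-9):
    """A tensor is degenerate when at least two eigenvalues (nearly) coincide."""
    w = np.linalg.eigvalsh(tensor)        # eigenvalues of a symmetric tensor, ascending
    return bool(np.any(np.diff(w) < tol))

# An isotropic tensor (all eigenvalues equal) is a triple degenerate point;
# a tensor with three distinct eigenvalues is not degenerate.
deg = is_degenerate(np.eye(3))
nondeg = is_degenerate(np.diag([1.0, 2.0, 3.0]))
```

    Locating degenerate points in a continuous field then amounts to searching for the zero set of such an eigenvalue-gap function over the volume.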

  9. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content based on color-plus-depth signals, a general framework for depth mapping to optimize visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Toward this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
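
    The global remapping step (compressing the depth range of a color-plus-depth signal into a range comfortable for display) can be sketched as a linear remap. The toy depth map and comfort-zone bounds below are assumptions for illustration; the paper's actual solution also adjusts the zero-disparity plane and applies a local optimization stage.

```python
import numpy as np

def remap_depth(depth, comfort_near, comfort_far):
    """Linearly remap a depth map's full range into [comfort_near, comfort_far]."""
    d_min, d_max = depth.min(), depth.max()
    normalized = (depth - d_min) / (d_max - d_min)
    return comfort_near + normalized * (comfort_far - comfort_near)

depth_map = np.array([[0.0, 10.0], [2.5, 7.5]])   # toy depth map
remapped = remap_depth(depth_map, 4.0, 6.0)        # compressed into the comfort zone
```

    The remapped depth map would then drive view synthesis to generate the S3D output.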

  10. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a new frame of reference for visual-semantic-based 3D video search and retrieval applications. Current 3D retrieval applications focus on shape analysis, such as object matching, classification, and retrieval, rather than on video retrieval as a whole. In this context, we investigate the concept of 3D content-based video retrieval (3D-CBVR) for the first time. For this purpose, we combine bag-of-visual-words (BOVW) and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we coalesce shape, color, and texture for feature extraction, using a combination of geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After extraction of the local descriptors, the Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm is used to generate the visual codebook, and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. As a final step, the retrieved results are ranked according to their index value and returned to the user as feedback. In order to handle prodigious amounts of data and achieve efficient retrieval, we have incorporated HDFS into our design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it gives accurate results while also reducing time complexity.

  11. Thyroid gland visualization with 3D/4D ultrasound: integrated hands-on imaging in anatomical dissection laboratory.

    PubMed

    Carter, John L; Patel, Ankura; Hocum, Gabriel; Benninger, Brion

    2017-05-01

    In teaching anatomy, clinical imaging has been utilized to supplement the traditional dissection laboratory, promoting education through visualization of the spatial relationships of anatomical structures. Viewing the thyroid gland using 3D/4D ultrasound can be valuable to physicians as well as to students learning anatomy. The objective of this study was to investigate the perceptions of first-year medical students regarding the integration of 3D/4D ultrasound visualization of spatial anatomy during anatomical education. 108 first-year medical students were introduced to 3D/4D ultrasound imaging of the thyroid gland through a detailed 20-min tutorial taught in a small-group format. Students then practiced 3D/4D ultrasound imaging on volunteers and donor cadavers before assessment through acquisition and identification of the thyroid gland on at least three instructor-verified images. A post-training survey was administered to assess student impressions. All students visualized the thyroid gland using 3D/4D ultrasound. 88.0% of students strongly agreed or agreed that 3D/4D ultrasound is useful for revealing the thyroid gland and surrounding structures, and 87.0% rated the experience "Very Easy" or "Easy", demonstrating the benefits and ease of use of including 3D/4D ultrasound in anatomy courses. When asked whether 3D/4D ultrasound is useful in teaching the structure and surrounding anatomy of the thyroid gland, students overwhelmingly responded "Strongly Agree" or "Agree" (90.2%). This study revealed that 3D/4D ultrasound was successfully used, and preferred over 2D ultrasound, by medical students during anatomy dissection courses to accurately identify the thyroid gland. In addition, 3D/4D ultrasound may nurture and further reinforce the stereostructural spatial relationships of the thyroid gland taught during anatomy dissection.

  12. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially for geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help to support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. This paper introduces the concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on this foundational work of realistic terrain visualization in virtual environments.

  13. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially for geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help to support geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. This paper introduces the concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for the construction of a mirror world or a sandbox model of the earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on this foundational work of realistic terrain visualization in virtual environments.

  14. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt-sensor to measure the probe orientation. The real-time 3D visualization was realized by developing an extended version of a PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissue has been continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 frames per second. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the Willis ring, and the cerebellar arteries, in whose blood flow pediatricians take great interest because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  15. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows 'seeing' invisible phenomena, or designing and printing things that are used in, e.g., surgeries or educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with motivations similar to those in the Scivis community. Among the many types of 3D visualizations, a popular one that is exploited both for visual analysis and for visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D and, if so, (a) what type of 3D to use, (b) for which task types, and (c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  16. Methodological considerations for the 3D measurement of the X-factor and lower trunk movement in golf.

    PubMed

    Joyce, Christopher; Burnett, Angus; Ball, Kevin

    2010-09-01

    It is believed that increasing the X-factor (movement of the shoulders relative to the hips) during the golf swing can increase ball velocity at impact. Increasing the X-factor may also increase the risk of low back pain. The aim of this study was to provide recommendations for the three-dimensional (3D) measurement of the X-factor and lower trunk movement during the golf swing. This three-part validation study involved: (1) developing and validating models and related algorithms; (2) comparing 3D data obtained during static positions representative of the golf swing with visual estimates; and (3) comparing 3D data obtained during dynamic golf swings with images gained from high-speed video. Of particular interest were issues related to sequence dependency. After the models and algorithms were validated, results from parts two and three of the study supported the conclusion that a lateral bending/flexion-extension/axial rotation (ZYX) order of rotation was the most suitable Cardanic sequence for assessing the X-factor and lower trunk movement in the golf swing. The findings of this study have relevance for further research examining the X-factor and its relationship to club head speed, as well as lower trunk movement and its relationship to low back pain in golf.
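    The sequence dependency discussed above arises because Cardan rotations do not commute: the same three angles composed in a different order give a different orientation. A minimal NumPy sketch of a ZYX decomposition follows; the example angles are illustrative, and the mapping of each axis to an anatomical angle follows the paper's convention rather than anything encoded here:

```python
import numpy as np

def rot_x(a): c, s = np.cos(a), np.sin(a); return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
def rot_y(a): c, s = np.cos(a), np.sin(a); return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
def rot_z(a): c, s = np.cos(a), np.sin(a); return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def euler_zyx(R):
    """Decompose R = Rz(a) @ Ry(b) @ Rx(g), i.e. a ZYX Cardan sequence."""
    a = np.arctan2(R[1, 0], R[0, 0])
    b = -np.arcsin(R[2, 0])
    g = np.arctan2(R[2, 1], R[2, 2])
    return a, b, g

a, b, g = np.radians([40.0, 15.0, -10.0])   # hypothetical trunk angles
R = rot_z(a) @ rot_y(b) @ rot_x(g)
recovered = euler_zyx(R)                    # recovers (a, b, g)

# Sequence dependency: composing the same angles in XYZ order
# yields a different orientation matrix entirely.
R_xyz = rot_x(g) @ rot_y(b) @ rot_z(a)
```

This is why the study had to validate which Cardanic sequence best matches the anatomy before reporting X-factor values.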

  17. GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.

    PubMed

    Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L

    2015-10-15

    A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released Islandviewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands with other genome annotation data. However, its features enable it to be more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.

  18. 3D visualization of subcellular structures of Schizosaccharomyces pombe by hard X-ray tomography.

    PubMed

    Yang, Y; Li, W; Liu, G; Zhang, X; Chen, J; Wu, W; Guan, Y; Xiong, Y; Tian, Y; Wu, Z

    2010-10-01

    Cellular structures of the fission yeast, Schizosaccharomyces pombe, were examined using hard X-ray tomography. Since cells are nearly transparent to hard X-rays, Zernike phase contrast and heavy-metal staining were introduced to improve image contrast. Using these methods, images taken at 8 keV displayed sufficient contrast for observing cellular structures. The cell wall, the intracellular organelles and the entire structural organization of whole cells were visualized in three dimensions at a resolution better than 100 nm. A comparison between phase contrast and absorption contrast was also made, indicating the obvious advantage of phase contrast for cellular imaging at this energy. Our results demonstrate that hard X-ray tomography with Zernike phase contrast is suitable for cellular imaging. Its unique abilities give it the potential to become a useful tool for revealing structural information from cells, especially thick eukaryotic cells. © 2010 The Authors Journal compilation © 2010 The Royal Microscopical Society.

  19. 3-D vision and figure-ground separation by visual cortex.

    PubMed

    Grossberg, S

    1994-01-01

    A neural network theory of three-dimensional (3-D) vision, called FACADE theory, is described. The theory proposes a solution of the classical figure-ground problem for biological vision. It does so by suggesting how boundary representations and surface representations are formed within a boundary contour system (BCS) and a feature contour system (FCS). The BCS and FCS interact reciprocally to form 3-D boundary and surface representations that are mutually consistent. Their interactions generate 3-D percepts wherein occluding and occluded object parts are separated, completed, and grouped. The theory clarifies how preattentive processes of 3-D perception and figure-ground separation interact reciprocally with attentive processes of spatial localization, object recognition, and visual search. A new theory of stereopsis is proposed that predicts how cells sensitive to multiple spatial frequencies, disparities, and orientations are combined by context-sensitive filtering, competition, and cooperation to form coherent BCS boundary segmentations. Several factors contribute to figure-ground pop-out, including: boundary contrast between spatially contiguous boundaries, whether due to scenic differences in luminance, color, spatial frequency, or disparity; partially ordered interactions from larger spatial scales and disparities to smaller scales and disparities; and surface filling-in restricted to regions surrounded by a connected boundary. Phenomena such as 3-D pop-out from a 2-D picture, Da Vinci stereopsis, 3-D neon color spreading, completion of partially occluded objects, and figure-ground reversals are analyzed. The BCS and FCS subsystems model aspects of how the two parvocellular cortical processing streams that join the lateral geniculate nucleus to prestriate cortical area V4 interact to generate a multiplexed representation of Form-And-Color-And-DEpth, or FACADE, within area V4. Area V4 is suggested to support figure-ground separation and to interact with

  20. [3D-visualization by MRI for surgical planning of Wilms tumors].

    PubMed

    Schenk, J P; Waag, K-L; Graf, N; Wunsch, R; Jourdan, C; Behnisch, W; Tröger, J; Günther, P

    2004-10-01

    To improve surgical planning for kidney tumors in childhood (Wilms tumor, mesoblastic nephroma), after radiologic verification of the presumptive diagnosis, using interactive colored 3D-animation of MRI data. In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D-volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5 T unit in 4-6 mm slices. Additionally, a phase-contrast MR-angiography was applied to delineate the large abdominal and retroperitoneal vessels. A notebook was used to demonstrate the 3D-visualization for surgical planning before surgery and during the surgical procedure. In all 7 cases, the surgical approach was influenced by the interactive 3D-animation, and the information was found useful for surgical planning. Above all, the 3D-visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and to the rest of the kidney, as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, transferring the anatomical situation from MRI to the surgical situs has become much easier. For surgical planning of Wilms tumors, the 3D-visualization with 3D-animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes surgical preparation. A reduction of complications is to be expected.

  1. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective from which to analyze the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecasting using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level of Detail (LOD) strategy helps improve the performance of the visualization system.

  2. Parametric study of the 5d3, 5d2 6 s and 5d2 6 p configurations in the Lu I isoelectronic sequence (Ta III-Hg X) using orthogonal operators

    NASA Astrophysics Data System (ADS)

    Azarov, Vladimir I.

    2018-01-01

    Data available on the 5d3, 5d26s and 5d26p configurations in the Lu I isoelectronic sequence have been critically reviewed by means of calculations with orthogonal operators. The study included spectra from Ta III through Hg X. The calculations agree very well with the experimental data. The isoelectronic behavior of the parameters and the deviations of the experimental levels from the calculated positions, ΔE = (Eexp - Ecalc), show regular trends. Three missing 5d26s levels have been accurately predicted theoretically and confirmed experimentally: the level (3P)2P3/2 in Pt VIII and the levels (3P)4P5/2 and (3P)2P1/2 in Os VI have been determined in the study. The research suggested a revision of the published initial analyses of the Re V and Hg X spectra. The recently completed revised analysis of Re V has confirmed the issues noticed in the initial analysis and has resulted in data that fit very well into the current parametric study. The isoelectronic evolution of the higher-order interactions was studied for the first time in the Lu I sequence. The study included the parameters Ac, A3-A6 describing the two-particle magnetic interaction of the dd-type, the parameter Amso describing the two-particle magnetic ds-type effect, the parameter Tdds describing the 3-particle electrostatic ds-type interaction, and the effective parameters S1 and S2 of the dp-type.

  3. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometrical distortions has often not been discussed. However, visualization of the distortion level is highly desired for an objective image-based medical diagnosis. Thus, we present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration process of the mosaicking algorithm. For a global first-view impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortions. This illustration then helps surgeons to easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of tumor tissue shape and size. Aside from introducing quality maps in 2-D, we also discuss a visualization method to map panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. Then the panoramic image is mapped by the Hammer-Aitoff equal-area projection onto the 3-D surface using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, as well as surgical planning.
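    The Hammer(-Aitoff) equal-area projection mentioned above has a simple closed form, which is what makes it convenient for texture mapping a hemisphere. A NumPy sketch of the forward projection (longitude/latitude in radians); the bladder-specific reference-point registration is omitted:

```python
import numpy as np

def hammer_aitoff(lon, lat):
    """Hammer equal-area projection: (lon, lat) in radians -> planar (x, y).

    x = 2*sqrt(2)*cos(lat)*sin(lon/2) / z,  y = sqrt(2)*sin(lat) / z,
    with z = sqrt(1 + cos(lat)*cos(lon/2)).
    """
    z = np.sqrt(1.0 + np.cos(lat) * np.cos(lon / 2.0))
    x = 2.0 * np.sqrt(2.0) * np.cos(lat) * np.sin(lon / 2.0) / z
    y = np.sqrt(2.0) * np.sin(lat) / z
    return x, y

# The projection maps the full sphere into an ellipse of half-width 2*sqrt(2):
x0, y0 = hammer_aitoff(0.0, 0.0)        # center of the map
xe, ye = hammer_aitoff(np.pi, 0.0)      # rightmost point on the equator
```

Because the projection is equal-area, relative sizes of mapped lesions on the panorama remain meaningful, which matches the diagnostic goal stated in the abstract.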

  4. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most attractive forms of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that can be generated by 3D are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of the visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire with the most frequent symptoms linked with 3D is used in order to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking have been explored thanks to data provided by the eye-tracker. The statistical analysis showed an important link between blinking duration and number of saccades on the one hand and visual fatigue on the other, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  5. Development of visual 3D virtual environment for control software

    NASA Technical Reports Server (NTRS)

    Hirose, Michitaka; Myoi, Takeshi; Amari, Haruo; Inamura, Kohei; Stark, Lawrence

    1991-01-01

    Virtual environments for software visualization may enable complex programs to be created and maintained. A typical application might be control of regional electric power systems. As these encompass broader computer networks than ever, construction of such systems becomes very difficult. Conventional text-oriented environments are useful for programming individual processors. However, they are obviously insufficient for programming a large and complicated system that includes large numbers of computers connected to each other; such programming is called 'programming in the large.' As a solution to this problem, the authors are developing a graphic programming environment wherein one can visualize complicated software in a virtual 3D world. One of the major features of the environment is the 3D representation of concurrent processes. The 3D representation is used to supply both network-wide interprocess programming capability (capability for 'programming in the large') and real-time programming capability. The authors' idea is to fuse both the block diagram (which is useful for checking relationships among a large number of processes or processors) and the time chart (which is useful for checking precise timing for synchronization) into a single 3D space. The 3D representation gives us a capability for direct and intuitive planning or understanding of complicated relationships among many concurrent processes. To realize the 3D representation, a technology enabling easy handling of virtual 3D objects is a definite necessity. Using a stereo display system and a gesture input device (VPL DataGlove), our prototype of the virtual workstation has been implemented. The workstation can supply the 'sensation' of the virtual 3D space to a programmer. Software for the 3D programming environment is implemented on the workstation. According to preliminary assessments, a 50 percent reduction of programming effort is achieved by using the virtual 3D environment. The authors expect that the 3D

  6. High-contrast differentiation resolution 3D imaging of rodent brain by X-ray computed microtomography

    NASA Astrophysics Data System (ADS)

    Zikmund, T.; Novotná, M.; Kavková, M.; Tesařová, M.; Kaucká, M.; Szarowská, B.; Adameyko, I.; Hrubá, E.; Buchtová, M.; Dražanová, E.; Starčuk, Z.; Kaiser, J.

    2018-02-01

    Biomedically focused brain research is largely performed on laboratory mice, given the high homology between the human and mouse genomes. The brain has an intricate and highly complex geometrical structure that is hard to display and analyse using only 2D methods. Applying fast and efficient methods of brain visualization in 3D will be crucial for neurobiology in the future. A post-mortem analysis of experimental animals' brains usually involves techniques such as magnetic resonance imaging and computed tomography. These techniques are employed to visualize abnormalities in the brain's morphology or reparation processes. X-ray computed microtomography (micro CT) plays an important role in the 3D imaging of internal structures of a large variety of soft and hard tissues. This non-destructive technique is applied in biological studies because lab-based CT devices can reach a resolution of several micrometers. However, the technique is always used along with visualization methods that are based on tissue staining and thus differentiate soft tissues in biological samples. Here, a modified chemical contrasting protocol of tissues for micro CT usage is introduced as the best tool for ex vivo 3D imaging of a post-mortem mouse brain. This way, micro CT provides a high spatial resolution of the brain's microscopic anatomy together with a high tissue-differentiation contrast, making it possible to identify more anatomical details in the brain. As micro CT allows a consequent reconstruction of the brain structures into a coherent 3D model, small morphological changes can be placed in the context of their mutual spatial relationships.

  7. 3D measurements in conventional X-ray imaging with RGB-D sensors.

    PubMed

    Albiol, Francisco; Corbi, Alberto; Albiol, Alberto

    2017-04-01

    A method for deriving 3D internal information in conventional X-ray settings is presented. It is based on the combination of a pair of radiographs from a patient and it avoids the use of X-ray-opaque fiducials and external reference structures. To achieve this goal, we augment an ordinary X-ray device with a consumer RGB-D camera. The patient's rotation around the craniocaudal axis is tracked relative to this camera thanks to the depth information provided and the application of a modern surface-mapping algorithm. The measured spatial information is then translated to the reference frame of the X-ray imaging system. By using the intrinsic parameters of the diagnostic equipment, epipolar geometry, and X-ray images of the patient at different angles, 3D internal positions can be obtained. Both the RGB-D and X-ray instruments are first geometrically calibrated to find their joint spatial transformation. The proposed method is applied to three rotating phantoms. The first two consist of an anthropomorphic head and a torso, which are filled with spherical lead bearings at precise locations. The third one is made of simple foam and has metal needles of several known lengths embedded in it. The results show that it is possible to resolve anatomical positions and lengths with a millimetric level of precision. With the proposed approach, internal 3D reconstructed coordinates and distances can be provided to the physician. It also contributes to reducing the invasiveness of ordinary X-ray environments and can replace other types of clinical explorations that are mainly aimed at measuring or geometrically relating elements that are present inside the patient's body. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
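    The two-view reconstruction step described above can be sketched with linear (DLT) triangulation from two calibrated projections. The intrinsics, the 30-degree craniocaudal rotation, and the test point below are illustrative assumptions, not the paper's calibration values:

```python
import numpy as np

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two pixel observations."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                     # null vector of A, in homogeneous coords
    return X[:3] / X[3]

K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])  # assumed intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])

# Second view: the patient rotated 30 degrees about the craniocaudal (y) axis.
th = np.radians(30)
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
P2 = K @ np.hstack([R, np.array([[0.0], [0.0], [50.0]])])

def project(P, X): x = P @ np.append(X, 1.0); return x[:2] / x[2]

X_true = np.array([10.0, -20.0, 500.0])   # a lead-bearing-like landmark
X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free projections the landmark is recovered exactly; in practice the RGB-D tracking supplies the rotation R that relates the two radiographs.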

  8. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
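    Once FPFH descriptors have proposed correspondences, each Iterative Closest Point iteration reduces to estimating the rigid transform that best aligns matched points, which has a closed-form Kabsch/SVD solution. A NumPy sketch with synthetic stand-ins for the RGBD and CBCT surfaces (the descriptor matching and iteration loop are omitted):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

rng = np.random.default_rng(1)
surface = rng.normal(size=(200, 3))        # stand-in for the RGBD surface points

# Stand-in for the CBCT surface: the same points under a known rigid motion.
th = np.radians(20)
R_true = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
t_true = np.array([0.1, -0.3, 0.7])
cbct = surface @ R_true.T + t_true

R_est, t_est = rigid_align(surface, cbct)  # recovers (R_true, t_true)
```

In the real pipeline the correspondences are imperfect, so this solve is repeated inside ICP until the target registration error stops improving.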

  9. An instrument for 3D x-ray nano-imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holler, M.; Raabe, J.; Diaz, A.

    We present an instrument dedicated to 3D scanning x-ray microscopy, allowing a sample to be precisely scanned through a beam while the angle of x-ray incidence can be changed. The position of the sample is controlled with respect to the beam-defining optics by laser interferometry. The instrument achieves a position stability better than 10 nm standard deviation. The instrument performance is assessed using scanning x-ray diffraction microscopy, and we demonstrate a resolution of 18 nm in 2D imaging of a lithographic test pattern while the beam was defined by a pinhole of 3 μm in diameter. In 3D, on a test object of copper interconnects of a microprocessor, a resolution of 53 nm is achieved.

  10. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    EPA Science Inventory

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  11. Quantitative visualization of synchronized insulin secretion from 3D-cultured cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Takahiro; Kanamori, Takao; Inouye, Satoshi

    Quantitative visualization of synchronized insulin secretion was performed in an isolated rat pancreatic islet and in a spheroid of a rat pancreatic beta cell line using a method of video-rate bioluminescence imaging. Video-rate images of insulin secretion from 3D-cultured cells were obtained by expressing a fusion protein of insulin and Gaussia luciferase (Insulin-GLase). A subclonal rat INS-1E cell line stably expressing Insulin-GLase, named iGL, was established, and a cluster of iGL cells showed oscillatory insulin secretion that was completely synchronized in response to high glucose. Furthermore, we demonstrated the effect of an antidiabetic drug, glibenclamide, on synchronized insulin secretion from 2D- and 3D-cultured iGL cells. The amount of secreted Insulin-GLase from iGL cells was also determined by a luminometer. Thus, our bioluminescence imaging method could generally be used for investigating protein secretion from living 3D-cultured cells. In addition, the iGL cell line would be valuable for evaluating antidiabetic drugs. - Highlights: • An imaging method for protein secretion from 3D-cultured cells was established. • The fusion protein of insulin and GLase, Insulin-GLase, was used as a reporter. • Synchronous insulin secretion was visualized in rat islets and spheroidal beta cells. • A rat beta cell line stably expressing Insulin-GLase, named iGL, was established. • The effect of an antidiabetic drug on insulin secretion was visualized in iGL cells.

  12. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    This paper presents HyFinBall, a two-handed, hybrid 2D/3D desktop VR interface for visualization. It describes the user interface (hardware and software) and the design space, as well as preliminary results of a formal user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations. Keywords: virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture.

  13. Multi-modality 3D breast imaging with X-Ray tomosynthesis and automated ultrasound.

    PubMed

    Sinha, Sumedha P; Roubidoux, Marilyn A; Helvie, Mark A; Nees, Alexis V; Goodsitt, Mitchell M; LeCarpentier, Gerald L; Fowlkes, J Brian; Chalek, Carl L; Carson, Paul L

    2007-01-01

    This study evaluated the utility of 3D automated ultrasound in conjunction with 3D digital X-ray tomosynthesis for breast cancer detection and assessment, to better localize and characterize lesions in the breast. Tomosynthesis image volumes and automated ultrasound image volumes were acquired in the same geometry and in the same view for 27 patients. Three MQSA-certified radiologists independently reviewed the image volumes, visually correlating the images from the two modalities with in-house software. More sophisticated software was used on a smaller set of 10 cases, enabling the radiologist to draw a 3D box around the suspicious lesion in one image set and isolate an anatomically correlated, similarly boxed region in the other modality's image set. In the primary study, correlation was found to be moderately useful to the readers. In the additional study, using the improved software, the median usefulness rating increased, and confidence in localizing and identifying the suspicious mass increased in more than half the cases. As automated scanning and reading software techniques advance, superior results are expected.

  14. SpreaD3: Interactive Visualization of Spatiotemporal History and Trait Evolutionary Processes.

    PubMed

    Bielejec, Filip; Baele, Guy; Vrancken, Bram; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe

    2016-08-01

    Model-based phylogenetic reconstructions increasingly consider spatial or phenotypic traits in conjunction with sequence data to study evolutionary processes. Alongside parameter estimation, visualization of ancestral reconstructions represents an integral part of these analyses. Here, we present a complete overhaul of the spatial phylogenetic reconstruction of evolutionary dynamics software, now called SpreaD3 to emphasize the use of data-driven documents, as an analysis and visualization package that primarily complements Bayesian inference in BEAST (http://beast.bio.ed.ac.uk, last accessed 9 May 2016). The integration of JavaScript D3 libraries (www.d3.org, last accessed 9 May 2016) offers novel interactive web-based visualization capacities that are not restricted to spatial traits and extend to any discrete or continuously valued trait for any organism of interest. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  15. Towards a gestural 3D interaction for tangible and three-dimensional GIS visualizations

    NASA Astrophysics Data System (ADS)

    Partsinevelos, Panagiotis; Agadakos, Ioannis; Pattakos, Nikolas; Maragakis, Michail

    2014-05-01

    The last decade has been characterized by a significant increase of spatially dependent applications that require storage, visualization, analysis and exploration of geographic information. GIS analysis of spatiotemporal geographic data is operated by highly trained personnel under an abundance of software and tools, lacking interoperability and friendly user interaction. Towards this end, new forms of querying and interaction are emerging, including gestural interfaces. Three-dimensional GIS representations refer to either tangible surfaces or projected representations. Making a 3D tangible geographic representation touch-sensitive may be a convenient solution, but such an approach raises the cost significantly and complicates the hardware and processing required to combine touch-sensitive material (for pinpointing points) with deformable material (for displaying elevations). In this study, a novel interaction scheme upon a three dimensional visualization of GIS data is proposed. While gesture user interfaces are not yet fully acceptable due to inconsistencies and complexity, a non-tangible GIS system where 3D visualizations are projected, calls for interactions that are based on three-dimensional, non-contact and gestural procedures. Towards these objectives, we use the Microsoft Kinect II system which includes a time of flight camera, allowing for a robust and real time depth map generation, along with the capturing and translation of a variety of predefined gestures from different simultaneous users. By incorporating these features into our system architecture, we attempt to create a natural way for users to operate on GIS data. Apart from the conventional pan and zoom features, the key functions addressed for the 3-D user interface is the ability to pinpoint particular points, lines and areas of interest, such as destinations, waypoints, landmarks, closed areas, etc. 
The first results shown concern a projected GIS representation where the user selects points

  16. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.

  17. 3D Immersive Visualization: An Educational Tool in Geosciences

    NASA Astrophysics Data System (ADS)

    Pérez-Campos, N.; Cárdenas-Soto, M.; Juárez-Casas, M.; Castrejón-Pineda, R.

    2007-05-01

3D immersive visualization is an innovative tool currently used in various disciplines, such as medicine, architecture, engineering, video games, etc. Recently, the Universidad Nacional Autónoma de México (UNAM) mounted a visualization theater (Ixtli) with leading-edge technology, for academic and research purposes that require immersive 3D tools for a better understanding of the concepts involved. The Division of Engineering in Earth Sciences of the School of Engineering, UNAM, is running a project focused on visualization of geoscience data. Its objective is to incorporate educational material into geoscience courses in order to support and improve the teaching-learning process, especially in topics well known to be difficult for students. As part of the project, professors and students are trained in visualization techniques; their data are then adapted and visualized in Ixtli as part of a class or a seminar, where all the attendants can interact, not only among each other but also with the object under study. As part of our results, we present specific examples used in basic geophysics courses, such as interpreted seismic cubes, seismic-wave propagation models, and structural models from bathymetric, gravimetric and seismological data; as well as examples from ongoing applied projects, such as a modeled SH upward wave, the occurrence of an earthquake cluster in 1999 in the Popocatepetl volcano, and a risk atlas from Delegación Alvaro Obregón in Mexico City. All these examples, plus those to come, constitute a library for students and professors willing to explore another dimension of the teaching-learning process. Furthermore, this experience can be enhanced by rich discussions and interactions through videoconferences with other universities and researchers.

  18. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  19. Toward the establishment of design guidelines for effective 3D perspective interfaces

    NASA Astrophysics Data System (ADS)

    Fitzhugh, Elisabeth; Dixon, Sharon; Aleva, Denise; Smith, Eric; Ghrayeb, Joseph; Douglas, Lisa

    2009-05-01

    The propagation of information operation technologies, with correspondingly vast amounts of complex network information to be conveyed, significantly impacts operator workload. Information management research is rife with efforts to develop schemes to aid operators to identify, review, organize, and retrieve the wealth of available data. Data may take on such distinct forms as intelligence libraries, logistics databases, operational environment models, or network topologies. Increased use of taxonomies and semantic technologies opens opportunities to employ network visualization as a display mechanism for diverse information aggregations. The broad applicability of network visualizations is still being tested, but in current usage, the complexity of densely populated abstract networks suggests the potential utility of 3D. Employment of 2.5D in network visualization, using classic perceptual cues, creates a 3D experience within a 2D medium. It is anticipated that use of 3D perspective (2.5D) will enhance user ability to visually inspect large, complex, multidimensional networks. Current research for 2.5D visualizations demonstrates that display attributes, including color, shape, size, lighting, atmospheric effects, and shadows, significantly impact operator experience. However, guidelines for utilization of attributes in display design are limited. This paper discusses pilot experimentation intended to identify potential problem areas arising from these cues and determine how best to optimize perceptual cue settings. Development of optimized design guidelines will ensure that future experiments, comparing network displays with other visualizations, are not confounded or impeded by suboptimal attribute characterization. Current experimentation is anticipated to support development of cost-effective, visually effective methods to implement 3D in military applications.

  20. Approximation of a foreign object using x-rays, reference photographs and 3D reconstruction techniques.

    PubMed

    Briggs, Matt; Shanmugam, Mohan

    2013-12-01

This case study describes how a 3D animation was created to approximate the depth and angle of a foreign object (metal bar) that had become embedded in a patient's head. A pre-operative CT scan was not available as the patient could not fit through the CT scanner; therefore, a post-surgical CT scan, x-ray and photographic images were used. A surface render was made of the skull and imported into Blender (a 3D animation application). The metal bar was not available; however, images of a similar object that was retrieved from the scene by the ambulance crew were used to recreate a 3D model. The x-ray images were then imported into Blender and used as background images in order to align the skull reconstruction and metal bar at the correct depth/angle. A 3D animation was then created to fully illustrate the angle and depth of the iron bar in the skull.

  1. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    NASA Astrophysics Data System (ADS)

    Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.

    1997-05-01

We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates the visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlapped with a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other structures of the brain. The two images are fused, after adjustment of the contrast and brightness levels of each image, in such a way that both the vasculature and brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
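The maximum-value fusion rule the authors mention can be sketched in a few lines: project the vasculature with a MIP, then take the per-pixel maximum against the rendered organ view. The function names and toy volume below are illustrative, not from the paper:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis."""
    return volume.max(axis=axis)

def fuse_max(mip_img, rendered_img):
    """Fuse two grayscale images by per-pixel maximum, so bright
    vessels from the MIP stay visible over the rendered organ."""
    return np.maximum(mip_img, rendered_img)

# Toy volume: one bright "vessel" voxel inside darker "tissue".
vol = np.full((4, 4, 4), 0.2)
vol[2, 1, 3] = 1.0
angio = mip(vol, axis=0)           # 2D MIP of the vasculature
organ = np.full((4, 4), 0.5)       # stand-in for a volume-rendered view
fused = fuse_max(angio, organ)     # vessel pixel survives at 1.0
```

The paper's alternative fusion mode (a different color table per image) would replace `fuse_max` with a per-image colormap lookup before compositing.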

  2. Development of a 3-D Nuclear Event Visualization Program Using Unity

    NASA Astrophysics Data System (ADS)

    Kuhn, Victoria

    2017-09-01

Simulations have become increasingly important for science, and there is an increasing emphasis on visualizing them within a Virtual Reality (VR) environment. Our group is exploring this capability as a visualization tool not just for those curious about science, but also for educational purposes for K-12 students. Using data collected in 3-D by a Time Projection Chamber (TPC), we are able to visualize nuclear and cosmic events. The Unity game engine was used to recreate the TPC to visualize these events and construct a VR application. The methods used to create these simulations will be presented along with an example of a simulation. I will also present the development and testing of this program, which I carried out this past summer at MSU as part of an REU program. We used data from the SπRIT TPC, but the software can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.

  3. Visualization of anthropometric measures of workers in computer 3D modeling of work place.

    PubMed

    Mijović, B; Ujević, D; Baksa, S

    2001-12-01

In this work, 3D visualization of a work place has been performed by means of a computer-made 3D machine model and computer animation of a worker. By visualizing 3D characters in inverse kinematic and dynamic relation with the operating part of a machine, the biomechanical characteristics of the worker's body have been determined. The dimensions of the machine have been determined by inspection of technical documentation as well as by direct measurements and camera recordings of the machine. On the basis of the measured body heights of workers, all relevant anthropometric measures have been determined by a computer program developed by the authors. By knowing the anthropometric measures, the vision fields and the scope zones while forming work places, exact postures of workers performing technological procedures were determined. The minimal and maximal rotation angles and translations of the upper and lower arm, which are the basis for the analysis of worker load, were analyzed. The dimensions of the space occupied by the body are obtained by computer anthropometric analysis of movement, e.g. range of arms, position of legs, head and back. The influence of work-place design on correct worker posture during work has been examined, so that energy consumption and fatigue can be reduced to a minimum.

  4. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

We investigated and compared the functionality of two 3D visualization software packages provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) that was installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software from a third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  5. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are transferred into 3D versions regarding the specific content to be displayed. Virtual worlds (VW) have become a promising area of interest because of the possibility to dynamically modify content and to cooperate with multiple users when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within the specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  6. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

Recently, studies on the design of 3-D wind turbine blades have received less attention even though 3-D blade products are widely sold. By contrast, advanced studies of 3-D helicopter blade tips have been conducted rigorously. Wind turbine blade modeling studies mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, a modeling study of wind turbine blades with visualization experiments is needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep are added to the rotating blades. The additional sweep would enhance or diminish outward flow disturbance or stall development propagation on the spanwise blade surfaces, giving a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force of the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying the tuft-visualization technique to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the 3-dimensional blade system.

  7. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated whether a realistic 3D visualization of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices, thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total, in 12 out of 20 tasks the end-users of the 3D visualization group showed an enhanced upper alpha ERD relative to the 2D VM group, with statistical significance in nine tasks. With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI, and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  8. Recent Advances in Visualizing 3D Flow with LIC

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1998-01-01

    Line Integral Convolution (LIC), introduced by Cabral and Leedom in 1993, is an elegant and versatile technique for representing directional information via patterns of correlation in a texture. Although most commonly used to depict 2D flow, or flow over a surface in 3D, LIC methods can equivalently be used to portray 3D flow through a volume. However, the popularity of LIC as a device for illustrating 3D flow has historically been limited both by the computational expense of generating and rendering such a 3D texture and by the difficulties inherent in clearly and effectively conveying the directional information embodied in the volumetric output textures that are produced. In an earlier paper, we briefly discussed some of the factors that may underlie the perceptual difficulties that we can encounter with dense 3D displays and outlined several strategies for more effectively visualizing 3D flow with volume LIC. In this article, we review in more detail techniques for selectively emphasizing critical regions of interest in a flow and for facilitating the accurate perception of the 3D depth and orientation of overlapping streamlines, and we demonstrate new methods for efficiently incorporating an indication of orientation into a flow representation and for conveying additional information about related scalar quantities such as temperature or vorticity over a flow via subtle, continuous line width and color variations.
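Cabral and Leedom's core idea reduces to averaging a noise texture along short streamlines of the vector field, so that intensity becomes correlated along the flow direction. The following is a deliberately naive 2D sketch (nearest-neighbour stepping, uniform kernel, center sample counted once per direction); production LIC interpolates sub-pixel positions and normalizes the kernel more carefully, and the 3D volume case discussed in the article extends the same loop to a third axis:

```python
import numpy as np

def lic(noise, vx, vy, length=5):
    """Minimal Line Integral Convolution: average the noise texture
    along streamlines of (vx, vy), traced both ways from each pixel."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, n = 0.0, 0
            for direction in (1.0, -1.0):
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    acc += noise[i, j]
                    n += 1
                    mag = np.hypot(vx[i, j], vy[i, j]) or 1.0
                    px += direction * vx[i, j] / mag   # unit step along flow
                    py += direction * vy[i, j] / mag
            out[y, x] = acc / n
    return out

# Uniform horizontal flow: correlation should appear along rows.
rng = np.random.default_rng(0)
noise = rng.random((8, 8))
vx, vy = np.ones((8, 8)), np.zeros((8, 8))
img = lic(noise, vx, vy)
```

With this field, neighbouring pixels in a row share most of their streamline samples, so horizontal differences shrink while vertical ones stay noisy: exactly the directional correlation LIC is meant to produce.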

  9. Quantitative Visualization of Salt Concentration Distributions in Lithium-Ion Battery Electrolytes during Battery Operation Using X-ray Phase Imaging.

    PubMed

    Takamatsu, Daiko; Yoneyama, Akio; Asari, Yusuke; Hirano, Tatsumi

    2018-02-07

A fundamental understanding of salt concentrations in lithium-ion battery electrolytes during battery operation is important for optimal operation and design of lithium-ion batteries. However, there are few techniques that can be used to quantitatively characterize salt concentration distributions in the electrolytes during battery operation. In this paper, we demonstrate that in operando X-ray phase imaging can quantitatively visualize the salt concentration distributions that arise in electrolytes during battery operation. From quantitative evaluation of the concentration distributions at steady states, we obtained the salt diffusivities in electrolytes with different initial salt concentrations. Because it imposes no restrictions on samples and offers high temporal and spatial resolution, X-ray phase imaging will be a versatile technique for evaluating electrolytes, both aqueous and nonaqueous, in many electrochemical systems.

  10. Near-thermal reactions of Au(+)(1S,3D) with CH3X (X = F,Cl).

    PubMed

    Taylor, William S; Matthews, Cullen C; Hicks, Ashley J; Fancher, Kendall G; Chen, Li Chen

    2012-01-26

    Reactions of Au(+)((1)S) and Au(+)((3)D) with CH(3)F and CH(3)Cl have been carried out in a drift cell in He at a pressure of 3.5 Torr at both room temperature and reduced temperatures in order to explore the influence of the electronic state of the metal on reaction outcomes. State-specific product channels and overall two-body rate constants were identified using electronic state chromatography. These results indicate that Au(+)((1)S) reacts to yield an association product in addition to AuCH(2)(+) in parallel steps with both neutrals. Product distributions for association vs HX elimination were determined to be 79% association/21% HX elimination for X = F and 50% association/50% HX elimination when X = Cl. Reaction of Au(+)((3)D) with CH(3)F also results in HF elimination, which in this case is thought to produce (3)AuCH(2)(+). With CH(3)Cl, Au(+)((3)D) reacts to form AuCH(3)(+) and CH(3)Cl(+) in parallel steps. An additional product channel initiated by Au(+)((3)D) is also observed with both methyl halides, which yields CH(2)X(+) as a higher-order product. Kinetic measurements indicate that the reaction efficiency for both Au(+) states is significantly greater with CH(3)Cl than with CH(3)F. The observed two-body rate constant for depletion of Au(+)((1)S) by CH(3)F represents less than 5% of the limiting rate constant predicted by the average dipole orientation model (ADO) at room temperature and 226 K, whereas CH(3)Cl reacts with Au(+)((1)S) at the ADO limit at both room temperature and 218 K. Rate constants for depletion of Au(+)((3)D) by CH(3)F and CH(3)Cl were measured at 226 and 218 K respectively, and indicate that Au(+)((3)D) is consumed at approximately 2% of the ADO limit by CH(3)F and 69% of the ADO limit by CH(3)Cl. Product formation and overall efficiency for all four reactions are consistent with previous experimental results and available theoretical models.

  11. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an impressive navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial mark-up languages (e.g. KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey on them the sensation of any type of texture by utilizing open 3D representation models (e.g. Collada). One step further, by employing WebGL frameworks (e.g. Cesium.js, three.js), animation and motion effects can be attributed to 3D models. However, major GIS-based functionalities combined with all the above-mentioned visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g. sea waves) and motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g. the motion of an animal over a hill, or of a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Toward this end, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
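One building block of the motion effects described above is interpolating a model's position along a georeferenced polyline by arc length, so the object moves at constant speed regardless of how waypoints are spaced. This sketch is generic (the function name and waypoints are illustrative, not from the prototype, which works in JavaScript/WebGL):

```python
import numpy as np

def position_along_path(waypoints, s):
    """Position at arc-length fraction s in [0, 1] along a polyline of
    georeferenced (x, y, z) waypoints; driving a model's transform with
    this value each frame yields steady motion along the terrain path."""
    pts = np.asarray(waypoints, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)   # segment lengths
    cum = np.concatenate([[0.0], np.cumsum(seg)])        # cumulative length
    d = s * cum[-1]                                      # target distance
    i = min(int(np.searchsorted(cum, d, side="right") - 1), len(seg) - 1)
    t = (d - cum[i]) / seg[i] if seg[i] else 0.0
    return pts[i] + t * (pts[i + 1] - pts[i])            # lerp within segment

# A short path that turns and climbs a slope.
path = [(0, 0, 0), (10, 0, 0), (10, 10, 5)]
start = position_along_path(path, 0.0)
end = position_along_path(path, 1.0)
```

An animation loop would simply advance `s` with elapsed time and re-apply the returned position to the 3D model's transform.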

  12. MRI segmentation by active contours model, 3D reconstruction, and visualization

    NASA Astrophysics Data System (ADS)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

Advances in 3D data modelling methods are making them increasingly popular in biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and its use has spread to many applications throughout the body in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours extracted from cross-sectional slices by an active contour model, and we propose visualization with OpenGL 3D graphics of the 2D-3D (slice-surface) information as a diagnostic aid in medical applications.
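For reference, the radial part of the Zernike basis used for such contour/surface fitting has a standard closed form. This helper is a generic textbook implementation, not the authors' code:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) for 0 <= rho <= 1.
    Vanishes identically when n - |m| is odd, by definition."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )

# R_2^0(rho) = 2*rho**2 - 1, the defocus term.
value = zernike_radial(2, 0, 0.5)   # 2*(0.25) - 1 = -0.5
```

A contour is then encoded by projecting its radius-vs-angle profile onto these polynomials (times the usual angular sine/cosine factors) and keeping the leading coefficients.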

  13. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle
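At the heart of such a display is re-posing the articulated model whenever new joint states arrive in telemetry. A minimal planar forward-kinematics sketch conveys the idea; the actual software handles full 3D transforms and arbitrary kinematic trees, and the link lengths and angles here are toy values:

```python
import numpy as np

def forward_kinematics(link_lengths, joint_angles):
    """Chain 2D rotations and translations to place each joint of a
    planar articulated arm; a telemetry-driven display would call this
    (in 3D) every time the avionics reports new joint angles."""
    points = [(0.0, 0.0)]      # base of the arm
    x = y = theta = 0.0
    for L, a in zip(link_lengths, joint_angles):
        theta += a             # accumulate joint rotation
        x += L * np.cos(theta)
        y += L * np.sin(theta)
        points.append((x, y))  # world position of this joint/link end
    return points

# Two-link arm with both joints at 90 degrees: tip folds back to (-1, 1).
pose = forward_kinematics([1.0, 1.0], [np.pi / 2, np.pi / 2])
```

Rendering then amounts to drawing the link geometry between consecutive points and coloring links by any telemetered stress/strain values, as the abstract describes.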

  14. 3D X-ray ultra-microscopy of bone tissue.

    PubMed

    Langer, M; Peyrin, F

    2016-02-01

We review the current X-ray techniques with 3D imaging capability at the nano-scale: transmission X-ray microscopy, ptychography and in-line phase nano-tomography. We further review the different ultra-structural features that have so far been resolved: the lacuno-canalicular network, collagen orientation, nano-scale mineralization and their use as a basis for mechanical simulations. X-ray computed tomography at the micro-metric scale is increasingly considered the reference technique in imaging of bone micro-structure. The trend has been to push toward increasingly higher resolution. Due to the difficulty of realizing optics in the hard X-ray regime, the magnification has mainly been achieved with visible light optics and indirect detection of the X-rays, which limits the attainable resolution to the wavelength of the visible light used in detection. Recent developments in X-ray optics and instrumentation have made it possible to implement several types of methods that achieve imaging limited in resolution by the X-ray wavelength, thus enabling computed tomography at the nano-scale. We review here the X-ray techniques with 3D imaging capability at the nano-scale: transmission X-ray microscopy, ptychography and in-line phase nano-tomography. Further, we review the different ultra-structural features that have so far been resolved and the applications that have been reported: imaging of the lacuno-canalicular network, direct analysis of collagen orientation, analysis of mineralization on the nano-scale and use of 3D images at the nano-scale to drive mechanical simulations. Finally, we discuss the issue of going beyond qualitative description to quantification of ultra-structural features.

  15. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Data Analysis and Visualization; International Research Training Group ``Visualization of Large and Unstructured Data Sets,'' University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA

    2008-05-12

The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
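The clustering step at the core of such a framework can be approximated by plain Lloyd's k-means over per-cell expression vectors. This sketch seeds the centers with the first k rows and uses toy data; the framework's actual procedure is user-guided, includes selection of k, and applies visualization-driven post-processing:

```python
import numpy as np

def kmeans(X, k, iters=10):
    """Plain Lloyd's k-means. Centers are seeded with the first k rows,
    which is fine for a sketch; real use would randomize and restart."""
    centers = X[:k].astype(float).copy()
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy per-cell "expression vectors" forming two well-separated groups.
X = np.array([[0.0, 0.1], [5.0, 5.1], [0.1, 0.0], [5.1, 5.0]])
labels, centers = kmeans(X, k=2)
```

In the framework described above, the resulting labels would then be mapped back onto the 3D embryo geometry so cluster boundaries can be inspected, and refined, visually.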

  16. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.

    PubMed

    Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young

    2016-04-01

    In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequent use of fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for the surgical instruments and the target position. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This allows easy augmentation of the camera image and the X-ray image, and the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides the 3D coordinates of the point of interest in both optical and fluoroscopic images, surgeons can use it to confirm the position of surgical instruments in 3D space with minimal radiation exposure and to verify whether the instruments have reached the surgical target observed in the fluoroscopic images.

  17. Immersive 3D Visualization of Astronomical Data

    NASA Astrophysics Data System (ADS)

    Schaaff, A.; Berthier, J.; Da Rocha, J.; Deparis, N.; Derriere, S.; Gaultier, P.; Houpin, R.; Normand, J.; Ocvirk, P.

    2015-09-01

    Immersive 3D visualization, or Virtual Reality in our study, was previously dedicated to specific uses (research, flight simulators, etc.), and the required investment in infrastructure restricted it to large laboratories or companies. Lately we have seen the development of immersive 3D headsets intended for wide distribution, for example the Oculus Rift and the Sony Morpheus projects. The usual reaction is to say that these tools are primarily intended for games, since it is easy to imagine a player in a virtual environment and the added value over conventional 2D screens. Yet it is likely that there are many applications in the professional field as these tools become common. Introducing this technology into existing applications or new developments makes sense only if its interest is properly evaluated. The use in astronomy is clear for education: it is easy to imagine mobile, lightweight planetariums, or the reproduction of poorly accessible environments (e.g., large instruments). In contrast, in the field of professional astronomy the use is probably less obvious, and studies are required to determine the most appropriate applications and to assess the contributions compared to other display modes.

  18. 3D Building Evacuation Route Modelling and Visualization

    NASA Astrophysics Data System (ADS)

    Chan, W.; Armenakis, C.

    2014-11-01

    The most common building evacuation approach currently applied is to plan evacuation routes prior to emergency events. These routes are usually the shortest and most practical path from each building room to the closest exit. The problem with this approach is that it is not adaptive: it is not responsively configurable relative to the type, intensity, or location of the emergency risk. Moreover, it provides no information to the affected persons or to the emergency responders, and it does not allow the review of simulated hazard scenarios and alternative evacuation routes. In this paper we address two main tasks. The first is the modelling of the spatial risk caused by a hazardous event, leading to the choice of the optimal evacuation route from a set of options. The second is to generate a 3D visual representation of the model output. A multicriteria decision making (MCDM) approach is used to model the risk, aiming at finding the optimal evacuation route. This is achieved by applying the analytical hierarchy process (AHP) to the criteria describing the different alternative evacuation routes. The best route is then chosen as the alternative with the least cost. The 3D visual representation of the model displays the building, the surrounding environment, the evacuee's location, the hazard location, the risk areas and the optimal evacuation pathway to the target safety location. The work has been performed using ESRI's ArcGIS. Using the developed models, the user can input the location of the hazard and the location of the evacuee. The system then determines the optimum evacuation route and displays it in 3D.
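As a rough illustration of how AHP can rank alternative routes, the sketch below derives criteria weights from a pairwise comparison matrix (power iteration toward the principal eigenvector, the standard AHP prioritization step) and then scores each route as a weighted cost. The criteria names, matrix entries, and route costs are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical criteria for comparing evacuation routes.
criteria = ["distance", "hazard_proximity", "congestion"]

# pairwise[i, j] = how much more important criterion i is than j
# (Saaty 1-9 scale; reciprocal matrix). Values are illustrative.
pairwise = np.array([
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
])

def ahp_weights(A, iters=100):
    """Principal eigenvector of A by power iteration, normalised to sum 1."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

w = ahp_weights(pairwise)   # hazard_proximity dominates in this example

# Normalised per-criterion costs (0 = best, 1 = worst) for three routes.
routes = np.array([
    [0.2, 0.9, 0.3],   # short, but passes near the hazard
    [0.6, 0.1, 0.4],   # longer, avoids the hazard
    [0.5, 0.5, 0.9],   # moderate, but congested
])
scores = routes @ w         # lower weighted cost = better route
best = int(scores.argmin())
print(best)
```

With these illustrative numbers the hazard-avoiding route wins because the comparison matrix makes hazard proximity the dominant criterion; changing the matrix (e.g. after the user inputs a new hazard location) changes the ranking, which is what makes the approach adaptive.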

  19. A 3D particle visualization system for temperature management

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rodriguez, N.; Puech, W.; Rey, H.; Vasques, X.

    2011-01-01

    This paper deals with a 3D visualization technique proposed to analyze and manage the energy efficiency of a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure different quantities such as hygrometry, pressure and temperature. We want to visualize in real time the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. In order to solve performance problems, a level-of-detail solution has been developed, based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work and subsequently explain the different simplification methods applied to improve our solution.
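A distance-based level-of-detail rule in the spirit of Clark (1976) can be sketched generically: particles far from the viewpoint are kept with lower probability, so rendering load falls where detail is imperceptible. The thresholds and keep-fractions below are illustrative assumptions, not the authors' actual method.

```python
import numpy as np

def lod_fraction(distance, near=5.0, far=50.0):
    """Fraction of particles to keep: 1.0 up close, tapering to 0.1 far away.
    The near/far bounds and the 0.1 floor are illustrative parameters."""
    t = np.clip((distance - near) / (far - near), 0.0, 1.0)
    return 1.0 - 0.9 * t

def decimate(particles, camera, seed=0):
    """Randomly drop particles with probability growing with camera distance."""
    r = np.random.default_rng(seed)
    d = np.linalg.norm(particles - camera, axis=1)
    keep = r.random(len(particles)) < lod_fraction(d)
    return particles[keep]

# Synthetic sensor-particle cloud filling a 100 m cube.
cloud = np.random.default_rng(4).uniform(0, 100, size=(10000, 3))
camera = np.array([0.0, 0.0, 0.0])
reduced = decimate(cloud, camera)
print(len(reduced), "of", len(cloud), "particles kept")
```

A real engine would decimate per spatial cell rather than per particle and refresh the selection as the camera moves, but the cost/benefit trade-off is the same.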

  20. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D-camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-GPU (graphics processing unit) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605

  1. Web-based interactive 3D visualization as a tool for improved anatomy learning.

    PubMed

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain from its use in reaching their anatomical learning objectives. Several 3D vascular VR models were created using an interactive segmentation tool based on the "virtual contrast injection" method. This method allows users, with relative ease, to convert computed tomography or magnetic resonance images into vivid 3D VR movies using the OsiriX software equipped with the CMIV CTA plug-in. Once created using the segmentation tool, the image series were exported in QuickTime Virtual Reality (QTVR) format and integrated within a web framework of the Educational Virtual Anatomy (EVA) program. A total of nine QTVR movies were produced, encompassing most of the major arteries of the body. These movies were supplemented with associated information, color keys, and notes. The results indicate that, in general, students' attitudes towards the EVA program were positive when it was compared with anatomy textbooks, but not when it was compared with dissections. Additionally, knowledge tests suggest a potentially beneficial effect on learning.

  2. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  3. 3D-PTV around Operational Wind Turbines

    NASA Astrophysics Data System (ADS)

    Brownstein, Ian; Dabiri, John

    2016-11-01

    Laboratory studies and numerical simulations of wind turbines are typically constrained in how they can inform operational turbine behavior. Laboratory experiments are usually unable to match both pertinent parameters of full-scale wind turbines, the Reynolds number (Re) and the tip speed ratio, using scaled-down models. Additionally, numerical simulations of the flow around wind turbines are constrained by the large domain size and high Re that need to be simulated. When these simulations are performed, turbine geometry is typically simplified, with the result that flow structures near the rotor are not well resolved. In order to bypass these limitations, a quantitative flow visualization method was developed to take in situ measurements of the flow around wind turbines at the Field Laboratory for Optimized Wind Energy (FLOWE) in Lancaster, CA. The apparatus constructed was able to seed an approximately 9 m x 9 m x 5 m volume in the wake of the turbine using artificial snow. Quantitative measurements were obtained by tracking the evolution of the artificial snow using a four-camera setup. The methodology for calibrating and collecting data, as well as preliminary results detailing the flow around a 2 kW vertical-axis wind turbine (VAWT), will be presented.

  4. X-Ray Nanofocus CT: Visualising Of Internal 3D-Structures With Submicrometer Resolution

    NASA Astrophysics Data System (ADS)

    Weinekoetter, Christian

    2008-09-01

    High-resolution X-ray Computed Tomography (CT) allows the visualization and failure analysis of the internal micro structure of objects, even if they have complicated 3D structures where 2D X-ray microscopy would give unclear information. During the past several years, computed tomography has progressed to higher resolution and quicker reconstruction of the 3D volume. Most recently it even allows a three-dimensional look into the inside of materials with submicron resolution. With the use of nanofocus® tube technology, nanoCT®-systems are pushing forward into application fields that were previously exclusive to costly and rarely available synchrotron techniques. The study was performed with the new nanotom®, a very compact laboratory system which allows the analysis of samples up to 120 mm in diameter and weighing up to 1 kg with exceptional voxel resolution down to <500 nm (<0.5 microns). It is the first 180 kV nanofocus® computed tomography system in the world tailored specifically to the highest-resolution applications in the fields of material science, microelectronics, geology and biology. It is therefore particularly suitable for nanoCT examinations of, e.g., synthetic materials, metals, ceramics, composite materials, mineral and organic samples. There are a few physical effects influencing CT quality, such as beam hardening within the sample or ring artefacts, which cannot be completely avoided. To optimize the quality of high-resolution 3D volumes, the nanotom® includes a variety of effective software tools to reduce ring artefacts and correct beam hardening or drift effects which occurred during data acquisition. The resulting CT volume data set can be displayed in various ways, for example by virtual slicing and sectional views in any direction of the volume. Because this requires only a mouse click, this technique will substitute destructive mechanical slicing and cutting in many applications. The initial CT results obtained with the

  5. Effects of intra-operative fluoroscopic 3D-imaging on peri-operative imaging strategy in calcaneal fracture surgery.

    PubMed

    Beerekamp, M S H; Backes, M; Schep, N W L; Ubbink, D T; Luitse, J S; Schepers, T; Goslings, J C

    2017-12-01

    Previous studies demonstrated that intra-operative fluoroscopic 3D-imaging (3D-imaging) in calcaneal fracture surgery is promising to prevent revision surgery and save costs. However, these studies limited their focus to corrections performed after 3D-imaging, thereby neglecting corrections after intra-operative fluoroscopic 2D-imaging (2D-imaging). The aim of this study was to assess the effects of additional 3D-imaging on intra-operative corrections, peri-operative imaging used, and patient-relevant outcomes compared to 2D-imaging alone. In this before-after study, data of adult patients who underwent open reduction and internal fixation (ORIF) of a calcaneal fracture between 2000 and 2014 in our level-I Trauma center were collected. 3D-imaging (BV Pulsera with 3D-RX, Philips Healthcare, Best, The Netherlands) was available as of 2007 at the surgeons' discretion. Patient and fracture characteristics, peri-operative imaging, intra-operative corrections and patient-relevant outcomes were collected from the hospital databases. Patients in whom additional 3D-imaging was applied were compared to those undergoing 2D-imaging alone. A total of 231 patients were included of whom 107 (46%) were operated with the use of 3D-imaging. No significant differences were found in baseline characteristics. The median duration of surgery was significantly longer when using 3D-imaging (2:08 vs. 1:54 h; p = 0.002). Corrections after additional 3D-imaging were performed in 53% of the patients. However, significantly fewer corrections were made after 2D-imaging when 3D-imaging was available (Risk difference (RD) -15%; 95% Confidence interval (CI) -29 to -2). Peri-operative imaging, besides intra-operative 3D-imaging, and patient-relevant outcomes were similar between groups. Intra-operative 3D-imaging provides additional information resulting in additional corrections. Moreover, 3D-imaging probably changed the surgeons' attitude to rely more on 3D-imaging, hence a 15%-decrease of

  6. 3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges

    NASA Astrophysics Data System (ADS)

    Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.

    2017-11-01

    3D city modelling is increasingly popular and is becoming a valuable tool in managing big cities. Urban and energy planning, landscape and noise/sewage modelling, underground mapping and navigation are among the applications/fields which really depend on 3D modelling for their effective operation. Several research efforts and implementation projects have been carried out to provide the most reliable 3D data format for sharing and functionality, as well as visualization platforms and analysis. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (3D data sharing and visualization schema) is based on the CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration and analysis platform (the Unity3D game engine), as highlighted in this paper.

  7. Global SO(3) x SO(3) x U(1) symmetry of the Hubbard model on bipartite lattices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carmelo, J.M.P., E-mail: carmelo@fisica.uminho.p; Ostlund, Stellan; Sampaio, M.J.

    2010-08-15

    In this paper the global symmetry of the Hubbard model on a bipartite lattice is found to be larger than SO(4). The model is one of the most studied many-particle quantum problems, yet except in one dimension it has no exact solution, so that there remain many open questions about its properties. Symmetry plays an important role in physics and often can be used to extract useful information on unsolved non-perturbative quantum problems. Specifically, here it is found that for on-site interaction U ≠ 0 the local SU(2) x SU(2) x U(1) gauge symmetry of the Hubbard model on a bipartite lattice with N_a^D sites and vanishing transfer integral t = 0 can be lifted to a global [SU(2) x SU(2) x U(1)]/Z_2^2 = SO(3) x SO(3) x U(1) symmetry in the presence of the kinetic-energy hopping term of the Hamiltonian with t > 0. (Examples of a bipartite lattice are the D-dimensional cubic lattices of lattice constant a and edge length L = N_a a, for which D = 1, 2, 3, ... in the number N_a^D of sites.) The generator of the newly found hidden independent charge global U(1) symmetry, which is not related to the ordinary U(1) gauge subgroup of electromagnetism, is one half the rotated-electron number of singly occupied sites operator. Although the addition of chemical-potential and magnetic-field operator terms to the model Hamiltonian lowers its symmetry, such terms commute with it. Therefore, its 4^(N_a^D) energy eigenstates refer to representations of the newly found global [SU(2) x SU(2) x U(1)]/Z_2^2 = SO(3) x SO(3) x U(1) symmetry. Consistently, we find that for the Hubbard model on a bipartite lattice the number of independent representations of the group SO(3) x SO(3) x U(1) equals the Hilbert-space dimension 4^(N_a^D). It is confirmed elsewhere that the newly found symmetry has important physical consequences.

  8. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
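A minimal sketch of MST-based unsupervised clustering on 3D positions: build the tree with Prim's algorithm, then cut edges longer than a threshold to obtain groups. The synthetic two-group "catalog" and the cut threshold are assumptions for illustration; the authors' actual pipeline (including Blender rendering) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical stand-in for a 3D galaxy catalog: two spatial groups.
pts = np.vstack([rng.normal(0.0, 0.5, (30, 3)),
                 rng.normal(10.0, 0.5, (30, 3))])

def mst_edges(X):
    """Prim's algorithm: return MST edges as (i, j, length) tuples."""
    n = len(X)
    in_tree = np.zeros(n, bool); in_tree[0] = True
    dist = np.linalg.norm(X - X[0], axis=1)   # best distance to the tree
    link = np.zeros(n, int)                   # nearest tree vertex so far
    edges = []
    for _ in range(n - 1):
        j = int(np.where(in_tree, np.inf, dist).argmin())
        edges.append((int(link[j]), j, float(dist[j])))
        in_tree[j] = True
        d = np.linalg.norm(X - X[j], axis=1)
        closer = d < dist
        dist[closer] = d[closer]; link[closer] = j
    return edges

def cut_clusters(n, edges, max_len):
    """Union-find over edges shorter than max_len; returns group count."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]; a = parent[a]
        return a
    for i, j, length in edges:
        if length < max_len:
            parent[find(i)] = find(j)
    return len({find(i) for i in range(n)})

edges = mst_edges(pts)
n_groups = cut_clusters(len(pts), edges, max_len=3.0)
print(n_groups)
```

Cutting the single long bridging edge separates the two groups; on a real catalog the cut length (or a percentile of edge lengths) becomes the clustering parameter.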

  9. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    PubMed Central

    2010-01-01

    Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623
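The permutation-based validation described above (comparison against 1000 random relabelings) can be sketched generically: compute a statistic relating gene specificity to regulation direction, then compare it to its null distribution under shuffled labels. The synthetic data and the simple correlation statistic below are illustrative assumptions, not the study's actual network measure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_genes = 200
# specificity = number of diseases each gene is regulated in (1..9);
# regulation = +1 (up) or -1 (down). Data are synthesized so that
# less specific genes (many diseases) tend to be down-regulated,
# mimicking the relationship reported above.
specificity = rng.integers(1, 10, n_genes)
prob_down = specificity / 10.0
regulation = np.where(rng.random(n_genes) < prob_down, -1, 1)

def stat(spec, reg):
    """Pearson correlation between specificity and regulation direction."""
    return float(np.corrcoef(spec, reg)[0, 1])

observed = stat(specificity, regulation)

# Null distribution: 1000 random permutations of the regulation labels.
null = np.array([stat(specificity, rng.permutation(regulation))
                 for _ in range(1000)])
p = float(np.mean(np.abs(null) >= abs(observed)))
print(observed, p)
```

A strongly negative observed correlation that lies far outside the permuted null is the quantitative signature of the "non-specific genes are down-regulated" pattern.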

  10. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as Magnetic Resonance Imaging (MRI), Confocal microscopy, Electron Microscopy (EM) or Selective Plane Illumination Microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  11. McIDAS-V: Advanced Visualization for 3D Remote Sensing Data

    NASA Astrophysics Data System (ADS)

    Rink, T.; Achtor, T. H.

    2010-12-01

    McIDAS-V is a Java-based, open-source, freely available software package for analysis and visualization of geophysical data. Its advanced capabilities provide very interactive 4D displays, including 3D volumetric rendering and fast sub-manifold slicing, linked to an abstract mathematical data model with built-in metadata for units, coordinate system transforms and sampling topology. A Jython interface provides user-defined analysis and computation in terms of the internal data model. These powerful capabilities to integrate data, analysis and visualization are being applied to hyper-spectral sounding retrievals, e.g., AIRS and IASI, of moisture and cloud density to interrogate and analyze their 3D structure, as well as to validate them with instruments such as CALIPSO, CloudSat and MODIS. The object-oriented framework design allows for specialized extensions for novel displays and new sources of data. Community-defined CF conventions for gridded data are understood by the software, and such data can be immediately imported into the application. This presentation will show examples of how McIDAS-V is used in 3-dimensional data analysis, display and evaluation.

  12. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.

  13. 3D visualization of molecular structures in the MOGADOC database

    NASA Astrophysics Data System (ADS)

    Vogt, Natalja; Popov, Evgeny; Rudert, Rainer; Kramer, Rüdiger; Vogt, Jürgen

    2010-08-01

    The MOGADOC database (Molecular Gas-Phase Documentation) is a powerful tool to retrieve information about compounds which have been studied in the gas-phase by electron diffraction, microwave spectroscopy and molecular radio astronomy. Presently the database contains over 34,500 bibliographic references (from the beginning of each method) for about 10,000 inorganic, organic and organometallic compounds and structural data (bond lengths, bond angles, dihedral angles, etc.) for about 7800 compounds. Most of the implemented molecular structures are given in a three-dimensional (3D) presentation. To create or edit and visualize the 3D images of molecules, new tools (special editor and Java-based 3D applet) were developed. Molecular structures in internal coordinates were converted to those in Cartesian coordinates.

  14. Automatic transfer function generation for volume rendering of high-resolution x-ray 3D digital mammography images

    NASA Astrophysics Data System (ADS)

    Alyassin, Abdal M.

    2002-05-01

    3D digital mammography (3DDM) is a new technology that provides high-resolution X-ray breast tomographic data. As with any other tomographic medical imaging modality, viewing a stack of tomographic images may take time, especially if the images have a large matrix size, and it may be difficult to mentally construct 3D breast structures from the slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinders the usage of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise linear ramp transfer function. We have volume rendered several 3DDM data sets using this technique and visually compared the outcome with the result of a conventional automatic technique. The transfer function generated through the proposed technique provided superior VR images compared with the conventional technique. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
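The described heuristic (random voxel sampling, then ramp limits from the sample mean and standard deviation) can be sketched as follows. The synthetic volume, the sample count, and the width factor k are illustrative assumptions; the paper does not specify these values.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-in for a 3DDM volume: background grey levels plus a
# brighter embedded block (arbitrary units).
volume = rng.normal(100.0, 15.0, size=(64, 64, 64))
volume[20:40, 20:40, 20:40] += 120.0

def auto_window(vol, n_samples=5000, k=2.0, seed=0):
    """Estimate lower/upper limits of a linear ramp transfer function
    from the mean and standard deviation of random voxel samples.
    n_samples and k are assumed parameters, not from the paper."""
    r = np.random.default_rng(seed)
    flat = vol.ravel()
    samples = flat[r.integers(0, flat.size, n_samples)]
    mu, sigma = samples.mean(), samples.std()
    return mu - k * sigma, mu + k * sigma

def opacity(values, lo, hi):
    """Piecewise linear ramp: 0 below lo, 1 above hi."""
    return np.clip((values - lo) / (hi - lo), 0.0, 1.0)

lo, hi = auto_window(volume)
vals = opacity(np.array([lo - 1.0, (lo + hi) / 2.0, hi + 1.0]), lo, hi)
print(lo, hi, vals)
```

Because the limits come from sample statistics rather than the full histogram, repeated runs vary slightly; as the abstract notes, reproducibility improves with the number of samples at the cost of processing time.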

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become more and more popular in many fields. Although they can provide a better viewing experience, they cause additional problems, e.g., visual fatigue. Subjective or objective methods are usually used in discrete viewing processes to evaluate visual fatigue. However, little research combines subjective indicators with objective ones in an entirely continuous viewing process. In this paper, we propose a method to evaluate real-time visual fatigue both subjectively and objectively. Subjects watch stereo content on a polarized 3D display continuously. Visual Reaction Time (VRT), Critical Flicker Frequency (CFF), Punctum Maximum Accommodation (PMA) and subjective scores of visual fatigue are collected before and after viewing. During the viewing process, the subjects rate their visual fatigue whenever it changes, without breaking the viewing process. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject are recorded for comparison with previous research. The results show that the subjective visual fatigue and PERCLOS increase with time and that they are greater in a continuous viewing process than in a discrete one. The BF increased with time during the continuous viewing process. In addition, the visual fatigue also induced significant changes in VRT, CFF and PMA.

  17. 3D visualization of membrane failures in fuel cells

    NASA Astrophysics Data System (ADS)

    Singh, Yadvinder; Orfino, Francesco P.; Dutta, Monica; Kjeang, Erik

    2017-03-01

    Durability issues in fuel cells, due to chemical and mechanical degradation, are potential impediments in their commercialization. Hydrogen leak development across degraded fuel cell membranes is deemed a lifetime-limiting failure mode and potential safety issue that requires thorough characterization for devising effective mitigation strategies. The scope and depth of failure analysis has, however, been limited by the 2D nature of conventional imaging. In the present work, X-ray computed tomography is introduced as a novel, non-destructive technique for 3D failure analysis. Its capability to acquire true 3D images of membrane damage is demonstrated for the very first time. This approach has enabled unique and in-depth analysis resulting in novel findings regarding the membrane degradation mechanism; these are: significant, exclusive membrane fracture development independent of catalyst layers, localized thinning at crack sites, and demonstration of the critical impact of cracks on fuel cell durability. Evidence of crack initiation within the membrane is demonstrated, and a possible new failure mode different from typical mechanical crack development is identified. X-ray computed tomography is hereby established as a breakthrough approach for comprehensive 3D characterization and reliable failure analysis of fuel cell membranes, and could readily be extended to electrolyzers and flow batteries having similar structure.

  18. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  20. Visualizing 3D Fracture Morphology in Granular Media

    NASA Astrophysics Data System (ADS)

    Dalbe, M. J.; Juanes, R.

    2015-12-01

    Multiphase flow in porous media plays a fundamental role in many natural and engineered subsurface processes. The interplay between fluid flow, medium deformation and fracture is essential in geoscience problems as disparate as fracking for unconventional hydrocarbon production, conduit formation and methane venting from lake and ocean sediments, and desiccation cracks in soil. Recent work has pointed to the importance of capillary forces in some relevant regimes of fracturing of granular materials (Sandnes et al., Nat. Comm. 2011), leading to the term hydro-capillary fracturing (Holtzman et al., PRL 2012). Most of these experimental and computational investigations have focused, however, on 2D or quasi-2D systems. Here, we develop an experimental set-up that allows us to observe two-phase flow in a 3D granular bed and control the level of confining stress. We use an index-matching technique to directly visualize the injection of a liquid into a granular medium saturated with another, immiscible liquid. We determine the key dimensionless groups that control the behavior of the system, and elucidate different regimes of the invasion pattern. We present results for the 3D morphology of the invasion, with particular emphasis on the fracturing regime.

  1. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  2. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output, conversion of ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.
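
The point-cloud-to-image conversion described above amounts to rasterizing (x, y, z) points onto a grid whose cell value is an altitude. A minimal sketch follows; the max-altitude rule and the cell layout are assumptions, not the toolbox's actual algorithm.

```python
def points_to_elevation_grid(points, cell_size):
    """Convert (x, y, z) LiDAR points into a 2D grid whose cell value is
    the maximum altitude of the points falling in that cell (None if the
    cell received no points)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, y0 = min(xs), min(ys)
    ncols = int((max(xs) - x0) / cell_size) + 1
    nrows = int((max(ys) - y0) / cell_size) + 1
    grid = [[None] * ncols for _ in range(nrows)]
    for x, y, z in points:
        col = int((x - x0) / cell_size)
        row = int((y - y0) / cell_size)
        if grid[row][col] is None or z > grid[row][col]:
            grid[row][col] = z
    return grid
```

Empty cells would then be interpolated or masked before building-footprint extraction.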

  3. 3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution

    NASA Astrophysics Data System (ADS)

    Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David

    2013-01-01

    Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. As a first application, we have used these procedures to construct a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.
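
Once a cell's apical boundary has been projected to a planar polygon, its cross-sectional area can be computed with the standard shoelace formula. A minimal illustration (not the authors' pipeline code):

```python
def polygon_area(vertices):
    """Planar polygon area by the shoelace formula; `vertices` are (x, y)
    pairs listed in order around the boundary."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the polygon
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0
```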

  4. Early detection of glaucoma by means of a novel 3D computer‐automated visual field test

    PubMed Central

    Nazemi, Paul P; Fink, Wolfgang; Sadun, Alfredo A; Francis, Brian; Minckler, Donald

    2007-01-01

    Purpose: A recently devised 3D computer‐automated threshold Amsler grid test was used to identify early and distinctive defects in people with suspected glaucoma. Further, the location, shape and depth of these field defects were characterised. Finally, the visual fields were compared with those obtained by standard automated perimetry. Patients and methods: Glaucoma suspects were defined as those having elevated intraocular pressure (>21 mm Hg) or cup‐to‐disc ratio of >0.5. 33 patients and 66 eyes with risk factors for glaucoma were examined. 15 patients and 23 eyes with no risk factors were tested as controls. The recently developed 3D computer‐automated threshold Amsler grid test was used. The test exhibits a grid on a computer screen at a preselected greyscale and angular resolution, and allows patients to trace those areas on the grid that are missing in their visual field using a touch screen. The 5‐minute test required that the patients repeatedly outline scotomas on a touch screen with varied displays of contrast while maintaining their gaze on a central fixation marker. A 3D depiction of the visual field defects was then obtained that was further characterised by the location, shape and depth of the scotomas. The exam was repeated three times per eye. The results were compared to Humphrey visual field tests (ie, achromatic standard or SITA standard 30‐2 or 24‐2). Results: In this pilot study 79% of the eyes tested in the glaucoma‐suspect group repeatedly demonstrated visual field loss with the 3D perimetry. The 3D depictions of visual field loss associated with these risk factors were all characteristic of or compatible with glaucoma. 71% of the eyes demonstrated arcuate defects or a nasal step. Constricted visual fields were shown in 29% of the eyes. No visual field changes were detected in the control group. Conclusions: The 3D computer‐automated threshold Amsler grid test may demonstrate visual field abnormalities characteristic of

  5. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research are discussed.

  6. X-33 Flight Visualization

    NASA Technical Reports Server (NTRS)

    Laue, Jay H.

    1998-01-01

    The X-33 flight visualization effort has resulted in the integration of high-resolution terrain data with vehicle position and attitude data for planned flights of the X-33 vehicle from its launch site at Edwards AFB, California, to landings at Michael Army Air Field, Utah, and Maelstrom AFB, Montana. Video and Web Site representations of these flight visualizations were produced. In addition, a totally new module was developed to control viewpoints in real-time using a joystick input. Efforts have been initiated, and are presently being continued, for real-time flight coverage visualizations using the data streams from the X-33 vehicle flights. The flight visualizations that have resulted thus far give convincing support to the expectation that the flights of the X-33 will be exciting and significant space flight milestones... flights of this nation's one-half scale predecessor to its first single-stage-to-orbit, fully-reusable launch vehicle system.

  7. [Application of 3D visualization technique in breast cancer surgery with immediate breast reconstruction using laparoscopically harvested pedicled latissimus dorsi muscle flap].

    PubMed

    Zhang, Pu-Sheng; Wang, Li-Kun; Luo, Yun-Feng; Shi, Fu-Jun; He, Lin-Yun; Zeng, Cheng-Bing; Zhang, Yu; Fang, Chi-Hua

    2017-08-20

    To study the value of 3D visualization technique in breast-preserving surgery for breast cancer with immediate breast reconstruction using laparoscopically harvested pedicled latissimus dorsi muscle flap. From January 2015 to May 2016, 30 patients with breast cancer underwent breast-preserving surgery with immediate breast reconstruction using pedicled latissimus dorsi muscle flap. The CT data of the arterial phase and venous phase were collected preoperatively and imported into the self-developed medical image 3D visualization system for image segmentation and 3D reconstruction. The 3D models were imported into the simulation surgery platform for virtual surgery to prepare for subsequent surgeries. The cosmetic outcomes of the patients were evaluated 6 months after the surgery. Another 18 patients with breast cancer who underwent laparoscopic latissimus dorsi muscle breast reconstruction without using 3D visualization technique from January to December 2014 served as the control group. The data of the operative time, intraoperative blood loss and postoperative appearance of the breasts were analyzed. The reconstructed 3D model clearly displayed the anatomical structures of the breast, armpit, latissimus dorsi muscle and vessels and their anatomical relationship in all the 30 cases. Immediate breast reconstruction was performed successfully in all the cases with a median operation time of 226 min (range, 210 to 420 min) and a median blood loss of 95 mL (range, 73 to 132 mL). Evaluation of the appearance of the breast showed excellent results in 22 cases, good appearance in 6 cases and acceptable appearance in 2 cases. In the control group, the median operation time was 283 min (range, 256 to 313 min) and the median blood loss was 107 mL (range, 79 to 147 mL) with excellent appearance of the breasts in 10 cases, good appearance in 4 cases and acceptable appearance in 4 cases. 3D reconstruction technique can clearly display the morphology of the latissimus dorsi and

  8. Air Traffic Control and Combat Control Team Operations, AFS 272X0/D.

    DTIC Science & Technology

    1980-12-01

    UNCLASSIFIED ... UNITED STATES AIR FORCE AIR TRAFFIC CONTROL AND COMBAT CONTROL TEAM OPERATIONS, AFS 272X0/D ... Occupational Measurement Center, Randolph AFB, Texas 78148. Computer programs for analyzing the occupational data were designed by Dr. Raymond E... remained relatively the same in terms of numerical designation and tasks performed. Formal training for both 272X0 and 272X0D entry-level personnel consists

  9. The application of digital medical 3D printing technology on tumor operation

    NASA Astrophysics Data System (ADS)

    Chen, Jimin; Jiang, Yijian; Li, Yangsheng

    2016-04-01

    Digital medical 3D printing technology is a new high-tech field that combines traditional medicine with digital design, computer science, biotechnology and 3D printing. At present there are four levels of application. The printed 3D model is the first and simplest application: the surgeon uses the model to plan the procedure before the operation. The second is customized surgical tools such as implant guides, which help the doctor operate with specialized rather than standard instruments. The third level of application of 3D printing in medicine is printing artificial bones or teeth for implantation into the human body. The big challenge is the fourth level: printing organs with 3D printing technology. In this paper we introduce an application of 3D printing technology in tumor surgery. We use 3D printing to produce guides for invasive operations. Puncture needles were guided by the printed guides in facial tumor operations. It is concluded that this new type of guide offers clear advantages.

  10. 3D visualization of solar wind ion data from the Chang'E-1 exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Tian; Sun, Yankui; Tang, Zesheng

    2011-10-01

    Chang'E-1 (abbreviation CE-1), China's first Moon-orbiting spacecraft launched in 2007, carried equipment called the Solar Wind Ion Detector (abbreviation SWID), which sent back tens of gigabytes of solar wind ion differential number flux data. These data are essential for furthering our understanding of the cislunar space environment. However, to fully comprehend and analyze these data presents considerable difficulties, not only because of their huge size (57 GB), but also because of their complexity. Therefore, a new 3D visualization method is developed to give a more intuitive representation than traditional 1D and 2D visualizations, and in particular to offer a better indication of the direction of the incident ion differential number flux and the relative spatial position of CE-1 with respect to the Sun, the Earth, and the Moon. First, a coordinate system named Selenocentric Solar Ecliptic (SSE) which is more suitable for our goal is chosen, and solar wind ion differential number flux vectors in SSE are calculated from Geocentric Solar Ecliptic System (GSE) and Moon Center Coordinate (MCC) coordinates of the spacecraft, and then the ion differential number flux distribution in SSE is visualized in 3D space. This visualization method is integrated into an interactive visualization analysis software tool named vtSWIDs, developed in MATLAB, which enables researchers to browse through numerous records and manipulate the visualization results in real time. The tool also provides some useful statistical analysis functions, and can be easily expanded.

  11. Does 3D produce more symptoms of visually induced motion sickness?

    PubMed

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology with high-quality images and depth perception provides entertainment to its viewers. However, the technology is not yet mature and may sometimes have adverse effects on viewers. Some viewers have reported discomfort when watching videos with 3D technology. In this research we performed an experiment showing a movie to participants in 2D and 3D conditions. Subjective and objective data were recorded and compared in both conditions. Results from subjective reporting show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For objective measurement, ECG data were recorded to obtain Heart Rate Variability (HRV), where the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' state over time. The average scores for nausea, disorientation and the total SSQ score show a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.
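
The LF/HF ratio mentioned above is conventionally the ratio of spectral power in the 0.04-0.15 Hz (LF) and 0.15-0.4 Hz (HF) bands of the RR-interval signal. The sketch below uses a plain periodogram on an evenly resampled tachogram; the paper's exact estimator is not specified, and Welch averaging is more common in practice.

```python
import numpy as np

def lf_hf_ratio(rr_signal, fs_hz):
    """LF/HF ratio from an evenly resampled RR-interval (tachogram)
    signal, using the standard HRV frequency bands."""
    x = np.asarray(rr_signal, dtype=float)
    x = x - x.mean()                      # remove DC before the periodogram
    spec = np.abs(np.fft.rfft(x)) ** 2    # raw periodogram
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs_hz)
    lf = spec[(freqs >= 0.04) & (freqs < 0.15)].sum()
    hf = spec[(freqs >= 0.15) & (freqs < 0.40)].sum()
    return lf / hf
```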

  12. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  13. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  14. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
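
CMA-ES belongs to the evolution-strategy family of derivative-free optimizers. The registration loop can be sketched with a far simpler (1+1)-ES and a stand-in objective; the toy quadratic below replaces the paper's gradient-information similarity metric, and the step-size constants are illustrative.

```python
import random

def es_register(similarity, x0, sigma=1.0, iters=2000, seed=0):
    """Minimal (1+1) evolution strategy with a 1/5-style step-size rule.
    A drastically simplified stand-in for CMA-ES: one parent, one
    offspring, isotropic Gaussian mutation, multiplicative adaptation."""
    rng = random.Random(seed)
    x, fx = list(x0), similarity(x0)
    for _ in range(iters):
        cand = [xi + sigma * rng.gauss(0, 1) for xi in x]
        fc = similarity(cand)
        if fc > fx:                 # keep improvements only
            x, fx = cand, fc
            sigma *= 1.2            # expand step size on success
        else:
            sigma *= 0.95           # contract on failure
    return x

# Toy stand-in for the similarity metric: a smooth function whose
# maximum sits at the "true pose" (3.2, -1.5).
def toy_similarity(pose):
    return -((pose[0] - 3.2) ** 2 + (pose[1] + 1.5) ** 2)
```

A real 3D-2D registration would replace `toy_similarity` with a function that renders a digitally reconstructed radiograph at the candidate pose and scores it against the fluoroscopic projection.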

  15. An Approach to Develop 3d Geo-Dbms Topological Operators by Re-Using Existing 2d Operators

    NASA Astrophysics Data System (ADS)

    Xu, D.; Zlatanova, S.

    2013-09-01

    Database systems are continuously extending their capabilities to store, process and analyse 3D data. Topological relationships, which describe the interaction of objects in space, are one of the important spatial issues. However, spatial operators for 3D objects are still insufficient. In this paper we present the development of a new 3D topological function to distinguish intersections of 3D planar polygons. The development re-uses existing 2D functions in the DBMS together with two geometric transformations (rotation and projection). The function is tested on a real dataset to detect overlapping 3D city objects. The paper presents the algorithms and analyses the challenges. Suggestions for improving the current algorithm, as well as possible extensions to handle more 3D topological cases, are discussed at the end.
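
The rotate-then-project trick for reusing 2D operators can be illustrated end-to-end: build an in-plane basis for the polygon, map everything into 2D, and call a classic 2D test. This is a sketch under the assumption that the polygon is planar and the query point lies in its plane; it is not the authors' DBMS implementation.

```python
def plane_basis(p0, p1, p2):
    """Orthonormal in-plane basis (u, v) for the plane through 3 points."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    def unit(a):
        m = sum(x * x for x in a) ** 0.5
        return [x / m for x in a]
    u = unit(sub(p1, p0))
    n = unit(cross(u, sub(p2, p0)))   # plane normal
    v = cross(n, u)                   # completes the in-plane basis
    return u, v

def to_2d(points, origin, u, v):
    """Project 3D points lying in the plane onto the (u, v) basis."""
    return [(sum((p[i] - origin[i]) * u[i] for i in range(3)),
             sum((p[i] - origin[i]) * v[i] for i in range(3))) for p in points]

def point_in_polygon_2d(pt, poly):
    """Classic 2D ray-casting test -- the 'existing 2D operator' reused."""
    x, y = pt
    inside = False
    for i in range(len(poly)):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % len(poly)]
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

def point_in_planar_polygon_3d(pt, poly3d):
    """3D operator built by rotation/projection onto the polygon's plane."""
    u, v = plane_basis(poly3d[0], poly3d[1], poly3d[2])
    poly2d = to_2d(poly3d, poly3d[0], u, v)
    (px, py), = to_2d([pt], poly3d[0], u, v)
    return point_in_polygon_2d((px, py), poly2d)
```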

  16. Comparison of 3D reconstruction of mandible for pre-operative planning using commercial and open-source software

    NASA Astrophysics Data System (ADS)

    Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad

    2016-12-01

    3D printing of the mandible is important for pre-operative planning, diagnostic purposes, as well as for education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Usage of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using the commercial Materialise Mimics software and the open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512x512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and 3D models of the mandible were reconstructed using both commercial Mimics and open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. The models were compared using the Wilcoxon Signed Rank Test and Hausdorff Distance. No significant differences were obtained between the 3D models of the mandible produced using Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that from the commercial Mimics software. Therefore, open-source software could be used in clinical settings for pre-operative planning to minimise operational cost.
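
The Hausdorff distance used in such comparisons is the larger of the two directed distances between the models' point sets. A brute-force sketch over sampled vertices follows; real mesh comparisons typically use point-to-triangle distances and spatial indexing rather than this O(n*m) loop.

```python
def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two 3D point sets, e.g.
    vertices sampled from two STL meshes."""
    def d(p, q):
        return sum((p[i] - q[i]) ** 2 for i in range(3)) ** 0.5
    def directed(src, dst):
        # Worst-case nearest-neighbor distance from src into dst.
        return max(min(d(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```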

  17. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    NASA Astrophysics Data System (ADS)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  18. MSX-3D: a tool to validate 3D protein models using mass spectrometry.

    PubMed

    Heymann, Michaël; Paramelle, David; Subra, Gilles; Forest, Eric; Martinez, Jean; Geourjon, Christophe; Deléage, Gilbert

    2008-12-01

    The technique of chemical cross-linking followed by mass spectrometry has proven to bring valuable information about protein structure and the interactions between protein subunits. It is an effective and efficient way to experimentally investigate some aspects of a protein structure when NMR and X-ray crystallography data are lacking. We introduce MSX-3D, a tool specifically geared to validating protein models using mass spectrometry. In addition to classical peptide identification, it allows interactive 3D visualization of the distance constraints derived from a cross-linking experiment. Freely available at http://proteomics-pbil.ibcp.fr
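
Validating a model against cross-linking data boils down to checking that each cross-linked residue pair is close enough in the 3D model. A minimal sketch; the 24 Å cutoff is an illustrative assumption for a BS3-like linker plus side chains, not MSX-3D's actual setting.

```python
def check_crosslinks(ca_coords, crosslinks, max_dist=24.0):
    """Flag cross-links whose C-alpha/C-alpha distance in the model
    exceeds the maximum span of the cross-linker.

    ca_coords: {residue_number: (x, y, z)} in Angstroms.
    crosslinks: list of (res_i, res_j) pairs observed by MS.
    Returns the violating pairs with their model distances.
    """
    violations = []
    for res_i, res_j in crosslinks:
        p, q = ca_coords[res_i], ca_coords[res_j]
        dist = sum((p[k] - q[k]) ** 2 for k in range(3)) ** 0.5
        if dist > max_dist:
            violations.append((res_i, res_j, dist))
    return violations
```

An empty result means every experimental distance constraint is satisfied by the model.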

  19. Real time 3D visualization of intraoperative organ deformations using structured dictionary.

    PubMed

    Wang, Dan; Tewfik, Ahmed H

    2012-04-01

    Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
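
The structured-dictionary idea can be sketched as: learn a low-dimensional subspace from training coefficient vectors, then recover a full vector from a few observed entries by least squares on the subspace coordinates. This toy version uses a single PCA-style subspace rather than the paper's structured dictionary of spherical-harmonic coefficients.

```python
import numpy as np

def learn_subspace(training, k):
    """Learn a k-dimensional subspace (mean + principal directions) from
    training coefficient vectors stacked as rows."""
    X = np.asarray(training, dtype=float)
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]                       # basis: k x n

def reconstruct(partial_values, observed_idx, mean, basis):
    """Recover a full coefficient vector from a few observed entries by
    least squares on the subspace coordinates."""
    A = basis[:, observed_idx].T              # one row per observed entry
    b = np.asarray(partial_values, dtype=float) - mean[observed_idx]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + basis.T @ coeffs
```

In the surgical setting, the observed entries would come from the registered optical patch and the recovered vector would describe the full deformed organ surface.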

  20. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. 3D pattern of brain atrophy in HIV/AIDS visualized using tensor-based morphometry

    PubMed Central

    Chiang, Ming-Chang; Dutton, Rebecca A.; Hayashi, Kiralee M.; Lopez, Oscar L.; Aizenstein, Howard J.; Toga, Arthur W.; Becker, James T.; Thompson, Paul M.

    2011-01-01

    35% of HIV-infected patients have cognitive impairment, but the profile of HIV-induced brain damage is still not well understood. Here we used tensor-based morphometry (TBM) to visualize brain deficits and clinical/anatomical correlations in HIV/AIDS. To perform TBM, we developed a new MRI-based analysis technique that uses fluid image warping, and a new α-entropy-based information-theoretic measure of image correspondence, called the Jensen–Rényi divergence (JRD). Methods: 3D T1-weighted brain MRIs of 26 AIDS patients (CDC stage C and/or 3 without HIV-associated dementia; 47.2 ± 9.8 years; 25M/1F; CD4+ T-cell count: 299.5 ± 175.7/µl; log10 plasma viral load: 2.57 ± 1.28 RNA copies/ml) and 14 HIV-seronegative controls (37.6 ± 12.2 years; 8M/6F) were fluidly registered by applying forces throughout each deforming image to maximize the JRD between it and a target image (from a control subject). The 3D fluid registration was regularized using the linearized Cauchy–Navier operator. Fine-scale volumetric differences between diagnostic groups were mapped. Regions were identified where brain atrophy correlated with clinical measures. Results: Severe atrophy (~15–20% deficit) was detected bilaterally in the primary and association sensorimotor areas. Atrophy of these regions, particularly in the white matter, correlated with cognitive impairment (P=0.033) and CD4+ T-lymphocyte depletion (P=0.005). Conclusion: TBM facilitates 3D visualization of AIDS neuropathology in living patients scanned with MRI. Severe atrophy in frontoparietal and striatal areas may underlie early cognitive dysfunction in AIDS patients, and may signal the imminent onset of AIDS dementia complex. PMID:17035049

  2. Evaluation of TanDEM-X DEMs on selected Brazilian sites: Comparison with SRTM, ASTER GDEM and ALOS AW3D30

    NASA Astrophysics Data System (ADS)

    Grohmann, Carlos H.

    2018-06-01

    A first assessment of the TanDEM-X DEMs over Brazilian territory is presented through a comparison with SRTM, ASTER GDEM and ALOS AW3D30 DEMs in seven study areas with distinct geomorphological contexts, vegetation coverage and land use. Visual analysis and elevation histograms point to a finer effective spatial resolution of TanDEM-X compared to SRTM and ASTER GDEM. In areas of open vegetation, TanDEM-X's lower elevations indicate a better penetration of the radar signal. DEMs of differences (DoDs) allowed the identification of issues inherent to the production methods of the analyzed DEMs, such as mast oscillations in SRTM data and mismatch between adjacent scenes in ASTER GDEM and ALOS AW3D30. A systematic difference in elevations between TanDEM-X 12 m, TanDEM-X 30 m and SRTM was observed on the steep slopes of the coastal ranges, related to the moving-window process used to resample the 12 m data to a 30 m pixel size. Due to its simplicity, it is strongly recommended to produce a DoD with SRTM before using ASTER GDEM or ALOS AW3D30 in any analysis, to evaluate whether the area of interest is affected by these problems. The DoDs also highlighted changes in land use in the time span between the acquisition of SRTM (2000) and TanDEM-X (2013) data, whether by natural causes or by human interference in the environment.
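A DEM of differences (DoD), as recommended above, is simply a cell-by-cell subtraction of two co-registered elevation grids. A minimal sketch with NumPy, using invented toy elevations rather than real DEM tiles:

```python
import numpy as np

# Invented 3x3 elevation grids (metres) standing in for two co-registered DEMs
srtm    = np.array([[120.0, 121.5, 123.0],
                    [119.0, 120.5, 122.0],
                    [118.0, 119.5, 121.0]])
tandemx = np.array([[120.2, 121.4, 122.8],
                    [119.5, 120.5, 121.7],
                    [118.1, 119.9, 121.3]])

dod = tandemx - srtm                        # DEM of differences (DoD)
print(round(float(dod.mean()), 3))          # mean elevation difference
print(round(float(np.abs(dod).max()), 3))   # largest absolute discrepancy
```

On real data the two grids must first share the same projection, pixel size, and extent; summary statistics and a map of `dod` then reveal systematic offsets of the kind discussed above.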

  3. Clinical evaluation of accommodation and ocular surface stability relevant to visual asthenopia with 3D displays

    PubMed Central

    2014-01-01

    Background: To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods: Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results: Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but showed no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The changes in NPA, OPI, and total ocular symptom scores after 3D viewing were significantly correlated (p < 0.05), implying direct associations among these factors. Conclusions: The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
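The ocular protection index used above is defined as the tear breakup time divided by the interblink interval. A minimal sketch of the ratio; the measurement values are invented, not from the study:

```python
def ocular_protection_index(tbut_s: float, ibi_s: float) -> float:
    """OPI = tear breakup time (s) / interblink interval (s).
    Values >= 1 suggest the tear film outlasts the interblink period."""
    if ibi_s <= 0:
        raise ValueError("interblink interval must be positive")
    return tbut_s / ibi_s

# Hypothetical measurements before and after 3D viewing
print(ocular_protection_index(8.0, 5.0))            # 1.6  -> surface protected
print(round(ocular_protection_index(4.0, 6.0), 2))  # 0.67 -> breakup between blinks
```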

  4. Revisiting flow maps: a classification and a 3D alternative to visual clutter

    NASA Astrophysics Data System (ADS)

    Gu, Yuhang; Kraak, Menno-Jan; Engelhardt, Yuri

    2018-05-01

    Flow maps have long served to help people explore movement by representing origin-destination (OD) data. Due to recent developments in data collection techniques, the amount of movement data is increasing dramatically. With such huge amounts of data, visual clutter in flow maps is becoming a challenge. This paper revisits flow maps, provides an overview of the characteristics of OD data, and proposes a classification system for flow maps. To deal with problems of visual clutter, 3D flow maps are proposed as a potential alternative to 2D flow maps.

  5. 3D visualization and stereographic techniques for medical research and education.

    PubMed

    Rydmark, M; Kling-Petersen, T; Pascher, R; Philip, F

    2001-01-01

    While computers have long been able to work with true 3D models, the same has not generally applied to their users. Over the years, a number of 3D visualization techniques have been developed to enable a scientist or a student to see not only a flat representation of an object, but also an approximation of its Z-axis. In addition to the traditional flat image representation of a 3D object, at least four established methodologies exist: Stereo pairs. Using image analysis tools or 3D software, a pair of images can be made, representing the left-eye and right-eye views of an object. Placed next to each other and viewed through a separator, the three-dimensionality of an object can be perceived. While this is usually done on still images, tests at Mednet have shown it to work with interactively animated models as well. However, this technique requires some training and experience. Pseudo-3D, such as VRML or QuickTime VR, where the interactive manipulation of a 3D model lets the user achieve a sense of the model's true proportions. While this technique works reasonably well, it is not a "true" stereographic visualization technique. Red/green separation, i.e. "the traditional 3D image", where red and green representations of a model are superimposed at an angle corresponding to the viewing angle of the eyes; using a matching set of eyeglasses, a person can form a mental 3D image. The end result does produce a sense of 3D, but the effect is difficult to maintain. Alternating left/right-eye systems. These systems (typified by the StereoGraphics CrystalEyes system) let the computer display a "left eye" image followed by a "right eye" image while simultaneously triggering the eyepiece to alternately make one eye "blind". When run at 60 Hz or higher, the brain fuses the left/right images together and the user effectively sees a 3D object. Depending on configurations, the alternating systems run at between 50 and 60 Hz, thereby creating a
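The red/green separation technique described above can be sketched in a few lines: the left-eye view feeds the red channel and the right-eye view the green channel of a combined image. A minimal sketch with NumPy, using tiny invented grayscale views rather than a complete anaglyph pipeline:

```python
import numpy as np

def make_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Red/green anaglyph: red channel from the left-eye view,
    green channel from the right-eye view (grayscale HxW inputs)."""
    assert left.shape == right.shape
    out = np.zeros(left.shape + (3,), dtype=left.dtype)
    out[..., 0] = left    # red   <- left eye
    out[..., 1] = right   # green <- right eye
    return out

# Tiny invented 2x2 grayscale "views"
left  = np.array([[10, 20], [30, 40]], dtype=np.uint8)
right = np.array([[12, 22], [32, 42]], dtype=np.uint8)
rgb = make_anaglyph(left, right)
print(rgb.shape)  # (2, 2, 3)
```

Viewed through red/green glasses, each eye then sees only "its" channel, which is exactly the separation the abstract describes.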

  6. NASA VERVE: Interactive 3D Visualization Within Eclipse

    NASA Technical Reports Server (NTRS)

    Cohen, Tamar; Allan, Mark B.

    2014-01-01

    At NASA, we develop myriad Eclipse RCP applications to provide situational awareness for remote systems. The Intelligent Robotics Group at NASA Ames Research Center has developed VERVE - a high-performance robot user interface that provides scientists, robot operators, and mission planners with powerful, interactive 3D displays of remote environments. VERVE includes a 3D Eclipse view with an embedded Java Ardor3D scene, including SWT and mouse controls which interact with the Ardor3D camera and objects in the scene. VERVE also includes Eclipse views for exploring and editing objects in the Ardor3D scene graph, and a HUD (Heads Up Display) framework that allows Growl-style notifications and other textual information to be overlaid onto the 3D scene. We use VERVE to listen to telemetry from robots and display the robots and associated scientific data along the terrain they are exploring; VERVE can be used for any interactive 3D display of data. VERVE is now open source. VERVE derives from the prior Viz system, which was developed for Mars Polar Lander (2001) and used for the Mars Exploration Rover (2003) and the Phoenix Lander (2008). It has been used for ongoing research with IRG's K10 and KRex rovers in various locations. VERVE was used on the International Space Station during two experiments in 2013 - Surface Telerobotics, in which astronauts controlled robots on Earth from the ISS, and SPHERES, in which astronauts controlled a free-flying robot on board the ISS. We will show in detail how to code with VERVE, how to connect SWT controls to the Ardor3D scene, and share example code.

  7. Noninvasive CT to Iso-C3D registration for improved intraoperative visualization in computer assisted orthopedic surgery

    NASA Astrophysics Data System (ADS)

    Rudolph, Tobias; Ebert, Lars; Kowal, Jens

    2006-03-01

    Supporting surgeons in performing minimally invasive surgeries can be considered one of the major goals of computer assisted surgery. Excellent intraoperative visualization is a prerequisite to achieve this aim. The Siremobil Iso-C 3D has become a widely used imaging device which, in combination with a navigation system, enables the surgeon to navigate directly within the acquired 3D image volume without any extra registration steps. However, the image quality is rather low compared to a CT scan, and the volume size (approx. 12 cm³) limits its application. A regularly used alternative in computer assisted orthopedic surgery is the use of a preoperatively acquired CT scan to visualize the operating field. However, the additional registration step necessary to use CT stacks for navigation is quite invasive. The objective of this work is therefore to develop a noninvasive registration technique. In this article a solution is proposed that registers a preoperatively acquired CT scan to the intraoperatively acquired Iso-C 3D image volume, thereby registering the CT to the tracked anatomy. The procedure aligns both image volumes by maximizing their mutual information, an algorithm that has already been applied to similar registration problems and demonstrated good results. Furthermore, the accuracy of the registration method was investigated in a clinical setup, integrating a navigated Iso-C 3D in combination with a tracking system. Initial tests based on cadaveric animal bone resulted in a mean error ranging from 0.63 mm to 1.55 mm.
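The registration above maximizes mutual information between the CT and Iso-C 3D volumes. As an illustration of the similarity measure itself (a simple histogram-based estimate on invented 2D data, not the paper's implementation), a sketch with NumPy:

```python
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 16) -> float:
    """Histogram-based mutual information (bits) between two equally
    shaped images -- the similarity measure maximized during registration."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()             # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
# A perfectly aligned image shares maximal information with itself;
# an unrelated image shares almost none.
print(mutual_information(fixed, fixed) > mutual_information(fixed, rng.random((64, 64))))  # True
```

An optimizer then searches over rigid transforms of the moving volume for the pose that maximizes this quantity.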

  8. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research and for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.

  9. Evaluating a Novel 3D Stereoscopic Visual Display for Transanal Endoscopic Surgery: A Randomized Controlled Crossover Study.

    PubMed

    Di Marco, Aimee N; Jeyakumar, Jenifa; Pratt, Philip J; Yang, Guang-Zhong; Darzi, Ara W

    2016-01-01

    To compare surgical performance with transanal endoscopic surgery (TES) using a novel 3-dimensional (3D) stereoscopic viewer against the current modalities of a 3D stereoendoscope, 3D, and 2-dimensional (2D) high-definition monitors. TES is accepted as the primary treatment for selected rectal tumors. Current TES systems offer a 2D monitor, or 3D image, viewed directly via a stereoendoscope, necessitating an uncomfortable operating position. To address this and provide a platform for future image augmentation, a 3D stereoscopic display was created. Forty participants, of mixed experience level, completed a simulated TES task using 4 visual displays (novel stereoscopic viewer and currently utilized stereoendoscope, 3D, and 2D high-definition monitors) in a randomly allocated order. Primary outcome measures were: time taken, path length, and accuracy. Secondary outcomes were: task workload and participant questionnaire results. Median time taken and path length were significantly shorter for the novel viewer versus 2D and 3D, and not significantly different to the traditional stereoendoscope. Significant differences were found in accuracy, task workload, and questionnaire assessment in favor of the novel viewer, as compared to all 3 modalities. This novel 3D stereoscopic viewer allows surgical performance in TES equivalent to that achieved using the current stereoendoscope and superior to standard 2D and 3D displays, but with lower physical and mental demands for the surgeon. Participants expressed a preference for this system, ranking it more highly on a questionnaire. Clinical translation of this work has begun with the novel viewer being used in 5 TES patients.

  10. Visual shape perception as Bayesian inference of 3D object-centered shape representations.

    PubMed

    Erdogan, Goker; Jacobs, Robert A

    2017-11-01

    Despite decades of research, little is known about how people visually perceive object shape. We hypothesize that a promising approach to shape perception is provided by a "visual perception as Bayesian inference" framework which augments an emphasis on visual representation with an emphasis on the idea that shape perception is a form of statistical inference. Our hypothesis claims that shape perception of unfamiliar objects can be characterized as statistical inference of 3D shape in an object-centered coordinate system. We describe a computational model based on our theoretical framework, and provide evidence for the model along two lines. First, we show that, counterintuitively, the model accounts for viewpoint-dependency of object recognition, traditionally regarded as evidence against people's use of 3D object-centered shape representations. Second, we report the results of an experiment using a shape similarity task, and present an extensive evaluation of existing models' abilities to account for the experimental data. We find that our shape inference model captures subjects' behaviors better than competing models. Taken as a whole, our experimental and computational results illustrate the promise of our approach and suggest that people's shape representations of unfamiliar objects are probabilistic, 3D, and object-centered. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    NASA Astrophysics Data System (ADS)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the level of buildings to that of cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, the X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.
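A scene for an X3DOM-based viewer of this kind is ordinary X3D markup. A minimal sketch in Python that serializes a toy two-building scene (the node names Transform/Shape/Box follow the X3D standard; the building dimensions are invented):

```python
import xml.etree.ElementTree as ET

def building_box(x, y, z, sx, sy, sz):
    """One extruded-footprint stand-in: an X3D Box at a given position."""
    t = ET.Element("Transform", translation=f"{x} {y} {z}")
    shape = ET.SubElement(t, "Shape")
    ET.SubElement(shape, "Box", size=f"{sx} {sy} {sz}")
    return t

scene = ET.Element("Scene")
scene.append(building_box(0, 0, 0, 10, 20, 10))   # invented building footprints
scene.append(building_box(15, 0, 0, 8, 12, 8))
x3d = ET.Element("X3D", profile="Interchange", version="3.3")
x3d.append(scene)
xml = ET.tostring(x3d, encoding="unicode")
print(xml.count("<Box"))  # 2
```

Embedded in an HTML page that loads the X3DOM library, such a fragment renders directly in the browser; a real city model would of course generate the Transform nodes from building footprint data.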

  12. Scop3D: three-dimensional visualization of sequence conservation.

    PubMed

    Vermeire, Tessa; Vermaere, Stijn; Schepens, Bert; Saelens, Xavier; Van Gucht, Steven; Martens, Lennart; Vandermarliere, Elien

    2015-04-01

    The integration of a protein's structure with its known sequence variation provides insight on how that protein evolves, for instance in terms of (changing) function or immunogenicity. Yet, collating the corresponding sequence variants into a multiple sequence alignment, calculating each position's conservation, and mapping this information back onto a relevant structure is not straightforward. We therefore built the Sequence Conservation on Protein 3D structure (scop3D) tool to perform these tasks automatically. The output consists of two modified PDB files in which the B-values for each position are replaced by the percentage sequence conservation, or the information entropy for each position, respectively. Furthermore, text files with absolute and relative amino acid occurrences for each position are also provided, along with snapshots of the protein from six distinct directions in space. The visualization provided by scop3D can for instance be used as an aid in vaccine development or to identify antigenic hotspots, which we here demonstrate based on an analysis of the fusion proteins of human respiratory syncytial virus and mumps virus. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
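Writing a conservation percentage into the B-value field, as scop3D's output does, means overwriting columns 61-66 of each fixed-width ATOM/HETATM record. A minimal sketch of that single step (the example record and helper are illustrative, not code from the tool):

```python
def set_bfactor(pdb_line: str, value: float) -> str:
    """Overwrite the B-factor field (columns 61-66 of the fixed-width PDB
    format) of an ATOM/HETATM record, e.g. with % sequence conservation."""
    if not pdb_line.startswith(("ATOM", "HETATM")):
        return pdb_line  # leave non-coordinate records untouched
    line = pdb_line.ljust(80)
    return line[:60] + f"{value:6.2f}" + line[66:]

atom = ("ATOM      1  N   MET A   1      11.104   6.134  -6.504"
        "  1.00 24.50           N")
edited = set_bfactor(atom, 87.5)
print(repr(edited[60:66]))  # ' 87.50'
```

Molecular viewers can then color the structure by B-factor, turning the column into a per-position conservation heat map.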

  13. Aging and visual 3-D shape recognition from motion.

    PubMed

    Norman, J Farley; Adkins, Olivia C; Dowell, Catherine J; Hoyng, Stevie C; Shain, Lindsey M; Pedersen, Lauren E; Kinnard, Jonathan D; Higginbotham, Alexia J; Gilliam, Ashley N

    2017-11-01

    Two experiments were conducted to evaluate the ability of younger and older adults to recognize 3-D object shape from patterns of optical motion. In Experiment 1, participants were required to identify dotted surfaces that rotated in depth (i.e., surface structure portrayed using the kinetic depth effect). The task difficulty was manipulated by limiting the surface point lifetimes within the stimulus apparent motion sequences. In Experiment 2, the participants identified solid, naturally shaped objects (replicas of bell peppers, Capsicum annuum) that were defined by occlusion boundary contours, patterns of specular highlights, or combined optical patterns containing both boundary contours and specular highlights. Significant and adverse effects of increased age were found in both experiments. Despite the fact that previous research has found that increases in age do not reduce solid shape discrimination, our current results indicated that the same conclusion does not hold for shape identification. We demonstrated that aging results in a reduction in the ability to visually recognize 3-D shape independent of how the 3-D structure is defined (motions of isolated points, deformations of smooth optical fields containing specular highlights, etc.).

  14. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since typical 3-D equipment used, e.g., in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and connected video-based interaction techniques which allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  15. Demonstration of Three Gorges archaeological relics based on 3D-visualization technology

    NASA Astrophysics Data System (ADS)

    Xu, Wenli

    2015-12-01

    This paper mainly focuses on the digital demonstration of Three Gorges archaeological relics to exhibit the achievements of the protective measures. A novel and effective method based on 3D-visualization technology, which includes large-scale landscape reconstruction, virtual studio, and virtual panoramic roaming, etc., is proposed to create a digitized interactive demonstration system. The method contains three stages: pre-processing, 3D modeling and integration. Firstly, abundant archaeological information is classified according to its historical and geographical context. Secondly, a 3D-model library is built up using digital image processing and 3D modeling technology. Thirdly, virtual reality technology is used to display the archaeological scenes and cultural relics vividly and realistically. The present work promotes the application of virtual reality to digital projects and enriches the content of digital archaeology.

  16. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qiong; Gluch, Jürgen; Krüger, Peter

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. - Highlights: • Unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time. • Pollen grains were compared using LM, SEM and high-resolution laboratory-based HXRM. • Phase contrast imaging provides significantly higher contrast in the raw images compared to absorption contrast imaging. • Surface and internal structures of the pine pollen, including exine, intine and cellular structures, are clearly visualized. • 3D volume data of unstained whole pollen grains are acquired and the specific volumes of the different layers are calculated.
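Computing a "specific volume" from segmented tomographic data reduces to counting labelled voxels and multiplying by the voxel volume. A minimal sketch with NumPy; the label volume and voxel size are invented, not from the study:

```python
import numpy as np

def segmented_volume(labels: np.ndarray, label: int, voxel_size_um: float) -> float:
    """Volume (cubic micrometres) of one labelled structure in a
    segmented tomogram: voxel count x voxel volume."""
    return int((labels == label).sum()) * voxel_size_um ** 3

# Invented 4x4x4 label volume: 0 = background, 1 = "exine", 2 = "intine"
labels = np.zeros((4, 4, 4), dtype=np.uint8)
labels[0] = 1        # 16 voxels of label 1
labels[1, :2] = 2    # 8 voxels of label 2
print(segmented_volume(labels, 1, 0.5))  # 16 * 0.125 = 2.0
print(segmented_volume(labels, 2, 0.5))  # 8  * 0.125 = 1.0
```

The hard part in practice is the segmentation that produces `labels`; the volume arithmetic itself is this simple.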

  17. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively and to acquire anatomical or pathological images and visualize them for further investigation.

  18. 3D kinematics of mobile-bearing total knee arthroplasty using X-ray fluoroscopy.

    PubMed

    Yamazaki, Takaharu; Futai, Kazuma; Tomita, Tetsuya; Sato, Yoshinobu; Yoshikawa, Hideki; Tamura, Shinichi; Sugamoto, Kazuomi

    2015-04-01

    Total knee arthroplasty (TKA) 3D kinematic analysis requires 2D/3D image registration of X-ray fluoroscopic images and a computer-aided design (CAD) model of the knee implant. However, these techniques cannot provide information on the radiolucent polyethylene insert, since the insert silhouette does not appear clearly in X-ray images. Therefore, it is difficult to obtain the 3D kinematics of the polyethylene insert, particularly the mobile-bearing insert. A technique for 3D kinematic analysis of a mobile-bearing insert used in TKA was developed using X-ray fluoroscopy. The method was tested and a clinical application was evaluated. Tantalum beads and a CAD model of the mobile-bearing insert are used for 3D pose estimation of the insert under X-ray fluoroscopy. The insert model was created using four identical tantalum beads precisely located at known positions in a polyethylene insert using a specially designed insertion device. Finally, the 3D pose of the insert model was estimated using a feature-based 2D/3D registration technique, using the silhouette of the beads in fluoroscopic images and the corresponding CAD insert model. In vitro testing of the repeatability of the positioning of the tantalum beads and computer simulations for 3D pose estimation of the mobile-bearing insert were performed. The pose estimation accuracy achieved was sufficient for analyzing mobile-bearing TKA kinematics (RMS error within 1.0 mm and 1.0°, except for medial-lateral translation). In a clinical application, nine patients with mobile-bearing TKA were investigated and analyzed with respect to a deep knee bending motion. A 3D kinematic analysis technique was developed that enables accurate quantitative evaluation of mobile-bearing TKA kinematics. This method may be useful for improving implant design and optimizing TKA surgical techniques.

  19. 1000 X difference between current displays and capability of human visual system: payoff potential for affordable defense systems

    NASA Astrophysics Data System (ADS)

    Hopper, Darrel G.

    2000-08-01

    Displays were invented just in the last century. The human visual system evolved over millions of years. The disparity between the natural world 'display' and that 'sampled' by year 2000 technology is more than a factor of one million. Over 1000X of this disparity between the fidelity of current electronic displays and human visual capacity is in 2D resolution alone. Then there is true 3D, which adds an additional factor of over 1000X. The present paper focuses just on the 2D portion of this grand technology challenge. Should a significant portion of this gap be closed, say just 10X by 2010, display technology can help drive a revolution in military affairs. Warfighter productivity must grow dramatically, and improved display technology systems can create a critical opportunity to increase defense capability while decreasing crew sizes.

  20. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

    generated using the Zygote Human Anatomy 3-D model (3). Use of a reference anatomy independent of personal identification, such as Zygote, allows Visual...Zygote Human Anatomy 3D Model, 2010. http://www.zygote.com/ (accessed July 26, 2011). 4. Khronos Group Web site. Khronos to Create New Open Standard for...understanding of the information at hand. In order to fulfill the medical illustration track, I completed a concentration in science, focusing on human

  1. The effects of task difficulty on visual search strategy in virtual 3D displays.

    PubMed

    Pomplun, Marc; Garaas, Tyler W; Carrasco, Marisa

    2013-08-28

    Analyzing the factors that determine our choice of visual search strategy may shed light on visual behavior in everyday situations. Previous results suggest that increasing task difficulty leads to more systematic search paths. Here we analyze observers' eye movements in an "easy" conjunction search task and a "difficult" shape search task to study visual search strategies in stereoscopic search displays with virtual depth induced by binocular disparity. Standard eye-movement variables, such as fixation duration and initial saccade latency, as well as new measures proposed here, such as saccadic step size, relative saccadic selectivity, and x-y target distance, revealed systematic effects on search dynamics in the horizontal-vertical plane throughout the search process. We found that in the "easy" task, observers start with the processing of display items in the display center immediately after stimulus onset and subsequently move their gaze outwards, guided by extrafoveally perceived stimulus color. In contrast, the "difficult" task induced an initial gaze shift to the upper-left display corner, followed by a systematic left-right and top-down search process. The only consistent depth effect was a trend of initial saccades in the easy task with smallest displays to the items closest to the observer. The results demonstrate the utility of eye-movement analysis for understanding search strategies and provide a first step toward studying search strategies in actual 3D scenarios.
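Among the measures above, saccadic step size is simply the distance between successive fixation positions in a scanpath. A minimal sketch; the coordinates are invented:

```python
import math

def saccade_step_sizes(fixations):
    """Euclidean distance between successive fixation positions (x, y),
    one value per saccade in a scanpath."""
    return [math.dist(a, b) for a, b in zip(fixations, fixations[1:])]

# Hypothetical scanpath in degrees of visual angle
scanpath = [(0.0, 0.0), (3.0, 4.0), (3.0, 1.0)]
steps = saccade_step_sizes(scanpath)
print(steps)                    # [5.0, 3.0]
print(sum(steps) / len(steps))  # mean step size: 4.0
```

Larger mean step sizes indicate coarse, far-jumping scanning; smaller ones indicate the systematic item-by-item search the difficult task induced.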

  2. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization, which is primarily based on 2D flat screens, a volumetric 3D display possesses a true 3D display volume and physically places each voxel of a displayed 3D image at its true (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a full 360-degree range of viewpoints without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE can produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  3. Development of 3D interactive visual objects using the Scripps Institution of Oceanography's Visualization Center

    NASA Astrophysics Data System (ADS)

    Kilb, D.; Reif, C.; Peach, C.; Keen, C. S.; Smith, B.; Mellors, R. J.

    2003-12-01

    Within the last year scientists and educators at the Scripps Institution of Oceanography (SIO), the Birch Aquarium at Scripps and San Diego State University have collaborated with education specialists to develop 3D interactive graphic teaching modules for use in the classroom and in teacher workshops at the SIO Visualization center (http://siovizcenter.ucsd.edu). The unique aspect of the SIO Visualization center is that the center is designed around a 120 degree curved Panoram floor-to-ceiling screen (8'6" by 28'4") that immerses viewers in a virtual environment. The center is powered by an SGI 3400 Onyx computer that is more powerful, by an order of magnitude in both speed and memory, than typical base systems currently used for education and outreach presentations. This technology allows us to display multiple 3D data layers (e.g., seismicity, high resolution topography, seismic reflectivity, draped interferometric synthetic aperture radar (InSAR) images, etc.) simultaneously, render them in 3D stereo, and take a virtual flight through the data as dictated on the spot by the user. This system can also render snapshots, images and movies that are too big for other systems, and then export smaller size end-products to more commonly used computer systems. Since early 2002, we have explored various ways to provide informal education and outreach focusing on current research presented directly by the researchers doing the work. The Center currently provides a centerpiece for instruction on southern California seismology for K-12 students and teachers for various Scripps education endeavors. Future plans are in place to use the Visualization Center at Scripps for extended K-12 and college educational programs. In particular, we will be identifying K-12 curriculum needs, assisting with teacher education, developing assessments of our programs and products, producing web-accessible teaching modules and facilitating the development of appropriate teaching tools to be

  4. Suitability of online 3D visualization technique in oil palm plantation management

    NASA Astrophysics Data System (ADS)

    Mat, Ruzinoor Che; Nordin, Norani; Zulkifli, Abdul Nasir; Yusof, Shahrul Azmi Mohd

    2016-08-01

The oil palm industry has been the backbone of Malaysia's economic growth, and exports of this commodity increase almost every year. Many studies have therefore focused on how to help this industry increase its productivity. To increase productivity, the management of oil palm plantations needs to be improved and strengthened. One solution for helping oil palm managers is to implement an online 3D visualization technique for oil palm plantations using game engine technology. The potential of this application is that it can help in fertilizer and irrigation management. For this reason, the aim of this paper is to investigate the issues in managing oil palm plantations, from the viewpoint of oil palm managers, through interviews. The results of these interviews will help identify the issues that could be highlighted in implementing an online 3D visualization technique for oil palm plantation management.

  5. Study of the structure of 3-D composites based on carbon nanotubes in bovine serum albumin matrix by X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Ignatov, D.; Zhurbina, N.; Gerasimenko, A.

    2017-01-01

3-D composites are widely used in tissue engineering. A comprehensive analysis by X-ray microtomography was conducted to study the structure of the 3-D composites, consisting of scanning, image reconstruction from shadow projections, two-dimensional and three-dimensional visualization of the reconstructed images, and quantitative analysis of the samples. Experimental samples of the composites were formed by laser vaporization of an aqueous dispersion of BSA and single-walled (SWCNTs) or multi-walled (MWCNTs) carbon nanotubes. The samples have a homogeneous structure over the entire volume; the porosity of the 3-D composites based on SWCNTs and MWCNTs was 16.44% and 28.31%, respectively, and the average pore diameters were 45 μm and 93 μm, respectively. 3-D composites based on carbon nanotubes in a bovine serum albumin matrix can be used in tissue engineering of bone and cartilage, supporting cell proliferation and blood vessel sprouting.
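
The quoted porosity percentages are, in essence, void-voxel fractions of the segmented volume. A toy sketch of that computation, assuming a binary 3D array in which 1 marks a pore voxel and 0 marks solid material (the array layout is illustrative):

```python
def porosity(volume):
    """Fraction of pore voxels (value 1) in a nested-list binary 3D volume."""
    voxels = [v for plane in volume for row in plane for v in row]
    return sum(voxels) / len(voxels)

# 2x2x2 volume with 2 pore voxels out of 8 -> 25% porosity.
vol = [[[1, 0], [0, 0]], [[0, 1], [0, 0]]]
assert porosity(vol) == 0.25
```

Real microtomography workflows compute the same fraction on segmented voxel grids with millions of elements, typically after the reconstruction and thresholding steps described in the abstract.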

  6. Real-Space Bonding Indicator Analysis of the Donor-Acceptor Complexes X3BNY3, X3AlNY3, X3BPY3, and X3AlPY3 (X, Y = H, Me, Cl).

    PubMed

    Mebs, Stefan; Beckmann, Jens

    2017-10-12

Calculations of real-space bonding indicators (RSBI) derived from the Atoms-In-Molecules (AIM), Electron Localizability Indicator (ELI-D), Non-Covalent Interactions index (NCI), and Density Overlap Regions Indicator (DORI) toolkits for a set of 36 donor-acceptor complexes X3BNY3 (1, 1a-1h), X3AlNY3 (2, 2a-2h), X3BPY3 (3, 3a-3h), and X3AlPY3 (4, 4a-4h) reveal that the donor-acceptor bonds comprise covalent and ionic interactions to varying extents (X = Y = H for 1-4; X = H, Y = Me for 1a-4a; X = H, Y = Cl for 1b-4b; X = Me, Y = H for 1c-4c; X, Y = Me for 1d-4d; X = Me, Y = Cl for 1e-4e; X = Cl, Y = H for 1f-4f; X = Cl, Y = Me for 1g-4g; X, Y = Cl for 1h-4h). The phosphinoboranes X3BPY3 (3, 3a-3h) in general and Cl3BPMe3 (3f) in particular show the largest covalent contributions and the smallest ionic contributions. The aminoalanes X3AlNY3 (2, 2a-2h) in general and Me3AlNCl3 (2e) in particular show the smallest covalent contributions and the largest ionic contributions. The aminoboranes X3BNY3 (1, 1a-1h) and the phosphinoalanes X3AlPY3 (4, 4a-4h) lie midway between the phosphinoboranes and aminoalanes. The degree of covalency and ionicity correlates with the electronegativity difference, BP (ΔEN = 0.15) < AlP (ΔEN = 0.58) < BN (ΔEN = 1.00) < AlN (ΔEN = 1.43), and with a previously published energy decomposition analysis (EDA). To illustrate the importance of both contributions in Lewis formula representations, two resonance formulas should be given for all compounds: the canonical form with formal charges denoting covalency, and the arrow notation pointing from the donor to the acceptor atom to emphasize ionicity. If the Lewis formula mainly serves to show the atomic connectivity, only the more significant of the two should be shown. Thus, it is legitimate to present aminoalanes using arrows, whereas for phosphinoboranes the canonical form with formal charges is more appropriate.
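
The quoted ΔEN values follow directly from Pauling electronegativities (B 2.04, Al 1.61, N 3.04, P 2.19). A quick check of the arithmetic and of the covalency/ionicity ordering:

```python
# Pauling electronegativities of the donor/acceptor atoms.
EN = {"B": 2.04, "Al": 1.61, "N": 3.04, "P": 2.19}

pairs = {
    "BP": abs(EN["B"] - EN["P"]),    # 0.15
    "AlP": abs(EN["Al"] - EN["P"]),  # 0.58
    "BN": abs(EN["B"] - EN["N"]),    # 1.00
    "AlN": abs(EN["Al"] - EN["N"]),  # 1.43
}

assert round(pairs["BP"], 2) == 0.15
assert round(pairs["AlP"], 2) == 0.58
assert round(pairs["BN"], 2) == 1.00
assert round(pairs["AlN"], 2) == 1.43
# Ionicity grows (and covalency shrinks) along BP < AlP < BN < AlN.
assert pairs["BP"] < pairs["AlP"] < pairs["BN"] < pairs["AlN"]
```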

  7. The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L

    2016-11-01

    Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.

  8. [3D visualization and analysis of vocal fold dynamics].

    PubMed

    Bohr, C; Döllinger, M; Kniesburges, S; Traxdorf, M

    2016-04-01

Visual investigation methods of the larynx mainly allow for the two-dimensional presentation of the three-dimensional structures of vocal fold dynamics. The vertical component of the vocal fold dynamics is often neglected, yielding a loss of information. The latest studies show that the vertical dynamic components are in the range of the medio-lateral dynamics and play a significant role in the phonation process. This work presents a method for future 3D reconstruction and visualization of endoscopically recorded vocal fold dynamics. The setup contains a high-speed camera (HSC) and a laser projection system (LPS). The LPS projects a regular grid onto the vocal fold surfaces and, in combination with the HSC, allows a three-dimensional reconstruction of the vocal fold surface. Hence, quantitative information on displacements and velocities can be provided. The applicability of the method is presented for one ex-vivo human larynx, one ex-vivo porcine larynx and one synthetic silicone larynx. The setup introduced allows the reconstruction of the entire visible vocal fold surfaces for each oscillation state. This enables a detailed analysis of the three-dimensional dynamics (i.e., displacements, velocities, accelerations) of the vocal folds. The next goal is the miniaturization of the LPS to allow clinical in-vivo analysis in humans. We anticipate new insights into the dependencies between 3D dynamic behavior and the quality of the acoustic outcome for healthy and disordered phonation.

  9. Design and implementation of a 3D ocean virtual reality and visualization engine

    NASA Astrophysics Data System (ADS)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean aim at high fidelity simulation of ocean environment, visualization of massive and multidimensional marine data, and imitation of marine lives. VV-Ocean is composed of five modules, i.e. memory management module, resources management module, scene management module, rendering process management module and interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, imitating and simulating marine lives intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce drifting and diffusion processes of oil spilling from sea bottom to surface. Environment factors such as ocean current and wind field have been considered in this simulation. On this platform oil spilling process can be abstracted as movements of abundant oil particles. The result shows that oil particles blend with water well and the platform meets the requirement for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.

  10. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  11. Visualization of x-ray computer tomography using computer-generated holography

    NASA Astrophysics Data System (ADS)

    Daibo, Masahiro; Tayama, Norio

    1998-09-01

A theory for converting x-ray projection data directly into a hologram, by combining computed tomography (CT) with the computer-generated hologram (CGH), is proposed. The purpose of this study is to provide the theory for realizing an all-electronic, high-speed, see-through 3D visualization system for application to medical diagnosis and non-destructive testing. First, the CT reconstruction is expressed using the pseudo-inverse matrix obtained by singular value decomposition, and the CGH is expressed in matrix form. Next, a 'projection to hologram conversion' (PTHC) matrix is calculated by multiplying the phase matrix of the CGH with the pseudo-inverse matrix of the CT. Finally, the projection vector is converted directly to the hologram vector by multiplying the PTHC matrix with the projection vector. By incorporating holographic analog computation into CT reconstruction, the amount of calculation is drastically reduced. We demonstrate a CT cross section reconstructed in 3D space by a He-Ne laser from real x-ray projection data acquired with x-ray television equipment, using our direct conversion technique.
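
The PTHC construction described above amounts to a few lines of linear algebra: precompute the product of the CGH phase matrix with the pseudo-inverse of the CT system matrix, then map each projection vector straight to a hologram vector. A sketch with purely illustrative shapes and random values (these are not the authors' actual system matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16   # unknowns in the CT cross section (flattened image)
n_rays = 24     # measured ray sums (length of the projection vector)
n_holo = 32     # hologram samples

A = rng.normal(size=(n_rays, n_pixels))   # CT system (projection) matrix
H = rng.normal(size=(n_holo, n_pixels))   # CGH phase matrix (image -> hologram)

A_pinv = np.linalg.pinv(A)                # pseudo-inverse via SVD
PTHC = H @ A_pinv                         # precomputed once, offline

p = A @ rng.normal(size=n_pixels)         # a noise-free projection vector
h_direct = PTHC @ p                       # projections -> hologram in one step
h_two_step = H @ (A_pinv @ p)             # reconstruct image, then form hologram

# Both routes agree; the direct route skips materializing the CT image.
assert np.allclose(h_direct, h_two_step)
```

The saving comes from folding two matrix products into one precomputed operator, so the per-frame work is a single matrix-vector multiplication.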

  12. Visualization of hepatic arteries with 3D ultrasound during intra-arterial therapies

    NASA Astrophysics Data System (ADS)

Gérard, Maxime; Tang, An; Badoual, Anaïs; Michaud, François; Bigot, Alexandre; Soulez, Gilles; Kadoury, Samuel

    2016-03-01

    Liver cancer represents the second most common cause of cancer-related mortality worldwide. The prognosis is poor with an overall mortality of 95%. Moreover, most hepatic tumors are unresectable due to their advanced stage at discovery or poor underlying liver function. Tumor embolization by intra-arterial approaches is the current standard of care for advanced cases of hepatocellular carcinoma. These therapies rely on the fact that the blood supply of primary hepatic tumors is predominantly arterial. Feedback on blood flow velocities in the hepatic arteries is crucial to ensure maximal treatment efficacy on the targeted masses. Based on these velocities, the intra-arterial injection rate is modulated for optimal infusion of the chemotherapeutic drugs into the tumorous tissue. While Doppler ultrasound is a well-documented technique for the assessment of blood flow, 3D visualization of vascular anatomy with ultrasound remains challenging. In this paper we present an image-guidance pipeline that enables the localization of the hepatic arterial branches within a 3D ultrasound image of the liver. A diagnostic Magnetic resonance angiography (MRA) is first processed to automatically segment the hepatic arteries. A non-rigid registration method is then applied on the portal phase of the MRA volume with a 3D ultrasound to enable the visualization of the 3D mesh of the hepatic arteries in the Doppler images. To evaluate the performance of the proposed workflow, we present initial results from porcine models and patient images.

  13. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, scales gracefully with the size of the data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  14. Multiscale microstructural characterization of Sn-rich alloys by three dimensional (3D) X-ray synchrotron tomography and focused ion beam (FIB) tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yazzie, K.E.; Williams, J.J.; Phillips, N.C.

    2012-08-15

Sn-rich (Pb-free) alloys serve as electrical and mechanical interconnects in electronic packaging. It is critical to quantify the microstructures of Sn-rich alloys to obtain a fundamental understanding of their properties. In this work, the intermetallic precipitates in Sn-3.5Ag and Sn-0.7Cu, and the globular lamellae in Sn-37Pb solder joints, were visualized and quantified using 3D X-ray synchrotron tomography and focused ion beam (FIB) tomography. 3D reconstructions were analyzed to extract statistics on particle size and spatial distribution. In the Sn-Pb alloy the interconnectivity of the Sn-rich and Pb-rich constituents was quantified. It will be shown that multiscale characterization using 3D X-ray and FIB tomography enabled the characterization of the complex morphology, distribution, and statistics of precipitates and contiguous phases over a range of length scales. Highlights: Multiscale characterization by X-ray synchrotron and focused ion beam tomography. Characterized microstructural features in several Sn-based alloys. Quantified size, fraction, and clustering of microstructural features.

  15. Influence of Eddy Current, Maxwell and Gradient Field Corrections on 3D Flow Visualization of 3D CINE PC-MRI Data

    PubMed Central

    Lorenz, R.; Bock, J.; Snyder, J.; Korvink, J.G.; Jung, B.A.; Markl, M.

    2013-01-01

Purpose: The measurement of velocities based on PC-MRI can be subject to different phase offset errors which can affect the accuracy of velocity data. The purpose of this study was to determine the impact of these inaccuracies and to evaluate different correction strategies on 3D visualization. Methods: PC-MRI was performed on a 3 T system (Siemens Trio) for in vitro (curved/straight tube models; venc: 0.3 m/s) and in vivo (aorta/intracranial vasculature; venc: 1.5/0.4 m/s) data. For comparison of the impact of different magnetic field gradient designs, in vitro data were additionally acquired on a wide-bore 1.5 T system (Siemens Espree). Different correction methods were applied to correct for eddy currents, Maxwell terms, and gradient field inhomogeneities. Results: The application of phase offset correction methods led to an improvement in 3D particle trace visualization and count. The most pronounced differences were found for in vivo/in vitro data (68%/82% more particle traces) acquired with a low venc (0.3 m/s and 0.4 m/s, respectively). In vivo data acquired with a high venc (1.5 m/s) showed noticeable but only minor improvement. Conclusion: This study suggests that the correction of phase offset errors can be important for a more reliable visualization of particle traces but is strongly dependent on the velocity sensitivity, object geometry, and gradient coil design. PMID:24006013
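
A common form of eddy-current correction of the kind evaluated in such studies fits a low-order (here linear) phase plane to pixels assumed to contain static tissue and subtracts it from the velocity map. A simplified sketch under that assumption; the abstract does not detail the study's specific correction algorithms, so all names and values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 32, 32
y, x = np.mgrid[0:ny, 0:nx]

# Slowly varying phase offset (the error to remove) plus near-zero
# "static tissue" velocities as the ground truth.
true_offset = 0.002 * x + 0.001 * y + 0.05
velocity = rng.normal(0.0, 0.001, (ny, nx))
measured = velocity + true_offset

# Least-squares plane fit over the (assumed) static-tissue pixels.
A = np.column_stack([x.ravel(), y.ravel(), np.ones(x.size)])
coef, *_ = np.linalg.lstsq(A, measured.ravel(), rcond=None)
corrected = measured - (A @ coef).reshape(ny, nx)

assert abs(corrected.mean()) < 1e-3          # offset removed on average
assert np.abs(corrected).max() < np.abs(measured).max()
```

In practice the static-tissue mask must be chosen carefully, which is one reason the abstract reports the benefit depending on velocity sensitivity and object geometry.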

  16. Quantifying fracture geometry with X-ray tomography: Technique of Iterative Local Thresholding (TILT) for 3D image segmentation

    DOE PAGES

    Deng, Hang; Fitts, Jeffrey P.; Peters, Catherine A.

    2016-02-01

This paper presents a new method—the Technique of Iterative Local Thresholding (TILT)—for processing 3D X-ray computed tomography (xCT) images for visualization and quantification of rock fractures. The TILT method includes the following advancements. First, custom masks are generated by a fracture-dilation procedure, which significantly amplifies the fracture signal on the intensity histogram used for local thresholding. Second, TILT is particularly well suited for fracture characterization in granular rocks because the multi-scale Hessian fracture (MHF) filter has been incorporated to distinguish fractures from pores in the rock matrix. Third, TILT wraps the thresholding and fracture isolation steps in an optimized iterative routine for binary segmentation, minimizing human intervention and enabling automated processing of large 3D datasets. As an illustrative example, we applied TILT to 3D xCT images of reacted and unreacted fractured limestone cores. Other segmentation methods were also applied to provide insights regarding variability in image processing. The results show that TILT significantly enhanced the separability of grayscale intensities, outperformed the other methods in automation, and was successful in isolating fractures from the porous rock matrix. Because the other methods are more likely to misclassify fracture edges as void and/or have limited capacity to distinguish fractures from pores, those methods estimated larger fracture volumes (up to 80%), surface areas (up to 60%), and roughness (up to a factor of 2). In conclusion, these differences in fracture geometry would lead to significant disparities in hydraulic permeability predictions, as determined by 2D flow simulations.
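
The iterative-thresholding core of such a segmentation can be illustrated with a simple Ridler-Calvard-style threshold that converges on the midpoint between the two class means. This is a hedged stand-in, not the published TILT algorithm, which additionally applies fracture-dilation masks and the multi-scale Hessian filter:

```python
def iterative_threshold(values, tol=0.5):
    """Iteratively place a threshold between dark (fracture) and bright (rock)
    grayscale values, starting from the global mean."""
    t = sum(values) / len(values)
    while True:
        low = [v for v in values if v <= t]
        high = [v for v in values if v > t]
        if not low or not high:
            return t
        t_new = (sum(low) / len(low) + sum(high) / len(high)) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Bimodal toy histogram: fracture voxels near 30, matrix voxels near 200.
voxels = [28, 30, 32, 29, 31] * 4 + [198, 200, 202, 199, 201] * 16
t = iterative_threshold(voxels)
assert 32 < t < 198   # threshold lands between the two modes
```

The "local" aspect of TILT comes from running such a threshold inside per-fracture masks rather than on the global histogram, which is what amplifies the fracture signal.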

  17. Research on steady-state visual evoked potentials in 3D displays

    NASA Astrophysics Data System (ADS)

    Chien, Yu-Yi; Lee, Chia-Ying; Lin, Fang-Cheng; Huang, Yi-Pai; Ko, Li-Wei; Shieh, Han-Ping D.

    2015-05-01

Brain-computer interfaces (BCIs) are intuitive systems for users to communicate with external electronic devices. The steady-state visual evoked potential (SSVEP) is one of the common inputs for BCI systems due to its easy detection and high information transfer rates. An advanced interactive platform integrated with liquid crystal displays is leading a trend to provide an alternative option, not only for the handicapped but also for the public, to make our lives more convenient. Many SSVEP-based BCI systems have been studied in a 2D environment; however, there is little literature about SSVEP-based BCI systems using 3D stimuli. 3D displays have potential in SSVEP-based BCI systems because they can offer vivid images, good presentation quality, various stimuli, and more entertainment. The purpose of this study was to investigate the effect of two important 3D factors (disparity and crosstalk) on SSVEPs. Twelve participants took part in the experiment with a patterned-retarder 3D display. The results show that there is a significant difference (p-value < 0.05) between large and small disparity angles, and that the signal-to-noise ratios (SNRs) of small disparity angles are higher than those of large disparity angles. 3D stimuli with smaller disparity and lower crosstalk are more suitable for applications, based on the results of 3D perception and SSVEP responses (SNR). Furthermore, we can infer the 3D perception of users from their SSVEP responses, and modify the disparity of 3D images automatically in the future.
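
The SNR measure for SSVEP responses is typically the spectral power at the stimulus (flicker) frequency divided by the mean power of neighbouring frequency bins. An illustrative computation on synthetic data; the sampling rate, flicker frequency, and "EEG" signal are all made up for the sketch:

```python
import math

fs = 256            # sampling rate (Hz)
n = fs * 4          # 4 s of data -> 0.25 Hz bin spacing
f_stim = 12.0       # flicker frequency (Hz), exactly on a DFT bin here

# Synthetic EEG: a 12 Hz SSVEP plus a weaker off-frequency ripple.
x = [math.sin(2 * math.pi * f_stim * t / fs)
     + 0.1 * math.sin(2 * math.pi * 7.3 * t / fs)
     for t in range(n)]

def power_at(freq):
    """Single-bin DFT power at `freq` (direct correlation sums)."""
    re = sum(v * math.cos(2 * math.pi * freq * t / fs) for t, v in enumerate(x))
    im = sum(v * math.sin(2 * math.pi * freq * t / fs) for t, v in enumerate(x))
    return re * re + im * im

df = fs / n  # frequency resolution
neighbours = [f_stim + k * df for k in (-3, -2, -1, 1, 2, 3)]
snr = power_at(f_stim) / (sum(power_at(f) for f in neighbours) / len(neighbours))
assert snr > 10   # the stimulus bin dominates its neighbours
```

Real pipelines use an FFT over all channels and often harmonics of the flicker frequency as well, but the bin-ratio definition of SNR is the same.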

  18. Shutdown Dose Rate Analysis for the long-pulse D-D Operation Phase in KSTAR

    NASA Astrophysics Data System (ADS)

    Park, Jin Hun; Han, Jung-Hoon; Kim, D. H.; Joo, K. S.; Hwang, Y. S.

    2017-09-01

KSTAR is a medium-size, fully superconducting tokamak. The deuterium-deuterium (D-D) reaction in the KSTAR tokamak generates neutrons with a peak yield of 3.5×10^16 per second during a pulse operation of 100 seconds. The effects of the neutrons generated in the full D-D high-power KSTAR operation mode on the machine, such as activation, shutdown dose rate, and nuclear heating, are estimated to assure safety during operation, maintenance, and machine upgrades. The nuclear heating of the in-vessel components and the neutron activation of the surrounding materials have been investigated. The dose rates during operation and after shutdown of KSTAR have been calculated using a 3D CAD model of KSTAR with the Monte Carlo code MCNP5 (neutron flux and decay photons), the inventory code FISPACT (activation and decay photons), and the FENDL 2.1 nuclear data library.
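
As a back-of-envelope check, the quoted figures bound the neutron budget of a single pulse (an upper bound, since 3.5×10^16 n/s is the peak rate rather than the average over the pulse):

```python
peak_yield = 3.5e16      # neutrons per second at peak
pulse_length = 100.0     # seconds per pulse

# Upper bound on neutrons produced per pulse, assuming the peak rate
# is sustained for the full 100 s.
neutrons_per_pulse = peak_yield * pulse_length
assert neutrons_per_pulse == 3.5e18
```

It is this per-pulse source term, distributed over the 3D CAD geometry, that drives the MCNP5/FISPACT activation and shutdown-dose calculations.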

  19. A biplanar X-ray approach for studying the 3D dynamics of human track formation.

    PubMed

    Hatala, Kevin G; Perry, David A; Gatesy, Stephen M

    2018-05-09

    Recent discoveries have made hominin tracks an increasingly prevalent component of the human fossil record, and these data have the capacity to inform long-standing debates regarding the biomechanics of hominin locomotion. However, there is currently no consensus on how to decipher biomechanical variables from hominin tracks. These debates can be linked to our generally limited understanding of the complex interactions between anatomy, motion, and substrate that give rise to track morphology. These interactions are difficult to study because direct visualization of the track formation process is impeded by foot and substrate opacity. To address these obstacles, we developed biplanar X-ray and computer animation methods, derived from X-ray Reconstruction of Moving Morphology (XROMM), to analyze the 3D dynamics of three human subjects' feet as they walked across four substrates (three deformable muds and rigid composite panel). By imaging and reconstructing 3D positions of external markers, we quantified the 3D dynamics at the foot-substrate interface. Foot shape, specifically heel and medial longitudinal arch deformation, was significantly affected by substrate rigidity. In deformable muds, we found that depths measured across tracks did not directly reflect the motions of the corresponding regions of the foot, and that track outlines were not perfectly representative of foot size. These results highlight the complex, dynamic nature of track formation, and the experimental methods presented here offer a promising avenue for developing and refining methods for accurately inferring foot anatomy and gait biomechanics from fossil hominin tracks. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. The Visual Representation of 3D Object Orientation in Parietal Cortex

    PubMed Central

    Cowan, Noah J.; Angelaki, Dora E.

    2013-01-01

An accurate representation of three-dimensional (3D) object orientation is essential for interacting with the environment. Where and how the brain visually encodes 3D object orientation remains unknown, but prior studies suggest the caudal intraparietal area (CIP) may be involved. Here, we develop rigorous analytical methods for quantifying 3D orientation tuning curves, and use these tools to study the neural coding of surface orientation. Specifically, we show that single neurons in area CIP of the rhesus macaque jointly encode the slant and tilt of a planar surface, and that across the population, the distribution of preferred slant-tilts is not statistically different from uniform. This suggests that all slant-tilt combinations are equally represented in area CIP. Furthermore, some CIP neurons are found to also represent the third rotational degree of freedom that determines the orientation of the image pattern on the planar surface. Together, the present results suggest that CIP is a critical neural locus for the encoding of all three rotational degrees of freedom specifying an object's 3D spatial orientation. PMID:24305830

  1. 3D visualization of numeric planetary data using JMARS

    NASA Astrophysics Data System (ADS)

    Dickenshied, S.; Christensen, P. R.; Anwar, S.; Carter, S.; Hagee, W.; Noss, D.

    2013-12-01

JMARS (Java Mission-planning and Analysis for Remote Sensing) is a free geospatial application developed by the Mars Space Flight Facility at Arizona State University. Originally written as a mission planning tool for the THEMIS instrument on board the Mars Odyssey spacecraft, it was released as an analysis tool to the general public in 2003. Since then it has expanded to be used for mission planning and scientific data analysis by additional NASA missions to Mars, the Moon, and Vesta, and it has come to be used by scientists, researchers and students of all ages from more than 40 countries around the world. The public version of JMARS now also includes remote sensing data for Mercury, Venus, Earth, the Moon, Mars, and a number of the moons of Jupiter and Saturn. Additional datasets for asteroids and other smaller bodies are being added as they become available and time permits. In addition to visualizing multiple datasets in context with one another, significant effort has been put into on-the-fly projection of georegistered data over surface topography. This functionality allows a user to easily create and modify 3D visualizations of any regional scene where elevation data is available in JMARS. This can be accomplished through the use of global topographic maps or regional numeric data such as HiRISE or HRSC DTMs. Users can also upload their own regional or global topographic dataset and use it as an elevation source for 3D rendering of their scene. The 3D Layer in JMARS allows the user to exaggerate the z-scale of any elevation source to emphasize the vertical variance throughout a scene. In addition, the user can rotate, tilt, and zoom the scene to any desired angle and then illuminate it with an artificial light source. This scene can be easily overlain with additional JMARS datasets such as maps, images, shapefiles, contour lines, or scale bars, and the scene can be easily saved as a graphic image for use in presentations or publications.
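
The z-scale exaggeration described above amounts to scaling only the elevation component of each vertex before rendering. A minimal sketch with illustrative names (this is the general technique, not JMARS internals):

```python
def exaggerate(vertices, z_scale):
    """Scale only the elevation (z) of each (x, y, z) vertex, leaving the
    horizontal footprint of the scene unchanged."""
    return [(x, y, z * z_scale) for (x, y, z) in vertices]

# Two terrain vertices with subtle relief, exaggerated 3x vertically.
scene = [(0.0, 0.0, 1.2), (1.0, 0.0, 0.4)]
stretched = exaggerate(scene, 3.0)
assert abs(stretched[0][2] - 3.6) < 1e-9
assert stretched[0][:2] == (0.0, 0.0)   # x and y are untouched
```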

  2. Comparison of User Performance with Interactive and Static 3d Visualization - Pilot Study

    NASA Astrophysics Data System (ADS)

    Herman, L.; Stachoň, Z.

    2016-06-01

Interactive 3D visualizations of spatial data are currently available and popular through various applications such as Google Earth, ArcScene, etc. Several scientific studies have focused on user performance with 3D visualization, but static perspective views are used as stimuli in most of these studies. The main objective of this paper is to identify potential differences in user performance between static perspective views and interactive visualizations. This research is an exploratory study. The experiment was designed as a between-subject study, and a customized testing tool based on open web technologies was used. The testing set consists of an initial questionnaire, a training task and four experimental tasks. Selection of the highest point and determination of visibility from the top of a mountain were used as the experimental tasks. The speed and accuracy of each participant's task performance were recorded. In the interactive variant, movement and actions in the virtual environment were also recorded. The results show that participants completed the tasks faster with static visualization, but the average error rate was also higher in the static variant. The findings from this pilot study will be used for further testing, especially for formulating hypotheses and designing subsequent experiments.

  3. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  4. Novel 3D/VR interactive environment for MD simulations, visualization and analysis.

    PubMed

    Doblack, Benjamin N; Allis, Tim; Dávila, Lilian P

    2014-12-18

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced.

  5. Novel 3D/VR Interactive Environment for MD Simulations, Visualization and Analysis

    PubMed Central

    Doblack, Benjamin N.; Allis, Tim; Dávila, Lilian P.

    2014-01-01

    The increasing development of computing (hardware and software) in the last decades has impacted scientific research in many fields including materials science, biology, chemistry and physics among many others. A new computational system for the accurate and fast simulation and 3D/VR visualization of nanostructures is presented here, using the open-source molecular dynamics (MD) computer program LAMMPS. This alternative computational method uses modern graphics processors, NVIDIA CUDA technology and specialized scientific codes to overcome processing speed barriers common to traditional computing methods. In conjunction with a virtual reality system used to model materials, this enhancement allows the addition of accelerated MD simulation capability. The motivation is to provide a novel research environment which simultaneously allows visualization, simulation, modeling and analysis. The research goal is to investigate the structure and properties of inorganic nanostructures (e.g., silica glass nanosprings) under different conditions using this innovative computational system. The work presented outlines a description of the 3D/VR Visualization System and basic components, an overview of important considerations such as the physical environment, details on the setup and use of the novel system, a general procedure for the accelerated MD enhancement, technical information, and relevant remarks. The impact of this work is the creation of a unique computational system combining nanoscale materials simulation, visualization and interactivity in a virtual environment, which is both a research and teaching instrument at UC Merced. PMID:25549300

  6. Tunable White-Light Emission in Single-Cation-Templated Three-Layered 2D Perovskites (CH3CH2NH3)4Pb3Br10–xClx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Lingling; Wu, Yilei; Stoumpos, Constantinos C.

Two-dimensional (2D) hybrid halide perovskites come as a family (B)2(A)n–1PbnX3n+1 (B and A = cations; X = halide). These perovskites are promising semiconductors for solar cells and optoelectronic applications. Among the fascinating properties of these materials is white-light emission, which has been mostly observed in single-layered 2D lead bromide or chloride systems (n = 1), where the broad emission comes from the transient photoexcited states generated by self-trapped excitons (STEs) from structural distortion. Here we report a multilayered 2D perovskite (n = 3) exhibiting a tunable white-light emission. Ethylammonium (EA+) can stabilize the 2D perovskite structure in EA4Pb3Br10–xClx (x = 0, 2, 4, 6, 8, 9.5, and 10), with EA+ being both the A and B cations in this system. Because of the larger size of EA, these materials show a high distortion level in their inorganic structures, with EA4Pb3Cl10 having a much larger distortion than that of EA4Pb3Br10, which results in broadband white-light emission of EA4Pb3Cl10 in contrast to narrow blue emission of EA4Pb3Br10. The average lifetime of the series decreases gradually from the Cl end to the Br end, indicating that the larger distortion also prolongs the lifetime (more STE states). The band gap of EA4Pb3Br10–xClx ranges from 3.45 eV (x = 10) to 2.75 eV (x = 0), following Vegard's law. First-principles density functional theory (DFT) calculations show that both EA4Pb3Cl10 and EA4Pb3Br10 are direct band gap semiconductors. The color rendering index (CRI) of the series improves from 66 (EA4Pb3Cl10) to 83 (EA4Pb3Br0.5Cl9.5), displaying the high tunability and versatility of the title compounds.
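The Vegard's-law behavior reported here amounts to linear interpolation of the band gap between the two end-member values quoted in the abstract (2.75 eV at x = 0, 3.45 eV at x = 10). A minimal sketch:

```python
def band_gap_vegard(x, eg_x0=2.75, eg_x10=3.45, x_max=10.0):
    """Band gap (eV) of EA4Pb3Br(10-x)Cl(x) under a strictly linear
    (Vegard-like) law between the reported end members: 2.75 eV for the
    pure bromide (x = 0) and 3.45 eV for the pure chloride (x = 10).
    Intermediate values are interpolated, not measured."""
    return eg_x0 + (eg_x10 - eg_x0) * (x / x_max)

band_gap_vegard(0)   # 2.75 eV
band_gap_vegard(10)  # 3.45 eV
band_gap_vegard(5)   # 3.10 eV at the midpoint
```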

  7. Impacts of a CAREER Award on Advancing 3D Visualization in Geology Education

    NASA Astrophysics Data System (ADS)

    Billen, M. I.

    2011-12-01

CAREER awards provide a unique opportunity to develop educational activities as an integrated part of one's research activities. This CAREER award focused on developing interactive 3D visualization tools to help geology students improve their 3D visualization skills. Not only is this a key skill for field geologists, who need to visualize unseen subsurface structures, but it is also an important aspect of geodynamic research into the processes, such as faulting and viscous flow, that occur during subduction. Working with an undergraduate student researcher and using the KeckCAVES-developed volume visualization code 3DVisualizer, we have developed interactive visualization laboratory exercises (e.g., Discovering the Rule of Vs) and a suite of mini-exercises using illustrative 3D geologic structures (e.g., syncline, thrust fault) that students can explore (e.g., rotate, slice, cut away) to understand how exposure of these structures at the surface can provide insight into the subsurface structure. These exercises have been integrated into the structural geology curriculum and made available on the web through the KeckCAVES Education website as both data-and-code downloads and pre-made movies. One of the main challenges of implementing research and education activities through the award is that progress must be made on both throughout the award period. Therefore, while our original intent was to use subduction model output as the structures in the educational models, delays in the research results required that we develop these models using other, simpler input data sets. These delays occurred because one of the other goals of the CAREER grant is to allow the faculty member to take their research in a new direction, which may certainly lead to transformative science, but can also lead to more false starts as the challenges of doing the new science are overcome. However, having created the infrastructure for the educational components, use of the model results in future

  8. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyberspace monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validating learning and classification results and for understanding the human-autonomous system relationship. Scene recognition is performed by feeding synthetically generated data to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
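The two-stage idea (identify objects first, then infer the scene from the objects present) can be sketched with simple stand-ins. This nearest-prototype/Jaccard sketch is illustrative only, not the paper's dynamic-logic or hierarchical expectation-maximization algorithm, and all prototypes and labels are hypothetical:

```python
def nearest_object(feature, object_models):
    """Stage 1: assign a feature vector to the closest object prototype
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(object_models, key=lambda name: dist(feature, object_models[name]))

def recognize_scene(objects, scene_models):
    """Stage 2: pick the scene whose expected object set overlaps most
    (Jaccard similarity) with the objects detected in stage 1."""
    def jaccard(a, b):
        return len(a & b) / len(a | b)
    return max(scene_models, key=lambda name: jaccard(objects, scene_models[name]))

# Hypothetical prototypes and scene definitions for a surveillance setting.
object_models = {"car": (1.0, 0.2), "person": (0.1, 1.0), "fence": (0.5, 0.5)}
scene_models = {"parking_lot": {"car", "fence"}, "sidewalk": {"person", "fence"}}

features = [(0.9, 0.3), (0.6, 0.4)]
detected = {nearest_object(f, object_models) for f in features}
scene = recognize_scene(detected, scene_models)  # -> "parking_lot"
```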

  9. Recent developments in stereoscopic and holographic 3D display technologies

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri

    2014-06-01

Currently, there is increasing interest in the development of high-performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we review the attributes of the various 3D display technologies including stereoscopic and holographic 3D, human factors issues of stereoscopic 3D, the challenges in realizing holographic 3D displays, and the recent progress in these technologies.

  10. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
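The derived quantities PLOT3D displays follow from the five conserved variables stored per grid point in the solution file. As an example, static pressure can be recovered with the standard ideal-gas relation; the exact nondimensionalization PLOT3D expects is not restated in this abstract, so the values below are illustrative:

```python
GAMMA = 1.4  # ratio of specific heats for air

def pressure_from_q(rho, rho_u, rho_v, rho_w, energy):
    """Static pressure from the conserved variables a PLOT3D solution
    ("q") file stores at each grid point: density, the three momentum
    components, and stagnation (total) energy per unit volume.

    Ideal gas: p = (gamma - 1) * (E - kinetic energy per unit volume)
    """
    kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
    return (GAMMA - 1.0) * (energy - kinetic)

# Illustrative nondimensional state at one grid point.
p = pressure_from_q(1.0, 0.5, 0.0, 0.0, 1.9107)
```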

  11. PLOT3D/AMES, APOLLO UNIX VERSION USING GMR3D (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case, a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of computers in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer, and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.

  13. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions and, importantly, scales gracefully with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Debris Dispersion Model Using Java 3D

    NASA Technical Reports Server (NTRS)

    Thirumalainambi, Rajkumar; Bardina, Jorge

    2004-01-01

    This paper describes web based simulation of Shuttle launch operations and debris dispersion. Java 3D graphics provides geometric and visual content with suitable mathematical model and behaviors of Shuttle launch. Because the model is so heterogeneous and interrelated with various factors, 3D graphics combined with physical models provides mechanisms to understand the complexity of launch and range operations. The main focus in the modeling and simulation covers orbital dynamics and range safety. Range safety areas include destruct limit lines, telemetry and tracking and population risk near range. If there is an explosion of Shuttle during launch, debris dispersion is explained. The shuttle launch and range operations in this paper are discussed based on the operations from Kennedy Space Center, Florida, USA.
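A debris-dispersion footprint of the kind described can be approximated, in the crudest case, by drag-free ballistic fall times. This Monte Carlo sketch is illustrative only and omits the drag, breakup, and atmosphere models a real range-safety tool would use:

```python
import math
import random

G = 9.81  # gravitational acceleration, m/s^2

def debris_impact_point(x0, y0, altitude, vx, vy, wind_x=0.0, wind_y=0.0):
    """Impact point of a fragment released at `altitude` (m) with
    horizontal velocity (vx, vy) m/s under drag-free ballistic fall;
    a constant wind adds a uniform drift over the fall time."""
    t_fall = math.sqrt(2.0 * altitude / G)
    return (x0 + (vx + wind_x) * t_fall,
            y0 + (vy + wind_y) * t_fall)

# Monte Carlo footprint for a hypothetical breakup at 10 km altitude,
# with fragment velocities drawn from a zero-mean Gaussian.
random.seed(1)
impacts = [debris_impact_point(0.0, 0.0, 10_000.0,
                               random.gauss(0.0, 150.0),
                               random.gauss(0.0, 150.0))
           for _ in range(1000)]
```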

  15. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  16. The GPlates Portal: Cloud-based interactive 3D and 4D visualization of global geological and geophysical data and models in a browser

    NASA Astrophysics Data System (ADS)

    Müller, Dietmar; Qin, Xiaodong; Sandwell, David; Dutkiewicz, Adriana; Williams, Simon; Flament, Nicolas; Maus, Stefan; Seton, Maria

    2017-04-01

stimulate teaching and learning and novel avenues of inquiry. This technology offers many future opportunities for providing additional functionality, especially on-the-fly big data analytics. Müller, R.D., Qin, X., Sandwell, D.T., Dutkiewicz, A., Williams, S.E., Flament, N., Maus, S. and Seton, M., 2016, The GPlates Portal: Cloud-based interactive 3D visualization of global geophysical and geological data in a web browser, PLoS ONE 11(3): e0150883. doi:10.1371/journal.pone.0150883

  17. Fabrication of 3D SiOx structures using patterned PMMA sacrificial layer

    NASA Astrophysics Data System (ADS)

    Li, Zhiqin; Xiang, Quan; Zheng, Mengjie; Bi, Kaixi; Chen, Yiqin; Chen, Keqiu; Duan, Huigao

    2018-02-01

Three-dimensional (3D) nanofabrication based on electron-beam lithography (EBL) has drawn wide attention for various applications with its high patterning resolution and design flexibility. In this work, we present a bilayer EBL process to obtain 3D freestanding SiOx structures via the release of the bottom sacrificial layer. This new kind of bilayer process enables us to define various 3D freestanding SiOx structures with high resolution and low edge roughness. As a proof of concept for applications, metal-coated freestanding SiOx microplates with an underlying air gap were fabricated to form asymmetric Fabry-Perot resonators, which can be utilized for colorimetric refractive index sensing and thus also have application potential for biochemical detection, anti-counterfeiting and smart active nano-optical devices.

  18. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
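The horizontal image translation step can be sketched directly: shift each view by half the fixated object's disparity so that the object lands at zero disparity, i.e. on the screen plane. The function name and the wrap-around edge handling below are illustrative, not the paper's implementation:

```python
import numpy as np

def retarget_to_screen_plane(left, right, object_disparity):
    """Shift a stereo pair so the fixated object lands on the screen plane
    (zero disparity): each view moves by half the object's disparity in
    opposite directions. Edges wrap here for simplicity; a real system
    would crop or fill the exposed borders instead."""
    shift = int(round(object_disparity / 2.0))
    return np.roll(left, -shift, axis=1), np.roll(right, shift, axis=1)

# Toy pair: the fixated object sits at column 10 (left) and column 2
# (right), i.e. +8 px disparity; after retargeting both views agree
# on column 6.
left = np.zeros((4, 16), dtype=np.uint8)
right = np.zeros((4, 16), dtype=np.uint8)
left[:, 10] = 255
right[:, 2] = 255
l2, r2 = retarget_to_screen_plane(left, right, 8)
```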

  19. PLOT3D/AMES, SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. PLOT3D/AMES, SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  1. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge one's ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings with respect to both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  2. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    PubMed

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes increasingly challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software suited for the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between

  3. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

    Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software suited for the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-game engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data

  4. Celestial Pole Offsets: Conversion From (dX, dY) to (d(psi), d(epsilon)). Version 3

    DTIC Science & Technology

    2005-05-01

    observed angular offset of the celestial pole from its modelled position, expressed in terms of changes in ecliptic longitude and obliquity. These...the mean obliquity of the ecliptic of date (≈ J2000.0). As the celestial pole precesses farther from the ICRS Z-axis, two effects must be accounted for...to only a few significant digits. With dX′ and dY′ in hand we compute dψ = dX′ / sin ε and dε = dY′ (8), where ε is the mean obliquity of the ecliptic
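    The quoted conversion can be sketched numerically. The following is a minimal illustration (function name and test values are mine, and the note's additional corrections for the pole's precession away from the ICRS Z-axis are omitted), assuming the standard J2000.0 mean obliquity:

```python
import math

# IAU mean obliquity of the ecliptic at J2000.0, in degrees (assumed epoch).
EPS0_DEG = 23.439291

def pole_offsets_to_ecliptic(dX, dY, eps_deg=EPS0_DEG):
    """Convert celestial pole offsets (dX', dY') to (d(psi), d(epsilon))
    via d(psi) = dX' / sin(epsilon) and d(epsilon) = dY' (equation 8).
    Whatever angular units the offsets are given in (e.g.,
    milliarcseconds) are preserved."""
    eps = math.radians(eps_deg)
    return dX / math.sin(eps), dY
```

    Because sin ε ≈ 0.398, a given dX′ maps into a roughly 2.5 times larger dψ.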

  5. Using XML/HTTP to Store, Serve and Annotate Tactical Scenarios for X3D Operational Visualization and Anti-Terrorist Training

    DTIC Science & Technology

    2003-03-01

    Excerpts from a table of XML tools and the report's back matter: PXSLServlet (Paul Tchistopolskii, open source, relational); sql2dtd (David Mertz, public domain, relational); sql2xml (Scott Hathaway, public...) ... March 2003. [Hunter 2001] Hunter, David; Cagle, Kurt; Dix, Chris; Kovack, Roger; Pinnock, Jonathan; Rafter, Jeff; Beginning XML (2nd Edition... distribution list: Naval Postgraduate School, Monterey, California; Curt Blais, Naval Postgraduate School, Monterey, California; Erik Chaum, NAVSEA Undersea

  6. 3D display considerations for rugged airborne environments

    NASA Astrophysics Data System (ADS)

    Barnidge, Tracy J.; Tchon, Joseph L.

    2015-05-01

    The KC-46 is the next generation, multi-role, aerial refueling tanker aircraft being developed by Boeing for the United States Air Force. Rockwell Collins has developed the Remote Vision System (RVS) that supports aerial refueling operations under a variety of conditions. The system utilizes large-area, high-resolution 3D displays linked with remote sensors to enhance the operator's visual acuity for precise aerial refueling control. This paper reviews the design considerations, trade-offs, and other factors related to the selection and ruggedization of the 3D display technology for this military application.

  7. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  8. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain an enormous amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). A common workaround is to crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image-processing level, e.g., the selected ROI strongly depends on the user and part of the original image information is lost. To mitigate these problems, we developed a 3D microscopy image-processing tool on a graphics processing unit (GPU). Our tool provides a variety of efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data and supports scaling, translation, etc. using a keyboard and mouse. However, even rapidly visualized 3D objects still need to be analyzed to yield information for biologists. To analyze 3D microscopic images, we need quantitative data about the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select the object to be analyzed, and our tool displays the selected object in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
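    A CPU-side sketch of the intensity-based segmentation and labeling pipeline the abstract describes (Otsu's method stands in for one of the automatic thresholding options such tools commonly offer; the function names are mine, and the GPU acceleration is not reproduced here):

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(volume, nbins=256):
    """Otsu's automatic threshold: pick the intensity that maximizes
    the between-class variance of the volume's histogram."""
    hist, edges = np.histogram(volume, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                 # cumulative class-0 weight
    m0 = np.cumsum(hist * centers)       # cumulative class-0 weighted sum
    w1 = w0[-1] - w0
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = m0 / w0
        mu1 = (m0[-1] - m0) / w1
        between = w0 * w1 * (mu0 - mu1) ** 2
    between[~np.isfinite(between)] = 0
    return centers[np.argmax(between)]

def segment_and_label(volume):
    """Threshold the volume, label connected 3D objects, and report
    per-object voxel counts (a simple quantitative feature)."""
    mask = volume > otsu_threshold(volume)
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels, sizes
```

    The per-object voxel counts are one example of the quantitative, classification-ready features the abstract mentions.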

  9. A 3D visualization of spatial relationship between geological structure and groundwater chemical profile around Iwate volcano, Japan: based on the ARCGIS 3D Analyst

    NASA Astrophysics Data System (ADS)

    Shibahara, A.; Ohwada, M.; Itoh, J.; Kazahaya, K.; Tsukamoto, H.; Takahashi, M.; Morikawa, N.; Takahashi, H.; Yasuhara, M.; Inamura, A.; Oyama, Y.

    2009-12-01

    We established a 3D geological and hydrological model around Iwate volcano to visualize the 3D relationships between subsurface structure and groundwater profiles. Iwate volcano is a typical polygenetic volcano located in NE Japan, and its body is composed of two stratovolcanoes that have experienced sector collapses several times. Because of this complex structure, groundwater flow around Iwate volcano is strongly restricted by the subsurface structure. For example, Kazahaya and Yasuhara (1999) clarified that shallow groundwaters in the north and east flanks of Iwate volcano are recharged at the mountaintop, and that these flow systems are restricted to the north and east areas because of the structure of the younger volcanic body collapse. In addition, Ohwada et al. (2006) found that these shallow groundwaters in the north and east flanks have relatively high concentrations of major chemical components and high 3He/4He ratios. In this study, we succeeded in visualizing the spatial relationship between subsurface structure and the chemical profiles of the shallow and deep groundwater systems using a 3D model on a GIS. In the study region, a number of geological and hydrological datasets, such as borehole log data and groundwater chemical profiles, have been reported. All these paper records were digitized, converted to meshed data on the GIS, and plotted in three-dimensional space to visualize their spatial distribution. We also input a digital elevation model (DEM) around Iwate volcano issued by the Geographical Survey Institute of Japan and digital geological maps issued by the Geological Survey of Japan, AIST. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer.

  10. Strategies for Effectively Visualizing a 3D Flow Using Volume Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria; Grosch, Chester

    1997-01-01

    This paper discusses strategies for effectively portraying 3D flow using volume line integral convolution. Issues include defining an appropriate input texture, clarifying the distinct identities and relative depths of the advected texture elements, and selectively highlighting regions of interest in both the input and output volumes. Apart from offering insights into the greater potential of 3D LIC as a method for effectively representing flow in a volume, a principal contribution of this work is the suggestion of a technique for generating and rendering 3D visibility-impeding 'halos' that can help to intuitively indicate the presence of depth discontinuities between contiguous elements in a projection and thereby clarify the 3D spatial organization of elements in the flow. The proposed techniques are applied to the visualization of a hot, supersonic, laminar jet exiting into a colder, subsonic coflow.
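    The core of line integral convolution - smearing a noise texture along streamlines of the flow so that streaks trace the field - can be sketched as follows (a 2D, unoptimized version for brevity; the paper's volume LIC extends the same idea to 3D and adds the visibility-impeding halos, which are not shown; all names are mine):

```python
import numpy as np

def lic_2d(vx, vy, texture, length=10, step=0.5):
    """Minimal line integral convolution: for each pixel, trace the
    streamline of (vx, vy) forward and backward with Euler steps and
    average the noise texture along it (box filter kernel, periodic
    boundaries)."""
    h, w = texture.shape
    out = np.zeros_like(texture)
    for i in range(h):
        for j in range(w):
            total, count = 0.0, 0
            for sign in (+1.0, -1.0):           # forward and backward
                x, y = float(j), float(i)
                for _ in range(length):
                    u = vx[int(y) % h, int(x) % w]
                    v = vy[int(y) % h, int(x) % w]
                    norm = np.hypot(u, v) or 1.0  # avoid divide-by-zero
                    x += sign * step * u / norm
                    y += sign * step * v / norm
                    total += texture[int(y) % h, int(x) % w]
                    count += 1
            out[i, j] = total / count
    return out
```

    Averaging along streamlines strongly correlates pixel values in the flow direction while leaving them uncorrelated across it, which is what makes the flow's structure visible.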

  11. Comparison of 32 x 128 and 32 x 32 Geiger-mode APD FPAs for single photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Itzler, Mark A.; Entwistle, Mark; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir; Zalud, Peter F.; Senko, Tom; Tower, John; Ferraro, Joseph

    2011-05-01

    We present results obtained from 3D imaging focal plane arrays (FPAs) employing planar-geometry InGaAsP/InP Geiger-mode avalanche photodiodes (GmAPDs) with high-efficiency single photon sensitivity at 1.06 μm. We report results obtained for new 32 x 128 format FPAs with 50 μm pitch and compare these results to those obtained for 32 x 32 format FPAs with 100 μm pitch. We show excellent pixel-level yield - including 100% pixel operability - for both formats. The dark count rate (DCR) and photon detection efficiency (PDE) performance is found to be similar for both types of arrays, including the fundamental DCR vs. PDE tradeoff. The optical crosstalk due to photon emission induced by pixel-level avalanche detection events is found to be qualitatively similar for both formats, with some crosstalk metrics for the 32 x 128 format found to be moderately elevated relative to the 32 x 32 FPA results. Timing jitter measurements are also reported for the 32 x 128 FPAs.

  12. Improved visualization of intracranial vessels with intraoperative coregistration of rotational digital subtraction angiography and intraoperative 3D ultrasound.

    PubMed

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although the spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS, and immediate intraoperative feedback on the vascular status. Prerequisites for understanding vascular intraoperative ultrasound are image quality and a successful match with preoperative rotational digital subtraction angiography.
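    The reported accuracy measure is the Dice coefficient between the two segmented aneurysm volumes; it is straightforward to compute (function name is mine):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) between two boolean
    volumes; 1.0 means identical masks, 0.0 means no overlap."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

    A value of 0.71, as reported above, thus means the registered ultrasound and angiography volumes share roughly 71% of their combined voxels.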

  13. 3D Perception Technologies for Surgical Operating Theatres.

    PubMed

    Beyl, T; Schreiter, L; Nicolai, P; Raczkowsky, J; Wörn, H

    2016-01-01

    3D perception technologies have been explored in various fields. This paper explores the application of such technologies to surgical operating theatres. Clinical applications can be found in workflow detection, tracking and analysis, collision avoidance with medical robots, perception of the interaction between participants in the operation, training of the operating room crew, patient calibration, and many more. In this paper a complete perception solution for the operating room is shown. The system is based on the time-of-flight (ToF) technology integrated into the Microsoft Kinect One and implements a multi-camera approach. Special emphasis is put on the tracking of the personnel and the evaluation of the system's performance and accuracy.

  14. [Computerized monitoring system in the operating center with UNIX and X-window].

    PubMed

    Tanaka, Y; Hashimoto, S; Chihara, E; Kinoshita, T; Hirose, M; Nakagawa, M; Murakami, T

    1992-01-01

    We previously reported a fully automated data-logging system in the operating center. Here, we revised the system using a highly integrated operating system, UNIX, instead of OS/9. With this multi-task, multi-window (X-window) system, we could monitor all 12 rooms in the operating center at a time. The system in the operating center consists of 2 computers, a SONY NEWS1450 (UNIX workstation) and a Sord M223 (CP/M, data logger). On the bitmapped display of the workstation, using X-window, the data of all the operating rooms can be visualized. Furthermore, 2 other minicomputers (a Fujitsu A50 in the conference room and an A60 in the ICU) and a workstation (a Sun3-80 in the ICU) were connected via Ethernet. With the remote login and network file system (NFS) functions, we could easily obtain the data during an operation from outside the operating center. This system works automatically and needs no routine maintenance.

  15. An Integrated Web-Based 3d Modeling and Visualization Platform to Support Sustainable Cities

    NASA Astrophysics Data System (ADS)

    Amirebrahimi, S.; Rajabifard, A.

    2012-07-01

    Sustainable development is seen as the key to preserving the sustainability of cities in the face of ongoing population growth and its negative impacts. This is complex and requires holistic, multidisciplinary decision making. A variety of stakeholders with different backgrounds also needs to be considered and involved. Numerous web-based modeling and visualization tools have been designed and developed to support this process. There have been some success stories; however, the majority failed to provide a comprehensive platform supporting the different aspects of sustainable development. In this work, in the context of SDI and Land Administration, the CSDILA Platform - a 3D visualization and modeling platform - was proposed, which can be used to model and visualize different dimensions to facilitate the achievement of sustainability, in particular in the urban context. The methodology involved the design of a generic framework for the development of an analytical and visualization tool over the web. The CSDILA Platform was then implemented with a number of technologies based on the guidelines provided by the framework. The platform has a modular structure and uses a Service-Oriented Architecture (SOA). It is capable of managing spatial objects in a 4D data store and can flexibly incorporate a variety of developed models using the platform's API. Development scenarios can be modeled and tested using the analysis and modeling component of the platform, and the results are visualized in a seamless 3D environment. The platform was further tested using a number of scenarios and showed promising results and the potential to serve a wider need. In this paper, the design process of the generic framework, the implementation of the CSDILA Platform and the technologies used, as well as findings and future research directions, will be presented and discussed.

  16. NASA's Hubble Universe in 3-D

    NASA Image and Video Library

    2017-12-08

    This image depicts a vast canyon of dust and gas in the Orion Nebula from a 3-D computer model based on observations by NASA's Hubble Space Telescope and created by science visualization specialists at the Space Telescope Science Institute (STScI) in Baltimore, Md. A 3-D visualization of this model takes viewers on an amazing four-minute voyage through the 15-light-year-wide canyon. Credit: NASA, G. Bacon, L. Frattare, Z. Levay, and F. Summers (STScI/AURA) Go here to learn more about Hubble 3D: www.nasa.gov/topics/universe/features/hubble_imax_premier... or www.imax.com/hubble/ Take an exhilarating ride through the Orion Nebula, a vast star-making factory 1,500 light-years away. Swoop through Orion's giant canyon of gas and dust. Fly past behemoth stars whose brilliant light illuminates and energizes the entire cloudy region. Zoom by dusty tadpole-shaped objects that are fledgling solar systems. This virtual space journey isn't the latest video game but one of several groundbreaking astronomy visualizations created by specialists at the Space Telescope Science Institute (STScI) in Baltimore, the science operations center for NASA's Hubble Space Telescope. The cinematic space odysseys are part of the new Imax film "Hubble 3D," which opens today at select Imax theaters worldwide. The 43-minute movie chronicles the 20-year life of Hubble and includes highlights from the May 2009 servicing mission to the Earth-orbiting observatory, with footage taken by the astronauts. The giant-screen film showcases some of Hubble's breathtaking iconic pictures, such as the Eagle Nebula's "Pillars of Creation," as well as stunning views taken by the newly installed Wide Field Camera 3. While Hubble pictures of celestial objects are awe-inspiring, they are flat 2-D photographs. For this film, those 2-D images have been converted into 3-D environments, giving the audience the impression they are space travelers taking a tour of Hubble's most popular targets. "A large-format movie is a

  17. Modeling and Measurement of 3D Deformation of Scoliotic Spine Using 2D X-ray Images

    NASA Astrophysics Data System (ADS)

    Li, Hao; Leow, Wee Kheng; Huang, Chao-Hui; Howe, Tet Sen

    Scoliosis causes deformations such as twisting and lateral bending of the spine. To correct scoliotic deformation, the extent of 3D spinal deformation needs to be measured. This paper studies the modeling and measurement of the scoliotic spine based on a 3D curve model. By modeling the spine as a 3D Cosserat rod, the 3D structure of a scoliotic spine can be recovered by finding the minimum-potential-energy registration of the rod to the scoliotic spine in the X-ray image. Test results show that it is possible to obtain an accurate 3D reconstruction using only the landmarks in a single view, provided that appropriate boundary conditions and elastic properties are included as constraints.

  18. 3D synchrotron x-ray microtomography of paint samples

    NASA Astrophysics Data System (ADS)

    Ferreira, Ester S. B.; Boon, Jaap J.; van der Horst, Jerre; Scherrer, Nadim C.; Marone, Federica; Stampanoni, Marco

    2009-07-01

    Synchrotron-based X-ray microtomography is a novel way to examine paint samples. The three-dimensional distribution of pigment particles, binding media and their deterioration products, as well as other features such as voids, are made visible in their original context through a computing environment, without the need for physical sectioning. This avoids manipulation-related artefacts. Experiments on paint chips (approximately 500 microns wide) were done on the TOMCAT beamline (TOmographic Microscopy and Coherent rAdiology experimenTs) at the Paul Scherrer Institute in Villigen, Switzerland, using an X-ray energy of up to 40 keV. The X-ray absorption images are obtained at a resolution of 350 nm. The 3D dataset was analysed using the commercial 3D imaging software Avizo 5.1. Through this process, virtual sections of the paint sample can be obtained in any orientation. One of the topics currently under research is the ground layers of paintings by Cuno Amiet (1868-1961), one of the most important Swiss painters of classical modernism, whose early work is currently the focus of research at the Swiss Institute for Art Research (SIK-ISEA). This technique gives access to information such as sample surface morphology, porosity, particle size distribution and even particle identification. In the case of calcium carbonate grounds, for example, features like microfossils present in natural chalks can be reconstructed and their species identified, thus potentially providing information on the mineral origin. One further elegant feature of this technique is that a target section can be selected within the 3D dataset before exposing it to obtain chemical data. Virtual sections can then be compared with cross sections of the same samples made in the traditional way.

  19. Applying 3D-printing technology in planning operations of cancer patients

    NASA Astrophysics Data System (ADS)

    Kashapov, L. N.; Rudyk, A. N.; Kashapov, R. N.

    2014-12-01

    The purpose of this work was to create a 3D model of the front part of the patient's skull and to evaluate the effectiveness of its use in planning the operation. To achieve this goal, an operation was chosen to remove a tumor of the right eyelid growing into the zygomatic bone. 3D printing was performed on different peripheral devices using the method of layer-by-layer creation of physical objects from a digital 3D model, as well as a reconstruction model of the skull with the entire right zygomatic bone for fixing a titanium frame to it to maintain the eyeball in a fixed state.

  20. Scoops3D: software to analyze 3D slope stability throughout a digital landscape

    USGS Publications Warehouse

    Reid, Mark E.; Christian, Sarah B.; Brien, Dianne L.; Henderson, Scott T.

    2015-01-01

    The computer program, Scoops3D, evaluates slope stability throughout a digital landscape represented by a digital elevation model (DEM). The program uses a three-dimensional (3D) method of columns approach to assess the stability of many (typically millions) potential landslides within a user-defined size range. For each potential landslide (or failure), Scoops3D assesses the stability of a rotational, spherical slip surface encompassing many DEM cells using a 3D version of either Bishop’s simplified method or the Ordinary (Fellenius) method of limit-equilibrium analysis. Scoops3D has several options for the user to systematically and efficiently search throughout an entire DEM, thereby incorporating the effects of complex surface topography. In a thorough search, each DEM cell is included in multiple potential failures, and Scoops3D records the lowest stability (factor of safety) for each DEM cell, as well as the size (volume or area) associated with each of these potential landslides. It also determines the least-stable potential failure for the entire DEM. The user has a variety of options for building a 3D domain, including layers or full 3D distributions of strength and pore-water pressures, simplistic earthquake loading, and unsaturated suction conditions. Results from Scoops3D can be readily incorporated into a geographic information system (GIS) or other visualization software. This manual includes information on the theoretical basis for the slope-stability analysis, requirements for constructing and searching a 3D domain, a detailed operational guide (including step-by-step instructions for using the graphical user interface [GUI] software, Scoops3D-i) and input/output file specifications, practical considerations for conducting an analysis, results of verification tests, and multiple examples illustrating the capabilities of Scoops3D. Easy-to-use software installation packages are available for the Windows or Macintosh operating systems; these packages
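    As an illustration of the limit-equilibrium core, here is a minimal sketch of the Ordinary (Fellenius) method for a single slip surface, in the classic 2D method-of-slices form (Scoops3D itself sums over 3D columns; the function name and slice representation are mine):

```python
import math

def fellenius_factor_of_safety(slices, cohesion, phi_deg):
    """Ordinary (Fellenius) method for a circular slip surface.
    Each slice is (W, alpha, l, u): weight per unit length, base
    inclination in radians, base length, and pore pressure at the base.
    Factor of safety = sum of resisting moments / sum of driving
    moments: FoS = Σ[c·l + (W·cosα − u·l)·tanφ] / Σ[W·sinα]."""
    phi = math.radians(phi_deg)
    resisting = driving = 0.0
    for W, alpha, l, u in slices:
        normal = W * math.cos(alpha) - u * l   # effective normal force
        resisting += cohesion * l + max(normal, 0.0) * math.tan(phi)
        driving += W * math.sin(alpha)
    return resisting / driving
```

    Bishop's simplified method, the other option named above, differs in that the normal-force term depends on the factor of safety itself and must be solved iteratively.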

  1. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) made up 54.8% of the total sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies that include examination of clinical signs in viewers are needed to provide conclusive evidence on the effects of 3D vision on spectators.

  2. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score > 15) made up 54.8% of the total sample after the 3D movie, compared to 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared to an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable pointed out the significant effects of exposure to the 3D movie and of a history of car sickness and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies that include examination of clinical signs in viewers are needed to provide conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  3. Mapping detailed 3D information onto high resolution SAR signatures

    NASA Astrophysics Data System (ADS)

    Anglberger, H.; Speck, R.

    2017-05-01

    Due to challenges in the visual interpretation of radar signatures and in the subsequent information extraction, a fusion with other data sources can be beneficial. The most accurate basis for a fusion of any kind of remote sensing data is the mapping of the acquired 2D image space onto the true 3D geometry of the scenery. In the case of radar images this is a challenging task, because the coordinate system is based on the measured range, which causes ambiguous regions due to layover effects. This paper describes a method that accurately maps the detailed 3D information of a scene to the slant-range-based coordinate system of imaging radars. Through this mapping, all the contributing geometrical parts of one resolution cell can be determined in 3D space. The proposed method is highly efficient, because computationally expensive operations can be performed directly on graphics hardware. The described approach builds an ideal basis for sophisticated methods to extract data from multiple complementary sensors, such as radar and optical images, especially because true 3D information for whole cities will be available in the near future. The performance of the developed methods is demonstrated with high-resolution radar data acquired by the spaceborne SAR sensor TerraSAR-X.
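    The geometric heart of such a mapping - assigning each 3D scene point an along-track (azimuth) and slant-range coordinate, so that layover shows up as distinct points sharing the same image cell - can be sketched as follows (a straight, linear sensor track is assumed, and all names are mine; the paper's GPU-based resolution-cell bookkeeping is not reproduced):

```python
import numpy as np

def project_to_slant_range(points, sensor_pos, flight_dir):
    """Map 3D scene points into slant-range radar geometry: azimuth is
    the along-track component of the sensor-to-point vector, slant
    range its distance from the sensor track. Points with equal
    (azimuth, slant_range) fall into the same resolution cell (layover)."""
    d = np.asarray(flight_dir, float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(points, float) - np.asarray(sensor_pos, float)
    azimuth = rel @ d                        # along-track coordinate
    across = rel - np.outer(azimuth, d)      # across-track component
    slant_range = np.linalg.norm(across, axis=1)
    return azimuth, slant_range
```

    In the layover example below, a point on the ground and a point on a tall structure map to the identical (azimuth, range) cell, which is exactly the ambiguity the paper's 3D mapping resolves.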

  4. In Situ 3D Coherent X-ray Diffraction Imaging of Shock Experiments: Possible?

    NASA Astrophysics Data System (ADS)

    Barber, John

    2011-03-01

    In traditional coherent X-ray diffraction imaging (CXDI), a 2D or quasi-2D object is illuminated by a beam of coherent X-rays to produce a diffraction pattern, which is then manipulated via a process known as iterative phase retrieval to reconstruct an image of the original 2D sample. Recently, there have been dramatic advances in methods for performing fully 3D CXDI of a sample from a single diffraction pattern [Raines et al., Nature 463, 214-217 (2010)], and these methods have been used to image samples tens of microns in size using soft X-rays. In this work, I explore the theoretical possibility of applying 3D CXDI techniques to the in situ imaging of the interaction between a shock front and a polycrystal, a far more stringent problem. A delicate trade-off is required between photon energy, spot size, imaging resolution, and the dimensions of the experimental setup. In this talk, I will outline the experimental and computational requirements for performing such an experiment, and I will present images and movies from simulations of one such hypothetical experiment, including both the time-resolved X-ray diffraction patterns and the time-resolved sample imagery.
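    Iterative phase retrieval itself can be illustrated with a minimal error-reduction loop (a Gerchberg-Saxton/Fienup-style sketch alternating a Fourier-magnitude constraint with a real-space support and positivity constraint; names are mine, and the single-shot 3D reconstruction of Raines et al. involves considerably more than this):

```python
import numpy as np

def error_reduction(magnitudes, support, n_iter=200, seed=0):
    """Error-reduction phase retrieval: starting from random phases,
    alternately enforce the measured Fourier magnitudes and a
    real-space support-plus-positivity constraint."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(magnitudes.shape))
    g = np.fft.ifftn(magnitudes * phase).real
    for _ in range(n_iter):
        G = np.fft.fftn(g)
        G = magnitudes * np.exp(1j * np.angle(G))   # Fourier constraint
        g = np.fft.ifftn(G).real
        g = np.where(support & (g > 0), g, 0.0)     # support + positivity
    return g
```

    Each half-step is a projection onto one constraint set, so the Fourier-magnitude error is non-increasing; in practice, plain error reduction stagnates and is combined with variants such as Fienup's hybrid input-output.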

  5. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  6. Role of Interaction in Enhancing the Epistemic Utility of 3D Mathematical Visualizations

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2010-01-01

    Many epistemic activities, such as spatial reasoning, sense-making, problem solving, and learning, are information-based. In the context of epistemic activities involving mathematical information, learners often use interactive 3D mathematical visualizations (MVs). However, performing such activities is not always easy. Although it is generally…

  7. Localizing Protein in 3D Neural Stem Cell Culture: a Hybrid Visualization Methodology

    PubMed Central

    Fai, Stephen; Bennett, Steffany A.L.

    2010-01-01

    The importance of 3-dimensional (3D) topography in influencing neural stem and progenitor cell (NPC) phenotype is widely acknowledged yet challenging to study. When dissociated from embryonic or post-natal brain, single NPCs will proliferate in suspension to form neurospheres. Daughter cells within these cultures spontaneously adopt distinct developmental lineages (neurons, oligodendrocytes, and astrocytes) over the course of expansion despite being exposed to the same extracellular milieu. This progression recapitulates many of the stages observed over the course of neurogenesis and gliogenesis in post-natal brain and is often used to study basic NPC biology within a controlled environment. Assessing the full impact of 3D topography and cellular positioning within these cultures on NPC fate is, however, difficult. To localize target proteins and identify NPC lineages by immunocytochemistry, free-floating neurospheres must be plated on a substrate or serially sectioned. This processing is required to ensure equivalent cell permeabilization and antibody access throughout the sphere. As a result, 2D epifluorescent images of cryosections or confocal reconstructions of 3D Z-stacks can only provide spatial information about cell position within discrete physical or digital 3D slices and do not visualize cellular position in the intact sphere. Here, to recreate the topography of the neurosphere culture and permit spatial analysis of protein expression throughout the entire culture, we present a protocol for isolation, expansion, and serial sectioning of post-natal hippocampal neurospheres suitable for epifluorescent or confocal immunodetection of target proteins. Connexin29 (Cx29) is analyzed as an example. Next, using a hybrid of graphic-editing and 3D-modelling software rigorously applied to maintain biological detail, we describe how to re-assemble the 3D structural positioning of these images and digitally map labelled cells within the complete neurosphere.

  8. Facilitating role of 3D multimodal visualization and learning rehearsal in memory recall.

    PubMed

    Do, Phuong T; Moreland, John R

    2014-04-01

    The present study investigated the influence of 3D multimodal visualization and learning rehearsal on memory recall. Participants (N = 175 college students ranging from 21 to 25 years) were assigned to different training conditions and rehearsal processes to learn a list of 14 terms associated with construction of a wood-frame house. They then completed a memory test determining their cognitive ability to free recall the definitions of the 14 studied terms immediately after training and rehearsal. The audiovisual modality training condition was associated with the highest accuracy, and the visual- and auditory-modality conditions with lower accuracy rates. The no-training condition indicated little learning acquisition. A statistically significant increase in performance accuracy for the audiovisual condition as a function of rehearsal suggested the relative importance of rehearsal strategies in 3D observational learning. Findings revealed the potential application of integrating virtual reality and cognitive sciences to enhance learning and teaching effectiveness.

  9. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  10. Audio-Visual Perception of 3D Cinematography: An fMRI Study Using Condition-Based and Computation-Based Analyses

    PubMed Central

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using standard “condition-based” designs as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie, both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and with the complexity of auditory multi-source signals. The blocked analyses associated 3D viewing with activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli.

  12. Trans3D: a free tool for dynamical visualization of EEG activity transmission in the brain.

    PubMed

    Blinowski, Grzegorz; Kamiński, Maciej; Wawer, Dariusz

    2014-08-01

    The problem of functional connectivity in the brain is currently a focus of attention, since it is crucial for understanding information processing in the brain. A large repertoire of connectivity measures has been devised, some of them capable of estimating time-varying directed connectivity. Hence, there is a need for a dedicated software tool for visualizing the propagation of electrical activity in the brain. To this end, the Trans3D application was developed. It is an open-access tool based on widely available libraries, supporting Windows XP/Vista/7(™), Linux and Mac environments. Trans3D can create animations of activity propagation between electrodes/sensors, which can be placed by the user on the scalp/cortex of a 3D model of the head. Various interactive graphic functions for manipulating and visualizing components of the 3D model and input data are available. Application of the Trans3D tool has helped to elucidate the dynamics of information processing in motor and cognitive tasks, which otherwise would have been very difficult to observe. Trans3D is available at: http://www.eeg.pl/. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Introduction: Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution three-dimensional intraoperative ultrasound (3D-ioUS) imaging system during aneurysm clipping, using rotational digital subtraction angiography (rDSA) as a reference. Methods: We analyzed 3D-ioUS in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative rDSA data with 3D-ioUS was performed to verify the anatomical assignment. Results: Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rDSA and successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%), despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume yielded a mean accuracy of 0.71 (Dice coefficient). Conclusions: Intraoperative coregistration of 3D-ioUS data with preoperative rDSA is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although its spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative
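The Dice coefficient used above as the overlap accuracy measure between the two aneurysm volumes can be sketched directly; the two toy binary volumes here are illustrative, not patient data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity of two boolean volumes: 2|A∩B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

vol_a = np.zeros((10, 10, 10), bool); vol_a[2:6, 2:6, 2:6] = True   # 64 voxels
vol_b = np.zeros((10, 10, 10), bool); vol_b[3:7, 2:6, 2:6] = True   # shifted by 1
# Intersection is 3*4*4 = 48 voxels, so Dice = 2*48 / (64 + 64) = 0.75
score = dice(vol_a, vol_b)
```

A Dice value of 1 means perfect overlap and 0 means disjoint volumes, so the reported 0.71 indicates substantial but imperfect agreement between the ultrasound and angiography segmentations.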

  14. Unimpeded permeation of water through biocidal graphene oxide sheets anchored on to 3D porous polyolefinic membranes

    NASA Astrophysics Data System (ADS)

    Mural, Prasanna Kumar S.; Jain, Shubham; Kumar, Sachin; Madras, Giridhar; Bose, Suryasarathi

    2016-04-01

    3D porous membranes were developed by etching one of the phases (here PEO, polyethylene oxide) from melt-mixed PE/PEO binary blends. Herein, we have systematically discussed the development of these membranes using X-ray micro-computed tomography. The 3D tomograms of the extruded strands and hot-pressed samples revealed a clear picture of how the morphology develops and coarsens as a function of time during post-processing operations like compression molding. The coarsening of PE/PEO blends was traced using X-ray micro-computed tomography and scanning electron microscopy (SEM) of blends annealed for different times. X-ray micro-computed tomography further shows that, with the addition of a compatibilizer (here lightly maleated PE), a stable morphology results that can be visualized in 3D. In order to anchor biocidal graphene oxide (GO) sheets onto these 3D porous membranes, the PE membranes were chemically modified with acid/ethylene diamine treatment; the anchoring of the GO sheets was confirmed by Fourier transform infrared spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS) and surface Raman mapping. The transport properties through the membrane clearly reveal unimpeded permeation of water, which suggests that anchoring GO onto the membranes does not clog the pores. Antibacterial studies through the direct contact of bacteria with GO-anchored PE membranes resulted in 99% bacterial inactivation. The possible bacterial inactivation through physical disruption of the bacterial cell wall and/or reactive oxygen species (ROS) is discussed herein. Thus this study opens new avenues in designing polyolefin-based antibacterial 3D porous membranes for water purification.

  15. The terminal velocity of volcanic particles with shape obtained from 3D X-ray microtomography

    NASA Astrophysics Data System (ADS)

    Dioguardi, Fabio; Mele, Daniela; Dellino, Pierfrancesco; Dürig, Tobias

    2017-01-01

    New experiments on falling volcanic particles were performed in order to define terminal velocity models applicable over a wide range of Reynolds number Re. Experiments were carried out with fluids of various viscosities and with particles that cover a wide range of size, density and shape. Particle shape, which strongly influences fluid drag, was measured in 3D by high-resolution X-ray microtomography, from which the sphericity Φ3D and fractal dimension D3D were obtained. These are easier to measure and less operator-dependent than the 2D shape parameters used in previous papers. Drag laws that make use of the new 3D parameters were obtained by fitting particle data to the experiments, and single-equation terminal velocity models were derived. They work well both at high and low Re (3 × 10⁻² < Re < 10⁴), while earlier formulations made use of different equations in different ranges of Re. The new drag laws are well suited for the modelling of particle transportation both in the eruptive column, where coarse and fine particles are present, and in the distal part of the umbrella region, where fine ash is involved in the large-scale domains of atmospheric circulation. A table of typical values of Φ3D and D3D for particles from known plinian, subplinian and ash plume eruptions is presented. Graphs of terminal velocity as a function of grain size are finally proposed as tools to help volcanologists and atmosphere scientists model particle transportation in explosive eruptions.
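The paper's drag laws depend on the 3D shape parameters Φ3D and D3D, which are not reproduced here. As a hedged illustration of the underlying calculation, the sketch below computes a terminal fall velocity by the generic fixed-point scheme that a single-equation model replaces, using the standard Schiller-Naumann sphere drag correlation (valid up to Re of roughly 1000) as a stand-in drag law; all fluid and particle properties are illustrative.

```python
import math

def terminal_velocity(d, rho_p, rho_f=1.2, mu=1.8e-5, g=9.81, tol=1e-10):
    """Terminal fall velocity (m/s) of a sphere of diameter d (m) and
    density rho_p (kg/m^3) in a fluid (defaults approximate air)."""
    v = 1.0  # initial guess (m/s)
    for _ in range(200):
        Re = max(rho_f * v * d / mu, 1e-12)
        cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687)   # Schiller-Naumann drag
        # Force balance: drag = submerged weight.
        v_new = math.sqrt(4.0 * g * d * (rho_p - rho_f) / (3.0 * cd * rho_f))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

v = terminal_velocity(d=1e-4, rho_p=2500.0)   # ~100 µm ash-like particle in air
```

A single-equation model of the kind derived in the paper removes the need for this iteration (and for switching correlations across Re ranges) by expressing v directly in terms of particle size, density, and shape.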

  16. AstroCloud: An Agile platform for data visualization and specific analyses in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Molina, F. Z.; Salgado, R.; Bergel, A.; Infante, A.

    2017-07-01

    Nowadays, astronomers commonly run their own tools, or distributed computational packages, for data analysis and then visualize the results with generic applications. This chain of processes comes at a high cost: (a) analyses are applied manually and are therefore difficult to automate, and (b) data have to be serialized, increasing the cost of parsing and saving intermediary data. We are developing AstroCloud, an agile multipurpose visualization platform intended for specific analyses of astronomical images (https://astrocloudy.wordpress.com). The platform incorporates domain-specific languages, which make it easily extensible. AstroCloud supports customized plug-ins, which reduce the time spent on data analysis. It also supports 2D and 3D rendering, including interactive features in real time. AstroCloud is under development; we are currently implementing different options for data reduction and physical analyses.

  17. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.

    2009-04-01

    Anatolia has always been more than a point of transit, a bridge between West and East; it has been a home for ideas moving from all directions. So it is that in the Roman and post-Roman periods the role of Anatolia in general, and of Ancyra (the Roman name of Ankara) in particular, was of the greatest importance. The visible archaeological remains of the Roman period in Ankara today are the Roman Bath, the Gymnasium, the Temple of Augustus and Rome, the Street, the Theatre, and the City Defence Wall. Caesar Augustus, the first Roman Emperor, conquered Asia Minor in 25 BC. A marble temple was then built in Ancyra, the administrative capital of the province and today the capital of the Turkish Republic, Ankara. This monument was consecrated to the Emperor and to the Goddess Rome, and is supposed to have been built over an earlier temple dedicated to Kybele and Men between 25 and 20 BC. After the death of Augustus in AD 14, a copy of the text of "Res Gestae Divi Augusti" was inscribed on the interior of the pronaos in Latin, whereas a Greek translation is also present on an exterior wall of the cella. In the 5th century the temple was converted into a church by the Byzantines. The aim of this study is to locate old buried archaeological remains in the Temple of Augustus, the Roman Bath, and the governorship agora in the Ulus district. These remains were imaged with transparent three-dimensional (3D) visualization of ground penetrating radar (GPR) data. Parallel two-dimensional (2D) GPR profiles were acquired in the study areas, and a 3D data volume was then built from the parallel 2D profiles. A simplified amplitude-colour range and an appropriate opacity function were constructed, and transparent 3D images were obtained to activate buried
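The transparency idea described above can be sketched as an opacity transfer function: low-amplitude GPR voxels are kept fully transparent so that only strong reflections (candidate remains) are rendered. The threshold and maximum opacity below are assumptions for illustration, not the study's values.

```python
import numpy as np

def opacity_transfer(volume, threshold=0.6, max_alpha=0.9):
    """Map normalized |amplitude| to per-voxel opacity: 0 below the
    threshold, ramping linearly to max_alpha at the maximum amplitude."""
    amp = np.abs(volume)
    amp = amp / amp.max()
    alpha = np.where(amp < threshold, 0.0,
                     max_alpha * (amp - threshold) / (1.0 - threshold))
    return alpha

# Toy 3D GPR amplitude volume; a volume renderer would pair this alpha
# array with an amplitude-colour table to produce the transparent image.
vol = np.random.default_rng(1).normal(size=(8, 8, 8))
alpha = opacity_transfer(vol)
```

Most voxels receive zero opacity, which is what makes the strong, spatially coherent reflections of buried structures stand out in the rendered cube.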

  18. Evaluation of a breast software model for 2D and 3D X-ray imaging studies of the breast.

    PubMed

    Baneva, Yanka; Bliznakova, Kristina; Cockmartin, Lesley; Marinov, Stoyko; Buliev, Ivan; Mettivier, Giovanni; Bosmans, Hilde; Russo, Paolo; Marshall, Nicholas; Bliznakov, Zhivko

    2017-09-01

    In X-ray imaging, test objects reproducing breast anatomy characteristics are used to optimize image processing and reconstruction, lesion detection performance, image quality and radiation-induced detriment. Recently, a physical phantom with a structured background has been introduced for both 2D mammography and breast tomosynthesis. A software version of this phantom and a few related versions are now available, and a comparison between these 3D software phantoms and the physical phantom is presented. The software breast phantom simulates a semi-cylindrical container filled with spherical beads of different diameters. Four computational breast phantoms were generated with a dedicated software application; for two of these, physical phantoms are also available and are used for the side-by-side comparison. Planar projections in mammography and tomosynthesis were simulated under identical incident air kerma conditions. Tomosynthesis slices were reconstructed with in-house developed reconstruction software. In addition to a visual comparison, parameters such as fractal dimension, power-law exponent β and second-order statistics (skewness, kurtosis) of planar projections and tomosynthesis reconstructed images were compared. Visually, an excellent agreement between simulated and real planar and tomosynthesis images is observed. The comparison also shows an overall very good agreement between parameters evaluated from simulated and experimental images. The computational breast phantoms showed a close match with their physical versions. The detailed mathematical analysis of the images confirms the agreement between real and simulated 2D mammography and tomosynthesis images. The software phantom is ready for optimization purposes and for extrapolation to other breast imaging techniques. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
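One of the texture measures compared above, the power-law exponent β of the radially averaged power spectrum P(f) ~ 1/f^β, can be estimated by a log-log least-squares fit. This sketch uses a synthetic 1/f-amplitude noise field (so β should come out near 2); it is an illustration of the measure, not the study's analysis pipeline.

```python
import numpy as np

def power_law_beta(img):
    """Estimate beta from the radially averaged 2D power spectrum."""
    n = img.shape[0]
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img)))**2
    fy, fx = np.indices(psd.shape) - n // 2
    radius = np.hypot(fx, fy).astype(int)
    counts = np.bincount(radius.ravel())
    prof = np.bincount(radius.ravel(), weights=psd.ravel()) / np.maximum(counts, 1)
    f = np.arange(1, n // 2)                     # skip the DC bin
    slope, _ = np.polyfit(np.log(f), np.log(prof[1:n // 2]), 1)
    return -slope

# Synthetic power-law texture: white noise shaped by 1/f in Fourier space.
n = 128
rng = np.random.default_rng(2)
fy, fx = np.indices((n, n)) - n // 2
f = np.maximum(np.hypot(fx, fy), 1.0)
spec = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / f
img = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
beta = power_law_beta(img)
```

For mammographic backgrounds, β is typically reported around 3; agreement of β between the simulated and physical phantom images is one of the quantitative checks the study performs.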

  19. Development of 3D ultrasound needle guidance for high-dose-rate interstitial brachytherapy of gynaecological cancers

    NASA Astrophysics Data System (ADS)

    Rodgers, J.; Tessier, D.; D'Souza, D.; Leung, E.; Hajdok, G.; Fenster, A.

    2016-04-01

    High-dose-rate (HDR) interstitial brachytherapy is often included in standard-of-care for gynaecological cancers. Needles are currently inserted through a perineal template without any standard real-time imaging modality to assist needle guidance, causing physicians to rely on pre-operative imaging, clinical examination, and experience. While two-dimensional (2D) ultrasound (US) is sometimes used for real-time guidance, visualization of needle placement and depth is difficult and subject to variability and inaccuracy in 2D images. The close proximity to critical organs, in particular the rectum and bladder, can lead to serious complications. We have developed a three-dimensional (3D) transrectal US system and are investigating its use for intra-operative visualization of needle positions in HDR gynaecological brachytherapy. As a proof-of-concept, four patients were imaged with post-insertion 3D US and x-ray CT. Using software developed in our laboratory, manual rigid registration of the two modalities was performed based on the perineal template's vaginal cylinder. The needle tip and a second point along the needle path were identified for each needle visible in US. The difference between modalities in needle trajectory and needle tip position was calculated for each identified needle. For the 60 needles placed, the mean trajectory difference was 3.23 ± 1.65° across the 53 visible needle paths, and the mean difference in needle tip position was 3.89 ± 1.92 mm across the 48 visible needle tips. Based on these preliminary results, 3D transrectal US shows potential for the development of a 3D US-based needle guidance system for interstitial gynaecological brachytherapy.
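The two comparison metrics reported above reduce to simple vector geometry: the angle between a needle's US- and CT-identified direction vectors, and the Euclidean distance between the identified tip positions. A sketch with illustrative coordinates (in mm):

```python
import numpy as np

def trajectory_angle_deg(tip_a, shaft_a, tip_b, shaft_b):
    """Angle (degrees) between two needle direction vectors, each defined
    by a tip point and a second point along the needle path."""
    u = np.asarray(shaft_a, float) - np.asarray(tip_a, float)
    v = np.asarray(shaft_b, float) - np.asarray(tip_b, float)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

tip_us, shaft_us = [0.0, 0.0, 0.0], [0.0, 0.0, 50.0]   # US-identified needle
tip_ct, shaft_ct = [1.0, 0.0, 2.0], [0.0, 5.0, 52.0]   # CT-identified needle
angle = trajectory_angle_deg(tip_us, shaft_us, tip_ct, shaft_ct)
tip_diff = np.linalg.norm(np.subtract(tip_us, tip_ct))  # tip difference in mm
```

Averaging these two quantities over all identified needles gives exactly the mean trajectory difference and mean tip-position difference quoted in the abstract.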

  20. 3D-printed coded apertures for x-ray backscatter radiography

    NASA Astrophysics Data System (ADS)

    Muñoz, André A. M.; Vella, Anna; Healy, Matthew J. F.; Lane, David W.; Jupp, Ian; Lockley, David

    2017-09-01

    Many different mask patterns can be used for X-ray backscatter imaging with coded apertures, which finds application in the medical, industrial and security sectors. While some of these patterns may be considered to have a self-supporting structure, this is not the case for some of the most frequently used patterns, such as uniformly redundant arrays or any pattern with a high open fraction. This makes mask construction difficult and usually requires a compromise in its design, by drilling holes or adopting a no-two-holes-touching version of the original pattern. In this study, this compromise was avoided by 3D printing a support structure that was then filled with a radiopaque material to create the completed mask. The coded masks were manufactured using two different methods, hot casting and cold casting. Hot casting involved casting a bismuth alloy at 80°C into a 3D-printed acrylonitrile butadiene styrene mould, which produced an absorber with a density of 8.6 g cm⁻³. Cold casting was undertaken at room temperature, with a tungsten/epoxy composite cast into a 3D-printed polylactic acid mould. The cold-cast procedure offered a greater density of around 9.6 to 10 g cm⁻³ and consequently greater X-ray attenuation. It was also found to be much easier to manufacture and more cost-effective. A critical review of the manufacturing procedure is presented along with some typical images. In both cases the 3D printing process allowed square apertures to be created, avoiding their approximation by circular holes when conventional drilling is used.
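The uniformly redundant arrays mentioned above are typically generated from number theory. As an illustration, the following sketch builds a MURA (modified uniformly redundant array) pattern from quadratic residues modulo a prime p, with 1 marking an open aperture and 0 an opaque cell; whether the paper's masks used this exact family is not stated, so this is a representative example only.

```python
import numpy as np

def mura(p):
    """MURA pattern of prime order p, with p = 4m + 1 (e.g. 5, 13, 17, 29)."""
    qr = {(i * i) % p for i in range(1, p)}          # quadratic residues mod p
    c = np.array([1 if i in qr else -1 for i in range(p)])
    a = np.zeros((p, p), dtype=int)
    for i in range(p):
        for j in range(p):
            if i == 0:
                a[i, j] = 0                          # first row closed
            elif j == 0:
                a[i, j] = 1                          # first column open
            else:
                a[i, j] = 1 if c[i] * c[j] == 1 else 0
    return a

mask = mura(13)
open_fraction = mask.mean()   # close to 1/2, i.e. a high open fraction
```

The roughly 50% open fraction is exactly why such patterns are not self-supporting, motivating the 3D-printed support structure described in the abstract.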

  1. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  2. Hybridized orbital states in spin-orbit coupled 3d-5d double perovskites studied by x-ray absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Lee, Min-Cheol; Lee, Sanghyun; Won, C. J.; Lee, K. D.; Hur, N.; Chen, Jeng-Lung; Cho, Deok-Yong; Noh, T. W.

    2018-03-01

    We investigated the orbital hybridization mechanism in the 3d-5d double perovskites (DPs) La2CoIrO6 and La2CoPtO6 using x-ray absorption spectroscopy. The O K-edge and Co K-edge x-ray absorption spectra clearly show that the Co 3d orbitals hybridize not only with the half-filled Ir/Pt j_eff states but also with the fully empty (unpolarized) Ir/Pt e_g states in both DPs. The Co 3d e_g-Ir 5d e_g hybridization cannot contribute to the ferrimagnetic long-range order in La2CoIrO6, which is established by spin-selective Co 3d t_2g-Ir 5d j_eff hybridization through the intermediate oxygen p state, but could serve as an origin of paramagnetism. The strengths of these orbital hybridizations were found to be almost invariant with temperature, even far above the Curie temperature, implying persistent paramagnetism against the antiferromagnetic ordering in the spin-orbit-entangled 3d-5d DPs.

  3. Automatic Localization of Vertebral Levels in X-Ray Fluoroscopy Using 3D-2D Registration: A Tool to Reduce Wrong-Site Surgery

    PubMed Central

    Otake, Y.; Schafer, S.; Stayman, J. W.; Zbijewski, W.; Kleinszig, G.; Graumann, R.; Khanna, A. J.; Siewerdsen, J. H.

    2012-01-01

    Surgical targeting of the incorrect vertebral level (“wrong-level” surgery) is among the more common wrong-site surgical errors, attributed primarily to a lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method involves manual counting of vertebral bodies under fluoroscopy; it is prone to human error and carries additional time and dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery that automatically labels vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (viz., CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved 10 patient CT datasets, from which 50,000 simulated fluoroscopic images were generated from C-arm poses selected to approximate C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. Registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true centers of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (viz., mPD < 5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50,000 trials) and a computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific
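The success criterion described above, mean projection distance (mPD) between estimated and true projected vertebral-level centers with a trial counted successful when mPD is below 5 mm, is a short computation; the 2D points below are illustrative.

```python
import numpy as np

def mean_projection_distance(est, true):
    """est, true: (N, 2) arrays of projected 2D positions in mm."""
    diffs = np.asarray(est, float) - np.asarray(true, float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))

true_pts = np.array([[10.0, 20.0], [15.0, 40.0], [20.0, 60.0]])
est_pts = true_pts + np.array([[1.0, 0.0], [0.0, 2.0], [-1.5, 0.0]])
mpd = mean_projection_distance(est_pts, true_pts)   # (1 + 2 + 1.5) / 3 = 1.5 mm
success = mpd < 5.0                                 # the paper's 5 mm criterion
```

Averaging this success flag over all simulated trials yields the success rate quoted in the abstract.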

  4. FaceWarehouse: a 3D facial expression database for visual computing.

    PubMed

    Cao, Chen; Weng, Yanlin; Zhou, Shun; Tong, Yiying; Zhou, Kun

    2014-03-01

    We present FaceWarehouse, a database of 3D facial expressions for visual computing applications. We use Kinect, an off-the-shelf RGBD camera, to capture 150 individuals aged 7-80 from various ethnic backgrounds. For each person, we captured the RGBD data of her different expressions, including the neutral expression and 19 other expressions such as mouth-opening, smile, kiss, etc. For every RGBD raw data record, a set of facial feature points on the color image such as eye corners, mouth contour, and the nose tip are automatically localized, and manually adjusted if better accuracy is required. We then deform a template facial mesh to fit the depth data as closely as possible while matching the feature points on the color image to their corresponding points on the mesh. Starting from these fitted face meshes, we construct a set of individual-specific expression blendshapes for each person. These meshes with consistent topology are assembled as a rank-3 tensor to build a bilinear face model with two attributes: identity and expression. Compared with previous 3D facial databases, for every person in our database, there is a much richer matching collection of expressions, enabling depiction of most human facial actions. We demonstrate the potential of FaceWarehouse for visual computing with four applications: facial image manipulation, face component transfer, real-time performance-based facial image animation, and facial animation retargeting from video to image.
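The bilinear model described above can be sketched as a rank-3 data tensor (vertices × identities × expressions) contracted with identity and expression weight vectors to synthesize a face mesh. The tensor here is random and the sizes are toy assumptions; FaceWarehouse additionally compresses the data tensor with a Tucker-style decomposition, which is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n_verts, n_ids, n_exprs = 300, 10, 5
# Stand-in for the assembled data tensor of consistent-topology meshes.
core = rng.normal(size=(n_verts, n_ids, n_exprs))

w_id = np.zeros(n_ids)
w_id[2] = 1.0                 # one-hot: pick identity #2
w_ex = np.zeros(n_exprs)
w_ex[4] = 1.0                 # one-hot: pick expression #4

# Mode products over the identity and expression axes collapse the tensor
# to a single vertex vector: face = core x_2 w_id x_3 w_ex.
face = np.einsum('vie,i,e->v', core, w_id, w_ex)
```

With one-hot weights the contraction simply selects one stored mesh; interpolated weights blend identities and expressions, which is what enables the retargeting and transfer applications listed in the abstract.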

  5. 3D topology of orientation columns in visual cortex revealed by functional optical coherence tomography.

    PubMed

    Nakamichi, Yu; Kalatsky, Valery A; Watanabe, Hideyuki; Sato, Takayuki; Rajagopalan, Uma Maheswari; Tanifuji, Manabu

    2018-04-01

    Orientation tuning is a canonical neuronal response property of the six-layer visual cortex that is encoded in pinwheel structures with center orientation singularities. Optical imaging of intrinsic signals enables us to map these two-dimensional (2D) surface structures, whereas a lack of appropriate techniques has prevented visualization of the depth structure of orientation coding. In the present study, we performed functional optical coherence tomography (fOCT), a technique capable of acquiring a 3D map of the intrinsic signals, to study the topology of orientation coding inside the cat visual cortex. With this technique, we visualized for the first time the columnar assemblies in orientation coding that had been predicted from electrophysiological recordings. In addition, we found that the columnar structures were largely distorted around pinwheel centers: center singularities were not rigid straight lines running perpendicular to the cortical surface but formed twisted, string-like structures inside the cortex that turned and extended horizontally through the cortex. Looping singularities were observed with their respective termini accessing the same cortical surface via clockwise and counterclockwise orientation pinwheels. These results suggest that the 3D topology of orientation coding cannot be fully anticipated from 2D surface measurements. Moreover, the findings demonstrate the utility of fOCT as an in vivo mesoscale imaging method for mapping functional response properties of cortex along the depth axis. NEW & NOTEWORTHY We used functional optical coherence tomography (fOCT) to visualize the three-dimensional structure of orientation columns with millimeter range and micrometer spatial resolution. We validated the vertically elongated columnar structure in iso-orientation domains. The columnar structure was distorted around pinwheel centers. An orientation singularity formed a string with tortuous trajectories inside the cortex and connected clockwise and counterclockwise

  6. Visualization of the variability of 3D statistical shape models by animation.

    PubMed

    Lamecker, Hans; Seebass, Martin; Lange, Thomas; Hege, Hans-Christian; Deuflhard, Peter

    2004-01-01

    Models of the 3D shape of anatomical objects and knowledge about their statistical variability are of great benefit in many computer-assisted medical applications such as image analysis and therapy or surgery planning. Statistical shape models have been applied successfully to automate the task of image segmentation. The generation of 3D statistical shape models requires the identification of corresponding points across shapes. This remains a difficult problem, especially for shapes of complicated topology. In order to interpret and validate the variations encoded in a statistical shape model, visual inspection is of great importance. This work describes the generation and interpretation of statistical shape models of the liver and the pelvic bone.
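One standard way to make a shape model's variability visually inspectable, as the abstract proposes, is to animate each principal mode of variation around the mean shape. A minimal PCA-based sketch on toy data (real models would use corresponding anatomical landmarks, not random vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy training set: 20 shapes in point correspondence, 50 3D points each, flattened.
shapes = rng.normal(size=(20, 150))
mean = shapes.mean(axis=0)

# PCA via SVD of the mean-centered data; rows of Vt are the shape modes.
U, s, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
var = s**2 / (len(shapes) - 1)      # variance captured by each mode

def mode_animation_frame(k, t):
    # Sweep +/- 3 standard deviations along mode k; rendering successive
    # frames as t advances yields the animation used for visual inspection.
    return mean + 3.0 * np.sin(t) * np.sqrt(var[k]) * Vt[k]
```

At t = 0 the frame is exactly the mean shape; the extremes of the sweep show how plausible instances deviate from it.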

  7. Extremely Low Operating Current Resistive Memory Based on Exfoliated 2D Perovskite Single Crystals for Neuromorphic Computing.

    PubMed

    Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling

    2017-12-26

    Neuromorphic computing with extremely low energy consumption is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. Ionic migration and diffusion as an artificial synapse are realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.

  8. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, from global coupled climate simulations down to very detailed, high-resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed by the Max Planck Institute for Meteorology and the German Weather Service in partnership with DKRZ. This presentation discusses our current workflow for analyzing and visualizing these high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D, time-dependent, multivariate data sets that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization packages ParaView and Vapor, which allow us to read and handle this much data. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore, we discuss our current efforts to improve our visualization capabilities, exploring the potential of regular in-situ visualization as well as in-situ compression / post visualization.

  9. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software, developed using object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration and has passed application testing on real scenes. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and research on X-ray imaging reconstruction algorithms.

  10. A fast rigid-registration method of inferior limb X-ray image and 3D CT images for TKA surgery

    NASA Astrophysics Data System (ADS)

    Ito, Fumihito; O. D. A, Prima; Uwano, Ikuko; Ito, Kenzo

    2010-03-01

    In this paper, we propose a fast rigid-registration method for inferior limb X-ray films (two-dimensional Computed Radiography (CR) images) and three-dimensional Computed Tomography (CT) images for Total Knee Arthroplasty (TKA) surgery planning. The position of each bone, such as the femur and tibia (shin bone), differs slightly between the X-ray film and the 3D CT images, and care must be taken in how the two images are used, since the X-ray film is captured in the standing position while the 3D CT is captured in the decubitus (face-up) position. Although conventional registration mainly uses a cross-correlation function between the two images and relies on optimization techniques, it requires enormous computation time and is difficult to use in interactive operations. To solve these problems, we automatically calculate the center lines (bone axes) of the femur and tibia and use them as initial positions for the registration. We evaluate our registration method using three patients' image data and compare the proposed method with a conventional registration based on the downhill simplex algorithm. The downhill simplex method is an optimization algorithm that requires only function evaluations and does not need the calculation of derivatives. Our registration method outperforms the downhill simplex method in computation time and convergence stability. We have developed an implant simulation system on a personal computer to support the surgeon in preoperative planning of TKA. Our registration method is implemented in the simulation system, and the user can manipulate 2D/3D translucent templates of implant components on the X-ray film and 3D CT images.
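The axis-based initialization can be sketched as follows: the bone axis is estimated as the principal axis of the segmented bone region and used to seed the registration. This is a simplified 2D illustration on a hypothetical binary mask, not the authors' implementation:

```python
import numpy as np

def bone_axis(mask):
    """Centroid and principal (shaft) direction of a binary bone mask."""
    pts = np.argwhere(mask).astype(float)   # (row, col) coordinates of bone pixels
    c = pts.mean(axis=0)
    # Eigenvector of the 2x2 covariance with the largest eigenvalue
    # points along the elongated shaft of the bone.
    vals, vecs = np.linalg.eigh(np.cov((pts - c).T))
    return c, vecs[:, -1]

# Toy vertical "femur" strip: the recovered axis should follow the rows.
mask = np.zeros((60, 40), dtype=bool)
mask[5:55, 18:22] = True
centroid, axis = bone_axis(mask)
```

Seeding the optimizer with this centroid and direction replaces the expensive global search over translations and rotations with a short local refinement.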

  11. Calcification detection of abdominal aorta in CT images and 3D visualization in VR devices.

    PubMed

    Garcia-Berna, Jose A; Sanchez-Gomez, Juan M; Hermanns, Judith; Garcia-Mateos, Gines; Fernandez-Aleman, Jose L

    2016-08-01

    Automatic calcification detection in the abdominal aorta consists of a set of computer vision techniques to quantify the amount of calcium found around this artery. With that information, it is possible to perform statistical studies relating vascular diseases to the presence of calcium in these structures. To facilitate detection in CT images, a contrast agent is usually injected into the circulatory system of the patient to distinguish the aorta from other body tissues and organs. This contrast increases the absorption of X-rays by the blood, making the measurement of calcifications easier. Based on this idea, a new system capable of detecting and tracking the aorta has been developed, with an estimation of the calcium found surrounding it. In addition, the system is complemented with a 3D visualization mode for the image set, designed for the new generation of immersive VR devices.

  12. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    PubMed

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration tasks. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet can interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each with different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still the fastest and most precise in almost all cases.

  13. 3D elemental sensitive imaging using transmission X-ray microscopy.

    PubMed

    Liu, Yijin; Meirer, Florian; Wang, Junyue; Requena, Guillermo; Williams, Phillip; Nelson, Johanna; Mehta, Apurva; Andrews, Joy C; Pianetta, Piero

    2012-09-01

    Determination of the heterogeneous distribution of metals in alloy, battery, catalyst, and biological materials is critical to fully characterize and/or evaluate the functionality of these materials. Using synchrotron-based transmission X-ray microscopy (TXM), it is now feasible to perform nanoscale-resolution imaging over a wide X-ray energy range covering the absorption edges of many elements, combining element-sensitive imaging with determination of sample morphology. We present an efficient and reliable methodology for 3D element-sensitive imaging with excellent sample penetration (tens of microns) using hard X-ray TXM. A sample of an Al-Si piston alloy is used to demonstrate the capability of the proposed method.

  14. 3D Visualization of Urban Area Using Lidar Technology and CityGML

    NASA Astrophysics Data System (ADS)

    Popovic, Dragana; Govedarica, Miro; Jovanovic, Dusan; Radulovic, Aleksandra; Simeunovic, Vlado

    2017-12-01

    3D models of urban areas have found use in the modern world in applications such as navigation, cartography, urban planning visualization, construction, tourism and even new mobile navigation applications. With the advancement of technology there are much better solutions for mapping the earth's surface and spatial objects. A 3D city model enables exploration, analysis, management tasks and presentation of a city. Urban areas consist of terrain surfaces, buildings, vegetation and other parts of city infrastructure such as city furniture. Nowadays there are many different methods for collecting, processing and publishing 3D models of an area of interest. LIDAR technology is one of the most effective methods for collecting data, due to the large amount of data that can be obtained with high density and geometric accuracy. CityGML is an open-standard data model for storing the alphanumeric and geometric attributes of a city. It defines five levels of detail (LoD0-LoD4). In this study, the main aim is to represent part of the urban area of Novi Sad using LIDAR technology for data collection and different methods for information extraction, with CityGML as the standard for 3D representation. By using a series of programs, it is possible to process the collected data, transform it to CityGML and store it in a spatial database. The final product is a CityGML 3D model that can display textures and colours in order to give better insight into the city. This paper shows results for the first three levels of detail, consisting of a digital terrain model and buildings with differentiated rooftops and differentiated boundary surfaces. The complete model gives us a realistic view of 3D objects.

  15. Minimally invasive fixation in tibial plateau fractures using a pre-operative and intra-operative real size 3D printing.

    PubMed

    Giannetti, Silvio; Bizzotto, Nicola; Stancati, Andrea; Santucci, Attilio

    2017-03-01

    The purpose of our study was to compare outcomes after minimally invasive reconstruction and internal fixation with and without the use of pre- and intra-operative real-size 3D printing for patients with displaced tibial plateau fractures (TPFs). We prospectively followed up 40 consecutive adult patients with closed TPFs who underwent surgical treatment of reconstruction of the tibial plateau using minimally invasive fixation. Sixteen patients (group 1) were operated on using a pre-operative and intra-operative real-size 3D model, while 24 patients (group 2) were operated on without 3D-model printing, using only pre-operative and intra-operative 3D CT-scan images. The mean operating time was 148.2 ± 15.9 min for group 1 and 174.5 ± 22.2 min for group 2 (p = 0.041). In addition, the mean intraoperative blood loss was less in group 1 (520 mL) than in group 2 (546 mL) (p = 0.534). After discharge, all patients were followed up at 6 weeks, 12 weeks, 6 months, 1 year and then every year post-surgery, with radiographic evaluation carried out each time using the clinical and radiological Rasmussen score; there were no significant differences between the two groups. Two patients (group 2) developed infections, which resolved within 3 weeks with antibiotics. Neither superficial nor deep infections were present in group 1. No non-union occurred in any patient. No intraoperative, perioperative, or postoperative complications, such as loss of valgus correction, bone fractures, or metallic plate failures, were detected at follow-up. In patients operated on with the use of 3D-model printing, we found a significant reduction in surgical time. Moreover, the technique without a 3D model increased the patient's and the surgeon's exposure to radiation. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. SAVA 3: A testbed for integration and control of visual processes

    NASA Technical Reports Server (NTRS)

    Crowley, James L.; Christensen, Henrik

    1994-01-01

    The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.

  17. Geospatial Data Processing for 3d City Model Generation, Management and Visualization

    NASA Astrophysics Data System (ADS)

    Toschi, I.; Nocerino, E.; Remondino, F.; Revolti, A.; Soria, G.; Piffer, S.

    2017-05-01

    Recent developments of 3D technologies and tools have increased availability and relevance of 3D data (from 3D points to complete city models) in the geospatial and geo-information domains. Nevertheless, the potential of 3D data is still underexploited and mainly confined to visualization purposes. Therefore, the major challenge today is to create automatic procedures that make best use of available technologies and data for the benefits and needs of public administrations (PA) and national mapping agencies (NMA) involved in "smart city" applications. The paper aims to demonstrate a step forward in this process by presenting the results of the SENECA project (Smart and SustaiNablE City from Above - http://seneca.fbk.eu). State-of-the-art processing solutions are investigated in order to (i) efficiently exploit the photogrammetric workflow (aerial triangulation and dense image matching), (ii) derive topologically and geometrically accurate 3D geo-objects (i.e. building models) at various levels of detail and (iii) link geometries with non-spatial information within a 3D geo-database management system accessible via web-based client. The developed methodology is tested on two case studies, i.e. the cities of Trento (Italy) and Graz (Austria). Both spatial (i.e. nadir and oblique imagery) and non-spatial (i.e. cadastral information and building energy consumptions) data are collected and used as input for the project workflow, starting from 3D geometry capture and modelling in urban scenarios to geometry enrichment and management within a dedicated webGIS platform.

  18. A workflow for the 3D visualization of meteorological data

    NASA Astrophysics Data System (ADS)

    Helbig, Carolin; Rink, Karsten

    2014-05-01

    In the future, climate change will strongly influence our environment and living conditions. To predict possible changes, climate models that include basic and process conditions have been developed, and large data sets are produced by the simulations. Combining various variables of climate models with spatial data from different sources helps to identify correlations and to study key processes. For our case study we use results of the Weather Research and Forecasting (WRF) model for two regions at different scales that include various landscapes in Northern Central Europe and Baden-Württemberg. We visualize these simulation results in combination with observation data and geographic data, such as river networks, to evaluate processes and analyze whether the model represents the atmospheric system sufficiently. For this purpose, a continuous workflow that leads from the integration of heterogeneous raw data to visualization using open source software (e.g. OpenGeoSys Data Explorer, ParaView) was developed. These visualizations can be displayed on a desktop computer or in an interactive virtual reality environment. We established a concept that includes recommended 3D representations and a color scheme for the variables of the data, based on existing guidelines and established traditions in the specific domain. To examine changes over time in observation and simulation data, we added the temporal dimension to the visualization. In a first step of the analysis, the visualizations are used to get an overview of the data and detect areas of interest such as regions of convection or wind turbulence. Then, subsets of the data are extracted and the included variables can be examined in detail. An evaluation by experts from the domains of visualization and atmospheric sciences establishes whether the visualizations are self-explanatory and clearly arranged. These easy-to-understand visualizations of complex data sets are the basis for scientific communication. In addition, they have

  19. 3D Orbit Visualization for Earth-Observing Missions

    NASA Technical Reports Server (NTRS)

    Jacob, Joseph C.; Plesea, Lucian; Chafin, Brian G.; Weiss, Barry H.

    2011-01-01

    This software visualizes orbit paths for the Orbiting Carbon Observatory (OCO), but was designed to be general and applicable to any Earth-observing mission. The software uses the Google Earth user interface to provide a visual mechanism to explore spacecraft orbit paths, ground footprint locations, and local cloud cover conditions. In addition, a drill-down capability allows users to point and click on a particular observation frame to pop up ancillary information such as data product filenames and directory paths, latitude, longitude, time stamp, column-averaged dry air mole fraction of carbon dioxide, and solar zenith angle. This software can be integrated with the ground data system of any Earth-observing mission to automatically generate daily orbit path data products in Google Earth KML format. These KML data products can be loaded directly into the Google Earth application for interactive 3D visualization of the orbit paths for each mission day. Each time the application runs, the daily orbit paths are encapsulated in a KML file for each mission day since the last time the application ran. Alternatively, the daily KML for a specified mission day may be generated. The application automatically extracts the spacecraft position and ground footprint geometry as a function of time from a daily Level 1B data product created and archived by the mission's ground data system software. In addition, ancillary data, such as the column-averaged dry air mole fraction of carbon dioxide and solar zenith angle, are automatically extracted from a Level 2 mission data product. Zoom, pan, and rotate capabilities are provided through the standard Google Earth interface. Cloud cover is indicated with an image layer from MODIS (the Moderate Resolution Imaging Spectroradiometer) aboard the Aqua satellite, which is automatically retrieved from JPL's OnEarth Web service.
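A daily orbit path can be serialized to KML as a simple LineString placemark. The sketch below builds a minimal document by hand; the coordinate values are made up for illustration, and a real pipeline would take the lon/lat/alt samples from the Level 1B product:

```python
from xml.sax.saxutils import escape

def orbit_kml(name, points):
    # points: iterable of (lon_deg, lat_deg, alt_m).
    # KML orders each coordinate triple as lon,lat,alt.
    coords = " ".join(f"{lon:.4f},{lat:.4f},{alt:.0f}" for lon, lat, alt in points)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        f'<Placemark><name>{escape(name)}</name>'
        '<LineString><altitudeMode>absolute</altitudeMode>'
        f'<coordinates>{coords}</coordinates>'
        '</LineString></Placemark></Document></kml>'
    )

kml = orbit_kml("Orbit 2010-03-01",
                [(-122.0, 37.0, 705000.0), (-121.5, 41.0, 705000.0)])
```

The resulting file loads directly into Google Earth; `altitudeMode=absolute` keeps the path at spacecraft altitude rather than clamping it to the ground.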

  20. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  1. MorphoGraphX: A platform for quantifying morphogenesis in 4D.

    PubMed

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne H K; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-05-06

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth.

  2. HipMatch: an object-oriented cross-platform program for accurate determination of cup orientation using 2D-3D registration of single standard X-ray radiograph and a CT volume.

    PubMed

    Zheng, Guoyan; Zhang, Xuan; Steppacher, Simon D; Murphy, Stephen B; Siebenrock, Klaus A; Tannast, Moritz

    2009-09-01

    The widely used procedure of evaluating cup orientation following total hip arthroplasty using a single standard anteroposterior (AP) radiograph is known to be inaccurate, largely due to the wide variability in individual pelvic orientation relative to the X-ray plate. 2D-3D image registration methods have been introduced for accurate determination of the post-operative cup alignment with respect to an anatomical reference extracted from the CT data. Although encouraging results have been reported, their extensive use in clinical routine is still limited. This may be explained by their requirement for a CAD model of the prosthesis, which is often difficult to obtain from the manufacturer due to proprietary issues, and by their requirement for either multiple radiographs or a radiograph-specific calibration, neither of which is available for most retrospective studies. To address these issues, we developed and validated an object-oriented cross-platform program called "HipMatch", in which a hybrid 2D-3D registration scheme combining an iterative landmark-to-ray registration with a 2D-3D intensity-based registration was implemented to estimate a rigid transformation between a pre-operative CT volume and the post-operative X-ray radiograph for a precise estimation of cup alignment. No CAD model of the prosthesis is required. Quantitative and qualitative results evaluated on cadaveric and clinical datasets are given, which indicate the robustness and accuracy of the program. HipMatch is written in the object-oriented programming language C++ using the cross-platform toolkits Qt (TrollTech, Oslo, Norway), VTK, and Coin3D, and is portable to any platform.

  3. Observation of beta and X rays with 3-D-architecture silicon microstrip sensors

    NASA Astrophysics Data System (ADS)

    Kenney, C. J.; Parker, S. I.; Krieger, B.; Ludewigt, B.; Dubbs, T. P.; Sadrozinski, H.

    2001-04-01

    The first silicon radiation sensors based on the three-dimensional (3-D) architecture have been successfully fabricated. X-ray spectra from iron-55 and americium-241 have been recorded by reading out a 3-D architecture detector via wire bonds to a low-noise, charge-sensitive preamplifier. Using a beta source, coincidences between a 3-D sensor and a plastic scintillator were observed. This is the first observation of ionizing radiation using a silicon sensor based on the 3-D architecture. Details of the apparatus and measurements are described.

  4. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    PubMed

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in the life sciences, because biology is a three-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method that images the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot reach its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why integrating and correlating the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described that generates a set of three images of different dimensionality representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images link the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  5. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
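    The core idea of texture advection, carrying a texture pattern along the flow, can be sketched as a single semi-Lagrangian step on a 2D grid. This is a generic illustration (nearest-neighbour sampling, names of our own choosing); the paper's 3D/4D texture methods are considerably more involved:

```python
def advect_texture(tex, vel, dt):
    """One semi-Lagrangian texture-advection step on a 2D grid.

    For each pixel, trace backward along the velocity field and sample the
    texture at the upstream location (nearest-neighbour for brevity).
    tex[y][x] is a scalar; vel[y][x] = (vx, vy) in pixels per unit time.
    """
    ny, nx = len(tex), len(tex[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            vx, vy = vel[y][x]
            # Upstream sample position, clamped to the grid.
            sx = min(nx - 1, max(0, round(x - vx * dt)))
            sy = min(ny - 1, max(0, round(y - vy * dt)))
            out[y][x] = tex[sy][sx]
    return out
```

    Repeating this step per frame makes the texture pattern appear to move with the flow, which is what produces the animation effect.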

  6. Sandia MEMS Visualization Tools v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.

    This is a revision of the Sandia MEMS Visualization Tools. It replaces all previous versions. New in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While some files on the CD are used in conjunction with the AutoCAD software package, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.

  7. Thoracoscopic anatomical lung segmentectomy using 3D computed tomography simulation without tumour markings for non-palpable and non-visualized small lung nodules.

    PubMed

    Kato, Hirohisa; Oizumi, Hiroyuki; Suzuki, Jun; Hamada, Akira; Watarai, Hikaru; Sadahiro, Mitsuaki

    2017-09-01

    Although wedge resection can be curative for small lung tumours, tumour marking is sometimes required to identify non-palpable or visually undetectable lung nodules for resection. Tumour marking sometimes fails and occasionally causes serious complications. We have performed many thoracoscopic segmentectomies using 3D computed tomography simulation for undetectable small lung tumours without any tumour markings. The aim of this study was to investigate whether thoracoscopic segmentectomy planned with 3D computed tomography simulation could precisely remove non-palpable and visually undetectable tumours. Between January 2012 and March 2016, 58 patients underwent thoracoscopic segmentectomy using 3D computed tomography simulation for non-palpable, visually undetectable tumours. Surgical outcomes were evaluated. A total of 35, 14 and 9 patients underwent segmentectomy, subsegmentectomy and segmentectomy combined with adjacent subsegmentectomy, respectively. All tumours were correctly resected without tumour marking. The median tumour size and distance from the visceral pleura were 14 ± 5.2 mm (range 5-27 mm) and 11.6 mm (range 1-38.8 mm), respectively. Median values related to the procedures were: operative time, 176 min (range 83-370 min); blood loss, 43 ml (range 0-419 ml); duration of chest tube placement, 1 day (range 1-8 days); and postoperative hospital stay, 5 days (range 3-12 days). Two cases were converted to open thoracotomy due to bleeding. Three cases required pleurodesis for pleural fistula. No recurrences occurred during the mean follow-up period of 44.4 months (range 5-53 months). Thoracoscopic segmentectomy using 3D computed tomography simulation was feasible and could be performed to resect undetectable tumours with no tumour markings. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  8. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    PubMed

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store data from motion capture systems. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source, cross-platform, and run on all major operating systems (Windows, Linux, Mac OS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
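    To illustrate the kind of low-level detail that a toolkit like BTK abstracts away, here is a minimal sketch that decodes a few header fields of a C3D file following the public C3D specification (byte 1 of the header must be 0x50; the dictionary keys are our own naming, and this is not BTK's API):

```python
import struct

def parse_c3d_header(block):
    """Decode a few fields of a C3D header block (little-endian / Intel).

    Per the public C3D specification: byte 0 points to the first parameter
    block, byte 1 must be 0x50, then 16-bit words give the 3D point count,
    analog samples per frame, and first/last frame numbers. A minimal
    sketch only; a real reader handles processor types and parameters.
    """
    if block[1] != 0x50:
        raise ValueError("not a C3D file")
    npoints, analog_per_frame, first, last = struct.unpack_from("<4h", block, 2)
    return {"parameter_block": block[0], "points": npoints,
            "analog_per_frame": analog_per_frame,
            "first_frame": first, "last_frame": last}
```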

  9. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] focused solely on the intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool, and the challenges this entails, in continuation of Michel et al. [2]. This elevates STRING from a post-production tool to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D poses many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining raytraced volume rendering with regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. For this the silhouette based on the angle of
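    The pathlet tracing that underlies such a visualization can be sketched with a midpoint (second-order) integrator stepping through a steady 3D velocity field. This is a generic illustration of pathline integration, not STRING's actual implementation:

```python
def integrate_pathlet(seed, velocity, dt, steps):
    """Trace a pathlet through a steady 3D velocity field.

    Second-order midpoint integration: sample the velocity at the half-step
    position before advancing, which keeps pathlets on curved streamlines
    far better than plain Euler steps of the same size.
    """
    path = [seed]
    x, y, z = seed
    for _ in range(steps):
        u1, v1, w1 = velocity(x, y, z)
        # Re-sample the field at the midpoint of the tentative step.
        u2, v2, w2 = velocity(x + 0.5*dt*u1, y + 0.5*dt*v1, z + 0.5*dt*w1)
        x, y, z = x + dt*u2, y + dt*v2, z + dt*w2
        path.append((x, y, z))
    return path
```

    On a solid-body rotation field, for example, the traced point stays on its circular streamline to within a small fraction of a percent over a full revolution.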

  10. X-ray Fluorescence Core Scanning of Oman Drilling Project Holes BT1B and GT3A Cores on D/V CHIKYU

    NASA Astrophysics Data System (ADS)

    Johnson, K. T. M.; Kelemen, P. B.; Michibayashi, K.; Greenberger, R. N.; Koepke, J.; Beinlich, A.; Morishita, T.; Jesus, A. P. M.; Lefay, R.

    2017-12-01

    The JEOL JSX-3600CA1 energy-dispersive X-ray fluorescence core logger (XRF-CL) on the D/V Chikyu provides quantitative element concentrations of scanned cores. Scans of selected intervals are made on an x-y grid with a point spacing of 5 mm. Element concentrations for Si, Al, Ti, Ca, Mg, Mn, Fe, Na, K, Cr, Ni, S and Zn are collected for each point on the grid. The accuracy of the element concentrations provided by the instrument software is improved by applying empirical correction algorithms. Element concentrations were collected for 9,289 points from twenty-seven core intervals in Hole BT1B (basal thrust) and for 6,389 points from forty core intervals in Hole GT3A (sheeted dike-gabbro transition) with the D/V Chikyu XRF-CL during Leg 2 of the Oman Drilling Project in August-September 2017. The geochemical data are used for evaluating downhole compositional details associated with lithological changes, unit contacts and mineralogical variations, and are particularly informative when plotted as concentration contour maps or downhole concentration diagrams. On Leg 2, additional core scans were made with X-ray Computed Tomography (X-ray CT) and infrared images from the visible-shortwave infrared imaging spectroscopy (IR) systems on board. XRF-CL, X-ray CT and IR imaging plots used together provide detailed information on rock compositions, textures and mineralogy that assists naked-eye visual observations. Examples of some uses of XRF-CL geochemical maps and downhole data are shown. XRF-CL and IR scans of listvenite clearly show zones of magnesite, dolomite and the Cr-rich mica fuchsite that are subdued in visual observation, and these scans can be used to calculate variations in the proportions of these minerals in Hole BT1B cores. In Hole GT3A, XRF-CL data can be used to distinguish compositional changes in different generations of sheeted dikes and gabbros and, when combined with visual observations of intrusive relationships, the detailed geochemical
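    In the simplest case, the empirical correction of instrument-reported concentrations is a per-element linear calibration against standards. The sketch below uses hypothetical gain/offset coefficients for illustration; it is not the correction algorithm used on the Chikyu:

```python
def correct_concentrations(raw, gain, offset):
    """Apply per-element empirical corrections to XRF concentrations.

    raw maps element symbol -> instrument-reported wt%; gain and offset
    hold linear calibration coefficients (hypothetical values) derived
    from measured standards. Elements without coefficients pass through.
    """
    corrected = {}
    for element, wt_percent in raw.items():
        g = gain.get(element, 1.0)
        b = offset.get(element, 0.0)
        corrected[element] = g * wt_percent + b
    return corrected
```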

  11. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    ERIC Educational Resources Information Center

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including the ability to redesign things, creative thinking, and the ability to…

  12. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE PAGES

    Willey, T. M.; Champley, K.; Hodgin, R.; ...

    2016-06-17

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. The work described here outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.
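    As a consistency check on the figures in this record, the mean flyer velocity implied by the two imaged heights (0.16 and 0.53 mm) and the inter-pulse spacing (153.4 ns) can be computed directly:

```python
def flyer_velocity(h1_mm, h2_mm, dt_ns):
    """Mean flyer velocity between two imaged heights, in km/s."""
    return (h2_mm - h1_mm) * 1e-3 / (dt_ns * 1e-9) / 1e3

# 0.37 mm covered in 153.4 ns -> ~2.41 km/s, consistent with the
# reported flyer velocity of 2.5 km/s (the flyer is still accelerating).
v = flyer_velocity(0.16, 0.53, 153.4)
```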

  13. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    NASA Astrophysics Data System (ADS)

    Willey, T. M.; Champley, K.; Hodgin, R.; Lauderbach, L.; Bagge-Hansen, M.; May, C.; Sanchez, N.; Jensen, B. J.; Iverson, A.; van Buuren, T.

    2016-06-01

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  14. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willey, T. M.; Champley, K.; Hodgin, R.

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. The work described here outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ~80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  15. Unimpeded permeation of water through biocidal graphene oxide sheets anchored on to 3D porous polyolefinic membranes.

    PubMed

    Mural, Prasanna Kumar S; Jain, Shubham; Kumar, Sachin; Madras, Giridhar; Bose, Suryasarathi

    2016-04-21

    3D porous membranes were developed by etching out one of the phases (here polyethylene oxide, PEO) from melt-mixed PE/PEO binary blends. Herein, we systematically discuss the development of these membranes using X-ray micro-computed tomography. The 3D tomograms of the extruded strands and hot-pressed samples revealed a clear picture of how the morphology develops and coarsens as a function of time during post-processing operations like compression molding. The coarsening of PE/PEO blends was traced using X-ray micro-computed tomography and scanning electron microscopy (SEM) of blends annealed for different times. It is now understood from X-ray micro-computed tomography that, with the addition of a compatibilizer (here lightly maleated PE), a stable morphology can be visualized in 3D. In order to anchor biocidal graphene oxide (GO) sheets onto these 3D porous membranes, the PE membranes were chemically modified with acid/ethylene diamine treatment; the anchoring of the GO sheets was confirmed by Fourier transform infrared spectroscopy (FTIR), X-ray photoelectron spectroscopy (XPS) and surface Raman mapping. The transport properties through the membrane clearly reveal unimpeded permeation of water, which suggests that anchoring GO onto the membranes does not clog the pores. Antibacterial studies through the direct contact of bacteria with GO-anchored PE membranes resulted in 99% bacterial inactivation. The possible bacterial inactivation through physical disruption of the bacterial cell wall and/or reactive oxygen species (ROS) is discussed herein. Thus, this study opens new avenues in designing polyolefin-based antibacterial 3D porous membranes for water purification.

  16. Principle and engineering implementation of 3D visual representation and indexing of medical diagnostic records (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Shi, Liehang; Sun, Jianyong; Yang, Yuanyuan; Ling, Tonghui; Wang, Mingqing; Zhang, Jianguo

    2017-03-01

    Purpose: Due to the generation of a large number of electronic imaging diagnostic records (IDRs) year after year in a digital hospital, IDRs have become a main component of medical big data, which brings huge value to healthcare services, professionals, and administration. But the large volume of IDRs in a hospital also brings new challenges to healthcare professionals and services: there may be so many IDRs for each patient that it is difficult for a doctor to review them all in a limited appointment time slot. In this presentation, we present an innovative method that uses an anatomical 3D structural object to visually represent and index the historical medical status of each patient, called Visual Patient (VP) here, based on the long-term archived electronic IDRs in a hospital, so that a doctor can quickly learn the patient's medical history and quickly point to and retrieve the IDRs of interest in a limited appointment time slot. Method: The engineering implementation of VP was to build a 3D visual representation and indexing system called the VP system (VPS), including components for natural language processing (NLP) of Chinese, a Visual Index Creator (VIC), and a 3D Visual Rendering Engine. There were three steps in this implementation: (1) an XML-based electronic anatomic structure of the human body was created for each patient and used to visually index all of the abstract information of each IDR for that patient; (2) a number of specifically designed IDR parsing processors were developed and used to extract various kinds of abstract information from IDRs retrieved from hospital information systems; (3) a 3D anatomic rendering object was introduced to visually represent and display the content of the VIO for each patient. Results: The VPS was implemented in a simulated clinical environment including PACS/RIS to show VP instances to doctors.
We set up two evaluation scenarios in a hospital radiology department to evaluate whether
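    The XML-based anatomical index described in step (1) might be organized along these lines, grouping record abstracts under per-anatomy nodes so a 3D viewer could highlight regions with history and pull the underlying records on demand. The tag names and record fields below are illustrative assumptions, not the authors' schema:

```python
import xml.etree.ElementTree as ET

def build_visual_index(records):
    """Build a toy XML-based anatomical index of imaging records.

    Each record is (anatomy, modality, date, abstract); records are
    grouped under one Region node per body region.
    """
    root = ET.Element("VisualPatient")
    regions = {}
    for anatomy, modality, date, abstract in records:
        node = regions.get(anatomy)
        if node is None:
            node = ET.SubElement(root, "Region", name=anatomy)
            regions[anatomy] = node
        idr = ET.SubElement(node, "IDR", modality=modality, date=date)
        idr.text = abstract

    return root

def lookup(root, anatomy):
    """Return (modality, date) pairs indexed under one body region."""
    region = root.find(f"Region[@name='{anatomy}']")
    if region is None:
        return []
    return [(i.get("modality"), i.get("date")) for i in region.findall("IDR")]
```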

  17. External and internal macromorphology in 3D-reconstructed maxillary molars using computerized X-ray microtomography.

    PubMed

    Bjørndal, L; Carlsen, O; Thuesen, G; Darvann, T; Kreiborg, S

    1999-01-01

    The aim of this study was to perform a qualitative analysis of the relationship between the external and internal macromorphology of the root complex and to use fractal dimension analysis to determine the correlation between the shape of the outer surface of the root and the shape of the root canal. On the basis of X-ray computed transaxial microtomography, a qualitative and quantitative analysis of the external and internal macromorphology of the root complex in permanent maxillary molars was performed using well-defined macromorphological variables and fractal dimension analysis. Five maxillary molars were placed between a microfocus X-ray tube with a focal spot size of 0.07 mm and a Thomson-CSF image intensifier with a CCD camera comprising the detector for the tomograph. Between 100 and 240 tomographic 2D slices were made of each tooth. The slices were assembled into a 3D volume with subsequent median noise filtering. Segmentation into enamel, dentine and pulp space was achieved through thresholding followed by morphological filtering. Surface representations were then constructed. A useful visualization of the tooth was created by making the dental hard tissues transparent and the pulp chamber and root-canal system opaque. On this basis it became possible to assess the relationship between the external and internal macromorphology of the crown and root complex. There was strong agreement between the number, position and cross-section of the root canals and the number, position and degree of manifestation of the root complex macrostructures. Data from a fractal dimension analysis also showed a high correlation between the shape of the root canals and that of the corresponding roots. It is suggested that these types of 3D volumes constitute a platform for preclinical training in fundamental endodontic procedures.
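    The threshold-based segmentation into enamel, dentine and pulp space can be sketched as a per-voxel grey-value classification. The cut-off values below are illustrative, and the study's actual pipeline adds median and morphological filtering around this step:

```python
def segment_tooth(volume, enamel_min=200, dentine_min=100):
    """Label voxels of a 3D grey-value volume as enamel, dentine, or pulp.

    Enamel is the most radiodense tissue, dentine intermediate, and the
    pulp space the least; two thresholds (illustrative values) split the
    grey-value range accordingly.
    """
    labels = []
    for slab in volume:
        out_slab = []
        for row in slab:
            out_row = []
            for v in row:
                if v >= enamel_min:
                    out_row.append("enamel")
                elif v >= dentine_min:
                    out_row.append("dentine")
                else:
                    out_row.append("pulp")
            out_slab.append(out_row)
        labels.append(out_slab)
    return labels
```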

  18. e23D: database and visualization of A-to-I RNA editing sites mapped to 3D protein structures.

    PubMed

    Solomon, Oz; Eyal, Eran; Amariglio, Ninette; Unger, Ron; Rechavi, Gidi

    2016-07-15

    e23D, a database of A-to-I RNA editing sites from human, mouse and fly mapped to evolutionarily related protein 3D structures, is presented. Genomic coordinates of A-to-I RNA editing sites are converted to protein coordinates and mapped onto 3D structures from PDB or theoretical models from ModBase. e23D allows visualization of the protein structure, modeling of recoding events and orientation of the editing site with respect to nearby genomic functional sites from databases of disease-causing mutations and genomic polymorphisms. Availability: http://www.sheba-cancer.org.il/e23D. Contact: oz.solomon@live.biu.ac.il or Eran.Eyal@sheba.health.gov.il. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
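    The final step of the coordinate conversion, from a position within the coding sequence to the protein residue it falls in, is simple codon arithmetic (e23D's full genomic-to-protein mapping also has to handle strand orientation and exon structure, which this sketch omits):

```python
def cds_to_residue(cds_pos):
    """Map a 1-based coding-sequence nucleotide position to the 1-based
    protein residue containing it (three nucleotides per codon)."""
    return (cds_pos - 1) // 3 + 1
```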

  19. A streaming-based solution for remote visualization of 3D graphics on mobile devices.

    PubMed

    Lamberti, Fabrizio; Sanna, Andrea

    2007-01-01

    Mobile devices such as Personal Digital Assistants (PDAs), Tablet PCs, and cellular phones have greatly enhanced user capability to connect to remote resources. Although a large set of applications is now available bridging the gap between desktop and mobile devices, visualization of complex 3D models is still a hard task to accomplish without specialized hardware. This paper proposes a system where a cluster of PCs, equipped with accelerated graphics cards managed by the Chromium software, is able to handle remote visualization sessions based on MPEG video streaming involving complex 3D models. The proposed framework allows mobile devices such as smart phones, PDAs, and Tablet PCs to visualize objects consisting of millions of textured polygons and voxels at a frame rate of 30 fps or more, depending on hardware resources at the server side and on multimedia capabilities at the client side. The server is able to concurrently manage multiple clients, computing a video stream for each one; the resolution and quality of each stream are tailored according to the screen resolution and bandwidth of the client. The paper investigates in depth issues related to latency, bit rate and quality of the generated stream, screen resolution, and frames per second displayed.
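    The per-client tailoring of resolution and bitrate could follow a policy like the one below, which downscales the stream until it fits the client's measured bandwidth. The bits-per-pixel budget and halving rule are our assumptions for illustration, not the paper's algorithm:

```python
def tailor_stream(screen_w, screen_h, bandwidth_kbps, bits_per_pixel=0.1, fps=30):
    """Pick a stream resolution and bitrate for one client.

    Downscale by halves until the bitrate implied by the resolution, a
    target bits-per-pixel budget, and the frame rate fits the client's
    bandwidth. Returns (width, height, bitrate_kbps).
    """
    w, h = screen_w, screen_h
    while w * h * bits_per_pixel * fps / 1000 > bandwidth_kbps and w > 160:
        w, h = w // 2, h // 2
    return w, h, int(w * h * bits_per_pixel * fps / 1000)
```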

  20. Visualizing Science Dissections in 3D: Contextualizing Student Responses to Multidimensional Learning Materials in Science Dissections

    NASA Astrophysics Data System (ADS)

    Walker, Robin Annette

    A series of dissection tasks was developed in this mixed-methods study of student self-explanations of their learning using actual and virtual multidimensional science dissections and visuo-spatial instruction. Thirty-five seventh-grade students from a science classroom (20 female/15 male, age = 13 years) were assigned to three dissection environments instructing them to: (a) construct static paper designs of frogs, (b) perform active dissections on formaldehyde specimens, and (c) engage with interactive 3D frog visualizations and virtual simulations. This mixed-methods analysis of student engagement with anchored dissection materials found learning gains on labeling exercises and lab assessments among most students. Data revealed that students who correctly utilized multimedia text and diagrams, individually and collaboratively, manipulated 3D tools more effectively and were better able to self-explain and complete their dissection work. Student questionnaire responses corroborated that they preferred learning how to dissect a frog using 3D multimedia instruction. The data were used to discuss the impact of 3D technologies, programs, and activities on student learning, spatial reasoning, and interest in science. Implications were drawn regarding how best to integrate 3D visualizations into science curricula as innovative learning options for students, as instructional alternatives for teachers, and as mandated dissection choices for those who object to physical dissections in schools.

  1. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are scanned with larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation (WS) to display and manipulate these larger image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for a 3D image display component offering not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low requirements for computer hardware, easy integration, reliable performance and a comfortable application experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  2. A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data.

    PubMed

    Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J

    2014-10-01

    This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud. Copyright © 2014 Elsevier Inc. All rights reserved.
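    The 3D box-counting estimate of fractal dimension can be sketched in a few lines: count the boxes occupied by the structure at halving box sizes, then take the slope of the log-log fit. This is a pure-Python illustration of the principle, unlike the platform's CUDA-accelerated implementation:

```python
from math import log

def box_count(voxels, box):
    """Number of boxes of edge length `box` containing >= 1 occupied voxel."""
    return len({(x // box, y // box, z // box) for x, y, z in voxels})

def fractal_dimension_3d(voxels, size):
    """Estimate the 3D box-counting dimension of a voxel set.

    Counts occupied boxes at halving box sizes and returns the
    least-squares slope of log N(box) versus log(1/box).
    """
    xs, ys = [], []
    box = size
    while box >= 1:
        xs.append(log(1.0 / box))
        ys.append(log(box_count(voxels, box)))
        box //= 2
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

    A fully occupied cube recovers dimension 3 exactly, which is a handy sanity check before applying the estimator to brain masks or skeletons.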

  3. 3D Flow visualization in virtual reality

    NASA Astrophysics Data System (ADS)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

    By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.
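    The Q-criterion that defines the rendered isosurfaces is computed pointwise from the velocity-gradient tensor: Q = ½(‖Ω‖² − ‖S‖²), with S and Ω the symmetric and antisymmetric parts of the gradient, and Q > 0 marking rotation-dominated (vortical) regions. A minimal sketch:

```python
def q_criterion(jacobian):
    """Q-criterion from a 3x3 velocity-gradient tensor J[i][j] = du_i/dx_j.

    Q = 0.5 * (||Omega||^2 - ||S||^2), where S and Omega are the symmetric
    and antisymmetric parts of J (Frobenius norms). Q > 0 identifies the
    rotation-dominated regions that vortex isosurfaces enclose.
    """
    q = 0.0
    for i in range(3):
        for j in range(3):
            s = 0.5 * (jacobian[i][j] + jacobian[j][i])
            w = 0.5 * (jacobian[i][j] - jacobian[j][i])
            q += 0.5 * (w * w - s * s)
    return q
```

    Solid-body rotation gives a positive Q, while a pure straining motion gives a negative one, matching the criterion's intent.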

  4. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography.

    PubMed

    Li, Qiong; Gluch, Jürgen; Krüger, Peter; Gall, Martin; Neinhuis, Christoph; Zschech, Ehrenfried

    2016-10-14

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface- and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Visual detection of particulates in x-ray images of processed meat products

    NASA Astrophysics Data System (ADS)

    Schatzki, Thomas F.; Young, Richard; Haff, Ron P.; Eye, J.; Wright, G.

    1996-08-01

    A study was conducted to test the efficacy of detecting particulate contaminants in processed meat samples by visual observation of line-scanned x-ray images. Six hundred field-collected processed-product samples were scanned at 230 cm2/s using 0.5 x 0.5 mm resolution and 50 kV, 13 mA excitation. The x-ray images were image-corrected, digitally stored, and inspected off-line using interactive image enhancement. Forty percent of the samples were spiked with added contaminants to establish the visual recognition of contaminants as a function of sample thickness (1 to 10 cm), texture of the x-ray image (smooth/textured), spike composition (wood/bone/glass), size (0.1 to 0.4 cm), and shape (splinter/round). The results were analyzed using a maximum-likelihood logistic regression method. In packages less than 6 cm thick, 0.2-cm-thick bone chips were easily recognized, 0.1-cm glass splinters were recognized with some difficulty, while 0.4-cm-thick wood was generally missed. Operational feasibility in a time-constrained setting was confirmed. One half percent of the samples arriving from the field contained bone slivers > 1 cm long, another half percent contained metallic material, and 4% contained particulates exceeding 0.3 cm in size. All of the latter appeared to be bone fragments.
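    The maximum-likelihood logistic regression used in the analysis can be illustrated with a single-predictor toy model fitted by gradient ascent on the log-likelihood. The data and resulting coefficients below are fabricated for illustration only; the study's actual model included several factors:

```python
import math

def fit_logistic(xs, ys, lr=0.1, epochs=2000):
    """Maximum-likelihood logistic regression by gradient ascent.

    Models P(detected) = sigmoid(a*x + b) for a single predictor, e.g.
    particulate size in cm, against 0/1 detection outcomes.
    """
    a = b = 0.0
    for _ in range(epochs):
        ga = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(a * x + b)))
            # Gradient of the log-likelihood w.r.t. a and b.
            ga += (y - p) * x
            gb += (y - p)
        a += lr * ga / len(xs)
        b += lr * gb / len(xs)
    return a, b
```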

  6. Dynamic accommodative response to different visual stimuli (2D vs 3D) while watching television and while playing Nintendo 3DS console.

    PubMed

    Oliveira, Sílvia; Jorge, Jorge; González-Méijome, José M

    2012-09-01

    The aim of the present study was to compare the accommodative response to the same visual content presented in two dimensions (2D) and stereoscopically in three dimensions (3D) while participants were either watching a television (TV) or Nintendo 3DS console. Twenty-two university students, with a mean age of 20.3 ± 2.0 years (mean ± S.D.), were recruited to participate in the TV experiment and fifteen, with a mean age of 20.1 ± 1.5 years took part in the Nintendo 3DS console study. The accommodative response was measured using a Grand Seiko WAM 5500 autorefractor. In the TV experiment, three conditions were used initially: the film was viewed in 2D mode (TV2D without glasses), the same sequence was watched in 2D whilst shutter-glasses were worn (TV2D with glasses) and the sequence was viewed in 3D mode (TV3D). Measurements were taken for 5 min in each condition, and these sections were sub-divided into ten 30-s segments to examine changes within the film. In addition, the accommodative response to three points of different disparity of one 3D frame was assessed for 30 s. In the Nintendo experiment, two conditions were employed - 2D viewing and stereoscopic 3D viewing. In the TV experiment no statistically significant differences were found between the accommodative response with TV2D without glasses (-0.38 ± 0.32D, mean ± S.D.) and TV3D (-0.37 ± 0.34D). Also, no differences were found between the various segments of the film, or between the accommodative response to different points of one frame (p > 0.05). A significant difference (p = 0.015) was found, however, between the TV2D with (-0.32 ± 0.32D) and without glasses (-0.38 ± 0.32D). In the Nintendo experiment the accommodative responses obtained in modes 2D (-2.57 ± 0.30D) and 3D (-2.49 ± 0.28D) were significantly different (paired t-test p = 0.03). The need to use shutter-glasses may affect the accommodative response during the viewing of displays, and the accommodative response when playing

  7. 360-degree 3D transvaginal ultrasound system for high-dose-rate interstitial gynaecological brachytherapy needle guidance

    NASA Astrophysics Data System (ADS)

    Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron

    2017-03-01

    Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.

  8. Antigenic and 3D structural characterization of soluble X4 and hybrid X4-R5 HIV-1 Env trimers

    PubMed Central

    2014-01-01

    Background HIV-1 is decorated with trimeric glycoprotein spikes that enable infection by engaging CD4 and a chemokine coreceptor, either CCR5 or CXCR4. The variable loop 3 (V3) of the HIV-1 envelope protein (Env) is the main determinant for coreceptor usage. The predominant CCR5 using (R5) HIV-1 Env has been intensively studied in function and structure, whereas the trimeric architecture of the less frequent, but more cytopathic CXCR4 using (X4) HIV-1 Env is largely unknown, as are the consequences of sequence changes in and near V3 on antigenicity and trimeric Env structure. Results Soluble trimeric gp140 Env constructs were used as immunogenic mimics of the native spikes to analyze their antigenic properties in the context of their overall 3D structure. We generated soluble, uncleaved, gp140 trimers from a prototypic T-cell line-adapted (TCLA) X4 HIV-1 strain (NL4-3) and a hybrid (NL4-3/ADA), in which the V3 spanning region was substituted with that from the primary R5 isolate ADA. Compared to an ADA (R5) gp140, the NL4-3 (X4) construct revealed an overall higher antibody accessibility, which was most pronounced for the CD4 binding site (CD4bs), but also observed for mAbs against CD4 induced (CD4i) epitopes and gp41 mAbs. V3 mAbs showed significant binding differences to the three constructs, which were refined by SPR analysis. Of interest, the NL4-3/ADA construct with the hybrid NL4-3/ADA CD4bs showed impaired CD4 and CD4bs mAb reactivity despite the presence of the essential elements of the CD4bs epitope. We obtained 3D reconstructions of the NL4-3 and the NL4-3/ADA gp140 trimers via electron microscopy and single particle analysis, which indicates that both constructs inherit a propeller-like architecture. The first 3D reconstruction of an Env construct from an X4 TCLA HIV-1 strain reveals an open conformation, in contrast to recently published more closed structures from R5 Env. Exchanging the X4 V3 spanning region for that of R5 ADA did not alter the open

  9. View-Based Models of 3D Object Recognition and Class-Specific Invariance

    DTIC Science & Technology

    1994-04-01

    …underlie recognition of geon-like components (see Edelman, 1991 and Biederman, 1987). ||x - t_a||_W^2 = (x - t_a)^T W^T W (x - t_a)  (3). View-invariant features… [4] I. Biederman. Recognition-by-components: a theory of human image understanding. Psychol. Review, 94:115-147, 1987. [20] B. Olshausen, C. Anderson, and D. Van Essen. A neural model of visual attention and invariant pattern…

  10. A topological framework for interactive queries on 3D models in the Web.

    PubMed

    Figueiredo, Mauro; Rodrigues, José I; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications.
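The adjacency and incidence queries described above can be sketched with a few dictionaries built from a triangle list. This is an illustrative toolkit-agnostic sketch, not the TopTri API; the function and variable names are invented for the example.

```python
from collections import defaultdict

def build_topology(triangles):
    """Incidence maps for a triangle mesh: vertex -> faces, edge -> faces."""
    vert_faces = defaultdict(set)
    edge_faces = defaultdict(set)
    for f, (a, b, c) in enumerate(triangles):
        for v in (a, b, c):
            vert_faces[v].add(f)
        for e in ((a, b), (b, c), (a, c)):
            edge_faces[tuple(sorted(e))].add(f)   # canonical edge key
    return vert_faces, edge_faces

def adjacent_faces(edge_faces, face, triangles):
    """Faces sharing an edge with `face` (the adjacency query)."""
    a, b, c = triangles[face]
    adj = set()
    for e in ((a, b), (b, c), (a, c)):
        adj |= edge_faces[tuple(sorted(e))]
    adj.discard(face)
    return adj

# Two triangles sharing the edge (1, 2)
tris = [(0, 1, 2), (1, 2, 3)]
vf, ef = build_topology(tris)
print(adjacent_faces(ef, 0, tris))   # -> {1}
```

Precomputing these maps once on the server is what lets a Web3D client answer adjacency queries in constant time even for meshes with millions of triangles.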

  11. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  12. Intensity-based 2D 3D registration for lead localization in robot guided deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad

    2017-03-01

    Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality to determine lead positioning, however, remains controversial. Current approaches entail the implementation of computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D 3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D x-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. Accuracy of lead localization determined from 2D 3D registration was compared to conventional 3D 3D registration in a subsequent patient study. The mean Euclidean distance of lead coordinates estimated from intensity-based 2D 3D registration versus flat-panel detector CT 3D 3D registration was 0.7 mm  ±  0.2 mm. Maximum values of these distances amounted to 1.2 mm. To further investigate 2D 3D registration, a simulation study was conducted, challenging two observers to visually assess artificially generated 2D 3D registration errors. Of the simulated deviations visually assessed as sufficient, 95% had a registration error below 0.7 mm. In conclusion, 2D 3D intensity-based registration revealed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means for intraoperative lead localization.

  13. Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space.

    PubMed

    Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus

    2017-05-01

    Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.

  14. Art-Science-Technology collaboration through immersive, interactive 3D visualization

    NASA Astrophysics Data System (ADS)

    Kellogg, L. H.

    2014-12-01

    At the W. M. Keck Center for Active Visualization in Earth Sciences (KeckCAVES), a group of geoscientists and computer scientists collaborate to develop and use interactive, immersive, 3D visualization technology to view, manipulate, and interpret data for scientific research. The visual impact of immersion in a CAVE environment can be extremely compelling, and from the outset KeckCAVES scientists have collaborated with artists to bring this technology to creative works, including theater and dance performance, installations, and gamification. The first full-fledged collaboration designed and produced a performance called "Collapse: Suddenly falling down", choreographed by Della Davidson, which investigated the human and cultural response to natural and man-made disasters. Scientific data (lidar scans of disaster sites, such as landslides and mine collapses) were fully integrated into the performance by the Sideshow Physical Theatre. This presentation will discuss both the technological and creative characteristics of, and lessons learned from, the collaboration. Many parallels between the artistic and scientific process emerged. We observed that both artists and scientists set out to investigate a topic, solve a problem, or answer a question. Refining that question or problem is an essential part of both the creative and scientific workflow. Both artists and scientists seek understanding (in this case understanding of natural disasters). Differences also emerged; the group noted that the scientists sought clarity (including but not limited to quantitative measurements) as a means to understanding, while the artists embraced ambiguity, also as a means to understanding. Subsequent art-science-technology collaborations have responded to evolving technology for visualization and include gamification as a means to explore data, and use of augmented reality for informal learning in museum settings.

  15. MorphoGraphX: A platform for quantifying morphogenesis in 4D

    PubMed Central

    Barbier de Reuille, Pierre; Routier-Kierzkowska, Anne-Lise; Kierzkowski, Daniel; Bassel, George W; Schüpbach, Thierry; Tauriello, Gerardo; Bajpai, Namrata; Strauss, Sören; Weber, Alain; Kiss, Annamaria; Burian, Agata; Hofhuis, Hugo; Sapala, Aleksandra; Lipowczan, Marcin; Heimlicher, Maria B; Robinson, Sarah; Bayer, Emmanuelle M; Basler, Konrad; Koumoutsakos, Petros; Roeder, Adrienne HK; Aegerter-Wilmsen, Tinri; Nakayama, Naomi; Tsiantis, Miltos; Hay, Angela; Kwiatkowska, Dorota; Xenarios, Ioannis; Kuhlemeier, Cris; Smith, Richard S

    2015-01-01

    Morphogenesis emerges from complex multiscale interactions between genetic and mechanical processes. To understand these processes, the evolution of cell shape, proliferation and gene expression must be quantified. This quantification is usually performed either in full 3D, which is computationally expensive and technically challenging, or on 2D planar projections, which introduces geometrical artifacts on highly curved organs. Here we present MorphoGraphX (www.MorphoGraphX.org), a software that bridges this gap by working directly with curved surface images extracted from 3D data. In addition to traditional 3D image analysis, we have developed algorithms to operate on curved surfaces, such as cell segmentation, lineage tracking and fluorescence signal quantification. The software's modular design makes it easy to include existing libraries, or to implement new algorithms. Cell geometries extracted with MorphoGraphX can be exported and used as templates for simulation models, providing a powerful platform to investigate the interactions between shape, genes and growth. DOI: http://dx.doi.org/10.7554/eLife.05864.001 PMID:25946108

  16. Interactive 3D visualization of structural changes in the brain of a person with corticobasal syndrome

    PubMed Central

    Hänel, Claudia; Pieperhoff, Peter; Hentschel, Bernd; Amunts, Katrin; Kuhlen, Torsten

    2014-01-01

    The visualization of the progression of brain tissue loss in neurodegenerative diseases like corticobasal syndrome (CBS) can provide not only information about the localization and distribution of the volume loss, but also helps to understand the course and the causes of this neurodegenerative disorder. The visualization of such medical imaging data is often based on 2D sections, because they show both internal and external structures in one image. Spatial information, however, is lost. 3D visualization of imaging data can solve this problem, but it faces the difficulty that more internally located structures may be occluded by structures near the surface. Here, we present an application with two designs for the 3D visualization of the human brain to address these challenges. In the first design, brain anatomy is displayed semi-transparently; it is supplemented by an anatomical section and cortical areas for spatial orientation, and the volumetric data of volume loss. The second design is guided by the principle of importance-driven volume rendering: A direct line-of-sight to the relevant structures in the deeper parts of the brain is provided by cutting out a frustum-like piece of brain tissue. The application was developed to run both in standard desktop environments and in immersive virtual reality environments with stereoscopic viewing for improving the depth perception. We conclude that the presented application facilitates the perception of the extent of brain degeneration with respect to its localization and affected regions. PMID:24847243

  17. Magnetic assembly of 3D cell clusters: visualizing the formation of an engineered tissue.

    PubMed

    Ghosh, S; Kumar, S R P; Puri, I K; Elankumaran, S

    2016-02-01

    Contactless magnetic assembly of cells into 3D clusters has been proposed as a novel means for 3D tissue culture that eliminates the need for artificial scaffolds. However, thus far its efficacy has only been studied by comparing expression levels of generic proteins. Here, it has been evaluated by visualizing the evolution of cell clusters assembled by magnetic forces, to examine their resemblance to in vivo tissues. Cells were labeled with magnetic nanoparticles, then assembled into 3D clusters using magnetic force. Scanning electron microscopy was used to image intercellular interactions and morphological features of the clusters. When cells were held together by magnetic forces for a single day, they formed intercellular contacts through extracellular fibers. These kept the clusters intact once the magnetic forces were removed, thus serving the primary function of scaffolds. The cells self-organized into constructs consistent with the corresponding tissues in vivo. Epithelial cells formed sheets while fibroblasts formed spheroids and exhibited position-dependent morphological heterogeneity. Cells on the periphery of a cluster were flattened while those within were spheroidal, a well-known characteristic of connective tissues in vivo. Cells assembled by magnetic forces presented visual features representative of their in vivo states but largely absent in monolayers. This established the efficacy of contactless assembly as a means to fabricate in vitro tissue models. © 2016 John Wiley & Sons Ltd.

  18. Value of PET/CT 3D visualization of head and neck squamous cell carcinoma extended to mandible.

    PubMed

    Lopez, R; Gantet, P; Julian, A; Hitzel, A; Herbault-Barres, B; Alshehri, S; Payoux, P

    2018-05-01

    To study an original 3D visualization of head and neck squamous cell carcinoma extending to the mandible by using [18F]-NaF PET/CT and [18F]-FDG PET/CT imaging, along with an innovative FDG and NaF image analysis using dedicated software. The main interest of the 3D evaluation is to obtain a better visualization of bone extension in such cancers, which could also help avoid unsatisfactory surgical treatment later on. A prospective study was carried out from November 2016 to September 2017. Twenty patients with head and neck squamous cell carcinoma extending to the mandible (stage 4 in the UICC classification) underwent [18F]-NaF and [18F]-FDG PET/CT. We compared the delineation of 3D quantification obtained with [18F]-NaF and [18F]-FDG PET/CT. In order to carry out this comparison, a method of visualization and quantification of PET images was developed. This new approach was based on a process of quantification of radioactive activity within the mandibular bone that objectively defined the significant limits of this activity on PET images and on a 3D visualization. Furthermore, the spatial limits obtained by analysis of the PET/CT 3D images were compared to those obtained by histopathological examination of mandibular resection, which confirmed intraosseous extension to the mandible. The [18F]-NaF PET/CT imaging confirmed the mandibular extension in 85% of cases, whereas [18F]-FDG PET/CT imaging did not show it. The [18F]-NaF PET/CT was significantly more accurate than [18F]-FDG PET/CT in 3D assessment of intraosseous extension of head and neck squamous cell carcinoma. This new 3D information shows its importance in the imaging approach to cancers. All cases of mandibular extension suspected on [18F]-NaF PET/CT imaging were confirmed based on histopathological results as a reference. The [18F]-NaF PET/CT 3D visualization should be included in the pre-treatment workups of head and neck cancers. With the use of a dedicated software which enables objective delineation of

  19. Regional subsidence history and 3D visualization with MATLAB of the Vienna Basin, central Europe

    NASA Astrophysics Data System (ADS)

    Lee, E.; Novotny, J.; Wagreich, M.

    2013-12-01

    This study reconstructed the subsidence history using backstripping and 3D visualization techniques, to understand the tectonic evolution of the Neogene Vienna Basin. Backstripping removes the compaction effect of sediment loading and quantifies the tectonic subsidence. The amount of decompaction was calculated from porosity-depth relationships evaluated from seismic velocity data acquired from two boreholes. About 100 wells have been investigated to quantify the subsidence history of the Vienna Basin. The wells have been sorted into 10 groups; N1-4 in the northern part, C1-4 in the central part and L1-2 in the northernmost and easternmost parts, based on their position within the same block bordered by major faults. To visualize 3D subsidence maps, the wells were arranged as a set of 3D points based on their map location (x, y) and depths (z1, z2, z3 ...). The division of the stratigraphic column and age range was arranged based on the Central Paratethys regional Stages. In this study, MATLAB, a numerical computing environment, was used to calculate the TPS interpolation function. The Thin-Plate Spline (TPS) can be employed to reconstruct a smooth surface from a set of 3D points. The basic physical model of the TPS is based on the bending behavior of a thin metal sheet that is constrained only by a sparse set of fixed points. In the Lower Miocene, 3D subsidence maps show strong evidence that the pre-Neogene basement of the Vienna Basin was subsiding along borders of the Alpine-Carpathian nappes. This subsidence event is represented by a piggy-back basin developed on top of the NW-ward moving thrust sheets. In the late Lower Miocene, Groups C and N display a typical subsidence pattern for a pull-apart basin with a very high subsidence event (0.2 - 1.0 km/Ma). After the event, Group N shows remarkably decreasing subsidence, following the thin-skinned extension which was regarded as the extension model of the Vienna Basin in the literature. But the subsidence in
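The thin-plate spline surface reconstruction described above can be sketched in a few lines. The study used MATLAB; the sketch below is a minimal NumPy equivalent of the standard 2D TPS linear system (kernel U(r) = r² log r plus an affine part), with invented illustrative well locations and depths rather than the Vienna Basin data.

```python
import numpy as np

def tps_fit(xy, z, eps=1e-10):
    """Fit a 2D thin-plate spline z = f(x, y) through scattered points."""
    n = len(xy)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = np.where(d > eps, d**2 * np.log(d + eps), 0.0)   # U(r) = r^2 log r
    P = np.hstack([np.ones((n, 1)), xy])                 # affine part
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    coef = np.linalg.solve(A, np.concatenate([z, np.zeros(3)]))
    return coef[:n], coef[n:]                            # kernel, affine weights

def tps_eval(xy_train, w, a, xy_new, eps=1e-10):
    """Evaluate the fitted spline at new map locations."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_train[None, :, :], axis=-1)
    K = np.where(d > eps, d**2 * np.log(d + eps), 0.0)
    return K @ w + a[0] + xy_new @ a[1:]

# Hypothetical well locations (x, y) and horizon depths z (km)
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
depth = np.array([1.0, 1.2, 1.1, 1.5, 1.3])
w, a = tps_fit(pts, depth)
surface = tps_eval(pts, w, a, pts)   # reproduces the input depths exactly
```

Because the TPS interpolates exactly, the reconstructed surface honors every well while bending smoothly between them, like the constrained metal sheet in the physical analogy.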

  20. Visualization and 3D Reconstruction of Flame Cells of Taenia solium (Cestoda)

    PubMed Central

    Valverde-Islas, Laura E.; Arrangoiz, Esteban; Vega, Elio; Robert, Lilia; Villanueva, Rafael; Reynoso-Ducoing, Olivia; Willms, Kaethe; Zepeda-Rodríguez, Armando; Fortoul, Teresa I.; Ambrosio, Javier R.

    2011-01-01

    Background Flame cells are the terminal cells of protonephridial systems, which are part of the excretory systems of invertebrates. Although the knowledge of their biological role is incomplete, there is a consensus that these cells perform excretion/secretion activities. It has been suggested that the flame cells participate in the maintenance of the osmotic environment that the cestodes require to live inside their hosts. In live Platyhelminthes, by light microscopy, the cells appear beating their flames rapidly and, at the ultrastructural level, the cells have a large body enclosing a tuft of cilia. Few studies have been performed to define the localization of the cytoskeletal proteins of these cells, and it is unclear how these proteins are involved in cell function. Methodology/Principal Findings Parasites of two different developmental stages of T. solium were used: cysticerci recovered from naturally infected pigs and intestinal adults obtained from immunosuppressed and experimentally infected golden hamsters. Hamsters were fed viable cysticerci to recover adult parasites after one month of infection. The present study focused on the flame cells of cysticercus tissues. Using several methods such as video, confocal and electron microscopy, in addition to computational analysis for reconstruction and modeling, we have provided a 3D visual rendition of the cytoskeletal architecture of Taenia solium flame cells. Conclusions/Significance We consider that visual representations of cells open a new way for understanding the role of these cells in the excretory systems of Platyhelminths. After reconstruction, the observation of high resolution 3D images allowed for virtual observation of the interior composition of cells. A combination of microscopic images, computational reconstructions and 3D modeling of cells appears to be useful for inferring the cellular dynamics of the flame cell cytoskeleton. PMID:21412407

  1. Integrating Defense, Diplomacy, and Development (3D) in the Naval Special Warfare Operator

    DTIC Science & Technology

    2010-12-01

    Integrating Defense, Diplomacy, and Development (3D) in the Naval Special Warfare Operator, by William Fiack, William Roberts, and Timothy Sulick, December 2010. The views expressed in…

  2. N=2 Minimal Conformal Field Theories and Matrix Bifactorisations of x^d

    NASA Astrophysics Data System (ADS)

    Davydov, Alexei; Camacho, Ana Ros; Runkel, Ingo

    2018-01-01

    We establish an action of the representations of N = 2-superconformal symmetry on the category of matrix factorisations of the potentials x^d and x^d - y^d, for d odd. More precisely we prove a tensor equivalence between (a) the category of Neveu-Schwarz-type representations of the N = 2 minimal super vertex operator algebra at central charge 3-6/d, and (b) a full subcategory of graded matrix factorisations of the potential x^d - y^d. The subcategory in (b) is given by permutation-type matrix factorisations with consecutive index sets. The physical motivation for this result is the Landau-Ginzburg/conformal field theory correspondence, where it amounts to the equivalence of a subset of defects on both sides of the correspondence. Our work builds on results by Brunner and Roggenkamp [BR], where an isomorphism of fusion rules was established.
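As a concrete illustration of the objects involved (a standard textbook example, not taken from the paper): a matrix factorisation of a potential $W$ is a pair of matrices $(E, J)$ with $EJ = JE = W \cdot \mathrm{Id}$, and for $W = x^d - y^d$ the permutation-type factorisations arise from grouping the linear factors over $\mathbb{C}$:

```latex
x^d - y^d \;=\; \prod_{i=0}^{d-1} \bigl(x - \eta^i y\bigr),
\qquad \eta = e^{2\pi i/d},
\\[4pt]
P_I:\quad
E \;=\; \prod_{i \in I} \bigl(x - \eta^i y\bigr),
\qquad
J \;=\; \prod_{i \notin I} \bigl(x - \eta^i y\bigr),
\qquad
EJ \;=\; \bigl(x^d - y^d\bigr)\cdot \mathrm{Id}.
```

The "consecutive index sets" in the abstract refer to choosing $I$ to be a run of consecutive residues mod $d$.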

  3. X-ray imaging and 3D reconstruction of in-flight exploding foil initiator flyers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willey, T. M., E-mail: willey1@llnl.gov; Champley, K., E-mail: champley1@llnl.gov; Hodgin, R.

    Exploding foil initiators (EFIs), also known as slapper initiators or detonators, offer clear safety and timing advantages over other means of initiating detonation in high explosives. This work outlines a new capability for imaging and reconstructing three-dimensional images of operating EFIs. Flyer size and intended velocity were chosen based on parameters of the imaging system. The EFI metal plasma and plastic flyer traveling at 2.5 km/s were imaged with short ∼80 ps pulses spaced 153.4 ns apart. A four-camera system acquired 4 images from successive x-ray pulses from each shot. The first frame was prior to bridge burst, the 2nd images the flyer about 0.16 mm above the surface but edges of the foil and/or flyer are still attached to the substrate. The 3rd frame captures the flyer in flight, while the 4th shows a completely detached flyer in a position that is typically beyond where slappers strike initiating explosives. Multiple acquisitions at different incident angles and advanced computed tomography reconstruction algorithms were used to produce a 3-dimensional image of the flyer at 0.16 and 0.53 mm above the surface. Both the x-ray images and the 3D reconstruction show a strong anisotropy in the shape of the flyer and underlying foil parallel vs. perpendicular to the initiating current and electrical contacts. These results provide detailed flyer morphology during the operation of the EFI.

  4. Registering 2D and 3D imaging data of bone during healing.

    PubMed

    Hoerth, Rebecca M; Baum, Daniel; Knötel, David; Prohaska, Steffen; Willie, Bettina M; Duda, Georg N; Hege, Hans-Christian; Fratzl, Peter; Wagermaier, Wolfgang

    2015-04-01

    PURPOSE/AIMS OF THE STUDY: Bone's hierarchical structure can be visualized using a variety of methods. Many techniques, such as light and electron microscopy, generate two-dimensional (2D) images, while micro-computed tomography (µCT) allows a direct representation of the three-dimensional (3D) structure. In addition, different methods provide complementary structural information, such as the arrangement of organic or inorganic compounds. The overall aim of the present study is to answer bone research questions by linking information of different 2D and 3D imaging techniques. A great challenge in combining different methods arises from the fact that they usually reflect different characteristics of the real structure. We investigated bone during healing by means of µCT and several 2D methods. Backscattered electron images were used to qualitatively evaluate the tissue's calcium content and served as a position map for other experimental data. Nanoindentation and X-ray scattering experiments were performed to visualize mechanical and structural properties. We present an approach for the registration of 2D data in a 3D µCT reference frame, where scanning electron microscopy serves as the methodological link. Backscattered electron images are perfectly suited for registration into µCT reference frames, since both show structures based on the same physical principles. We introduce specific registration tools that have been developed to perform the registration process in a semi-automatic way. By applying this routine, we were able to locate structural information (e.g. mineral particle properties) exactly in the 3D bone volume. In bone healing studies this will help to better understand basic formation, remodeling and mineralization processes.

  5. PINS-3X Operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.H. Seabury

    2013-09-01

    Idaho National Laboratory's (INL's) Portable Isotopic Neutron Spectroscopy System (PINS) non-intrusively identifies the chemical fill of munitions and sealed containers. The PINS-3X variant of the system is used to identify explosives and uses a deuterium-tritium (DT) electronic neutron generator (ENG) as the neutron source. Use of the system, including possession and use of the neutron generator and shipment of the system components, requires compliance with a number of regulations. This report outlines some of these requirements, as well as some of the requirements for using the system outside of INL.

  6. Target surface finding using 3D SAR data

    NASA Astrophysics Data System (ADS)

    Ruiter, Jason R.; Burns, Joseph W.; Subotic, Nikola S.

    2005-05-01

    Methods of generating more literal, easily interpretable imagery from 3-D SAR data are being studied to provide all weather, near-visual target identification and/or scene interpretation. One method of approaching this problem is to automatically generate shape-based geometric renderings from the SAR data. In this paper we describe the application of the Marching Tetrahedrons surface finding algorithm to 3-D SAR data. The Marching Tetrahedrons algorithm finds a surface through the 3-D data cube, which provides a recognizable representation of the target surface. This algorithm was applied to the public-release X-patch simulations of a backhoe, which provided densely sampled 3-D SAR data sets. The robustness of the algorithm to noise and spatial resolution was explored. Surface renderings were readily recognizable over a range of spatial resolutions, and maintained their fidelity even under relatively low Signal-to-Noise Ratio (SNR) conditions.
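
    Marching Tetrahedrons itself is involved; as a much-simplified stand-in for surface finding, the sketch below thresholds a 3-D data cube and counts exposed voxel faces, the raw geometry a real isosurface algorithm would turn into triangles. The volume and threshold are invented.

```python
import numpy as np

def boundary_faces(volume, level):
    """Count exposed faces of voxels above `level` -- a crude stand-in for
    surface extraction: each exposed face would become two triangles."""
    solid = volume > level
    padded = np.pad(solid, 1, constant_values=False)
    faces = 0
    for axis in range(3):
        # a face is exposed wherever a solid voxel borders a non-solid one
        a = np.moveaxis(padded, axis, 0)
        faces += np.sum(a[1:] != a[:-1])
    return int(faces)

# a 2x2x2 solid block inside an empty cube: 6 sides * 4 voxel faces = 24
vol = np.zeros((6, 6, 6))
vol[2:4, 2:4, 2:4] = 1.0
n = boundary_faces(vol, 0.5)
```

    A marching-tetrahedra implementation would instead interpolate the crossing point of the iso-level along each tetrahedron edge, yielding a smooth mesh rather than blocky voxel faces.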

  7. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…
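
    The printable orbital surfaces come from evaluating the hydrogenic wavefunctions on a 3D grid and meshing an iso-density level. A minimal sketch for the 2p_z orbital in atomic units (the grid extents are arbitrary choices, not from the article):

```python
import numpy as np

def psi_2pz(x, y, z):
    """Hydrogen 2p_z wavefunction in atomic units (Bohr radii):
    psi = (1 / (4*sqrt(2*pi))) * r * exp(-r/2) * cos(theta),
    where r*cos(theta) = z."""
    r = np.sqrt(x * x + y * y + z * z)
    return (1.0 / (4.0 * np.sqrt(2.0 * np.pi))) * z * np.exp(-r / 2.0)

# sample |psi|^2 on a grid -- the data an isosurface/3D-print pipeline
# would mesh at a chosen density level
g = np.linspace(-10, 10, 41)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
density = psi_2pz(X, Y, Z) ** 2

# sanity checks on the known shape: nodal plane at z = 0, and |psi| along
# the z-axis peaks at r = 2 Bohr radii
node = np.abs(psi_2pz(3.0, 2.0, 0.0))
on_axis = psi_2pz(0.0, 0.0, np.array([1.0, 2.0, 3.0]))
```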

  8. 3D Radiative Transfer in Eta Carinae: Application of the SimpleX Algorithm to 3D SPH Simulations of Binary Colliding Winds

    NASA Technical Reports Server (NTRS)

    Clementel, N.; Madura, T. I.; Kruip, C. J. H.; Icke, V.; Gull, T. R.

    2014-01-01

    Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in Eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in Eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post-processing 3D SPH data with SimpleX is a viable tool to create ionization maps for Eta Car.
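
    SimpleX itself transports radiation on an unstructured grid; as a simpler illustration of the balance it solves for, the sketch below computes textbook photoionization equilibrium for pure hydrogen. The rate values are invented round numbers, not from the paper.

```python
import math

def ionized_fraction(gamma, alpha_B, n_H):
    """Equilibrium x = n_HII / n_H for pure hydrogen, balancing
    photoionization (1 - x) * gamma against recombination
    alpha_B * n_H * x**2; solves x**2 + A*x - A = 0 with A = gamma/(alpha_B*n_H)."""
    A = gamma / (alpha_B * n_H)
    return (-A + math.sqrt(A * A + 4.0 * A)) / 2.0

# typical case-B recombination coefficient near 1e4 K (cm^3 s^-1)
alpha_B = 2.6e-13
x_hi = ionized_fraction(1e-8, alpha_B, 100.0)    # strong radiation field
x_lo = ionized_fraction(1e-16, alpha_B, 100.0)   # shielded region
```

    The shielded case stays almost neutral, mirroring why collisional ionization matters in regions blocked from the secondary's radiation.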

  9. 3D Radiative Transfer in Eta Carinae: Application of the SimpleX Algorithm to 3D SPH Simulations of Binary Colliding Winds

    NASA Technical Reports Server (NTRS)

    Clementel, N.; Madura, T. I.; Kruip, C.J.H.; Icke, V.; Gull, T. R.

    2014-01-01

    Eta Carinae is an ideal astrophysical laboratory for studying massive binary interactions and evolution, and stellar wind-wind collisions. Recent three-dimensional (3D) simulations set the stage for understanding the highly complex 3D flows in Eta Car. Observations of different broad high- and low-ionization forbidden emission lines provide an excellent tool to constrain the orientation of the system, the primary's mass-loss rate, and the ionizing flux of the hot secondary. In this work we present the first steps towards generating synthetic observations to compare with available and future HST/STIS data. We present initial results from full 3D radiative transfer simulations of the interacting winds in Eta Car. We use the SimpleX algorithm to post-process the output from 3D SPH simulations and obtain the ionization fractions of hydrogen and helium assuming three different mass-loss rates for the primary star. The resultant ionization maps of both species constrain the regions where the observed forbidden emission lines can form. Including collisional ionization is necessary to achieve a better description of the ionization states, especially in the areas shielded from the secondary's radiation. We find that reducing the primary's mass-loss rate increases the volume of ionized gas, creating larger areas where the forbidden emission lines can form. We conclude that post-processing 3D SPH data with SimpleX is a viable tool to create ionization maps for Eta Car.

  10. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
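
    For readers unfamiliar with the solution files mentioned above: a PLOT3D "q" file stores the grid dimensions, four reference scalars (Mach number, angle of attack, Reynolds number, time), and then the five flow variables for every grid point. The sketch below round-trips a single-grid ASCII variant; real files are frequently Fortran unformatted and multi-grid, so the exact layout here is a simplified assumption.

```python
import os
import tempfile
import numpy as np

def write_q_ascii(path, q, mach=0.5, alpha=0.0, reyn=1e6, time=0.0):
    """Write a single-grid ASCII PLOT3D-style q file: dimensions, four
    reference values, then density, x/y/z-momentum, and stagnation energy."""
    ni, nj, nk, nvar = q.shape
    assert nvar == 5
    with open(path, "w") as f:
        f.write(f"{ni} {nj} {nk}\n")
        f.write(f"{mach} {alpha} {reyn} {time}\n")
        # Fortran ordering: i fastest, then j, then k, variable slowest
        np.savetxt(f, q.reshape(-1, order="F")[None], fmt="%.12e")

def read_q_ascii(path):
    with open(path) as f:
        ni, nj, nk = map(int, f.readline().split())
        refs = list(map(float, f.readline().split()))
        data = np.loadtxt(f).ravel()
    return data.reshape((ni, nj, nk, 5), order="F"), refs

q = np.random.default_rng(0).random((3, 4, 2, 5))
path = os.path.join(tempfile.gettempdir(), "plot3d_demo.q")
write_q_ascii(path, q)
q2, refs = read_q_ascii(path)
err = np.abs(q - q2).max()
```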

  11. PLOT3D/AMES, GENERIC UNIX VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x-, y-, and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  12. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques applied, post-processing, tracking, and steering, are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  13. 3D nanoscale imaging of the yeast, Schizosaccharomyces pombe, by full-field transmission X-ray microscopy at 5.4 keV.

    PubMed

    Chen, Jie; Yang, Yunhao; Zhang, Xiaobo; Andrews, Joy C; Pianetta, Piero; Guan, Yong; Liu, Gang; Xiong, Ying; Wu, Ziyu; Tian, Yangchao

    2010-07-01

    Three-dimensional (3D) nanoscale structures of the fission yeast, Schizosaccharomyces pombe, can be obtained by full-field transmission hard X-ray microscopy with 30 nm resolution using synchrotron radiation sources. Sample preparation is relatively simple and the samples are portable across various imaging environments, allowing for high-throughput sample screening. The yeast cells were fixed and double-stained with Reynold's lead citrate and uranyl acetate. We performed both absorption contrast and Zernike phase contrast imaging on these cells in order to test this method. The membranes, nucleus, and subcellular organelles of the cells were clearly visualized using absorption contrast mode. The X-ray images of the cells could be used to study the spatial distributions of the organelles in the cells. These results show unique structural information, demonstrating that hard X-ray microscopy is a complementary method for imaging and analyzing biological samples.

  14. 3D nanoscale imaging of the yeast, Schizosaccharomyces pombe, by full-field transmission x-ray microscopy at 5.4 keV

    PubMed Central

    Chen, Jie; Yang, Yunhao; Zhang, Xiaobo; Andrews, Joy C.; Pianetta, Piero; Guan, Yong; Liu, Gang; Xiong, Ying; Wu, Ziyu; Tian, Yangchao

    2010-01-01

    Three-dimensional (3D) nanoscale structures of the fission yeast, Schizosaccharomyces pombe, can be obtained by full-field transmission hard x-ray microscopy with 30 nm resolution using synchrotron radiation sources. Sample preparation is relatively simple and the samples are portable across various imaging environments, allowing for high throughput sample screening. The yeast cells were fixed and double stained with Reynold’s lead citrate and uranyl acetate. We performed both absorption contrast and Zernike phase contrast imaging on these cells in order to test this method. The membranes, nucleus and subcellular organelles of the cells were clearly visualized using absorption contrast mode. The x-ray images of the cells could be used to study the spatial distributions of the organelles in the cells. These results show unique structural information, demonstrating that hard x-ray microscopy is a complementary method for imaging and analyzing biological samples. PMID:20349228

  15. Pre-operative simulation of pediatric mastoid surgery with 3D-printed temporal bone models.

    PubMed

    Rose, Austin S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Rawal, Rounak B; Iseli, Claire E

    2015-05-01

    As the process of additive manufacturing, or three-dimensional (3D) printing, has become more practical and affordable, a number of applications for the technology in the field of pediatric otolaryngology have been considered. One area of promise is temporal bone surgical simulation. Having previously developed a model for temporal bone surgical training using 3D printing, we sought to produce a patient-specific model for pre-operative simulation in pediatric otologic surgery. Our hypothesis was that the creation and pre-operative dissection of such a model was possible, and would demonstrate potential benefits in cases of abnormal temporal bone anatomy. In the case presented, an 11-year-old boy underwent a planned canal-wall-down (CWD) tympano-mastoidectomy for recurrent cholesteatoma preceded by a pre-operative surgical simulation using 3D-printed models of the temporal bone. The models were based on the child's pre-operative clinical CT scan and printed using multiple materials to simulate both bone and soft tissue structures. To help confirm the models as accurate representations of the child's anatomy, distances between various anatomic landmarks were measured and compared to the temporal bone CT scan and the 3D model. The simulation allowed the surgical team to appreciate the child's unusual temporal bone anatomy as well as any challenges that might arise in the safety of the temporal bone laboratory, prior to actual surgery in the operating room (OR). There was minimal variability, in terms of absolute distance (mm) and relative distance (%), in measurements between anatomic landmarks obtained from the patient intra-operatively, the pre-operative CT scan and the 3D-printed models. Accurate 3D temporal bone models can be rapidly produced based on clinical CT scans for pre-operative simulation of specific challenging otologic cases in children, potentially reducing medical errors and improving patient safety. Copyright © 2015 Elsevier Ireland Ltd. 
All rights reserved.

  16. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods to allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows for trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences in the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, and not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
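
    The stacking described above can be sketched as a height field: each strip chart becomes one row, the enumerated third dimension is the row index, and per-chart normalization keeps channels with very different ranges comparable. The telemetry channels below are invented.

```python
import numpy as np

def strip_charts_to_surface(charts):
    """Stack N equal-length strip charts into a 2D height field Z[i, t];
    the chart index i becomes the surface's third axis, and each chart is
    normalized to [0, 1] so disparate value ranges give comparable relief."""
    Z = np.vstack(charts).astype(float)
    lo = Z.min(axis=1, keepdims=True)
    span = np.ptp(Z, axis=1, keepdims=True)
    span[span == 0] = 1.0      # flat charts stay flat instead of dividing by 0
    return (Z - lo) / span

# eight invented ISS-style telemetry channels, 100 samples each
t = np.arange(100)
charts = [np.sin(t / 10.0 + k) + 0.1 * k for k in range(8)]
Z = strip_charts_to_surface(charts)
```

    A plotting layer (e.g. a surface renderer) would then draw Z over the (chart, time) grid; that step is omitted here.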

  17. 3D optical coherence tomography image registration for guiding cochlear implant insertion

    NASA Astrophysics Data System (ADS)

    Cheon, Gyeong-Woo; Jeong, Hyun-Woo; Chalasani, Preetham; Chien, Wade W.; Iordachita, Iulian; Taylor, Russell; Niparko, John; Kang, Jin U.

    2014-03-01

    In cochlear implant surgery, an electrode array is inserted into the cochlear canal to restore hearing to a person who is profoundly deaf or significantly hearing impaired. One critical part of the procedure is the insertion of the electrode array, which looks like a thin wire, into the cochlear canal. Although X-ray or computed tomography (CT) could be used as a reference to evaluate the pathway of the whole electrode array, there is no way to depict the intra-cochlear canal and basal turn intra-operatively to help guide insertion of the electrode array. Optical coherence tomography (OCT) is a highly effective way of visualizing the internal structures of the cochlea. A swept-source OCT (SSOCT) system with a center wavelength of 1.3 µm and 2D galvanometer mirrors was used to achieve 7-mm-depth 3D imaging. A graphics processing unit (GPU), OpenGL, C++, and C# were integrated for simultaneous real-time volumetric rendering. The 3D volume images taken by the OCT system were assembled and registered, so that they could be used to guide a cochlear implant. We performed a feasibility study using both dry and wet temporal bones, and the results are presented.

  18. Approaches to 3D printing teeth from X-ray microtomography.

    PubMed

    Cresswell-Boyes, A J; Barber, A H; Mills, D; Tatla, A; Davis, G R

    2018-06-28

    Artificial teeth have several advantages in preclinical training. The aim of this study is to three-dimensionally (3D) print accurate artificial teeth using scans from X-ray microtomography (XMT). Extracted and artificial teeth were imaged at 90 kV and 40 kV, respectively, to create detailed high contrast scans. The dataset was visualised to produce internal and external meshes subsequently exported to 3D modelling software for modification before finally sending to a slicing program for printing. After appropriate parameter setting, the printer deposited material in specific locations layer by layer, to create a 3D physical model. Scans were manipulated to ensure a clean model was imported into the slicing software, where layer height replicated the high spatial resolution that was observed in the XMT scans. The model was then printed in two different materials (polylactic acid and thermoplastic elastomer). A multimaterial print was created to show the different physical characteristics between enamel and dentine. © 2018 The Authors Journal of Microscopy © 2018 Royal Microscopical Society.
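
    The slicing program in the pipeline above consumes a triangle mesh, typically as STL. A minimal ASCII STL writer is sketched below (not the authors' tool chain; the file name and single test triangle are invented):

```python
import math
import os
import tempfile

def write_stl_ascii(path, triangles, name="tooth"):
    """Write triangles (each three (x, y, z) vertices) as ASCII STL,
    computing each facet normal from the right-hand vertex order."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            ux, uy, uz = (v1[i] - v0[i] for i in range(3))
            wx, wy, wz = (v2[i] - v0[i] for i in range(3))
            nx, ny, nz = uy * wz - uz * wy, uz * wx - ux * wz, ux * wy - uy * wx
            norm = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0  # degenerate-facet guard
            f.write(f"  facet normal {nx/norm:e} {ny/norm:e} {nz/norm:e}\n")
            f.write("    outer loop\n")
            for v in (v0, v1, v2):
                f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# a single triangle in the z = 0 plane; its facet normal must be +z
tri = [((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))]
path = os.path.join(tempfile.gettempdir(), "demo_tooth.stl")
write_stl_ascii(path, tri)
text = open(path).read()
```

    Real XMT-derived meshes would come from an isosurface step upstream; binary STL is more compact but the ASCII form keeps the layout visible.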

  19. Applications of 3D visualization : peer exchange summary report : Raleigh, North Carolina July 8-9, 2009

    DOT National Transportation Integrated Search

    2009-11-01

    This report provides a summary of a 1.5-day peer exchange held in July 2009 focusing on select transportation agencies' applications of 3D visualization techniques. FHWA's Office of Interstate and Border Planning sponsored the peer exchange.

  20. 3D printing X-Ray Quality Control Phantoms. A Low Contrast Paradigm

    NASA Astrophysics Data System (ADS)

    Kapetanakis, I.; Fountos, G.; Michail, C.; Valais, I.; Kalyvas, N.

    2017-11-01

    Current 3D printing technology products may be usable in various biomedical applications. One such application is the creation of X-ray quality control phantoms. In this work a self-assembled 3D printer (Geeetech i3) was used for the design of a simple low-contrast phantom. The printing material was polylactic acid (PLA) at 100% printing density. The low-contrast scheme was achieved by creating air holes with different diameters and thicknesses, ranging from 1 mm to 9 mm. The phantom was irradiated at a Philips Diagnost 93 fluoroscopic installation at 40-70 kV in the semi-automatic mode. The images were recorded with an Agfa CR 30-X CR system and assessed with ImageJ software. The best contrast value observed was approximately 33%. In the low-contrast detectability check, it was found that the 1 mm diameter hole was always visible for thicknesses larger than or equal to 4 mm. A reason for not being able to distinguish 1 mm at smaller thicknesses might be the presence of printing patterns on the final image, which increased the structure noise. In conclusion, the construction of a contrast-resolution phantom with a 3D printer is feasible. The quality of the final product depends upon the printer accuracy and the material characteristics.
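
    The ~33% figure corresponds to the usual radiographic contrast definition between a detail ROI and the surrounding background. The sketch below applies it to invented mean pixel values (not measurements from the paper):

```python
def contrast(i_background, i_detail):
    """Radiographic contrast between a detail ROI and background ROI,
    C = |I_bg - I_detail| / I_bg, returned in percent."""
    return 100.0 * abs(i_background - i_detail) / i_background

# invented mean pixel values for a background ROI and an air-hole ROI
c = contrast(1200.0, 804.0)
```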

  1. CheS-Mapper - Chemical Space Mapping and Visualization in 3D.

    PubMed

    Gütlein, Martin; Karwath, Andreas; Kramer, Stefan

    2012-03-17

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which helps the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis.
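
    Clustering compounds by descriptor vectors, as CheS-Mapper does before embedding them in 3D, can be illustrated with a minimal k-means pass. This is a generic sketch, not CheS-Mapper's actual algorithm; the 2D "descriptors" are invented.

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Minimal k-means: group rows of X so cluster membership reflects
    similarity. Deterministic init: evenly spaced data points as centers."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# two well-separated blobs of invented 2D descriptor vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels, centers = kmeans(X, 2)
same_blob = len(set(labels[:20])) == 1 and len(set(labels[20:])) == 1
```

    A viewer like CheS-Mapper would then lay the clusters out in 3D (e.g. via an embedding of the descriptor space) rather than stopping at labels.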

  2. CheS-Mapper - Chemical Space Mapping and Visualization in 3D

    PubMed Central

    2012-01-01

    Analyzing chemical datasets is a challenging task for scientific researchers in the field of chemoinformatics. It is important, yet difficult, to understand the relationship between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects. In that respect, visualization tools can help to better comprehend the underlying correlations. Our recently developed 3D molecular viewer CheS-Mapper (Chemical Space Mapper) divides large datasets into clusters of similar compounds and consequently arranges them in 3D space, such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. These features can be highlighted within CheS-Mapper, which helps the chemist to better understand patterns and regularities and relate the observations to established scientific knowledge. As a final function, the tool can also be used to select and export specific subsets of a given dataset for further analysis. PMID:22424447

  3. A non-disruptive technology for robust 3D tool tracking for ultrasound-guided interventions.

    PubMed

    Mung, Jay; Vignon, Francois; Jain, Ameet

    2011-01-01

    In the past decade ultrasound (US) has become the preferred modality for a number of interventional procedures, offering excellent soft tissue visualization. The main limitation, however, is limited visualization of surgical tools. A new method is proposed for robust 3D tracking and US image enhancement of surgical tools under US guidance. Small US sensors are mounted on existing surgical tools. As the imager emits acoustic energy, the electrical signal from the sensor is analyzed to reconstruct its 3D coordinates. These coordinates can then be used for 3D surgical navigation, similar to current day tracking systems. A system with real-time 3D tool tracking and image enhancement was implemented on a commercial ultrasound scanner and 3D probe. Extensive water tank experiments with a tracked 0.2 mm sensor show robust performance in a wide range of imaging conditions and tool positions/orientations. The 3D tracking accuracy was 0.36 ± 0.16 mm throughout the imaging volume of 55° × 27° × 150 mm. Additionally, the tool was successfully tracked inside a beating heart phantom. This paper proposes an image enhancement and tool tracking technology with sub-mm accuracy for US-guided interventions. The technology is non-disruptive, both in terms of existing clinical workflow and commercial considerations, showing promise for large-scale clinical impact.
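
    Reconstructing the sensor's coordinates rests on beam geometry plus time of flight: the transmit beam that produced the strongest received signal gives a direction, and the one-way flight time gives the range along it. The sketch below is a simplification of that idea (the real system's beam indexing and calibration are more involved; the numbers are invented):

```python
import math

SPEED_OF_SOUND = 1540.0  # m/s, a common soft-tissue average

def sensor_position(azimuth_deg, elevation_deg, time_of_flight_s):
    """Locate a tool-mounted sensor: range r = c * t along the beam
    direction given by the probe's azimuth and elevation steering angles."""
    r = SPEED_OF_SOUND * time_of_flight_s
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = r * math.cos(el) * math.sin(az)
    y = r * math.sin(el)
    z = r * math.cos(el) * math.cos(az)
    return x, y, z

# sensor straight ahead of the probe with a 50 microsecond one-way flight
x, y, z = sensor_position(0.0, 0.0, 50e-6)
```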

  4. The Pore3D library package for the textural analysis of X-ray computed microtomographic images of rocks

    NASA Astrophysics Data System (ADS)

    Zandomeneghi, Daria; Mancini, Lucia; Voltolini, Marco; Brun, Francesco; Polacci, Margherita

    2010-05-01

    Many research fields in Geosciences require the knowledge of the three-dimensional (3D) texture of rocks. X-ray computed microtomography (μCT) supplies an effective method to directly acquire 3D information. Transmission X-ray μCT is a non-destructive technique based on the mapping of the linear attenuation coefficient of X-rays crossing the investigated sample. The 3D distribution of constituents and the contrast based on the different absorption properties of the components can be enhanced by phase-contrast imaging. On an X-ray tomographic dataset, if spatial resolution at the micron scale and proper software are available, a complete textural and morphological quantitative analysis can be carried out and a number of parameters can be extracted, including geometry and organization of discrete rock components (such as crystals, vesicles, fractures, alteration-compositional zones). In the case of volcanic rocks, μCT can be used to image and quantify the textural and morphological characteristics of the rock constituents, such as vesicles (gas bubbles in solidified, erupted products), crystals and glass fibers. For pyroclastic rocks, investigated parameters to characterize the vesicle portion are the size distribution, geometry and orientation of the pores, the pore-throat size and organization, the pore-surface roughness and the topology of the overall pore and pore-throat network. In this work we present several procedures able to extract quantitative information from CT images of volcanic rocks. The imaging experiments have been carried out at the Elettra Synchrotron Light Laboratory in Trieste (Italy) using both the synchrotron radiation at the SYRMEP beamline and a custom-developed μCT system, named TOMOLAB, equipped with a microfocus X-ray tube and based on a cone-beam geometry. The reconstructed 3D images (or volumes) have been elaborated with a software library, named Pore3D, custom-developed by the SYRMEP group at Elettra. The Pore3D software library

  5. Efficient feature-based 2D/3D registration of transesophageal echocardiography to x-ray fluoroscopy for cardiac interventions

    NASA Astrophysics Data System (ADS)

    Hatt, Charles R.; Speidel, Michael A.; Raval, Amish N.

    2014-03-01

    We present a novel 2D/3D registration algorithm for fusion between transesophageal echocardiography (TEE) and X-ray fluoroscopy (XRF). The TEE probe is modeled as a subset of 3D gradient and intensity point features, which facilitates efficient 3D-to-2D perspective projection. A novel cost function, based on a combination of intensity and edge features, evaluates the registration cost value without the need for time-consuming generation of digitally reconstructed radiographs (DRRs). Validation experiments were performed with simulations and phantom data. For simulations, in silico XRF images of a TEE probe were generated in a number of different pose configurations using a previously acquired CT image. Random misregistrations were applied and our method was used to recover the TEE probe pose and compare the result to the ground truth. Phantom experiments were performed by attaching fiducial markers externally to a TEE probe, imaging the probe with an interventional cardiac angiographic X-ray system, and comparing the pose estimated from the external markers to that estimated from the TEE probe using our algorithm. Simulations found a 3D target registration error of 1.08 (1.92) mm for biplane (monoplane) geometries, while the phantom experiment found a 2D target registration error of 0.69 mm. For phantom experiments, we demonstrated a monoplane tracking frame rate of 1.38 fps. The proposed feature-based registration method is computationally efficient, resulting in near real-time, accurate image-based registration between TEE and XRF.
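
    The 3D-to-2D perspective projection at the core of such pose estimation can be sketched with a pinhole model: probe feature points are mapped into camera coordinates by a candidate pose and divided by depth. This is a generic sketch, not the authors' implementation; the probe points, pose, and focal length are invented.

```python
import numpy as np

def project(points_3d, R, t, f):
    """Pinhole projection of 3D feature points into the 2D image:
    camera coords Xc = R @ X + t, image coords (u, v) = f * (x, y) / z."""
    Xc = points_3d @ R.T + t
    return f * Xc[:, :2] / Xc[:, 2:3]

# invented probe model points (mm), identity pose, probe 1 m from the source
pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 1000.0])
uv = project(pts, R, t, f=1000.0)
```

    A registration loop would repeat this projection for trial poses and score each against the XRF image with the intensity/edge cost, avoiding full DRR rendering.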

  6. A Topological Framework for Interactive Queries on 3D Models in the Web

    PubMed Central

    Figueiredo, Mauro; Rodrigues, José I.; Silvestre, Ivo; Veiga-Pires, Cristina

    2014-01-01

    Several technologies exist to create 3D content for the web. With X3D, WebGL, and X3DOM, it is possible to visualize and interact with 3D models in a web browser. Frequently, three-dimensional objects are stored using the X3D file format for the web. However, there is no explicit topological information, which makes it difficult to design fast algorithms for applications that require adjacency and incidence data. This paper presents a new open source toolkit TopTri (Topological model for Triangle meshes) for Web3D servers that builds the topological model for triangular meshes of manifold or nonmanifold models. Web3D client applications using this toolkit make queries to the web server to get adjacent and incidence information of vertices, edges, and faces. This paper shows the application of the topological information to get minimal local points and iso-lines in a 3D mesh in a web browser. As an application, we present also the interactive identification of stalactites in a cave chamber in a 3D web browser. Several tests show that even for large triangular meshes with millions of triangles, the adjacency and incidence information is returned in real time making the presented toolkit appropriate for interactive Web3D applications. PMID:24977236
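
    The adjacency and incidence tables such a server answers from can be built in one pass over the triangle list. The sketch below is a generic illustration of that bookkeeping, not the TopTri implementation; the two-triangle mesh is invented.

```python
from collections import defaultdict

def build_topology(triangles):
    """From a triangle list, build vertex -> incident faces and
    edge -> adjacent faces tables (edges keyed by sorted vertex pair)."""
    vertex_faces = defaultdict(set)
    edge_faces = defaultdict(set)
    for fi, (a, b, c) in enumerate(triangles):
        for v in (a, b, c):
            vertex_faces[v].add(fi)
        for e in ((a, b), (b, c), (c, a)):
            edge_faces[tuple(sorted(e))].add(fi)
    return vertex_faces, edge_faces

# two triangles sharing the edge (1, 2)
tris = [(0, 1, 2), (1, 3, 2)]
vf, ef = build_topology(tris)
shared = ef[(1, 2)]   # faces adjacent across the shared edge
```

    With these tables precomputed server-side, queries like "faces around vertex v" or "the face across edge e" return in constant time even for meshes with millions of triangles.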

  7. Occupational Analysis Products: Operations Management- AFSC 3E6X1 (CD-ROM)

    DTIC Science & Technology

    computer laser optical disc (CD-ROM); 4 3/4 in.; 23.4 MB. SYSTEMS DETAIL NOTE: ABSTRACT: This is a report of an occupational survey of the Operations ... Management (AFSC 3E6X1, OSSN 2560, Feb 04) career ladder, conducted by the Occupational Analysis Flight, AFOMS. The OSR reports the findings of current

  8. Data Assimilation of Lightning using 1D+3D/4D WRF Var Assimilation Schemes with Non-Linear Observation Operators

    NASA Astrophysics Data System (ADS)

    Navon, M. I.; Stefanescu, R.; Fuelberg, H. E.; Marchand, M.

    2012-12-01

    NASA's launch of the GOES-R Geostationary Lightning Mapper (GLM) in 2015 will provide continuous, full-disc, high-resolution total lightning (IC + CG) data. The data will be available at a horizontal resolution of approximately 9 km. Compared to other types of data, the assimilation of lightning data into operational numerical models has received relatively little attention. Previous efforts at lightning assimilation have mostly employed nudging. This paper describes the implementation of 1D+3D/4D Var assimilation schemes for existing ground-based WTLN (Worldwide Total Lightning Network) lightning observations using non-linear observation operators in the incremental WRFDA system. To mimic the expected output of GLM, the WTLN data were used to generate lightning super-observations characterized by flash rates per 81 km2 per 20 min. A major difficulty associated with variational approaches is the complexity of the observation operator that defines the model equivalent of lightning. We use Convective Available Potential Energy (CAPE) as a proxy between lightning data and model variables. This operator is highly nonlinear. Marecal and Mahfouf (2003) showed that nonlinearities can prevent direct assimilation of rainfall rates in the ECMWF 4D-Var (using the incremental formulation proposed by Courtier et al. (1994)) from being successful. Using data from the 2011 Tuscaloosa, AL tornado outbreak, we show that direct assimilation of lightning data into the WRF 3D/4D-Var systems is limited by this incremental approach: severe threshold limits must be imposed on the innovation vectors to obtain an improved analysis. We have therefore implemented 1D+3D/4D Var schemes to assimilate lightning observations into the WRF model. Their use keeps innovation-vector constraints from excluding large numbers of lightning observations, and it also mitigates the problem that nonlinearities in the moist convective scheme can introduce discontinuities in the cost function.
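
The variational schemes discussed above all minimize the standard cost function; in its generic (textbook) form, with $H$ standing for the nonlinear, CAPE-based lightning observation operator, it reads:

```latex
J(\mathbf{x}) \;=\; \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_b)^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
\;+\; \tfrac{1}{2}\,\bigl(H(\mathbf{x})-\mathbf{y}\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(H(\mathbf{x})-\mathbf{y}\bigr)
```

where $\mathbf{x}_b$ is the background state, $\mathbf{y}$ the lightning observations, and $\mathbf{B}$, $\mathbf{R}$ the background and observation error covariances. The incremental formulation linearizes $H$ about $\mathbf{x}_b$, which is exactly where a highly nonlinear operator such as the CAPE proxy causes the difficulties the abstract describes.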

  9. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  10. Make or Buy: An Analysis of the Impacts of 3D Printing Operations, 3D Laser Scanning Technology, and Collaborative Product Lifecycle Management on Ship Maintenance and Modernization Cost Savings

    DTIC Science & Technology

    2016-01-30

    SPONSORED REPORT SERIES Make or Buy: An Analysis of the Impacts of 3D Printing Operations, 3D Laser Scanning Technology, and Collaborative...Report Series Make or Buy: An Analysis of the Impacts of 3D Printing Operations, 3D Laser Scanning Technology, and Collaborative Product Lifecycle...Application Areas for 3D Printing ........................................................ 36 Figure 15. Potential Applications of 3D

  11. HFEM3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiss, Chester J

    Software solves the three-dimensional Poisson equation div(k grad(u)) = f by the finite element method for the case when material properties, k, are distributed over a hierarchy of edges, facets, and tetrahedra in the finite element mesh. The method is described in Weiss, CJ, Finite element analysis for model parameters distributed on a hierarchy of geometric simplices, Geophysics, v82, E155-167, doi:10.1190/GEO2017-0058.1 (2017). A standard finite element method for solving Poisson's equation is augmented by including in the 3D stiffness matrix additional 2D and 1D stiffness matrices representing the contributions from material properties associated with mesh faces and edges, respectively. The resulting linear system is solved iteratively using the conjugate gradient method with Jacobi preconditioning. To minimize computer storage for program execution, the linear solver computes matrix-vector contractions element-by-element over the mesh, without explicit storage of the global stiffness matrix. Program output is VTK-compliant for visualization and rendering by third-party software. The program uses dynamic memory allocation, and as such there are no hard limits on problem size beyond those imposed by the operating system and configuration on which the software is run. The dimension, N, of the finite element solution vector is constrained by the addressable space in 32- vs. 64-bit operating systems. Total working space required for the program is approximately 13*N double-precision words.
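
The matrix-free, Jacobi-preconditioned conjugate-gradient strategy described above (the operator applied element-by-element, never assembled globally) can be sketched as follows. Here `apply_A` is a hypothetical stand-in for HFEM3D's element-by-element stiffness contraction; the small dense test system is purely illustrative.

```python
# Hedged sketch: Jacobi-preconditioned CG where A is only available as an
# operator (apply_A), mirroring matrix-free element-by-element contraction.

def cg_jacobi(apply_A, diag_A, b, tol=1e-10, max_iter=200):
    """Solve A x = b for SPD A given apply_A(v) -> A v and diag(A)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                                   # r = b - A*0 = b
    z = [ri / di for ri, di in zip(r, diag_A)]  # Jacobi preconditioning
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) < tol:
            break
        z = [ri / di for ri, di in zip(r, diag_A)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# Tiny SPD test system [[4, 1], [1, 3]] x = [1, 2]; exact x = [1/11, 7/11]
A = [[4.0, 1.0], [1.0, 3.0]]
x = cg_jacobi(lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)],
              [4.0, 3.0], [1.0, 2.0])
```

Only `diag_A` and the action of `A` are ever needed, which is what keeps the global stiffness matrix out of memory.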

  12. H-Ransac a Hybrid Point Cloud Segmentation Combining 2d and 3d Data

    NASA Astrophysics Data System (ADS)

    Adam, A.; Chatzilari, E.; Nikolopoulos, S.; Kompatsiaris, I.

    2018-05-01

    In this paper, we present a novel 3D segmentation approach operating on point clouds generated from overlapping images. The aim of the proposed hybrid approach is to effectively segment co-planar objects by leveraging the structural information originating from the 3D point cloud and the visual information from the 2D images, without resorting to learning-based procedures. More specifically, the proposed hybrid approach, H-RANSAC, is an extension of the well-known RANSAC plane-fitting algorithm, incorporating an additional consistency criterion based on the results of 2D segmentation. Our expectation that the integration of 2D data into 3D segmentation will achieve more accurate results is validated experimentally in the domain of 3D city models. Results show that H-RANSAC can successfully delineate building components like main facades and windows, and provide more accurate segmentation results compared to the typical RANSAC plane-fitting algorithm.
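
The RANSAC plane-fitting core that H-RANSAC extends can be sketched as below. The 2D-segmentation consistency criterion that distinguishes H-RANSAC is deliberately omitted, and the iteration count and inlier threshold are illustrative values only.

```python
# Hedged sketch of baseline RANSAC plane fitting on a 3D point cloud.
import random

def plane_from_points(p1, p2, p3):
    """Unit normal n and offset d of the plane n.x + d = 0 through 3 points."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    norm = sum(c * c for c in n) ** 0.5
    n = [c / norm for c in n]                 # raises if points are collinear
    return n, -sum(ni * pi for ni, pi in zip(n, p1))

def ransac_plane(points, iters=100, thresh=0.05, seed=0):
    """Return (normal, offset, inlier_count) of the best-supported plane."""
    rng = random.Random(seed)
    best = (None, None, -1)
    for _ in range(iters):
        try:
            n, d = plane_from_points(*rng.sample(points, 3))
        except ZeroDivisionError:             # degenerate (collinear) sample
            continue
        inliers = sum(abs(sum(ni * pi for ni, pi in zip(n, p)) + d) < thresh
                      for p in points)
        if inliers > best[2]:
            best = (n, d, inliers)
    return best

# Nine points on the plane z = 0 plus two off-plane outliers
pts = [(float(i), float(j), 0.0) for i in range(3) for j in range(3)]
pts += [(0.0, 0.0, 5.0), (1.0, 1.0, 4.0)]
n, d, count = ransac_plane(pts)
```

H-RANSAC would additionally reject candidate planes whose inliers straddle different 2D segments before accepting them.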

  13. Visualizing 3D data obtained from microscopy on the Internet.

    PubMed

    Pittet, J J; Henn, C; Engel, A; Heymann, J B

    1999-01-01

    The Internet is a powerful communication medium increasingly exploited by business and science alike, especially in structural biology and bioinformatics. The traditional presentation of static two-dimensional images of real-world objects on the limited medium of paper can now be shown interactively in three dimensions. Many facets of this new capability have already been developed, particularly in the form of VRML (virtual reality modeling language), but there is a need to extend this capability for visualizing scientific data. Here we introduce a real-time isosurfacing node for VRML, based on the marching cube approach, allowing interactive isosurfacing. A second node does three-dimensional (3D) texture-based volume-rendering for a variety of representations. The use of computers in the microscopic and structural biosciences is extensive, and many scientific file formats exist. To overcome the problem of accessing such data from VRML and other tools, we implemented extensions to SGI's IFL (image format library). IFL is a file format abstraction layer defining communication between a program and a data file. These technologies are developed in support of the BioImage project, aiming to establish a database prototype for multidimensional microscopic data with the ability to view the data within a 3D interactive environment. Copyright 1999 Academic Press.
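
The per-edge step at the heart of the marching cube approach mentioned above places an isosurface vertex where the scalar field crosses the isovalue along a cube edge, by linear interpolation; a minimal sketch (variable names are ours):

```python
# Hedged sketch of the marching-cubes edge interpolation step.

def edge_vertex(p0, p1, v0, v1, iso):
    """Interpolated isosurface crossing on the edge p0->p1, where the
    scalar field takes values v0 and v1 at the endpoints (v0 != v1)."""
    t = (iso - v0) / (v1 - v0)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))

# Field rises from 0 to 2 along a unit edge; iso-level 1 crosses halfway
print(edge_vertex((0, 0, 0), (1, 0, 0), 0.0, 2.0, 1.0))  # (0.5, 0.0, 0.0)
```

The full node additionally classifies each cube by which of its eight corners exceed the isovalue and emits triangles from a lookup table; that machinery is not shown here.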

  14. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation) combined with narration and text, contributes to the learning process of 13- and 14-year-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  15. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    DTIC Science & Technology

    2006-10-01

    Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents. In Visualising Network...University at the start of each fall semester, when numerous new students arrive on campus and begin downloading extensive amounts of audio and...SIGGRAPH ’92 • C. Cruz-Neira, D.J. Sandin, T.A. DeFanti, R.V. Kenyon and J.C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment

  16. Real-time 3D visualization of the thoraco-abdominal surface during breathing with body movement and deformation extraction.

    PubMed

    Povšič, K; Jezeršek, M; Možina, J

    2015-07-01

    Real-time 3D visualization of the breathing displacements can be a useful diagnostic tool for immediately observing the most active regions on the thoraco-abdominal surface. The developed method is capable of separating non-relevant torso movement and deformations from the deformations that are solely related to breathing. This makes it possible to visualize only the breathing displacements. The system is based on the structured laser triangulation principle, with simultaneous spatial and color data acquisition of the thoraco-abdominal region. Based on the tracking of attached passive markers, the torso movement and deformation are compensated using rigid and non-rigid transformation models on the three-dimensional (3D) data. The total time of 3D data processing together with visualization equals 20 ms per cycle. In vitro verification of the rigid movement extraction was performed using the iterative closest point algorithm as a reference. Furthermore, a volumetric evaluation on a live subject was performed to establish the accuracy of the rigid and non-rigid models. The root mean square deviation between the measured and the reference volumes shows an error of ±0.08 dm³ for rigid movement extraction. Similarly, the error was calculated to be ±0.02 dm³ for torsional deformation extraction and ±0.11 dm³ for lateral bending deformation extraction. The results confirm that during torso movement and deformation, the proposed method is sufficiently accurate to visualize only the displacements related to breathing. The method can be used, for example, during a breathing exercise on an indoor bicycle or a treadmill.
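
The marker-based rigid-movement extraction step can be illustrated in two dimensions as a least-squares rotation-plus-translation fit of tracked marker positions. The 3D version would typically use an SVD-based (Kabsch) solve; this 2D closed form, with names of our choosing, is an illustrative simplification only.

```python
# Hedged sketch: 2D rigid (rotation + translation) fit of marker points,
# analogous to the rigid torso-movement extraction described above.
import math

def rigid_fit_2d(src, dst):
    """Least-squares theta, (tx, ty) such that R(theta)*src + t ~= dst."""
    cs = [sum(p[i] for p in src) / len(src) for i in (0, 1)]  # src centroid
    cd = [sum(p[i] for p in dst) / len(dst) for i in (0, 1)]  # dst centroid
    a = [(p[0] - cs[0], p[1] - cs[1]) for p in src]
    b = [(q[0] - cd[0], q[1] - cd[1]) for q in dst]
    num = sum(ax * by - ay * bx for (ax, ay), (bx, by) in zip(a, b))
    den = sum(ax * bx + ay * by for (ax, ay), (bx, by) in zip(a, b))
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return theta, (tx, ty)

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]   # src rotated by +90 degrees
theta, t = rigid_fit_2d(src, dst)
print(round(math.degrees(theta)))  # 90
```

Subtracting the recovered rigid motion from the measured surface leaves only the deformation field, which is then handled by the non-rigid model.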

  17. 3D Geospatial Models for Visualization and Analysis of Groundwater Contamination at a Nuclear Materials Processing Facility

    NASA Astrophysics Data System (ADS)

    Stirewalt, G. L.; Shepherd, J. C.

    2003-12-01

    Analysis of hydrostratigraphy and uranium and nitrate contamination in groundwater at a former nuclear materials processing facility in Oklahoma were undertaken employing 3-dimensional (3D) geospatial modeling software. Models constructed played an important role in the regulatory decision process of the U.S. Nuclear Regulatory Commission (NRC) because they enabled visualization of temporal variations in contaminant concentrations and plume geometry. Three aquifer systems occur at the site, comprised of water-bearing fractured shales separated by indurated sandstone aquitards. The uppermost terrace groundwater system (TGWS) aquifer is composed of terrace and alluvial deposits and a basal shale. The shallow groundwater system (SGWS) aquifer is made up of three shale units and two sandstones. It is separated from the overlying TGWS and underlying deep groundwater system (DGWS) aquifer by sandstone aquitards. Spills of nitric acid solutions containing uranium and radioactive decay products around the main processing building (MPB), leakage from storage ponds west of the MPB, and leaching of radioactive materials from discarded equipment and waste containers contaminated both the TGWS and SGWS aquifers during facility operation between 1970 and 1993. Constructing 3D geospatial property models for analysis of groundwater contamination at the site involved use of EarthVision (EV), a 3D geospatial modeling software developed by Dynamic Graphics, Inc. of Alameda, CA. A viable 3D geohydrologic framework model was initially constructed so property data could be spatially located relative to subsurface geohydrologic units. The framework model contained three hydrostratigraphic zones equivalent to the TGWS, SGWS, and DGWS aquifers in which groundwater samples were collected, separated by two sandstone aquitards. Groundwater data collected in the three aquifer systems since 1991 indicated high concentrations of uranium (>10,000 micrograms/liter) and nitrate (> 500 milligrams

  18. 3D-printing of undisturbed soil imaged by X-ray

    NASA Astrophysics Data System (ADS)

    Bacher, Matthias; Koestel, John; Schwen, Andreas

    2014-05-01

    The unique pore structures in soils are altered easily by water flow. Each sample has a different morphology, and the results of repetitions vary as well. Soil macropores reproduced in durable 3D-printed material avoid erosion and have a known morphology. Therefore, the potential and limitations of reproducing an undisturbed soil sample by 3D-printing were evaluated. We scanned an undisturbed soil column of Ultuna clay soil with a diameter of 7 cm by micro X-ray computed tomography at a resolution of 51 micron. A subsample cube of 2.03 cm length with connected macropores was cut out from this 3D image and printed in five different materials by a 3D-printing service provider. The materials were ABS, Alumide, High Detail Resin, Polyamide, and Prime Grey. The five print-outs of the subsample were tested for hydraulic conductivity using the falling head method. Hydrophobicity was tested by an adapted sessile drop method. To determine the morphology of the print-outs and compare it to the real soil, the print-outs were also scanned by X-ray. The images were analysed with the open-source program ImageJ. The five print-outs copied from the subsample of the soil column were compared by means of their macropore network connectivity, porosity, surface volume, tortuosity, and skeleton. The comparison of pore morphology between the real soil and the print-outs showed that Polyamide reproduced the soil macropore structure best, while the Alumide print-out was the least detailed. Only the largest macropore was represented in all five print-outs. Printing residual or printing-aid material remained in and clogged the pores of all print-out materials apart from Prime Grey. Therefore, infiltration was blocked in these print-outs, and the materials are not suitable even though the 3D-printed pore shapes were well reproduced. All of the investigated materials were insoluble. The sessile drop method showed angles between 53 and 85 degrees. Prime Grey had the fastest flow rate; the

  19. (3+1)D superspace structural determination of two new modulated composite phases: Sr1+x(CuxMn1-x)O3; x=3/11 and x=0.3244

    NASA Astrophysics Data System (ADS)

    El Abed, Ahmed; Gaudin, Etienne; zur Loye, Hans-Conrad; Darriet, Jacques

    2003-01-01

    We report the structure determination of two new phases belonging to the A1+x(A'xB1-x)O3 family of oxides with A=Sr, A'=Cu, and B=Mn, where x=3/11 and x=0.3244, corresponding to a commensurate and an incommensurate composite structure, respectively. These two compounds are the first examples of oxides belonging to the Sr1+x(CuxMn1-x)O3 family. Their structures were solved in the (3+1)-dimensional superspace formalism as modulated composite structures with two subsystems, [(Cu,Mn)O3] and [Sr]. The superspace group used to solve the structures is R-3m(00γ)0s. The first phase (x=3/11), corresponding to the chemical formula Sr14Cu3Mn8O33, was obtained as a single crystal with unit cell parameters of a=9.6025(3) Å and c1=2.5660(8) Å (q=7/11 c1*, Z=3), where c1 is the lattice parameter corresponding to the c-axis of the trigonal subsystem [(Cu,Mn)O3]. The second phase (x=0.3244(1)) is a polycrystalline sample with unit cell parameters of a=9.5933(7) Å and c1=2.5933(3) Å (q=0.6622 c1*, Z=3). In both structures, one-dimensional chains run along the c-axis which contain octahedra and trigonal prisms occupied by manganese and copper atoms, respectively. The refinement results show that in both cases copper occupies the rectangular faces of the trigonal prisms while manganese occupies the octahedral sites. The magnetic measurements of the polycrystalline phase (Sr1+x(CuxMn1-x)O3, x=0.3244(2)) and the Curie constant obtained from the high-temperature susceptibility are in agreement with a spin state configuration of S=3/2 for Mn4+ and S=1/2 for Cu2+.

  20. Application of the polystyrene model made by 3-D printing rapid prototyping technology for operation planning in revision lumbar discectomy.

    PubMed

    Li, Chao; Yang, Mingyuan; Xie, Yang; Chen, Ziqiang; Wang, Chuanfeng; Bai, Yushu; Zhu, Xiaodong; Li, Ming

    2015-05-01

    The objective was to evaluate the effectiveness of 3-D rapid prototyping technology in revision lumbar discectomy. 3-D rapid prototyping technology has not previously been reported in the treatment of revision lumbar discectomy. Patients with recurrent lumbar disc herniation who were preparing to undergo revision lumbar discectomy at a single center between January 2011 and 2013 were included in this analysis. Patients were divided into two groups. In group A, 3-D printing technology was used to create subject-specific lumbar vertebral models in the preoperative planning process. Group B underwent lumbar revision as usual. Preoperative and postoperative clinical outcomes compared between the groups included operation time, perioperative blood loss, postoperative complications, Oswestry Disability Index (ODI), Japanese Orthopaedic Association (JOA) scores, and visual analogue scale (VAS) scores for back pain and leg pain. A total of 37 patients were included in this study (Group A = 15, Group B = 22). Group A had a significantly shorter operation time (106.53 ± 11.91 vs. 131.92 ± 10.81 min, P < 0.001) and significantly less blood loss (341.67 ± 49.45 vs. 466.77 ± 71.46 ml, P < 0.001). There was no difference between groups in complication rate. There were also no differences between groups for any clinical metric. Using 3-D printing technology before revision lumbar discectomy may reduce operation time and perioperative blood loss. There does not appear to be a benefit to using the technology with respect to clinical outcomes. Future prospective studies are needed to further elucidate the efficacy of this emerging technology.

  1. Dual-mode intracranial catheter integrating 3D ultrasound imaging and hyperthermia for neuro-oncology: feasibility study.

    PubMed

    Herickhoff, Carl D; Light, Edward D; Bing, Kristin F; Mukundan, Srinivasan; Grant, Gerald A; Wolf, Patrick D; Smith, Stephen W

    2009-04-01

    In this study, we investigated the feasibility of an intracranial catheter transducer with the dual-mode capability of real-time 3D (RT3D) imaging and ultrasound hyperthermia, for application in the visualization and treatment of tumors in the brain. Feasibility is demonstrated in two ways: first, by using a 50-element linear array transducer (17 mm x 3.1 mm aperture) operating at 4.4 MHz with our Volumetrics diagnostic scanner and custom electrical impedance-matching circuits to achieve a temperature rise of over 4 degrees C in excised pork muscle, and second, by designing and constructing a 12 Fr integrated matrix- and linear-array catheter transducer prototype for combined RT3D imaging and heating capability. This dual-mode catheter incorporated 153 matrix array elements and 11 linear array elements diced on a 0.2 mm pitch, with a total aperture size of 8.4 mm x 2.3 mm. This 3.64 MHz array achieved a 3.5 degrees C in vitro temperature rise at a 2 cm focal distance in tissue-mimicking material. The dual-mode catheter prototype was compared with a Siemens 10 Fr AcuNav catheter as a gold standard in experiments assessing image quality and therapeutic potential, and both probes were used in an in vivo canine brain model to image anatomical structures and color Doppler blood flow and to attempt in vivo heating.

  2. Presurgical visualization of the neurovascular relationship in trigeminal neuralgia with 3D modeling using free Slicer software.

    PubMed

    Han, Kai-Wei; Zhang, Dan-Feng; Chen, Ji-Gang; Hou, Li-Jun

    2016-11-01

    To explore whether segmentation and 3D modeling are more accurate in the preoperative detection of the neurovascular relationship (NVR) in patients with trigeminal neuralgia (TN) compared to MRI fast imaging employing steady-state acquisition (FIESTA). Segmentation and 3D modeling using 3D Slicer were conducted for 40 patients undergoing MRI FIESTA and microsurgical vascular decompression (MVD). The NVR, as well as the offending vessel determined by MRI FIESTA and 3D Slicer, was reviewed and compared with intraoperative manifestations using SPSS. The k agreement between MRI FIESTA and operation in determining the NVR was 0.232, and that between 3D modeling and operation was 0.6333. There was no significant difference between these two procedures (χ² = 8.09, P = 0.088). The k agreement between MRI FIESTA and operation in determining the offending vessel was 0.373, and that between 3D modeling and operation was 0.922. There was a significant difference between the two (χ² = 82.01, P = 0.000). The sensitivity and specificity of MRI FIESTA in determining the NVR were 87.2% and 100%, respectively, and for 3D modeling both were 100%. Segmentation and 3D modeling were more accurate than MRI FIESTA in preoperative verification of the NVR and the offending vessel. This was consistent with surgical manifestations and more helpful for preoperative decisions and surgical planning.

  3. Extracting, Tracking, and Visualizing Magnetic Flux Vortices in 3D Complex-Valued Superconductor Simulation Data.

    PubMed

    Guo, Hanqi; Phillips, Carolyn L; Peterka, Tom; Karpeyev, Dmitry; Glatz, Andreas

    2016-01-01

    We propose a method for the vortex extraction and tracking of superconducting magnetic flux vortices for both structured and unstructured mesh data. In the Ginzburg-Landau theory, magnetic flux vortices are well-defined features in a complex-valued order parameter field, and their dynamics determine electromagnetic properties in type-II superconductors. Our method represents each vortex line (a 1D curve embedded in 3D space) as a connected graph extracted from the discretized field in both space and time. For a time-varying discrete dataset, our vortex extraction and tracking method is as accurate as the data discretization. We then apply 3D visualization and 2D event diagrams to the extraction and tracking results to help scientists understand vortex dynamics and macroscale superconductor behavior in greater detail than previously possible.
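
In Ginzburg-Landau data of this kind, a flux vortex threads a mesh plaquette exactly where the phase of the complex order parameter winds by a nonzero multiple of 2π around the plaquette's boundary loop. A minimal winding-number detector (independent of the paper's graph-based extraction and tracking machinery, with names of our choosing) might look like this:

```python
# Hedged sketch: phase-winding test for a flux vortex through a plaquette.
import cmath
import math

def winding_number(psi_loop):
    """Net phase winding (integer) of complex order-parameter samples psi
    taken around a closed loop, e.g. the four corners of a plaquette."""
    total = 0.0
    for z0, z1 in zip(psi_loop, psi_loop[1:] + psi_loop[:1]):
        d = cmath.phase(z1) - cmath.phase(z0)
        d = (d + math.pi) % (2 * math.pi) - math.pi  # wrap jump into (-pi, pi]
        total += d
    return round(total / (2 * math.pi))

# Four corners with phase advancing by pi/2 each step -> one vortex
loop = [cmath.exp(1j * k * math.pi / 2) for k in range(4)]
print(winding_number(loop))        # 1
print(winding_number([1 + 0j] * 4))  # 0
```

Connecting plaquettes with nonzero winding across the mesh, and then across time steps, is what yields the 1D vortex-line graphs the abstract describes.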

  4. Tunable White-Light Emission in Single-Cation-Templated Three-Layered 2D Perovskites (CH3CH2NH3)4Pb3Br10-xClx.

    PubMed

    Mao, Lingling; Wu, Yilei; Stoumpos, Constantinos C; Traore, Boubacar; Katan, Claudine; Even, Jacky; Wasielewski, Michael R; Kanatzidis, Mercouri G

    2017-08-30

    Two-dimensional (2D) hybrid halide perovskites come as a family (B)2(A)n-1PbnX3n+1 (B and A = cations; X = halide). These perovskites are promising semiconductors for solar cells and optoelectronic applications. Among the fascinating properties of these materials is white-light emission, which has been mostly observed in single-layered 2D lead bromide or chloride systems (n = 1), where the broad emission comes from the transient photoexcited states generated by self-trapped excitons (STEs) from structural distortion. Here we report a multilayered 2D perovskite (n = 3) exhibiting a tunable white-light emission. Ethylammonium (EA+) can stabilize the 2D perovskite structure in EA4Pb3Br10-xClx (x = 0, 2, 4, 6, 8, 9.5, and 10), with EA+ being both the A and B cations in this system. Because of the larger size of EA, these materials show a high distortion level in their inorganic structures, with EA4Pb3Cl10 having a much larger distortion than that of EA4Pb3Br10, which results in broadband white-light emission of EA4Pb3Cl10 in contrast to narrow blue emission of EA4Pb3Br10. The average lifetime of the series decreases gradually from the Cl end to the Br end, indicating that the larger distortion also prolongs the lifetime (more STE states). The band gap of EA4Pb3Br10-xClx ranges from 3.45 eV (x = 10) to 2.75 eV (x = 0), following Vegard's law. First-principles density functional theory (DFT) calculations show that both EA4Pb3Cl10 and EA4Pb3Br10 are direct band gap semiconductors. The color rendering index (CRI) of the series improves from 66 (EA4Pb3Cl10) to 83 (EA4Pb3Br0.5Cl9.5), displaying the high tunability and versatility of the title compounds.
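
The Vegard's-law band-gap trend quoted above amounts to linear interpolation between the end-member gaps; a trivial sketch using the abstract's values (the function name and strictly linear form are our illustrative assumption):

```python
# Hedged sketch: Vegard-like linear interpolation of Eg(x) for EA4Pb3Br10-xClx.

def band_gap_vegard(x, eg_br=2.75, eg_cl=3.45):
    """Band gap in eV for chlorine content x in [0, 10], interpolated
    linearly between the reported Br (x=0) and Cl (x=10) end members."""
    if not 0.0 <= x <= 10.0:
        raise ValueError("x must lie in [0, 10]")
    return eg_br + (eg_cl - eg_br) * (x / 10.0)

print(band_gap_vegard(0))  # 2.75
```

Real alloy systems often show a small bowing correction on top of this linear trend; the abstract reports that this series follows the linear law well.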

  5. Examining the Conceptual Understandings of Geoscience Concepts of Students with Visual Impairments: Implications of 3-D Printing

    NASA Astrophysics Data System (ADS)

    Koehler, Karen E.

    The purpose of this qualitative study was to explore the use of 3-D printed models as an instructional tool in a middle school science classroom for students with visual impairments and to compare their use to traditional tactile graphics for aiding conceptual understanding of geoscience concepts. Specifically, this study examined whether the students' conceptual understanding of plate tectonics differed when 3-D printed objects were used versus traditional tactile graphics, and explored the misconceptions held by students with visual impairments related to plate tectonics and associated geoscience concepts. Interview data were collected one week prior to instruction, one week after instruction, and throughout the 3-week instructional period; additional data sources included student journals, other student documents, and audio-taped instructional sessions. All students in the middle school classroom received instruction on plate tectonics using the same inquiry-based curriculum but during different time periods of the day. One group of students, the 3D group, had access to 3-D printed models illustrating specific geoscience concepts, and the other group, the TG group, had access to tactile graphics illustrating the same geoscience concepts. The videotaped pre- and post-interviews were transcribed, analyzed, and coded for conceptual understanding using constant comparative analysis and to uncover student misconceptions. All student responses to the interview questions were categorized in terms of conceptual understanding. Analysis of student journals and classroom talk served to uncover student mental models and misconceptions about plate tectonics and associated geoscience concepts to measure conceptual understanding. A slight majority of the conceptual understanding before instruction was categorized as no understanding or alternative understanding, and after instruction the larger majority of conceptual understanding was categorized as scientific or scientific

  6. A 3D visualization and simulation of the individual human jaw.

    PubMed

    Muftić, Osman; Keros, Jadranka; Baksa, Sarajko; Carek, Vlado; Matković, Ivo

    2003-01-01

    A new biomechanical three-dimensional (3D) model of the human mandible, based on a computer-generated virtual model, is proposed. Using maps obtained from special photographs of the face of a real subject, it is possible to attribute personality to the virtual character, while computer animation offers movements and characteristics within the confines of the space and time of the virtual world. A simple two-dimensional model of the jaw cannot explain the biomechanics, where the muscular forces acting through the occlusion and condylar surfaces are in a state of 3D equilibrium. In the model, all forces are resolved into components according to a selected coordinate system. The muscular forces act on the jaw, along with the force level necessary for chewing, as a kind of mandible balance, preventing dislocation and loading of nonarticular tissues. This work uses a new approach to computer-generated animation of virtual 3D characters (called "Body SABA"), packaged as a single low-cost tool that is easy to operate.

  7. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca Sexta.

    PubMed

    Huetteroth, Wolf; El Jundi, Basil; El Jundi, Sirri; Schachtner, Joachim

    2010-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well-acknowledged model for studying metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain, we generated a 3D standard brain based on selected brain areas of adult females and 3D-reconstructed the same areas during defined stages of pupal development. Selected brain areas include, for example, the mushroom bodies, central complex, and antennal and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus, the didactic advantages of 3D visualization are extended to better comprehend the complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. The labeled brains were then scanned with a confocal laser scanning microscope, and selected neuropils were reconstructed with the 3D software AMIRA 4.1.

  8. A package for 3-D unstructured grid generation, finite-element flow solution and flow field visualization

    NASA Technical Reports Server (NTRS)

    Parikh, Paresh; Pirzadeh, Shahyar; Loehner, Rainald

    1990-01-01

    A set of computer programs for 3-D unstructured grid generation, fluid flow calculations, and flow field visualization was developed. The grid generation program, called VGRID3D, generates grids over complex configurations using the advancing front method, in which point and element generation is accomplished simultaneously. VPLOT3D is an interactive, menu-driven pre- and post-processor graphics program for interpolation and display of unstructured grid data. The flow solver, VFLOW3D, is an Euler equation solver based on an explicit, two-step, Taylor-Galerkin algorithm which uses the Flux Corrected Transport (FCT) concept for a wiggle-free solution. Using these programs, increasingly complex 3-D configurations of interest to the aerospace community were gridded, including a complete Space Transportation System comprising the space-shuttle orbiter, the solid-rocket boosters, and the external tank. Flow solutions were obtained for various configurations in subsonic, transonic, and supersonic flow regimes.

  9. Laser gain on 3p-3d and 3s-3p transitions and X-ray line ratios for the nitrogen isoelectronic sequence

    NASA Technical Reports Server (NTRS)

    Feldman, U.; Seely, J. F.; Bhatia, A. K.

    1989-01-01

    Results are presented on calculations of the 72 levels belonging to the 2s(2)2p(3), 2s2p(4), 2p(5), 2s(2)2p(2)3s, 2s(2)2p(2)3p, and 2s(2)2p(2)3d configurations of the N I isoelectronic sequence for the ions Ar XII, Ti XVI, Fe XX, Zn XXIV, and Kr XXX, for electron densities up to 10 to the 24th/cu cm. It was found that large population inversions and gain occur between levels in the 2s(2)2p(2)3p configuration and levels in the 2s(2)2p(2)3d configuration that cannot decay to the ground configuration by an electric dipole transition. For increasing electron densities, the intensities of the X-ray transitions from the 2s(2)2p(2)3p configuration to the ground configuration decrease relative to the transitions from the 2s(2)2p(2)3s and 2s(2)2p(2)3d configurations to the ground configuration. The density dependence of these X-ray line ratios is presented.

  10. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  11. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.

  12. [An alternative to the usual operating microscope and loupe magnification for free microvascular tissue transfer. Varioscope AF3-A].

    PubMed

    Chiummariello, S; Alfano, C; Fioramonti, P; Scuderi, N

    2005-12-01

    Free microvascular tissue transfers have today become a key instrument for the surgical treatment of extensive tissue loss, but their use requires appropriate visual magnification. Until now these instruments were mainly loupes and operating microscopes. Our study focuses on the use of a new visual system, the Varioscope AF3-A, in the field of reconstructive microsurgery. The Varioscope AF3-A (Life Optics, Vienna, Austria) has been employed in our Institute in 10 microvascular reconstructions in which different free flaps were used for head and neck reconstruction. All the flaps took, and only one developed a partial necrosis. Using this new instrument, we have also noticed a learning curve with a progressive contraction of the operating time. In all cases we operated on vessels of 2 mm caliber or more and on tissues that had not previously undergone radiation therapy. A visual magnification device such as the Varioscope AF3-A provides autofocus (from 3.6X to 7.2X) and a wide field of view, and can be used easily, with substantial advantages for the surgeon in performing microvascular anastomoses. Partial drawbacks are the equipment's high cost and weight compared to loupes, and a stronger ocular stress, due to the continuous autofocus, compared to static operating microscopes.

  13. Structure, magnetism and electronic properties in 3d-5d based double perovskite (Sr_{1-x}Y_x)_2FeIrO6

    NASA Astrophysics Data System (ADS)

    Kharkwal, K. C.; Pramanik, A. K.

    2017-12-01

    The 3d-5d based double perovskites are of current interest as they provide model systems to study the interplay between electronic correlation (U) and spin-orbit coupling (SOC). Here, we report detailed structural, magnetic and transport properties of the doped double perovskite material (Sr_{1-x}Y_x)_2FeIrO6 with x ≤ 0.2. With substitution of Y, the system retains its original crystal structure, but the structural parameters change with x in a nonmonotonic fashion. The magnetization data for Sr2FeIrO6 show an antiferromagnetic-type magnetic transition around 45 K; however, a close inspection of the data indicates a weak magnetic phase transition around 120 K. No change of structural symmetry has been observed down to low temperature, although the lattice parameters show sudden changes around the magnetic transitions. Sr2FeIrO6 shows insulating behavior over the whole temperature range, which nevertheless does not change with Y substitution. The nature of charge conduction is found to follow thermally activated Mott's variable range hopping and power law behavior for the parent and doped samples, respectively. Interestingly, the evolution of structural, magnetic and transport behavior in (Sr_{1-x}Y_x)_2FeIrO6 is observed to reverse for x > 0.1, which is believed to arise from a change in the transition metal ionic state.

  14. Phase Tomography Reconstructed by 3D TIE in Hard X-ray Microscope

    NASA Astrophysics Data System (ADS)

    Yin, Gung-Chian; Chen, Fu-Rong; Pyun, Ahram; Je, Jung Ho; Hwu, Yeukuang; Liang, Keng S.

    2007-01-01

    X-ray phase tomography and phase imaging are promising ways of investigating low-Z materials. A polymer blend PE/PS sample was used to test the 3D phase retrieval method in a parallel-beam illuminated microscope. Because the polymer sample is thick, the phase retardation is strongly mixed and the image cannot be resolved when the 2D transport of intensity equation (TIE) is applied. In this study, we provide a different approach for solving the phase in three dimensions for thick samples. Our method integrates the 3D TIE with the Fourier slice theorem to solve for the phase of a thick sample. In our experiment, eight defocal-series image data sets were recorded covering the angular range of 0 to 180 degrees. Only three sets of image cubes were used in the 3D TIE equation for solving the phase tomography. The phase contrast of the polymer blend in 3D is clearly enhanced, and the two components of the polymer blend can be distinguished in the phase tomography.
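    The 2D transport of intensity equation referenced in this abstract relates the measured through-focus intensity derivative to the Laplacian of the phase, k dI/dz = -I0 ∇²φ for uniform intensity I0. As a minimal sketch (assuming uniform intensity and a periodic grid, solved spectrally; this is not the authors' 3D TIE/Fourier-slice method):

```python
import numpy as np

def tie_phase(dIdz, I0, k, dx):
    """Recover the phase from the through-focus intensity derivative via
    the uniform-intensity 2D TIE:  k*dI/dz = -I0 * laplacian(phi).
    Solved spectrally on a periodic grid; the mean (DC) phase is
    unrecoverable and is set to zero."""
    ny, nx = dIdz.shape
    fx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    fy = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    KX, KY = np.meshgrid(fx, fy)
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                       # avoid division by zero at DC
    rhs_hat = np.fft.fft2(-(k / I0) * dIdz)   # Fourier transform of laplacian(phi)
    phi_hat = -rhs_hat / k2              # invert -k2 * phi_hat = rhs_hat
    phi_hat[0, 0] = 0.0                  # zero-mean phase
    return np.real(np.fft.ifft2(phi_hat))
```

On simulated data the solver round-trips a known zero-mean phase to numerical precision, which is a useful sanity check before applying it to measured defocal series.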

  15. Image processing and 3D visualization in the interpretation of patterned injury of the skin

    NASA Astrophysics Data System (ADS)

    Oliver, William R.; Altschuler, Bruce R.

    1995-09-01

    The use of image processing is becoming increasingly important in the evaluation of violent crime. While much work has been done in the use of these techniques for forensic purposes outside of forensic pathology, its use in the pathologic examination of wounding has been limited. We are investigating the use of image processing in the analysis of patterned injuries and tissue damage. Our interests are currently concentrated on 1) the use of image processing techniques to aid the investigator in observing and evaluating patterned injuries in photographs, 2) measurement of the 3D shape characteristics of surface lesions, and 3) correlation of patterned injuries with deep tissue injury as a problem in 3D visualization. We are beginning investigations in data-acquisition problems for performing 3D scene reconstructions from the pathology perspective of correlating tissue injury to scene features and trace evidence localization. Our primary tool for correlation of surface injuries with deep tissue injuries has been the comparison of processed surface injury photographs with 3D reconstructions from antemortem CT and MRI data. We have developed a prototype robot for the acquisition of 3D wound and scene data.

  16. Structure and Dynamics of Current Sheets in 3D Magnetic Fields with the X-line

    NASA Astrophysics Data System (ADS)

    Frank, Anna G.; Bogdanov, S. Yu.; Bugrov, S. G.; Markov, V. S.; Dreiden, G. V.; Ostrovskaya, G. V.

    2004-11-01

    Experimental results are presented on the structure of current sheets formed in 3D magnetic fields with singular lines of the X-type. Two basic diagnostics were used with the device CS - 3D: two-exposure holographic interferometry and magnetic measurements. Formation of extended current sheets and plasma compression were observed in the presence of the longitudinal magnetic field component aligned with the X-line. Plasma density decreased and the sheet thickness increased with an increase of the longitudinal component. We succeeded to reveal formation of the sheets taking unusual shape, namely tilted and asymmetric sheets, in plasmas with the heavy ions. These current sheets were obviously different from the planar sheets formed in 2D magnetic fields, i.e. without longitudinal component. Analysis of typical plasma parameters made it evident that plasma dynamics and current sheet evolution should be treated on the base of the two-fluid approach. Specifically it is necessary to take into account the Hall currents in the plane perpendicular to the X-line, and the dynamic effects resulting from interaction of the Hall currents and the 3D magnetic field. Supported by RFBR, grant 03-02-17282, and ISTC, project 2098.

  17. Web-based three-dimensional geo-referenced visualization

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Gong, Jianhua; Wang, Freeman

    1999-12-01

    This paper addresses several approaches to implementing web-based, three-dimensional (3-D), geo-referenced visualization. The discussion focuses on the relationship between multi-dimensional data sets and applications, as well as the thick/thin client and heavy/light server structure. Two models of data sets are addressed in this paper. One is the use of traditional 3-D data formats such as 3-D Studio Max, Open Inventor 2.0, Vis5D and OBJ. The other is modelled by a web-based language such as VRML. Also, traditional languages such as C and C++, as well as web-based programming tools such as Java, Java3D and ActiveX, can be used for developing applications. The strengths and weaknesses of each approach are elaborated. Four practical solutions for developing web-based, real-time interactive and explorative visualization applications are presented: VRML and Java; Java and Java3D; VRML and ActiveX; and Java wrapper classes (Java and C/C++).
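    Web 3-D scene languages such as VRML, and its XML successor X3D, are plain text and therefore easy to generate programmatically on the server side. A minimal, hypothetical sketch that emits a one-sphere X3D scene as a string (the function name and defaults are invented for illustration; this is not code from the paper):

```python
def sphere_x3d(radius=1.0, rgb=(1.0, 0.0, 0.0)):
    """Return a minimal X3D scene (as an XML string) containing a single
    coloured sphere, suitable for serving to an X3D-capable viewer."""
    r, g, b = rgb
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Interchange" version="3.3">
  <Scene>
    <Shape>
      <Appearance>
        <Material diffuseColor="{r} {g} {b}"/>
      </Appearance>
      <Sphere radius="{radius}"/>
    </Shape>
  </Scene>
</X3D>
"""
```

Because the output is ordinary XML, it can be validated with any XML parser before being written to disk or streamed to a client.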

  18. In situ 3-D mapping of pore structures and hollow grains of interplanetary dust particles with phase contrast X-ray nanotomography

    NASA Astrophysics Data System (ADS)

    Hu, Z. W.; Winarski, R. P.

    2016-09-01

    Unlocking the 3-D structure and properties of intact chondritic porous interplanetary dust particles (IDPs) in nanoscale detail is challenging, and further complicated by atmospheric entry heating, but is important for advancing our understanding of the formation and origins of IDPs and planetary bodies as well as dust and ice agglomeration in the outer protoplanetary disk. Here, we show that indigenous pores, pristine grains, and thermal alteration products throughout intact particles can be noninvasively visualized and distinguished morphologically and microstructurally in 3-D detail down to ~10 nm by exploiting phase contrast X-ray nanotomography. We have uncovered the surprisingly intricate, submicron, and nanoscale pore structures of a ~10-μm-long porous IDP, consisting of two types of voids that are interconnected in 3-D space. One is morphologically primitive and mostly submicron-sized intergranular voids that are ubiquitous; the other is morphologically advanced and well-defined intragranular nanoholes that run through the approximate centers of submicron hollow grains of ~0.3 μm or smaller. The distinct hollow grains exhibit complex 3-D morphologies but in 2-D projections resemble typical organic hollow globules observed by transmission electron microscopy. The particle, with its outer region characterized by rough vesicular structures due to thermal alteration, has turned out to be an inherently fragile and intricately submicron- and nanoporous aggregate of the sub-μm grains or grain clumps that are delicately bound together frequently with little grain-to-grain contact in 3-D space.

  19. A Hybrid Synthetic Vision System for the Tele-operation of Unmanned Vehicles

    NASA Technical Reports Server (NTRS)

    Delgado, Frank; Abernathy, Mike

    2004-01-01

    A system called SmartCam3D (SC3D) has been developed to provide enhanced situational awareness for operators of a remotely piloted vehicle. SC3D is a Hybrid Synthetic Vision System (HSVS) that combines live sensor data with information from a Synthetic Vision System (SVS). By combining the dual information sources, the operators are afforded the advantages of each approach. The live sensor system provides real-time information for the region of interest. The SVS provides information rich visuals that will function under all weather and visibility conditions. Additionally, the combination of technologies allows the system to circumvent some of the limitations from each approach. Video sensor systems are not very useful when visibility conditions are hampered by rain, snow, sand, fog, and smoke, while a SVS can suffer from data freshness problems. Typically, an aircraft or satellite flying overhead collects the data used to create the SVS visuals. The SVS data could have been collected weeks, months, or even years ago. To that extent, the information from an SVS visual could be outdated and possibly inaccurate. SC3D was used in the remote cockpit during flight tests of the X-38 132 and 131R vehicles at the NASA Dryden Flight Research Center. SC3D was also used during the operation of military Unmanned Aerial Vehicles. This presentation will provide an overview of the system, the evolution of the system, the results of flight tests, and future plans. Furthermore, the safety benefits of the SC3D over traditional and pure synthetic vision systems will be discussed.

  20. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    PubMed

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators is also presented.

  1. Isolation, electron microscopic imaging, and 3-D visualization of native cardiac thin myofilaments.

    PubMed

    Spiess, M; Steinmetz, M O; Mandinova, A; Wolpensinger, B; Aebi, U; Atar, D

    1999-06-15

    An increasing number of cardiac diseases are currently pinpointed to reside at the level of the thin myofilaments (e.g., cardiomyopathies, reperfusion injury). Hence the aim of our study was to develop a new method for the isolation of mammalian thin myofilaments suitable for subsequent high-resolution electron microscopic imaging. Native cardiac thin myofilaments were extracted from glycerinated porcine myocardial tissue in the presence of protease inhibitors. Separation of thick and thin myofilaments was achieved by addition of ATP and several centrifugation steps. Negative staining and subsequent conventional and scanning transmission electron microscopy (STEM) of thin myofilaments permitted visualization of molecular details; unlike conventional preparations of thin myofilaments, our method reveals the F-actin moiety and allows direct recognition of thin myofilament-associated porcine cardiac troponin complexes. They appear as "bulges" at regular intervals of approximately 36 nm along the actin filaments. Protein analysis using SDS-polyacrylamide gel electrophoresis revealed that only approximately 20% troponin I was lost during the isolation procedure. In a further step, 3-D helical reconstructions were calculated using STEM dark-field images. These 3-D reconstructions will allow further characterization of molecular details, and they will be useful for directly visualizing molecular alterations related to diseased cardiac thin myofilaments (e.g., reperfusion injury, alterations of Ca2+-mediated tropomyosin switch). Copyright 1999 Academic Press.

  2. X-43D Conceptual Design and Feasibility Study

    NASA Technical Reports Server (NTRS)

    Johnson, Donald B.; Robinson, Jeffrey S.

    2005-01-01

    NASA's Next Generation Launch Technology (NGLT) Program, in conjunction with the office of the Director of Defense Research and Engineering (DDR&E), developed an integrated hypersonic technology demonstration roadmap. This roadmap is an integral part of the National Aerospace Initiative (NAI), a multi-year, multi-agency cooperative effort to invest in and develop, among other things, hypersonic technologies. This roadmap contains key ground and flight demonstrations required along the path to developing a reusable hypersonic space access system. One of the key flight demonstrations required for systems that will operate in the high Mach number regime is the X-43D. As currently conceived, the X-43D is a Mach 15 flight test vehicle that incorporates a hydrogen-fueled scramjet engine. The purpose of the X-43D is to gather high Mach number flight environment and engine operability information which is difficult, if not impossible, to gather on the ground. During 2003, the NGLT Future Hypersonic Flight Demonstration Office initiated a feasibility study on the X-43D. The objective of the study was to develop a baseline conceptual design, assess its performance, and identify the key technical issues. The study also produced a baseline program plan, schedule, and cost, along with a list of key programmatic risks.

  3. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.
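    Multi-voxel pattern analysis, as used in this study, decodes stimulus conditions from distributed activity patterns across voxels. A minimal sketch of one common variant, a nearest-centroid correlation classifier (illustrative only, with invented function and variable names; not the authors' actual analysis pipeline):

```python
import numpy as np

def nearest_centroid_decode(train_X, train_y, test_X):
    """Classify each test pattern by its Pearson correlation with the
    mean (centroid) training pattern of each class.
    train_X: (n_trials, n_voxels), train_y: (n_trials,) labels."""
    train_X = np.asarray(train_X, dtype=float)
    train_y = np.asarray(train_y)
    classes = sorted(set(train_y.tolist()))
    centroids = {c: train_X[train_y == c].mean(axis=0) for c in classes}
    preds = []
    for x in np.asarray(test_X, dtype=float):
        scores = {c: np.corrcoef(x, m)[0, 1] for c, m in centroids.items()}
        preds.append(max(scores, key=scores.get))
    return preds
```

Decoding accuracy is then the fraction of held-out patterns assigned the correct label, typically estimated with cross-validation across runs.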

  4. Electronic structure of charge- and spin-controlled Sr(1-(x+y))La(x+y)Ti(1-x)Cr(x)O3.

    PubMed

    Iwasawa, H; Yamakawa, K; Saitoh, T; Inaba, J; Katsufuji, T; Higashiguchi, M; Shimada, K; Namatame, H; Taniguchi, M

    2006-02-17

    We present the electronic structure of Sr(1-(x+y))La(x+y)Ti(1-x)Cr(x)O3 investigated by high-resolution photoemission spectroscopy. In the vicinity of the Fermi level, it was found that the electronic structure was composed of a Cr 3d local state with the t(2g)3 configuration and a Ti 3d itinerant state. The energy levels of these Cr and Ti 3d states are well interpreted by the difference of the charge-transfer energy of both ions. The spectral weight of the Cr 3d state is completely proportional to the spin concentration x irrespective of the carrier concentration y, indicating that the spin density can be controlled by x as desired. In contrast, the spectral weight of the Ti 3d state is not proportional to y, depending on the amount of Cr doping.

  5. Interaction Between ACE I/D and ACTN3 R577X Polymorphisms in Polish Competitive Swimmers

    PubMed Central

    Grenda, Agata; Leońska-Duniec, Agata; Kaczmarczyk, Mariusz; Ficek, Krzysztof; Król, Paweł; Cięszczyk, Paweł; Żmijewski, Piotr

    2014-01-01

    We hypothesized that the ACE I/D / ACTN3 R577X genotype combination was associated with sprint and endurance performance. Therefore, the purpose of the present study was to determine the interaction between both the ACE I/D and ACTN3 R577X polymorphisms and sprint and endurance performance in swimmers. Genomic DNA was extracted from oral epithelial cells using the GenElute Mammalian Genomic DNA Miniprep Kit (Sigma, Germany). All samples were genotyped using a real-time polymerase chain reaction. The ACE I/D and the ACTN3 R577X genotype frequencies met Hardy-Weinberg expectations in both swimmers and controls. When the two swimmer groups, long distance swimmers (LDS) and short distance swimmers (SDS), were compared with control subjects in a single test, a significant association was found only for the ACE polymorphism, but not for ACTN3. Additionally, four ACE/ACTN3 combined genotypes (ID/RX, ID/XX, II/RX and II/XX) were statistically significant for the LDS versus Control comparison, but none for the SDS versus Control comparison. The ACE I/D and the ACTN3 R577X polymorphisms did not show any association with sprint swimming, taken individually or in combination. In spite of numerous previous reports of associations with athletic status or sprint performance in other sports, the ACTN3 R577X polymorphism, in contrast to ACE I/D, was not significantly associated with elite swimming status when considered individually. However, the combined analysis of the two loci suggests that the co-occurrence of the ACE I and ACTN3 X alleles may be beneficial to swimmers who compete in long distance races. PMID:25414746
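    Checking that genotype frequencies meet Hardy-Weinberg expectations, as done here for swimmers and controls, is a standard chi-square goodness-of-fit test. A minimal sketch for a biallelic locus (illustrative only; the abstract does not give the study's exact statistical procedure):

```python
def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit statistic of observed genotype counts
    against Hardy-Weinberg expectations (p^2 : 2pq : q^2), 1 degree of
    freedom. Assumes all expected counts are nonzero."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)      # allele frequency of A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    chi2 = sum((o - e) ** 2 / e
               for o, e in zip((n_AA, n_Aa, n_aa), expected))
    return chi2, expected
```

A chi-square value below about 3.84 (the 5% critical value with 1 degree of freedom) means the observed counts are consistent with Hardy-Weinberg equilibrium.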

  6. 3D reconstruction and spatial auralization of the "Painted Dolmen" of Antelas

    NASA Astrophysics Data System (ADS)

    Dias, Paulo; Campos, Guilherme; Santos, Vítor; Casaleiro, Ricardo; Seco, Ricardo; Sousa Santos, Beatriz

    2008-02-01

    This paper presents preliminary results on the development of a 3D audiovisual model of the Anta Pintada (painted dolmen) of Antelas, a Neolithic chamber tomb located in Oliveira de Frades and listed as Portuguese national monument. The final aim of the project is to create a highly accurate Virtual Reality (VR) model of this unique archaeological site, capable of providing not only visual but also acoustic immersion based on its actual geometry and physical properties. The project started in May 2006 with in situ data acquisition. The 3D geometry of the chamber was captured using a Laser Range Finder. In order to combine the different scans into a complete 3D visual model, reconstruction software based on the Iterative Closest Point (ICP) algorithm was developed using the Visualization Toolkit (VTK). This software computes the boundaries of the room on a 3D uniform grid and populates its interior with "free-space nodes", through an iterative algorithm operating like a torchlight illuminating a dark room. The envelope of the resulting set of "free-space nodes" is used to generate a 3D iso-surface approximating the interior shape of the chamber. Each polygon of this surface is then assigned the acoustic absorption coefficient of the corresponding boundary material. A 3D audiovisual model operating in real-time was developed for a VR Environment comprising head-mounted display (HMD) I-glasses SVGAPro, an orientation sensor (tracker) InterTrax 2 with 3 Degrees Of Freedom (3DOF) and stereo headphones. The auralisation software is based on a geometric model. This constitutes a first approach, since geometric acoustics have well-known limitations in rooms with irregular surfaces. The immediate advantage lies in their inherent computational efficiency, which allows real-time operation. The program computes the early reflections forming the initial part of the chamber's impulse response (IR), which carry the most significant cues for source localisation. These early
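    The "torchlight" procedure described above, which iteratively populates the chamber interior with free-space nodes, is essentially a flood fill on a 3D occupancy grid. A minimal sketch (assuming a boolean grid where True marks boundary/solid cells and 6-connectivity between nodes; this is not the authors' VTK implementation):

```python
from collections import deque

import numpy as np

def free_space_nodes(solid, seed):
    """Breadth-first 'torchlight' flood fill: mark every grid node
    reachable from `seed` through non-solid cells (6-connectivity).
    Returns a boolean array of the same shape as `solid`."""
    nx, ny, nz = solid.shape
    if solid[seed]:
        raise ValueError("seed lies inside a boundary cell")
    visited = np.zeros(solid.shape, dtype=bool)
    visited[seed] = True
    queue = deque([seed])
    while queue:
        x, y, z = queue.popleft()
        for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (x + dx, y + dy, z + dz)
            if (0 <= n[0] < nx and 0 <= n[1] < ny and 0 <= n[2] < nz
                    and not solid[n] and not visited[n]):
                visited[n] = True
                queue.append(n)
    return visited
```

The envelope of the resulting free-space set is what an iso-surface extraction step would then turn into the interior shape of the chamber.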

  7. Micron-Resolution X-ray Structural Microscopy Studies of 3-D Grain Growth in Polycrystalline Aluminum

    NASA Astrophysics Data System (ADS)

    Budai, J. D.; Yang, W.; Tischler, J. Z.; Liu, W.; Larson, B. C.; Ice, G. E.

    2004-03-01

    We describe a new polychromatic x-ray microdiffraction technique providing 3D measurements of lattice structure, orientation and strain with submicron point-to-point spatial resolution. The instrument is located on the UNI-CAT II undulator beamline at the Advanced Photon Source and uses Kirkpatrick-Baez focusing mirrors, differential aperture CCD measurements and automated analysis of spatially-resolved Laue patterns. 3D x-ray structural microscopy is applicable to a wide range of materials investigations and here we describe 3D thermal grain growth studies in polycrystalline aluminum (~1% Fe, Si) from Alcoa. The morphology and orientations of the grains in a hot-rolled aluminum sample were initially mapped. The sample was then annealed to induce grain growth, cooled to room temperature, and the same volume region was re-mapped to determine the thermal migration of all grain boundaries. Significant grain growth was observed after annealing above ~350 °C where both low-angle and high-angle boundaries were mobile. These measurements will provide the detailed 3D experimental input needed for testing theories and computer models of 3D grain growth in bulk materials.

  8. Rotational X-ray angiography: a method for intra-operative volume imaging of the left-atrium and pulmonary veins for atrial fibrillation ablation guidance

    NASA Astrophysics Data System (ADS)

    Manzke, R.; Zagorchev, L.; d'Avila, A.; Thiagalingam, A.; Reddy, V. Y.; Chan, R. C.

    2007-03-01

    Catheter-based ablation in the left atrium and pulmonary veins (LAPV) for treatment of atrial fibrillation in cardiac electrophysiology (EP) are complex and require knowledge of heart chamber anatomy. Electroanatomical mapping (EAM) is typically used to define cardiac structures by combining electromagnetic spatial catheter localization with surface models which interpolate the anatomy between EAM point locations in 3D. Recently, the incorporation of pre-operative volumetric CT or MR data sets has allowed for more detailed maps of LAPV anatomy to be used intra-operatively. Pre-operative data sets are, however, only a rough guide since they can be acquired several days to weeks prior to EP intervention. Due to positional and physiological changes, the intra-operative cardiac anatomy can be different from that depicted in the pre-operative data. We present an application of contrast-enhanced rotational X-ray imaging for CT-like reconstruction of 3D LAPV anatomy during the intervention itself. Depending on the heart size, a single or two selective contrast-enhanced rotational acquisitions are performed, and CT-like volumes are reconstructed with 3D filtered back projection. In the case of dual injection, the two volumes depicting the left and right portions of the LAPV are registered and fused. The data sets are visualized and segmented intra-procedurally to provide anatomical data and surface models for intervention guidance. Our results from animal and human experiments indicate that the anatomical information from intra-operative CT-like reconstructions compares favorably with pre-acquired imaging data and can be of sufficient quality for intra-operative guidance.
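    Filtered back projection, used above to reconstruct CT-like volumes from the rotational acquisitions, can be illustrated in 2D with parallel-beam geometry. A toy sketch using a ramp filter and nearest-neighbour interpolation (an illustration of the principle only, not the clinical 3D reconstruction chain):

```python
import numpy as np

def fbp_2d(sinogram, angles_deg):
    """Parallel-beam filtered back projection.
    sinogram: (n_angles, n_detectors) array of line-integral projections.
    Returns an (n_detectors x n_detectors) reconstruction."""
    n_det = sinogram.shape[1]
    # Ramp filter applied in the frequency domain, row by row.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    size = n_det
    center = size // 2
    ys, xs = np.mgrid[:size, :size]
    xs = xs - center
    ys = ys - center
    recon = np.zeros((size, size))
    # Smear each filtered projection back across the image plane.
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = xs * np.cos(theta) + ys * np.sin(theta)   # detector coordinate
        idx = np.round(t).astype(int) + center
        valid = (idx >= 0) & (idx < n_det)
        recon[valid] += proj[idx[valid]]
    return recon * np.pi / len(angles_deg)
```

A point object at the rotation centre produces a sinogram with a constant peak at the central detector, and the reconstruction should recover a bright spot at the image centre.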

  9. 3D T2-weighted and Gd-EOB-DTPA-enhanced 3D T1-weighted MR cholangiography for evaluation of biliary anatomy in living liver donors.

    PubMed

    Cai, Larry; Yeh, Benjamin M; Westphalen, Antonio C; Roberts, John; Wang, Zhen J

    2017-03-01

    To investigate whether the addition of gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced 3D T1-weighted MR cholangiography (T1w-MRC) to 3D T2-weighted MRC (T2w-MRC) improves confidence and diagnostic accuracy in assessing biliary anatomy in living liver donors. Two abdominal radiologists retrospectively and independently reviewed pre-operative MR studies in 58 consecutive living liver donors. Second-order bile duct visualization on T1w- and T2w-MRC images was rated on a 4-point scale. The readers also independently recorded the biliary anatomy and their diagnostic confidence using (1) combined T1w- and T2w-MRC, and (2) T2w-MRC alone. In the 23 right lobe donors, the biliary anatomy at imaging and the imaging-predicted number of duct orifices at surgery were compared to intra-operative findings. T1w-MRC had a higher proportion of excellent visualization than T2w-MRC: 66% vs. 45% for reader 1 and 60% vs. 31% for reader 2. The median confidence score for biliary anatomy diagnosis was significantly higher with combined T1w- and T2w-MRC than with T2w-MRC alone for both readers (reader 1: 3 vs. 2, p < 0.001; reader 2: 3 vs. 1, p < 0.001). Compared to intra-operative findings, the accuracy of the imaging-predicted number of duct orifices using combined T1w- and T2w-MRC was significantly higher than that using T2w-MRC alone (p = 0.034 for reader 1, p = 0.0082 for reader 2). The addition of Gd-EOB-DTPA-enhanced 3D T1w-MRC to 3D T2w-MRC improves second-order bile duct visualization and increases confidence in biliary anatomy diagnosis and the accuracy of the imaging-predicted number of duct orifices at right lobe harvesting.

  10. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can retrieve medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device obtained patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote rendering parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network were also discussed. The results demonstrated that the proposed medical application could provide a smooth interactive experience in WLAN and 3G networks.
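
    Of the three server-side rendering techniques named above, maximum intensity projection (MIP) is the simplest to illustrate: each output pixel keeps the brightest voxel along the viewing axis. A minimal NumPy sketch of the idea (the function name and toy volume are ours, not the paper's proxy-server code):

```python
import numpy as np

def mip(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection: keep the brightest voxel along the view axis."""
    return volume.max(axis=axis)

# Toy 3-slice "CT" volume with one bright voxel in the middle slice
vol = np.zeros((3, 4, 4))
vol[1, 2, 2] = 1000.0

proj = mip(vol, axis=0)      # collapse the slice axis
assert proj.shape == (4, 4)
assert proj[2, 2] == 1000.0  # the bright voxel survives the projection
```

Multi-planar reconstruction and direct volume rendering replace the `max` reduction with slicing along oblique planes and with compositing along rays, respectively.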

  11. Tomo3D 2.0--exploitation of advanced vector extensions (AVX) for 3D reconstruction.

    PubMed

    Agulleiro, Jose-Ignacio; Fernandez, Jose-Jesus

    2015-02-01

    Tomo3D is a program for fast tomographic reconstruction on multicore computers. Its high speed stems from code optimization, vectorization with Streaming SIMD Extensions (SSE), multithreading and optimization of disk access. Recently, Advanced Vector eXtensions (AVX) have been introduced in the x86 processor architecture. Compared to SSE, AVX doubles the number of simultaneous operations, thus pointing to a potential twofold gain in speed. However, in practice, achieving this potential is extremely difficult. Here, we provide a technical description and an assessment of the optimizations included in Tomo3D to take advantage of AVX instructions. Tomo3D 2.0 allows huge reconstructions to be calculated on standard computers in a matter of minutes. Thus, it will be a valuable tool for electron tomography studies with increasing resolution needs. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. 3D visualization of two-phase flow in the micro-tube by a simple but effective method

    NASA Astrophysics Data System (ADS)

    Fu, X.; Zhang, P.; Hu, H.; Huang, C. J.; Huang, Y.; Wang, R. Z.

    2009-08-01

    The present study provides a simple but effective method for 3D visualization of two-phase flow in a micro-tube. An isosceles right-angle prism, combined with a mirror beveled at 45° to the prism, is employed to synchronously obtain the front and side views of the flow patterns with a single camera, where the locations of the prism and the micro-tube required for clear imaging satisfy a fixed relationship specified in the present study. The optical design was successfully validated by demanding visualization work in the cryogenic temperature range. The image deformation due to refraction and the geometrical configuration of the test section is quantitatively investigated. It is calculated that the image is enlarged by about 20% in inner diameter compared to the real object, which is validated by the experimental results. Meanwhile, the image deformation when a rectangular optical correction box is added outside the circular tube is comparatively investigated; it is calculated that the image is then reduced by about 20% in inner diameter compared to the real object. The 3D reconstruction process based on the two views is conducted in three steps, showing that the 3D visualization method can easily be applied to two-phase flow research in micro-scale channels and improves the measurement accuracy of important two-phase flow parameters such as void fraction, spatial distribution of bubbles, etc.
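
    The two synchronized views reduce 3D reconstruction to coordinate matching: the front view supplies (x, z) of a bubble and the mirrored side view supplies (y, z). A minimal sketch of that combination, treating the refraction effect quantified in the abstract as a single constant magnification factor (the ~20% figure comes from the text; the function names and values are illustrative, not the study's code):

```python
def reconstruct_3d(front_view, side_view):
    """Merge front-view (x, z) and side-view (y, z) coordinates of the same
    bubble into a 3D position; z must agree between the two views."""
    x, z1 = front_view
    y, z2 = side_view
    assert abs(z1 - z2) < 1e-6, "the views must share the axial coordinate"
    return (x, y, 0.5 * (z1 + z2))

def correct_diameter(measured_d, magnification=1.20):
    """Undo the ~20% refractive enlargement of the apparent inner diameter."""
    return measured_d / magnification

pos = reconstruct_3d((1.0, 5.0), (2.0, 5.0))
assert pos == (1.0, 2.0, 5.0)
assert abs(correct_diameter(1.2) - 1.0) < 1e-9  # 1.2 mm measured -> 1.0 mm real
```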

  13. 3D X-Ray Nanotomography of Cells Grown on Electrospun Scaffolds.

    PubMed

    Bradley, Robert S; Robinson, Ian K; Yusuf, Mohammed

    2017-02-01

    Here, it is demonstrated that X-ray nanotomography with Zernike phase contrast can be used for 3D imaging of cells grown on electrospun polymer scaffolds. The scaffold fibers and cells are simultaneously imaged, enabling the influence of scaffold architecture on cell location and morphology to be studied. The high resolution enables subcellular details to be revealed. The X-ray imaging conditions were optimized to reduce scan times, making it feasible to scan multiple regions of interest in relatively large samples. An image processing procedure is presented which enables scaffold characteristics and cell location to be quantified. The procedure is demonstrated by comparing the ingrowth of cells after culture for 3 and 6 days. © 2016 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. Research on optimal DEM cell size for 3D visualization of loess terraces

    NASA Astrophysics Data System (ADS)

    Zhao, Weidong; Tang, Guo'an; Ji, Bin; Ma, Lei

    2009-10-01

    To represent complex artificial terrains such as the loess terraces of Shanxi Province in northwest China, the authors put forth a new 3D visualization method, the Terraces Elevation Incremental Visual Method (TEIVM). 406 elevation points and 14 enclosed constrained lines were sampled according to the TIN-based Sampling Method (TSM) and the DEM Elevation Points and Lines Classification (DEPLC). The elevation points and constrained lines are used to construct Constrained Delaunay Triangulated Irregular Networks (CD-TINs) of the loess terraces. To visualize the loess terraces well with an optimal combination of cell size and Elevation Increment Value (EIV), the CD-TINs were converted to grid-based DEMs (G-DEMs) using different combinations of cell size and EIV with the Bilinear Interpolation Method (BIM). Our case study shows that the new method visualizes the loess terrace steps very well when the combination of cell size and EIV is reasonable; the optimal combination is a cell size of 1 m and an EIV of 6 m. The results also show that the cell size should be smaller than half of both the average terrace width and the average vertical offset of the terrace steps to represent the planar shapes of the terrace surfaces and steps well, while the EIV should be larger than 4.6 times the average terrace height. The TEIVM and the results above are of great significance for the highly refined visualization of artificial terrains such as loess terraces.
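
    The TIN-to-grid conversion step relies on bilinear interpolation of elevations inside each grid cell. A generic sketch of that interpolation (corner elevations and weights are illustrative numbers, not the study's data):

```python
def bilinear(z00, z10, z01, z11, tx, ty):
    """Bilinear interpolation inside one DEM cell.
    z00..z11 are the four corner elevations; tx, ty in [0, 1] are the
    fractional position of the query point within the cell."""
    top = z00 * (1 - tx) + z10 * tx   # interpolate along x at ty = 0
    bot = z01 * (1 - tx) + z11 * tx   # interpolate along x at ty = 1
    return top * (1 - ty) + bot * ty  # blend the two along y

# Cell centre averages the four corners; cell edges reduce to linear interpolation
assert bilinear(0.0, 2.0, 4.0, 6.0, 0.5, 0.5) == 3.0
assert bilinear(0.0, 2.0, 4.0, 6.0, 1.0, 0.0) == 2.0
```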

  15. Discussion on the 3D visualizing of 1:200 000 geological map

    NASA Astrophysics Data System (ADS)

    Wang, Xiaopeng

    2018-01-01

    Using terrain data from the United States National Aeronautics and Space Administration Shuttle Radar Topography Mission (SRTM) as the digital elevation model (DEM), overlaying a scanned 1:200 000 scale geological map, and programming with Microsoft Direct3D in the C# language, the author realized three-dimensional visualization of the standard-division geological map. Users can inspect the regional geological content from any angle, with rotation and roaming, and can examine the synthetic stratigraphic column, map sections and legend at any moment. This provides an intuitive analysis tool for geological practitioners to perform structural analysis with the assistance of landforms, to plan field exploration routes, etc.

  16. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990
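
    The observation model above avoids explicit data association by marginalizing the likelihood over all candidate landmarks. A one-dimensional Gaussian sketch of that idea, with a uniform association prior (all values are illustrative; this is not the paper's actual model):

```python
import math

def marginal_likelihood(z, landmarks, sigma=1.0):
    """Observation likelihood marginalized over all possible landmark
    associations: p(z) = sum_j p(z | landmark_j) * p(j), uniform p(j).
    Positions are 1-D here for brevity."""
    prior = 1.0 / len(landmarks)
    gauss = lambda d: math.exp(-0.5 * (d / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    return sum(prior * gauss(z - l) for l in landmarks)

# An observation near a landmark is far more likely than one midway between two
lik_near = marginal_likelihood(0.1, [0.0, 10.0])
lik_far = marginal_likelihood(5.0, [0.0, 10.0])
assert lik_near > lik_far
```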

  17. Color-Space-Based Visual-MIMO for V2X Communication.

    PubMed

    Kim, Jai-Eun; Kim, Ji-Won; Park, Youngil; Kim, Ki-Doo

    2016-04-23

    In this paper, we analyze the applicability of color-space-based, color-independent visual-MIMO for V2X. We aim to achieve a visual-MIMO scheme that can maintain the original color and brightness while performing seamless communication. We consider two scenarios of GCM-based visual-MIMO for V2X: one is multipath transmission using visual-MIMO networking, and the other is multi-node V2X communication. In the multipath-transmission scenario, we analyze the channel capacity numerically and illustrate the significance of networking information such as distance, reference color (symbol), and multiplexing-diversity mode transitions. In addition, in the multiple-access V2X scenario, we can achieve simultaneous multiple-access communication without node interference by dividing the communication area using image processing. Finally, through numerical simulation, we show the superior SER performance of the visual-MIMO scheme compared with LED-PD communication, and give the numerical result of the GCM-based visual-MIMO channel capacity versus distance.

  18. 3D-Reconstructions and Virtual 4D-Visualization to Study Metamorphic Brain Development in the Sphinx Moth Manduca sexta

    PubMed Central

    Huetteroth, Wolf; el Jundi, Basil; el Jundi, Sirri; Schachtner, Joachim

    2009-01-01

    During metamorphosis, the transition from the larva to the adult, the insect brain undergoes considerable remodeling: new neurons are integrated while larval neurons are remodeled or eliminated. One well acknowledged model to study metamorphic brain development is the sphinx moth Manduca sexta. To further understand mechanisms involved in the metamorphic transition of the brain we generated a 3D standard brain based on selected brain areas of adult females and 3D reconstructed the same areas during defined stages of pupal development. Selected brain areas include for example mushroom bodies, central complex, antennal- and optic lobes. With this approach we eventually want to quantify developmental changes in neuropilar architecture, but also quantify changes in the neuronal complement and monitor the development of selected neuronal populations. Furthermore, we used a modeling software (Cinema 4D) to create a virtual 4D brain, morphing through its developmental stages. Thus the didactical advantages of 3D visualization are expanded to better comprehend complex processes of neuropil formation and remodeling during development. To obtain datasets of the M. sexta brain areas, we stained whole brains with an antiserum against the synaptic vesicle protein synapsin. Such labeled brains were then scanned with a confocal laser scanning microscope and selected neuropils were reconstructed with the 3D software AMIRA 4.1. PMID:20339481

  19. 3D visualization and quantification of bone and teeth mineralization for the study of osteo/dentinogenesis in mice models

    NASA Astrophysics Data System (ADS)

    Marchadier, A.; Vidal, C.; Ordureau, S.; Lédée, R.; Léger, C.; Young, M.; Goldberg, M.

    2011-03-01

    Research on bone and teeth mineralization in animal models is critical for understanding human pathologies. Genetically modified mice represent highly valuable models for the study of osteo/dentinogenesis defects and osteoporosis. Current investigations of the mouse dental and skeletal phenotype use destructive and time-consuming methods such as histology and scanning microscopy. Micro-CT imaging is quicker and provides high-resolution qualitative phenotypic description; however, reliable quantification of mineralization processes in mouse bone and teeth is still lacking. We have established novel CT imaging-based software for accurate qualitative and quantitative analysis of mouse mandibular bone and molars. Data were obtained from mandibles of mice lacking the Fibromodulin gene, which is involved in mineralization processes. Mandibles were imaged with a micro-CT originally devoted to industrial applications (Viscom X8060 NDT). Advanced 3D visualization was performed using the VoxBox software (UsefulProgress) with ray-casting algorithms. Comparison between control and defective mice mandibles was made by applying the same transfer function to each 3D data set, allowing shape, colour and density discrepancies to be detected. The 2D images of transverse slices of mandible and teeth were similar to, and even more accurate than, those obtained with scanning electron microscopy. Image processing of the molars allowed the 3D reconstruction of the pulp chamber, providing a unique tool for the quantitative evaluation of dentinogenesis. This new method is highly powerful for the study of oro-facial mineralization defects in mice models, complementary and even competitive to current histological and scanning microscopy approaches.

  20. 3D/4D analyses of damage and fracture behaviours in structural materials via synchrotron X-ray tomography.

    PubMed

    Toda, Hiroyuki

    2014-11-01

    X-ray microtomography has been utilized for the in-situ observation of various structural metals under external loading. Recent advances in X-ray microtomography provide remarkable tools to image the interior of materials. In-situ X-ray microtomography provides a unique possibility to access the 3D character of internal microstructure and its time evolution behaviours non-destructively, thereby enabling advanced techniques for measuring local strain distribution. Local strain mapping is readily enabled by processing such high-resolution tomographic images either by the particle tracking technique or the digital image correlation technique [1]. Procedures for tracking microstructural features which have been developed by the authors [2], have been applied to analyse localised deformation and damage evolution in a material [3]. Typically several tens of thousands of microstructural features, such as particles and pores, are tracked in a tomographic specimen (0.2-0.3 mm³ in volume). When a sufficient number of microstructural features is dispersed in 3D space, the Delaunay tessellation algorithm is used to obtain local strain distribution. With these techniques, 3D strain fields can be measured with reasonable accuracy. Even local crack driving forces, such as local variations in the stress intensity factor, crack tip opening displacement and J integral along a crack front line, can be measured from discrete crack tip displacement fields [4]. In the present presentation, complicated crack initiation and growth behaviour and the extensive formation of micro cracks ahead of a crack tip are introduced as examples. A novel experimental method has recently been developed by amalgamating a pencil beam X-Ray diffraction (XRD) technique with the microstructural tracking technique [5]. The technique provides information about individual grain orientations and 1-micron-level grain morphologies in 3D together with high-density local strain mapping. The application of this
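
    Once features have been tracked and tessellated, each Delaunay tetrahedron yields a local strain estimate from the displacements of its four vertices. A generic continuum-mechanics sketch of that step via the deformation gradient and Green strain (a standard formulation, not claimed to be the authors' exact implementation; the vertex data are invented):

```python
import numpy as np

def deformation_gradient(X, x):
    """Deformation gradient F of one tetrahedron from its 4 vertex positions
    before (X) and after (x) loading; rows are vertices, columns are x, y, z."""
    dX = (X[1:] - X[0]).T              # 3x3 edge matrix, reference state
    dx = (x[1:] - x[0]).T              # 3x3 edge matrix, deformed state
    return dx @ np.linalg.inv(dX)

def green_strain(F):
    """Green-Lagrange strain tensor E = (F^T F - I) / 2."""
    return 0.5 * (F.T @ F - np.eye(3))

# Unit reference tetrahedron, uniformly stretched 1% along x
X = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
x = X * np.array([1.01, 1.0, 1.0])

E = green_strain(deformation_gradient(X, x))
assert abs(E[0, 0] - 0.5 * (1.01**2 - 1)) < 1e-12  # ~1% tensile strain in x
assert abs(E[1, 1]) < 1e-12                        # no strain in y
```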

  1. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

    We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ˜ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ˜ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown `finger-like' structures at orbital phases shortly after periastron (φ ˜ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s-1), adiabatic post-shock companion-star wind. The success of our work and easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.

  2. Visual Function in Carriers of X-linked Retinitis Pigmentosa

    PubMed Central

    Comander, Jason; Weigel-DiFranco, Carol; Sandberg, Michael A.; Berson, Eliot L.

    2015-01-01

    Purpose: To determine the frequency and severity of visual function loss in female carriers of X-linked retinitis pigmentosa (XLRP). Design: Case series. Participants: XLRP carriers with cross-sectional data (n = 242) and longitudinal data (n = 34, median follow-up: 16 years, follow-up range: 3–37 years). Half of the carriers were from RPGR- or RP2-genotyped families. Methods: Retrospective medical records review. Main Outcome Measures: Visual acuities, visual field areas, final dark adaptation thresholds, and full-field ERGs to 0.5 Hz and 30 Hz flashes. Results: In genotyped families, 40% of carriers showed a baseline abnormality on at least one of the three psychophysical tests. There was a wide range of function among carriers; for example, 3 of 121 (2%) genotyped carriers were legally blind due to poor visual acuity, some as young as 35 years of age. Visual fields were less affected than visual acuity. In all carriers, the average ERG amplitude to 30 Hz flashes was about 50% of normal, and the average exponential rate of amplitude loss over time was half that of XLRP males (3.7%/year vs 7.4%/year, respectively). Among obligate carriers with affected fathers and/or sons, 53 of 55 (96%) had abnormal baseline ERGs. Some carriers who initially had completely normal fundi in both eyes went on to develop moderately decreased vision, though not legal blindness. Among carriers with RPGR mutations, those with mutations in ORF15, compared to those in exons 1–14, had worse final dark adaptation thresholds and lower 0.5 Hz and 30 Hz ERG amplitudes. Conclusions: Most carriers of XLRP had mildly or moderately reduced visual function but rarely became legally blind. In most cases, obligate carriers could be identified by ERG testing. Carriers of RPGR ORF15 mutations tended to have worse visual function than carriers of RPGR exon 1–14 mutations. Since XLRP carrier ERG amplitudes and decay rates over time were on average half of those of affected males, these observations were

  3. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P. G.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
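
    PLOT3D's derived functions are computed per grid point from the conserved variables stored in the solution file (density, the three momentum components, and stagnation energy). One such derived quantity, velocity magnitude, can be sketched generically (an illustration of the idea with invented values, not PLOT3D's actual source):

```python
import numpy as np

def velocity_magnitude(rho, mom_x, mom_y, mom_z):
    """Velocity magnitude |v| = |momentum| / density, evaluated per grid point."""
    return np.sqrt(mom_x**2 + mom_y**2 + mom_z**2) / rho

# One illustrative grid point: density 2, momentum (6, 0, 8) -> |v| = 10 / 2 = 5
rho = np.array([2.0])
vmag = velocity_magnitude(rho, np.array([6.0]), np.array([0.0]), np.array([8.0]))
assert vmag[0] == 5.0
```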

  4. PLOT3D/AMES, DEC VAX VMS VERSION USING DISSPLA (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  5. Shadow-driven 4D haptic visualization.

    PubMed

    Zhang, Hui; Hanson, Andrew

    2007-01-01

    Just as we can work with two-dimensional floor plans to communicate 3D architectural design, we can exploit reduced-dimension shadows to manipulate the higher-dimensional objects generating the shadows. In particular, by taking advantage of physically reactive 3D shadow-space controllers, we can transform the task of interacting with 4D objects to a new level of physical reality. We begin with a teaching tool that uses 2D knot diagrams to manipulate the geometry of 3D mathematical knots via their projections; our unique 2D haptic interface allows the user to become familiar with sketching, editing, exploration, and manipulation of 3D knots rendered as projected images on a 2D shadow space. By combining graphics and collision-sensing haptics, we can enhance the 2D shadow-driven editing protocol to successfully leverage 2D pen-and-paper or blackboard skills. Building on the reduced-dimension 2D editing tool for manipulating 3D shapes, we develop the natural analogy to produce a reduced-dimension 3D tool for manipulating 4D shapes. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the experience accessible to human beings. As far as we are aware, this paper reports the first interactive system with force-feedback that provides "4D haptic visualization" permitting the user to model and interact with 4D cloth-like objects.
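
    The shadow metaphor rests on projecting an n-dimensional object to (n-1) dimensions. A minimal sketch of a perspective 4D-to-3D "shadow" projection (the light-source distance d along the w axis is an arbitrary illustrative choice, not a parameter from the paper):

```python
def project_4d_to_3d(p, d=4.0):
    """Perspective shadow of a 4D point onto the w = 0 hyperplane,
    cast from a light source at w = d."""
    x, y, z, w = p
    s = d / (d - w)          # scale grows as the point approaches the light
    return (s * x, s * y, s * z)

# A point already in the w = 0 hyperplane projects to itself;
# points with w > 0 are magnified in the shadow
assert project_4d_to_3d((1.0, 2.0, 3.0, 0.0)) == (1.0, 2.0, 3.0)
assert project_4d_to_3d((1.0, 0.0, 0.0, 2.0))[0] == 2.0
```

The same formula with one fewer coordinate gives the 3D-to-2D knot projections described above.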

  6. Integrated Tsunami Database: simulation and identification of seismic tsunami sources, 3D visualization and post-disaster assessment on the shore

    NASA Astrophysics Data System (ADS)

    Krivorot'ko, Olga; Kabanikhin, Sergey; Marinin, Igor; Karas, Adel; Khidasheli, David

    2013-04-01

    One of the most important problems of tsunami investigation is the problem of seismic tsunami source reconstruction. The non-profit organization WAPMERR (http://wapmerr.org) has provided a historical database of alleged tsunami sources around the world, obtained with the help of information about seaquakes. WAPMERR also has a database of observations of tsunami waves in coastal areas. The main idea of the presentation consists of determining the tsunami source parameters using seismic data and observations of the tsunami waves on the shore, and of expanding and refining the database of presupposed tsunami sources for operative and accurate prediction of hazards and assessment of risks and consequences. We also present 3D visualization of real-time tsunami wave propagation and loss assessment, characterizing the nature of the building stock in cities at risk, and monitoring by satellite images using the modern GIS technology ITRIS (Integrated Tsunami Research and Information System) developed by WAPMERR and Informap Ltd. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. The most suitable physical models for the simulation of tsunamis are based on shallow water equations. We consider the initial-boundary value problem in Ω := {(x, y) ∈ R² : x ∈ (0, Lx), y ∈ (0, Ly), Lx, Ly > 0} for the well-known linear shallow water equations in the Cartesian coordinate system, written in terms of the liquid flow components in dimensional form: η_t + (Hu)_x + (Hv)_y = 0, u_t + g η_x = 0, v_t + g η_y = 0, with η(x, y, 0) = q(x, y). Here η(x, y, t) defines the free water surface vertical displacement, i.e. the amplitude of the tsunami wave, and q(x, y) is the initial amplitude of the tsunami wave. The lateral boundary is assumed to be a non-reflecting boundary of the domain, that is, it allows the free passage of propagating waves. Assume that the free surface oscillation data at points (xm, ym) are given as measured output data from tsunami records: fm(t) := η(xm, ym, t), (xm
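
    The linear shallow water equations named in the abstract can be integrated with a few lines of finite differences. A minimal sketch (forward Euler in time, centred differences in space, and periodic boundaries for brevity instead of the non-reflecting boundaries the text assumes; the depth, grid, and source values are illustrative, not ITRIS code):

```python
import numpy as np

g, H = 9.81, 4000.0            # gravity (m/s^2) and water depth (m); illustrative

def step(eta, u, v, dx, dt):
    """One explicit step of eta_t = -H (u_x + v_y), u_t = -g eta_x, v_t = -g eta_y."""
    d = lambda a, ax: (np.roll(a, -1, ax) - np.roll(a, 1, ax)) / (2 * dx)
    return (eta - dt * H * (d(u, 0) + d(v, 1)),
            u - dt * g * d(eta, 0),
            v - dt * g * d(eta, 1))

n, dx, dt = 64, 1000.0, 1.0    # dt well under the CFL limit dx / sqrt(g*H) ~ 5 s
eta = np.zeros((n, n)); eta[n // 2, n // 2] = 1.0   # initial amplitude q(x, y)
u = np.zeros_like(eta); v = np.zeros_like(eta)
for _ in range(10):
    eta, u, v = step(eta, u, v, dx, dt)

assert np.isfinite(eta).all()
assert abs(eta.sum() - 1.0) < 1e-6   # centred differences conserve total volume
```

Synthetic tide-gauge records fm(t) are then just samples of this eta field at fixed grid points, which is the forward model the source-reconstruction problem inverts.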

  7. Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing

    NASA Astrophysics Data System (ADS)

    Tucker, Emerson; Fotouhi, Javad; Unberath, Mathias; Lee, Sing Chun; Fuerst, Bernhard; Johnson, Alex; Armand, Mehran; Osgood, Greg M.; Navab, Nassir

    2018-03-01

    Pre-operative CT data is available for several orthopedic and trauma interventions, and is mainly used to identify injuries and plan the surgical procedure. In this work we propose an intuitive augmented reality environment allowing visualization of pre-operative data during the intervention, with an overlay of the optical information from the surgical site. The pre-operative CT volume is first registered to the patient by acquiring a single C-arm X-ray image and using 3D/2D intensity-based registration. Next, we use an RGBD sensor on the C-arm to fuse the optical information of the surgical site with the patient's pre-operative medical data and provide an augmented reality environment. The 3D/2D registration of the pre- and intra-operative data allows us to maintain a correct visualization each time the C-arm is repositioned or the patient moves. An overall mean target registration error (mTRE) of 5.24 ± 3.09 mm (mean ± standard deviation) was measured, averaged over 19 C-arm poses. The proposed solution enables the surgeon to visualize pre-operative data overlaid with information from the surgical site (e.g. the surgeon's hands, surgical tools, etc.) for any C-arm pose, and negates the issues of line-of-sight and long setup times present in commercially available systems.
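
    The reported mTRE is simply the mean Euclidean distance between registered target points and their ground-truth positions. A generic sketch of the metric (the point sets are invented; this is the standard definition, not the authors' evaluation code):

```python
import numpy as np

def mean_tre(targets_true, targets_registered):
    """Mean target registration error: average Euclidean distance (mm)
    between ground-truth target points and their registered positions."""
    d = np.linalg.norm(targets_true - targets_registered, axis=1)
    return d.mean(), d.std()

# Two invented 3D target points, each displaced by exactly 5 mm
true = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
reg = true + np.array([[3.0, 4.0, 0.0], [0.0, 0.0, 5.0]])
m, s = mean_tre(true, reg)
assert m == 5.0 and s == 0.0
```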

  8. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

    Geologic problems and datasets are often 3D or 4D in nature, yet they are projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" the collapsed dimension in their mind, creating a cognitive challenge, especially for new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of the seafloor that most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015) we will assess how well 3D movies enhance learning. The class will be split into two groups: one that learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?", with the opportunity to further elaborate on the effectiveness of the visualization.

  9. The 3D LAOKOON--Visual and Verbal in 3D Online Learning Environments.

    ERIC Educational Resources Information Center

    Liestol, Gunnar

    This paper reports on a project where three-dimensional (3D) online gaming environments were exploited for the purpose of academic communication and learning. 3D gaming environments are media and meaning rich and can provide inexpensive solutions for educational purposes. The experiment with teaching and discussions in this setting, however,…

  10. Using 3D dynamic models to reproduce X-ray properties of colliding wind binaries

    NASA Astrophysics Data System (ADS)

    Russell, Christopher Michael Post

    Colliding wind binaries (CWBs) are unique laboratories for X-ray astrophysics. The two massive stars contained in these systems have powerful radiatively driven stellar winds, and the conversion of their kinetic energy to heat (up to 10^8 K) at the wind-wind collision region generates hard thermal X-rays (up to 10 keV). Rich data sets exist of several multi-year-period systems, as well as key observations of shorter period systems, and detailed models are required to disentangle the phase-locked emission and absorption processes in these systems. To interpret these X-ray light curves and spectra, this dissertation models the wind-wind interaction of CWBs using 3D smoothed particle hydrodynamics (SPH), and solves the 3D formal solution of radiative transfer to synthesize the model X-ray properties, allowing direct comparison with the colliding-wind X-ray spectra observed by, e.g., RXTE and XMM. The multi-year-period, highly eccentric CWBs we examine are eta Carinae and WR140. For the commonly inferred primary mass loss rate of ~10^-3 Msun/yr, eta Carinae's 3D model reproduces quite well the 2-10 keV RXTE light curve, hardness ratio, and dynamic spectra in absolute units. This agreement includes the ~3 month X-ray minimum associated with the 1998.0 and 2003.5 periastron passages, which we find to occur as the primary wind encroaches into the secondary wind's acceleration region. This modeling provides further evidence that the observer is mainly viewing the system through the secondary's shock cone, and suggests that periastron occurs ~1 month after the onset of the X-ray minimum. The model RXTE observables of WR140 match the data well in absolute units, although the decrease in model X-rays around periastron is less than observed. There is very good agreement between the observed XMM spectrum taken on the rise before periastron and the model. We also model two short-period CWBs, HD150136, which has a wind-star collision, and delta Orionis A, the closest eclipsing
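
    The post-shock temperatures quoted above (up to ~10^8 K) follow from the strong-shock jump conditions; as a reference point, the standard textbook estimate for a fully ionized solar-abundance plasma (a general relation, not taken from this dissertation) is:

```latex
% Strong-shock post-shock temperature for a fully ionized plasma
% (mean molecular weight mu ~ 0.6 for solar abundances):
\begin{equation}
  k T_{\rm ps} = \frac{3}{16}\,\mu\, m_{\rm H}\, v_{\rm w}^{2}
  \quad\Longrightarrow\quad
  T_{\rm ps} \approx 1.4\times10^{7}
  \left(\frac{v_{\rm w}}{1000\ {\rm km\,s^{-1}}}\right)^{2} {\rm K},
\end{equation}
% so wind speeds of ~2000-3000 km/s yield the ~10^8 K (kT of several keV)
% plasma responsible for the hard thermal X-rays described in the abstract.
```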

  11. Dynamic electronic collimation method for 3-D catheter tracking on a scanning-beam digital x-ray system

    PubMed Central

    Dunkerley, David A. P.; Slagowski, Jordan M.; Funk, Tobias; Speidel, Michael A.

    2017-01-01

    Abstract. Scanning-beam digital x-ray (SBDX) is an inverse geometry x-ray fluoroscopy system capable of tomosynthesis-based 3-D catheter tracking. This work proposes a method of dose-reduced 3-D catheter tracking using dynamic electronic collimation (DEC) of the SBDX scanning x-ray tube. This is achieved through the selective deactivation of focal spot positions not needed for the catheter tracking task. The technique was retrospectively evaluated with SBDX detector data recorded during a phantom study. DEC imaging of a catheter tip at isocenter required 340 active focal spots per frame versus 4473 spots in full field-of-view (FOV) mode. The dose-area product (DAP) and peak skin dose (PSD) for DEC versus full FOV scanning were calculated using an SBDX Monte Carlo simulation code. The average DAP was reduced to 7.8% of the full FOV value, consistent with the relative number of active focal spots (7.6%). For image sequences with a moving catheter, PSD was 33.6% to 34.8% of the full FOV value. The root-mean-squared-deviation between DEC-based 3-D tracking coordinates and full FOV 3-D tracking coordinates was less than 0.1 mm. The 3-D distance between the tracked tip and the sheath centerline averaged 0.75 mm. DEC is a feasible method for dose reduction during SBDX 3-D catheter tracking. PMID:28439521
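
    Since the dose delivered in DEC mode scales with the fraction of active focal spots, the numbers quoted in the abstract can be checked with one line of arithmetic:

```python
# Dose-area product in DEC mode scales with the fraction of active
# focal spots (numbers taken from the abstract above).
active_dec, active_full = 340, 4473

spot_fraction = active_dec / active_full
print(f"{spot_fraction:.1%}")   # fraction of focal spots kept: 7.6%

# The reported DAP reduction (7.8% of the full-FOV value) is consistent
# with this geometric fraction; the small difference reflects the spatial
# distribution of the retained focal spots relative to the phantom.
```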

  12. Chemozart: a web-based 3D molecular structure editor and visualizer platform.

    PubMed

    Mohebifar, Mohamad; Sajadi, Fatemehsadat

    2015-01-01

    Chemozart is a 3D molecule editor and visualizer built on top of native web components. It offers an easily accessible service, a user-friendly graphical interface, and a modular design. It is a client-centric web application which communicates with the server via a representational state transfer (REST) style web service. Both the client-side and server-side applications are written in JavaScript. A combination of JavaScript and HTML is used to draw three-dimensional structures of molecules; WebGL provides the three-dimensional visualization, and CSS3 and HTML5 are used to compose a user-friendly interface. More than 30 packages are used to compose this application, which gives it enough flexibility to be extended. Molecular structures can be drawn on all types of platforms, and the application is compatible with mobile devices. No installation is required: the application is accessed through the Internet, and it can be extended on both the server side and the client side by implementing modules in JavaScript. Molecular compounds are drawn on the HTML5 Canvas element using a WebGL context. Chemozart is a chemical platform which is powerful, flexible, and easy to access. It provides an online web-based tool for chemical visualization, along with result-oriented optimization for a cloud-based API (application programming interface). JavaScript libraries which allow the creation of web pages containing interactive three-dimensional molecular structures have also been made available. The application has been released under the Apache 2 License and is available from the project website https://chemozart.com.

  13. 3D visualization of optical ray aberration and its broadcasting to smartphones by ray aberration generator

    NASA Astrophysics Data System (ADS)

    Hellman, Brandon; Bosset, Erica; Ender, Luke; Jafari, Naveed; McCann, Phillip; Nguyen, Chris; Summitt, Chris; Wang, Sunglin; Takashima, Yuzuru

    2017-11-01

    The ray formalism is critical to understanding light propagation, yet current pedagogy relies on inadequate 2D representations. We present a system in which real light rays are visualized through an optical system by using a collimated laser bundle of light and a fog chamber. Implementation for remote and immersive access is enabled by leveraging a commercially available 3D viewer and gesture-based remote controlling of the tool via bi-directional communication over the Internet.

  14. A device that operates within a self-assembled 3D DNA crystal

    NASA Astrophysics Data System (ADS)

    Hao, Yudong; Kristiansen, Martin; Sha, Ruojie; Birktoft, Jens J.; Hernandez, Carina; Mao, Chengde; Seeman, Nadrian C.

    2017-08-01

    Structural DNA nanotechnology finds applications in numerous areas, but the construction of objects, 2D and 3D crystalline lattices and devices is prominent among them. Each of these components has been developed individually, and most of them have been combined in pairs. However, to date there are no reports of independent devices contained within 3D crystals. Here we report a three-state 3D device whereby we change the colour of the crystals by diffusing strands that contain dyes in or out of the crystals through the mother-liquor component of the system. Each colouring strand is designed to pair with an extended triangle strand by Watson-Crick base pairing. The arm that contains the dyes is quite flexible, but it is possible to establish the presence of the duplex proximal to the triangle by X-ray crystallography. We modelled the transition between the red and blue states through a simple kinetic model.

  15. Neural correlates of olfactory and visual memory performance in 3D-simulated mazes after intranasal insulin application.

    PubMed

    Brünner, Yvonne F; Rodriguez-Raecke, Rea; Mutic, Smiljana; Benedict, Christian; Freiherr, Jessica

    2016-10-01

    This fMRI study intended to establish 3D-simulated mazes with olfactory and visual cues and to examine the effect of intranasally applied insulin on memory performance in healthy subjects. The effect of insulin on hippocampus-dependent brain activation was explored using a double-blind, placebo-controlled design. Following intranasal administration of either insulin (40 IU) or placebo, 16 male subjects participated in two experimental MRI sessions with olfactory and visual mazes. Each maze included two separate runs. The first was an encoding maze during which subjects learned eight olfactory or eight visual cues at different target locations. The second was a recall maze during which subjects were asked to remember the target cues at spatial locations. For the eleven subjects included in the fMRI analysis, we were able to validate brain activation for odor perception and visuospatial tasks. However, we did not observe an enhancement of declarative memory performance in our behavioral data, nor of hippocampal activity in response to insulin application in the fMRI analysis. It is therefore possible that intranasal insulin application is sensitive to methodological variations, e.g. the timing of task execution and the dose applied. Findings from this study suggest that our method of 3D-simulated mazes is feasible for studying neural correlates of olfactory and visual memory performance. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. 3D facial landmarks: Inter-operator variability of manual annotation

    PubMed Central

    2014-01-01

    Background Manual annotation of landmarks is a known source of variance, which exists in all fields of medical imaging and influences the accuracy and interpretation of results. However, the variability of human facial landmarks is only sparsely addressed in the current literature, as opposed to e.g. the research fields of orthodontics and cephalometrics. We present a full facial 3D annotation procedure and a sparse set of manually annotated landmarks, in an effort to reduce operator time and minimize variance. Method Facial scans from 36 voluntary unrelated blood donors from the Danish Blood Donor Study were randomly chosen. Six operators twice manually annotated 73 anatomical and pseudo-landmarks, using a three-step scheme producing a dense point correspondence map. We analyzed both the intra- and inter-operator variability using mixed-model ANOVA. We then compared four sparse sets of landmarks in order to construct a dense correspondence map of the 3D scans with minimum point variance. Results The anatomical landmarks of the eye were associated with the lowest variance, particularly the centers of the pupils, whereas points on the jaw and eyebrows had the highest variation. We observed only marginal variability with respect to intra-operator differences and portraits. Using a sparse set of landmarks (n=14) that captures the whole face, the mean dense point variance was reduced from 1.92 to 0.54 mm. Conclusion The inter-operator variability was primarily associated with particular landmarks, with more leniently defined landmarks showing the highest variability. The variables embedded in the portrait and the experience of a trained operator had only marginal influence on the variability. Further, using 14 of the annotated landmarks we were able to reduce the variability and create a dense correspondence mesh capturing all facial features. PMID:25306436
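
    The per-landmark variability analysis can be illustrated on synthetic data. The study itself used mixed-model ANOVA; the sketch below computes only the simpler descriptive statistic, the mean deviation of each operator's annotation from the per-landmark consensus, with all numbers invented for illustration:

```python
# Sketch of per-landmark inter-operator variability on synthetic data:
# 6 operators each annotate 73 landmarks in 3D; we measure how far each
# operator's point lies from the per-landmark consensus (mean) position.
import numpy as np

rng = np.random.default_rng(42)
n_ops, n_landmarks = 6, 73

truth = rng.uniform(0, 100, size=(n_landmarks, 3))        # "true" positions, mm
noise = rng.normal(0, 1.0, size=(n_ops, n_landmarks, 3))  # operator scatter, mm
annotations = truth[None, :, :] + noise                   # (ops, landmarks, xyz)

consensus = annotations.mean(axis=0)                      # per-landmark mean
dist = np.linalg.norm(annotations - consensus, axis=2)    # (ops, landmarks)
per_landmark_var = dist.mean(axis=0)                      # mean deviation, mm

print(f"most consistent landmark:  {per_landmark_var.argmin()}")
print(f"least consistent landmark: {per_landmark_var.argmax()}")
print(f"overall mean deviation:    {per_landmark_var.mean():.2f} mm")
```

    Ranking landmarks by this statistic is what lets a study like the one above identify stable points (e.g. pupil centers) for a minimal sparse set.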

  17. SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.

    2016-12-01

    Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an XML file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later. More information, including downloads, tutorials, and example movies created fully within SCEC-VDO is

  18. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITHOUT TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into
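
    The abstract lists the variables a PLOT3D solution ("Q") file stores per grid point: density, the three momentum components, and stagnation energy. Derived functions such as static pressure follow from standard compressible-flow relations; the sketch below uses the ideal-gas relation on synthetic arrays and is not PLOT3D's actual source code:

```python
# Deriving static pressure from PLOT3D-style solution variables
# (density rho, momenta rho*u/v/w, stagnation energy e) via the
# ideal-gas relation p = (gamma - 1) * (e - 0.5 * rho * |u|^2).
import numpy as np

GAMMA = 1.4  # ratio of specific heats for air

def pressure(rho, rho_u, rho_v, rho_w, e):
    """Static pressure from conservative variables (numpy arrays)."""
    kinetic = 0.5 * (rho_u**2 + rho_v**2 + rho_w**2) / rho
    return (GAMMA - 1.0) * (e - kinetic)

# Uniform quiescent gas: rho = 1, no momentum, e = p/(gamma-1) with p = 1
rho = np.ones((4, 4, 4))
zero = np.zeros_like(rho)
e = np.full_like(rho, 1.0 / (GAMMA - 1.0))

p = pressure(rho, zero, zero, zero, e)
print(p[0, 0, 0])   # 1.0
```

    Each of PLOT3D's 74 displayable functions is, similarly, a pointwise or differential combination of these five stored quantities plus the grid coordinates.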

  19. PLOT3D/AMES, UNIX SUPERCOMPUTER AND SGI IRIS VERSION (WITH TURB3D)

    NASA Technical Reports Server (NTRS)

    Buning, P.

    1994-01-01

    PLOT3D is an interactive graphics program designed to help scientists visualize computational fluid dynamics (CFD) grids and solutions. Today, supercomputers and CFD algorithms can provide scientists with simulations of such highly complex phenomena that obtaining an understanding of the simulations has become a major problem. Tools which help the scientist visualize the simulations can be of tremendous aid. PLOT3D/AMES offers more functions and features, and has been adapted for more types of computers than any other CFD graphics program. Version 3.6b+ is supported for five computers and graphic libraries. Using PLOT3D, CFD physicists can view their computational models from any angle, observing the physics of problems and the quality of solutions. As an aid in designing aircraft, for example, PLOT3D's interactive computer graphics can show vortices, temperature, reverse flow, pressure, and dozens of other characteristics of air flow during flight. As critical areas become obvious, they can easily be studied more closely using a finer grid. PLOT3D is part of a computational fluid dynamics software cycle. First, a program such as 3DGRAPE (ARC-12620) helps the scientist generate computational grids to model an object and its surrounding space. Once the grids have been designed and parameters such as the angle of attack, Mach number, and Reynolds number have been specified, a "flow-solver" program such as INS3D (ARC-11794 or COS-10019) solves the system of equations governing fluid flow, usually on a supercomputer. Grids sometimes have as many as two million points, and the "flow-solver" produces a solution file which contains density, x- y- and z-momentum, and stagnation energy for each grid point. With such a solution file and a grid file containing up to 50 grids as input, PLOT3D can calculate and graphically display any one of 74 functions, including shock waves, surface pressure, velocity vectors, and particle traces. PLOT3D's 74 functions are organized into

  20. Virtual reality hardware for use in interactive 3D data fusion and visualization

    NASA Astrophysics Data System (ADS)

    Gourley, Christopher S.; Abidi, Mongi A.

    1997-09-01

    Virtual reality has become a tool for use in many areas of research. We have designed and built a VR system for use in range data fusion and visualization. One major VR tool is the CAVE. This is the ultimate visualization tool, but comes with a large price tag. Our design uses a unique CAVE whose graphics are powered by a desktop computer instead of a larger rack machine, making it much less costly. The system consists of a screen eight feet tall by twenty-seven feet wide, giving a variable field-of-view currently set at 160 degrees. A Silicon Graphics Indigo2 MaxImpact with the Impact channel option is used for display. This gives the capability to drive three projectors at a resolution of 640 by 480 for use in displaying the virtual environment, and one 640 by 480 display for a user control interface. This machine is also the first desktop package which has built-in hardware texture mapping. This feature allows us to quickly fuse the range and intensity data and other multi-sensory data. The final goal is a complete 3D texture-mapped model of the environment. A dataglove, magnetic tracker, and spaceball are to be used for manipulation of the data and navigation through the virtual environment. This system gives several users the ability to interactively create 3D models from multiple range images.