Sample records for high-level 3D visualization

  1. The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L

    2016-11-01

    Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned at suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. doubled images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped the locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. The conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.

  2. Virtual reality on the web: the potentials of different methodologies and visualization techniques for scientific research and medical education.

    PubMed

    Kling-Petersen, T; Pascher, R; Rydmark, M

    1999-01-01

    Academic and medical imaging increasingly use computer-based 3D reconstruction and/or visualization. Three-dimensional interactive models play a major role in areas such as preclinical medical education, clinical visualization and medical research. While 3D is comparatively easy to do on high-end workstations, distribution and use of interactive 3D graphics necessitate the use of personal computers and the web. Several new techniques have been demonstrated that provide interactive 3D via a web browser, thereby allowing a limited version of VR to be experienced by a larger majority of students, medical practitioners and researchers. These techniques include QuickTimeVR2 (QTVR), VRML2, QuickDraw3D, OpenGL and Java3D. In order to test the usability of the different techniques, Mednet has initiated a number of projects designed to evaluate the potential of 3D techniques for scientific reporting, clinical visualization and medical education. These include datasets created by manual tracing followed by triangulation, smoothing and 3D visualization, MRI, or high-resolution laser scanning. Preliminary results indicate that both VRML and QTVR fulfil most of the requirements of web-based, interactive 3D visualization, whereas QuickDraw3D is too limited. Presently, Java3D has not yet reached a level where in-depth testing is possible. The use of high-resolution laser scanning is an important addition to 3D digitization.

  3. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

    Current imaging methods such as magnetic resonance imaging (MRI), confocal microscopy, electron microscopy (EM) or selective plane illumination microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables software development efforts to be concentrated on the algorithmic parts. Our framework enables biomedical image analysis software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  4. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models.

    PubMed

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two-dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D condition. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D condition. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures, such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regard to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.

  5. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

    The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determinative factor for visual attention. During the experiment, 28 observers watched scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using contents with low-level visual features. With univariate tests of significance (MANOVA), it was detected that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
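
A standard way to build a saliency map from fixation data, as described above, is to place a 2D Gaussian at each fixation point and normalize; this is a common convention in the eye-tracking literature, not necessarily the authors' exact pipeline. A minimal sketch, with hypothetical fixation coordinates and a made-up frame size:

```python
import numpy as np

def saliency_from_fixations(fixations, shape, sigma=10.0):
    """Sum a 2D Gaussian centred on each fixation, then normalize --
    a common way to turn discrete fixation points into a saliency map.
    fixations are (x, y) pixel coordinates; shape is (height, width)."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    sal = np.zeros(shape, dtype=float)
    for fx, fy in fixations:
        sal += np.exp(-((xs - fx) ** 2 + (ys - fy) ** 2) / (2.0 * sigma ** 2))
    if sal.max() > 0:
        sal /= sal.max()          # scale to [0, 1]
    return sal

# Hypothetical fixations on a small 108x192 frame: two observers fixate
# the same object, one fixates elsewhere
fixations = [(96, 54), (97, 54), (30, 20)]
smap = saliency_from_fixations(fixations, (108, 192))
```

The resulting map peaks where fixations cluster, so 2D and 3D viewing conditions can be compared map-to-map.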

  6. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063

  7. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with the concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  8. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

    Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in courtrooms. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level of detail in the corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  9. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is currently widely used for three-dimensional (3D) modeling of physical reality with its developing visualization tools. Modeling large and complicated phenomena is a challenging problem for the computer graphics currently in use; however, it is possible to visualize such phenomena in 3D by using computer systems. 3D models are used in developing computer games, military training, urban planning, tourism, etc. The use of 3D models for planning and management of urban areas is a very popular issue for city administrations. In this context, 3D city models are produced and used for various purposes. However, the requirements of the models vary depending on the type and scope of the application. While high-level visualization, where photorealistic visualization techniques are widely used, is required for touristic and recreational purposes, an abstract visualization of the physical reality is generally sufficient for communicating thematic information. The visual variables, which are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating thematic information. These kinds of 3D city models are called abstract models. Standardization of the technologies used for 3D modeling is now available through CityGML. CityGML implements several novel concepts to support interoperability, consistency and functionality. For example, it supports different Levels-of-Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoDs simultaneously, enabling the analysis and visualization of the same object with regard to different degrees of resolution. Furthermore, two CityGML data sets containing the same object in different LoDs may be combined and integrated. In this study, GIS tools used for 3D modeling were examined; in this context, the availability of GIS tools for obtaining the different LoDs of the CityGML standard was evaluated. Additionally, a 3D GIS application covering a small part of the city of Istanbul was implemented for communicating thematic information, rather than photorealistic visualization, by using a 3D model. An abstract model was created by using the modeling tools of a commercial GIS software package, and the results of the implementation are also presented in the study.
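
The multi-LoD behaviour described above can be illustrated with a tiny, hand-written CityGML-style fragment (illustrative only, not a conformant data set): one building carries an LoD1 and an LoD2 geometry simultaneously, and a few lines of Python list the LoDs present per building:

```python
import re
import xml.etree.ElementTree as ET

# Minimal CityGML 2.0-style fragment (illustrative, geometry bodies omitted):
# the same building holds both an LoD1 and an LoD2 representation.
DOC = """<CityModel xmlns="http://www.opengis.net/citygml/2.0"
                    xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <cityObjectMember>
    <bldg:Building>
      <bldg:lod1Solid/>
      <bldg:lod2Solid/>
    </bldg:Building>
  </cityObjectMember>
</CityModel>"""

def lods_of(building):
    """Collect LoD levels from child tags such as lod1Solid, lod2MultiSurface."""
    lods = set()
    for child in building:
        m = re.match(r".*\}lod(\d)", child.tag)   # strip the XML namespace
        if m:
            lods.add(int(m.group(1)))
    return sorted(lods)

root = ET.fromstring(DOC)
BLDG = "{http://www.opengis.net/citygml/building/2.0}"
result = [lods_of(b) for b in root.iter(BLDG + "Building")]
print(result)  # → [[1, 2]]
```

A real data set would carry geometry inside each `lodNSolid`/`lodNMultiSurface` element, but the same traversal applies.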

  10. 3D elastic control for mobile devices.

    PubMed

    Hachet, Martin; Pouderoux, Joachim; Guitton, Pascal

    2008-01-01

    To increase the input space of mobile devices, the authors developed a proof-of-concept 3D elastic controller that easily adapts to mobile devices. This embedded device improves the completion of high-level interaction tasks such as visualization of large documents and navigation in 3D environments. It also opens new directions for tomorrow's mobile applications.

  11. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance the sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase the information transfer rate and reduce false positives in controlling 3D objects. Selective attention motivated involuntarily by an affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons: it shows high information transfer rates, it takes users only a few minutes to learn to control the BCI system, and few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance the sense of reality in 3D space without causing critical visual fatigue. In addition, people who are very susceptible to (auto)stereoscopic 3D may be able to use the affective BCI.
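
At its simplest, an SSVEP decoder scores each candidate flicker frequency by the spectral power of the EEG at that frequency and picks the strongest; production systems typically use CCA or similar, and every parameter below (sampling rate, duration, noise level) is made up for illustration:

```python
import numpy as np

def ssvep_scores(signal, fs, freqs):
    """Score each candidate flicker frequency by the spectral amplitude
    at the nearest FFT bin -- a bare-bones stand-in for the CCA-style
    decoders usually used in SSVEP BCIs."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    bins = np.fft.rfftfreq(n, d=1.0 / fs)
    return {f: spectrum[int(np.argmin(np.abs(bins - f)))] for f in freqs}

# Synthetic "EEG": the user attends a 12 Hz flicker; noise added
fs = 250                                  # sampling rate, Hz (hypothetical)
t = np.arange(0, 4, 1.0 / fs)             # 4 s of data
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(t.size)
scores = ssvep_scores(eeg, fs, [10, 12, 15])
print(max(scores, key=scores.get))  # → 12
```

The affective-stimulus idea in the abstract amounts to boosting the amplitude of the attended frequency's component, which directly raises its score here.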

  12. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high-performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  13. Does high dose vitamin D supplementation enhance cognition?: A randomized trial in healthy adults.

    PubMed

    Pettersen, Jacqueline A

    2017-04-01

    Insufficiency of 25-hydroxyvitamin D [25(OH)D] has been associated with dementia and cognitive decline. However, the effects of vitamin D supplementation on cognition are unclear. It was hypothesized that high dose vitamin D3 supplementation would result in enhanced cognitive functioning, particularly among adults whose 25(OH)D levels were insufficient (<75 nmol/L) at baseline. Healthy adults (n=82) from northern British Columbia, Canada (54° north latitude) with baseline 25(OH)D levels ≤100 nmol/L were randomized and blinded to High Dose (4000 IU/d) versus Low Dose (400 IU/d) vitamin D3 (cholecalciferol) for 18 weeks. Baseline and follow-up serum 25(OH)D and cognitive performance were assessed; the latter consisted of: Symbol Digit Modalities Test, verbal (phonemic) fluency, digit span, and the CANTAB® computerized battery. There were no significant baseline differences between Low (n=40) and High (n=42) dose groups. Serum 25(OH)D increased significantly more in the High Dose group (from 67.2±20 to 130.6±26 nmol/L) than the Low Dose group (60.5±22 to 85.9±16 nmol/L), p=0.0001. Performance improved in the High Dose group on nonverbal (visual) memory, as assessed by the Pattern Recognition Memory task (PRM), from 84.1±14.9 to 88.3±13.2, p=0.043 (d=0.3), and the Paired Associates Learning task (PAL), number of stages completed, from 4.86±0.35 to 4.95±0.22, p=0.044 (d=0.5), but not in the Low Dose group. Mixed effects modeling controlling for age, education, sex and baseline performance revealed that the degree of improvement was comparatively greater in the High Dose group for these tasks, approaching significance: PRM, p=0.11 (d=0.4); PAL, p=0.058 (d=0.4). Among those who had insufficient 25(OH)D (<75 nmol/L) at baseline, the High Dose group (n=23) improved significantly (p=0.005, d=0.7) and to a comparatively greater degree on the PRM (p=0.025, d=0.6). Nonverbal (visual) memory seems to benefit from higher doses of vitamin D supplementation, particularly among those who are insufficient (<75 nmol/L) at baseline, while verbal memory and other cognitive domains do not. These findings are consistent with recent cross-sectional and longitudinal studies, which have demonstrated significant positive associations between 25(OH)D levels and nonverbal, but not verbal, memory. While our findings require confirmation, they suggest that higher 25(OH)D is particularly important for higher-level cognitive functioning, specifically nonverbal (visual) memory, which also utilizes executive functioning processes. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. The Limits of Shape Recognition following Late Emergence from Blindness.

    PubMed

    McKyton, Ayelet; Ben-Zion, Itay; Doron, Ravid; Zohary, Ehud

    2015-09-21

    Visual object recognition develops during the first years of life. But what if one is deprived of vision during early post-natal development? Shape information is extracted using both low-level cues (e.g., intensity- or color-based contours) and more complex algorithms that are largely based on inference assumptions (e.g., illumination is from above, objects are often partially occluded). Previous studies, testing visual acuity using a 2D shape-identification task (Lea symbols), indicate that contour-based shape recognition can improve with visual experience, even after years of visual deprivation from birth. We hypothesized that this may generalize to other low-level cues (shape, size, and color), but not to mid-level functions (e.g., 3D shape from shading) that might require prior visual knowledge. To that end, we studied a unique group of subjects in Ethiopia that suffered from an early manifestation of dense bilateral cataracts and were surgically treated only years later. Our results suggest that the newly sighted rapidly acquire the ability to recognize an odd element within an array, on the basis of color, size, or shape differences. However, they are generally unable to find the odd shape on the basis of illusory contours, shading, or occlusion relationships. Little recovery of these mid-level functions is seen within 1 year post-operation. We find that visual performance using low-level cues is relatively robust to prolonged deprivation from birth. However, the use of pictorial depth cues to infer 3D structure from the 2D retinal image is highly susceptible to early and prolonged visual deprivation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
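
Saliency models like the one above are conventionally scored against fixation data with an AUC metric: fixated pixels are positives, all remaining pixels negatives, and the saliency value is the classifier score. A minimal rank-based sketch (a generic formulation, not the authors' evaluation code), with a toy map whose brightest pixels are exactly the fixated ones:

```python
import numpy as np

def auc_judd(sal, fix_mask):
    """AUC for a saliency map: fixated pixels are positives, all remaining
    pixels negatives, and the saliency value is the classifier score
    (rank-based Mann-Whitney formulation; assumes no tied values)."""
    pos = sal[fix_mask].ravel()
    neg = sal[~fix_mask].ravel()
    allv = np.concatenate([pos, neg])
    ranks = allv.argsort().argsort() + 1          # 1-based ranks
    rank_sum_pos = ranks[: len(pos)].sum()
    return (rank_sum_pos - len(pos) * (len(pos) + 1) / 2) / (len(pos) * len(neg))

# Toy check: a gradient map with "fixations" on its 10 brightest pixels
sal = np.arange(100, dtype=float).reshape(10, 10) / 99.0
fix = sal > 0.9
print(auc_judd(sal, fix))  # → 1.0
```

A perfect map that ranks every fixated pixel above every non-fixated one scores 1.0; chance level is 0.5, which is how 2D and depth-augmented models can be compared on the same fixation set.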

  16. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1, participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., the difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160 ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200 ms, 220-280 ms, and 350-500 ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. [Effect of artificial media opacities on Frisén ring perimetry and conventional light sense perimetry. A comparative study].

    PubMed

    Meyer, J H; Funk, J

    1994-04-01

    In this study we compare the influence of blurring by diffusor foils (Bangerter) on visual acuity and on the thresholds of ring and light sense perimetry. Light sense perimetry was performed using the G1 program of the Octopus 1-2-3 perimeter [1], and ring perimetry with the "ring" test, version 2.20 (High-Tech-Vision) designed by Frisén [4]. Ten eyes of ten healthy persons with a visual acuity of 1.25 or better were examined at six different levels corresponding to visual acuities between 1.6 and hand movements. With both perimeters sensitivity decreased with decreasing visual acuity. At good visual acuities (1.2-1.6) no changes were found in either ring perimetry or light sense perimetry. At acuity levels of 0.8 and below a more pronounced decrease in sensitivity was found with the ring perimeter than with the light sense perimeter. At the level of hand movements there were only absolute scotomas in the ring perimeter, while the Octopus 1-2-3 still detected a baseline sensitivity. Sensitivity was correlated with the logarithm of the visual acuity with both perimeters (Octopus 1-2-3: r = 0.99, P < 0.001; ring perimeter: r = 0.98, P < 0.001). The decrease in sensitivity per log-unit of visual acuity was 9.43 dB (Octopus 1-2-3) or 5.19 dB (ring perimeter). The ring perimeter, at least in its currently available version giving an absolute scotoma at mean scores > 14 dB, is obviously more sensitive to media opacities than the Octopus 1-2-3. This may be of importance in the clinical evaluation of the test results.
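
The study's key summary statistic is a regression slope: sensitivity in dB against log10 of visual acuity (9.43 dB per log-unit for the Octopus 1-2-3, 5.19 dB for the ring perimeter). How such a slope is computed can be sketched with an ordinary least-squares fit on made-up data that lies exactly on a 9.4 dB-per-log-unit line:

```python
import math

def db_per_log_unit(acuities, sensitivities):
    """Least-squares slope of perimetric sensitivity (dB) against
    log10(visual acuity) -- the 'dB per log-unit' figure in the study."""
    xs = [math.log10(a) for a in acuities]
    n = len(xs)
    mx, my = sum(xs) / n, sum(sensitivities) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, sensitivities))
    return sxy / sxx

# Hypothetical data: sensitivity falls exactly 9.4 dB per log-unit of acuity
acuity = [1.6, 0.8, 0.4, 0.2, 0.1]
sens = [30.0 + 9.4 * math.log10(a) for a in acuity]
slope = db_per_log_unit(acuity, sens)
print(round(slope, 2))  # → 9.4
```

The correlation coefficients reported in the abstract (r = 0.99 and 0.98) quantify how tightly the measured points cluster around this fitted line.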

  18. Reliability of visual and instrumental color matching.

    PubMed

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. 
The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggests the use of color matching instruments as a supplementary tool in everyday dental practice to enhance esthetic outcomes. © 2017 Wiley Periodicals, Inc.

  19. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  20. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. The Role of Research Institutions in Building Visual Content for the Geowall

    NASA Astrophysics Data System (ADS)

    Newman, R. L.; Kilb, D.; Nayak, A.; Kent, G.

    2003-12-01

    The advent of the low-cost Geowall (http://www.geowall.org) allows researchers and students to study 3-D geophysical datasets in a collaborative setting. Although 3-D visual objects can aid the understanding of geological principles in the classroom, it is often difficult for staff to develop their own custom visual objects. This is a fundamentally important aspect that research institutions that store large (terabyte) geophysical datasets can address. At Scripps Institution of Oceanography (SIO) we regularly explore gigabyte 3-D visual objects in the SIO Visualization Center (http://siovizcenter.ucsd.edu). Exporting these datasets for use with the Geowall has become routine with current software applications such as IVS's Fledermaus and iView3D. We have developed visualizations that incorporate topographic, bathymetric, and 3-D volumetric crustal datasets to demonstrate fundamental principles of earth science including plate tectonics, seismology, sea-level change, and neotectonics. These visualizations are available for download either via FTP or a website, and have been incorporated into graduate and undergraduate classes at both SIO and the University of California, San Diego. Additionally, staff at the Visualization Center develop content for external schools and colleges such as the Preuss School, a local middle/high school, where a Geowall was installed in February 2003 and curriculum developed for 8th grade students. We have also developed custom visual objects for researchers and educators at diverse education institutions across the globe. At SIO we encourage graduate students and researchers alike to develop visual objects of their datasets through innovative classes and competitions. This not only assists the researchers themselves in understanding their data but also increases the number of visual objects freely available to geoscience educators worldwide.

  2. An Effective Construction Method of Modular Manipulator 3D Virtual Simulation Platform

    NASA Astrophysics Data System (ADS)

    Li, Xianhua; Lv, Lei; Sheng, Rui; Sun, Qing; Zhang, Leigang

    2018-06-01

    This work presents a fast and efficient method for constructing an open 3D manipulator virtual simulation platform that makes it easier for teachers and students to learn the forward and inverse kinematics of a robot manipulator. The method was implemented in MATLAB, combining the Robotics Toolbox, MATLAB GUI, and 3D animation of models built in SolidWorks to produce a good visualization of the system. Its advantages are rapid construction, powerful input and output functions, and the ability to simulate a 3D manipulator realistically. A Schunk six-DOF modular manipulator assembled by the authors' research group serves as the example. The implementation steps of the method are described in detail, yielding an open, realistic, high-level 3D virtual simulation platform for the manipulator. Test results based on the simulation graphs show that the platform can be constructed quickly, offers good usability and high maneuverability, and meets the needs of scientific research and teaching.
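    The platform described above computes forward ("positive") and inverse kinematics in MATLAB with the Robotics Toolbox. As an illustration of the underlying idea only (not the authors' code), the forward kinematics of a planar serial chain can be sketched in a few lines of Python:

```python
import math

def forward_kinematics(link_lengths, joint_angles):
    """Planar forward kinematics: accumulate joint angles along the
    chain and sum each link's contribution to the end-effector position."""
    x = y = theta = 0.0
    for length, q in zip(link_lengths, joint_angles):
        theta += q
        x += length * math.cos(theta)
        y += length * math.sin(theta)
    return x, y

# Two-link arm with both joints at zero: fully extended along the x-axis.
print(forward_kinematics([1.0, 1.0], [0.0, 0.0]))  # -> (2.0, 0.0)
```

    A 3D simulation platform evaluates essentially this mapping (in homogeneous-transform form) for each animation frame before rendering the manipulator model.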

  3. Comparative analysis of visual outcomes with 4 intraocular lenses: Monofocal, multifocal, and extended range of vision.

    PubMed

    Pedrotti, Emilio; Carones, Francesco; Aiello, Francesco; Mastropasqua, Rodolfo; Bruni, Enrico; Bonacci, Erika; Talli, Pietro; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio

    2018-02-01

    To compare the visual acuity, refractive outcomes, and quality of vision in patients with bilateral implantation of 4 intraocular lenses (IOLs). Department of Neurosciences, Biomedicine and Movement Sciences, Eye Clinic, University of Verona, Verona, and Carones Ophthalmology Center, Milano, Italy. Prospective case series. The study included patients who had bilateral cataract surgery with the implantation of 1 of 4 IOLs as follows: Tecnis 1-piece monofocal (monofocal IOL), Tecnis Symfony extended range of vision (extended-range-of-vision IOL), Restor +2.5 diopter (D) (+2.5 D multifocal IOL), and Restor +3.0 D (+3.0 D multifocal IOL). Visual acuity, refractive outcome, defocus curve, objective optical quality, contrast sensitivity, spectacle independence, and glare perception were evaluated 6 months after surgery. The study comprised 185 patients. The extended-range-of-vision IOL (55 patients) showed better distance visual outcomes than the monofocal IOL (30 patients) and high-addition apodized diffractive-refractive multifocal IOLs (P ≤ .002). The +3.0 D multifocal IOL (50 patients) showed the best near visual outcomes (P < .001). The +2.5 D multifocal IOL (50 patients) and extended-range-of-vision IOL provided significantly better intermediate visual outcomes than the other 2 IOLs, with significantly better vision for a defocus level of -1.5 D (P < .001). Better spectacle independence was shown for the +2.5 D multifocal IOL and extended-range-of-vision IOL (P < .001). The extended-range-of-vision IOL and +2.5 D multifocal IOL provided significantly better intermediate visual restoration after cataract surgery than the monofocal IOL and +3.0 D multifocal IOL, with significantly better quality of vision with the extended-range-of-vision IOL. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  4. New generation of 3D desktop computer interfaces

    NASA Astrophysics Data System (ADS)

    Skerjanc, Robert; Pastoor, Siegmund

    1997-05-01

    Today's computer interfaces use 2-D displays showing windows, icons and menus and support mouse interactions for handling programs and data files. The interface metaphor is that of a writing desk with (partly) overlapping sheets of documents placed on its top. Recent advances in the development of 3-D display technology give the opportunity to take the interface concept a radical stage further by breaking the design limits of the desktop metaphor. The major advantage of the envisioned 'application space' is that it offers an additional, immediately perceptible dimension to clearly and constantly visualize the structure and current state of interrelations between documents, videos, application programs and networked systems. In this context, we describe the development of a visual operating system (VOS). Under VOS, applications appear as objects in 3-D space. Users can graphically connect selected objects to enable communication between the respective applications. VOS includes a general concept of visual and object-oriented programming for tasks ranging from low-level programming up to high-level application configuration. In order to enable practical operation in an office or at home for many hours, the system should be very comfortable to use. Since the 3-D equipment typically used in virtual-reality applications (head-mounted displays, data gloves) is rather cumbersome and straining, we suggest using off-head displays and contact-free interaction techniques. In this article, we introduce an autostereoscopic 3-D display and associated video-based interaction techniques that allow viewpoint-dependent imaging (by head tracking) and visually controlled modification of data objects and links (by gaze tracking, e.g., to pick 3-D objects just by looking at them).

  5. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important as geo data becomes more widely accessible. The common statement that "data that cannot be seen does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited to the interactive visualization and exploration that geo data often requires. Support for 3D data was added only recently and at an extremely low level (WebGL), and even the 2D visualization capabilities of HTML (e.g., images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows 2D and 3D data to be described compactly, directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser, but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page, and how this data can be easily augmented with additional data from the Web via a few lines of JavaScript. We also show how embedded semantic data (via RDFa) allows the visualization to be linked back to the data's origin, providing an immersive interface for interacting with and modifying the original data. XML3D is a key input for standardization within the W3C Community Group on "Declarative 3D for the Web", chaired by DFKI, and has recently been selected as a Generic Enabler for the EU Future Internet initiative.

  6. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when should we show data in 3D is an on-going debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary/unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D as it allows `seeing' invisible phenomena, or designing and printing things that are used in e.g., surgeries, educational settings etc. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been sufficiently conducted in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with similar motivations as in Scivis community. Among many types of 3D visualizations, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environments. Such environments may be engaging and memorable for the viewers because they offer highly immersive experiences. However, it is not yet well-established if we should opt to show the data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  7. 3D Surveying, Modeling and Geo-Information System of the New Campus of ITB-Indonesia

    NASA Astrophysics Data System (ADS)

    Suwardhi, D.; Trisyanti, S. W.; Ainiyah, N.; Fajri, M. N.; Hanan, H.; Virtriana, R.; Edmarani, A. A.

    2016-10-01

    The new ITB-Indonesia campus at Jatinangor requires good facilities and infrastructure to support all campus activities, which cannot be separated from procurement and maintenance. Computer-based technology for the procurement and maintenance of facilities and infrastructure is known as Building Information Modeling (BIM). This technology is now more affordable, with free software that is easy to use and tailored to user needs. BIM has some limitations and requires complementary technologies, notably Geographic Information Systems (GIS). Both BIM and GIS require surveying data to visualize the landscape and buildings of the Jatinangor ITB campus. This paper presents an on-going internal service program conducted by researchers, academic staff, and students for the university. The program includes 3D surveying to meet the data requirements for 3D modeling of buildings in the CityGML and Industry Foundation Classes (IFC) data models. The 3D surveying produces point clouds from which the 3D models are built. Modeling is divided into low and high levels of detail: the low-level models are stored in a 3D CityGML database, while the high-level models, including interiors, are stored in a BIM server. The 3D models visualize the buildings and site of the Jatinangor ITB campus. For campus facility management, a geo-information system is being developed for planning, constructing, and maintaining Jatinangor ITB's facilities and infrastructure. The system uses openMAINT, an open-source solution for property and facility management.

  8. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective to analyze the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level Of Detail (LOD) strategy improves the performance of the visualization system.

  9. Evaluating a Novel 3D Stereoscopic Visual Display for Transanal Endoscopic Surgery: A Randomized Controlled Crossover Study.

    PubMed

    Di Marco, Aimee N; Jeyakumar, Jenifa; Pratt, Philip J; Yang, Guang-Zhong; Darzi, Ara W

    2016-01-01

    To compare surgical performance with transanal endoscopic surgery (TES) using a novel 3-dimensional (3D) stereoscopic viewer against the current modalities of a 3D stereoendoscope, 3D, and 2-dimensional (2D) high-definition monitors. TES is accepted as the primary treatment for selected rectal tumors. Current TES systems offer a 2D monitor, or a 3D image viewed directly via a stereoendoscope, necessitating an uncomfortable operating position. To address this and provide a platform for future image augmentation, a 3D stereoscopic display was created. Forty participants, of mixed experience level, completed a simulated TES task using 4 visual displays (the novel stereoscopic viewer and the currently utilized stereoendoscope, 3D, and 2D high-definition monitors) in a randomly allocated order. Primary outcome measures were: time taken, path length, and accuracy. Secondary outcomes were: task workload and participant questionnaire results. Median time taken and path length were significantly shorter with the novel viewer than with the 2D and 3D monitors, and not significantly different from the traditional stereoendoscope. Significant differences were found in accuracy, task workload, and questionnaire assessment in favor of the novel viewer, as compared to all 3 modalities. This novel 3D stereoscopic viewer allows surgical performance in TES equivalent to that achieved using the current stereoendoscope and superior to standard 2D and 3D displays, but with lower physical and mental demands for the surgeon. Participants expressed a preference for this system, ranking it more highly on a questionnaire. Clinical translation of this work has begun with the novel viewer being used in 5 TES patients.

  10. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  11. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    PubMed

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  12. 3D geospatial visualizations: Animation and motion effects on spatial objects

    NASA Astrophysics Data System (ADS)

    Evangelidis, Konstantinos; Papadopoulos, Theofilos; Papatheodorou, Konstantinos; Mastorokostas, Paris; Hilas, Constantinos

    2018-02-01

    Digital Elevation Models (DEMs), in combination with high-quality raster graphics, provide realistic three-dimensional (3D) representations of the globe (virtual globe) and an engaging navigation experience over the terrain through earth browsers. In addition, the adoption of interoperable geospatial markup languages (e.g., KML) and open programming libraries (JavaScript) makes it possible to create 3D spatial objects and convey any type of texture on them by utilizing open 3D representation models (e.g., Collada). One step further, WebGL frameworks (e.g., Cesium.js, three.js) allow animation and motion effects to be attributed to 3D models. However, major GIS-based functionalities combined with the above visualization capabilities, such as animation effects on selected areas of the terrain texture (e.g., sea waves) and motion effects on 3D objects moving along dynamically defined georeferenced terrain paths (e.g., an animal moving over a hill, or a big fish in an ocean), are not widely supported, at least by open geospatial applications or development frameworks. Towards this, we developed and made available to the research community an open geospatial software application prototype that provides high-level capabilities for dynamically creating user-defined virtual geospatial worlds populated by selected animated and moving 3D models on user-specified locations, paths, and areas. At the same time, the generated code may enhance existing open visualization frameworks and programming libraries dealing with 3D simulations with the geospatial aspect of a virtual world.
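    Moving a 3D model along a dynamically defined georeferenced path, as described above, ultimately reduces to sampling a position along a polyline each animation frame. A minimal arc-length-parameterized sketch (the helper name is hypothetical, not from the paper):

```python
import math

def interpolate_path(waypoints, t):
    """Position at parameter t in [0, 1] along a piecewise-linear
    path of (x, y, z) waypoints, parameterized by arc length."""
    if t <= 0.0:
        return waypoints[0]
    if t >= 1.0:
        return waypoints[-1]
    dist = lambda a, b: math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))
    lengths = [dist(a, b) for a, b in zip(waypoints, waypoints[1:])]
    target = t * sum(lengths)  # distance to travel along the path
    for a, b, seg in zip(waypoints, waypoints[1:], lengths):
        if target <= seg:
            f = target / seg  # fraction of the current segment
            return tuple(p + f * (q - p) for p, q in zip(a, b))
        target -= seg
    return waypoints[-1]
```

    A rendering loop would call this once per frame with an increasing t and place the animated model (and its orientation, from the segment direction) at the returned coordinates.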

  13. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  14. Single Trial EEG Patterns for the Prediction of Individual Differences in Fluid Intelligence.

    PubMed

    Qazi, Emad-Ul-Haq; Hussain, Muhammad; Aboalsamh, Hatim; Malik, Aamir Saeed; Amin, Hafeez Ullah; Bamatraf, Saeed

    2016-01-01

    Assessing a person's intelligence level is required in many situations, such as career counseling and clinical applications. EEG evoked potentials in an oddball task and fluid intelligence scores are correlated because both reflect cognitive processing and attention. A system for predicting an individual's fluid intelligence level from single-trial electroencephalography (EEG) signals is proposed. We employed 2D and 3D contents, with 34 subjects each, divided into low-ability (LA) and high-ability (HA) groups using Raven's Advanced Progressive Matrices (RAPM) test. Using a visual oddball cognitive task, the neural activity of each group was measured and analyzed over three midline electrodes (Fz, Cz, and Pz). To predict whether an individual belongs to the LA or HA group, features were extracted using wavelet decomposition of EEG signals recorded during the visual oddball task, and a support vector machine (SVM) was used as the classifier. Two types of Haar-wavelet-based features were extracted from the 0.3 to 30 Hz band of the EEG signals. Statistical wavelet features and wavelet coefficient features from the frequency bands 0.0-1.875 Hz (delta low) and 1.875-3.75 Hz (delta high) resulted in 100% and 98% prediction accuracies, respectively, for both 2D and 3D contents. The analysis of these frequency bands showed a clear difference between the LA and HA groups. Further, the discriminative value of the features was validated using statistical significance tests and inter-class and intra-class variation analysis. A statistical test also showed no effect of 2D versus 3D content on the assessment of fluid intelligence level. Comparisons with state-of-the-art techniques showed the superiority of the proposed system.

  15. Large Terrain Continuous Level of Detail 3D Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan

    2012-01-01

    This software solved the problem of displaying terrains that are usually too large to be displayed on standard workstations in real time. The software can visualize terrain data sets composed of billions of vertices, and can display these data sets at greater than 30 frames per second. The Large Terrain Continuous Level of Detail 3D Visualization Tool allows large terrains, which can be composed of billions of vertices, to be visualized in real time. It utilizes a continuous level of detail technique called clipmapping to support this. It offloads much of the work involved in breaking up the terrain into levels of details onto the GPU (graphics processing unit) for faster processing.
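    Clipmapping keeps full resolution only in a ring around the viewer, with each outer ring at coarser resolution. A toy sketch of the level-selection rule (illustrative only; the tool described above performs this kind of selection on the GPU):

```python
import math

def clipmap_level(distance, base_resolution=1.0, max_level=10):
    """Choose a clipmap LOD level from viewer distance: each level
    doubles the grid spacing, so the level grows as log2(distance)."""
    if distance <= base_resolution:
        return 0
    return min(int(math.log2(distance / base_resolution)), max_level)
```

    Because the level only increments when the distance doubles, billions of source vertices collapse to a roughly constant number of rendered vertices per frame, which is what makes real-time display of very large terrains feasible.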

  16. A Novel 2D Image Compression Algorithm Based on Two Levels DWT and DCT Transforms with Enhanced Minimize-Matrix-Size Algorithm for High Resolution Structured Light 3D Surface Reconstruction

    NASA Astrophysics Data System (ADS)

    Siddeq, M. M.; Rodrigues, M. A.

    2015-09-01

    Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. Among the many types of compression techniques, the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix (low- and high-frequency matrices, respectively); (2) apply a second-level DCT to the DC-Matrix to generate two arrays, a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a data table, then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC values and decoded AC coefficients are combined into one matrix, followed by the inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches, and is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and more accurately reconstructs surface patches in 3D.
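    The transform stage of the four-step pipeline can be illustrated at a single level. This sketch is not the authors' algorithm (which uses two levels of each transform plus the Minimize-Matrix-Size and arithmetic-coding stages); it only shows how a DWT/DCT combination separates a compact low-frequency band from high-frequency detail:

```python
import math

def dct_ii(x):
    """Naive O(n^2) DCT-II: concentrates a smooth signal's energy
    in the low-frequency coefficients."""
    n = len(x)
    return [sum(x[k] * math.cos(math.pi * (k + 0.5) * m / n) for k in range(n))
            for m in range(n)]

def analyze(signal):
    """One Haar DWT level, then DCT-II on the low band: a single-level
    analogue of step (1), yielding a 'DC' part and 'AC' detail."""
    s = 1.0 / math.sqrt(2.0)
    low = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    high = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return dct_ii(low), high
```

    For a constant (maximally smooth) input, all the energy lands in the first DC coefficient and the detail band is exactly zero, which is why subsequent quantization and entropy-coding stages compress such data so well.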

  17. Vertical visual features have a strong influence on cuttlefish camouflage.

    PubMed

    Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T

    2013-04-01

    Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.

  18. Advances in visual representation of molecular potentials.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representation of molecular properties in 3D space are summarized, and their applications in molecular modeling and rational drug design are introduced. The visual representation methods provide detailed insights into protein-ligand interactions and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, covering their electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with the results of higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design as well as other relevant fields in life science.

  19. Short-term outcomes of small-incision lenticule extraction (SMILE) for low, medium, and high myopia.

    PubMed

    Fernández, Joaquín; Valero, Almudena; Martínez, Javier; Piñero, David P; Rodríguez-Vallejo, Manuel

    2017-03-10

    To determine the safety, efficacy, and predictability of small-incision lenticule extraction at 6-month follow-up, depending on the level of the myopic refractive error. The surgeries were performed by a surgeon new to this technique. Seventy-one subjects with a mean age of 31.86 ± 5.57 years were included in this retrospective observational study. Subjects were divided into 3 groups depending on the preoperative spherical equivalent (SE): low group from -1.00 D to -3.00 D, medium from -3.25 D to -5.00 D, and high from -5.25 D to -7.00 D. Manifest refraction, corrected distance visual acuity (CDVA), and uncorrected distance visual acuity (UDVA) were measured before surgery and at 6 months after the treatment. In total, 1.4% of the eyes lost 1 line of CDVA after the procedure, whereas 95.8% remained unchanged and 2.8% gained 1 line. A significant undercorrection (p = 0.031) was found in the high myopia group (median -0.50 D), whereas the low and medium groups remained near emmetropia. In terms of efficacy, no statistically significant intergroup differences for postoperative UDVA (p = 0.282) were found. The vector analysis also showed undercorrection of the preoperative cylinder, even though the standard deviations decreased from 0.9 D on the x axis and 0.7 D on the y axis to 0.24 D and 0.27 D, respectively. Small-incision lenticule extraction might be a safe, effective, and predictable procedure even for inexperienced surgeons. No differences in efficacy were found among myopia levels, even though undercorrections were found for SE and cylinder in high myopia.

  20. Novel Visualization of Large Health Related Data Sets

    DTIC Science & Technology

    2015-03-01

    Health Record Data: A Systematic Review. B: McPeek Hinz E, Borland D, Shah H, West V, Hammond WE. Temporal Visualization of Diabetes Mellitus via Hemoglobin ...H, Borland D, McPeek Hinz E, West V, Hammond WE. Demonstration of Temporal Visualization of Diabetes Mellitus via Hemoglobin A1C Levels E... Hemoglobin A1c Levels and Multivariate Visualization of System-Wide National Health Service Data Using Radial Coordinates. (Copies in Appendix)

  1. Visualizing topography: Effects of presentation strategy, gender, and spatial ability

    NASA Astrophysics Data System (ADS)

    McAuliffe, Carla

    2003-10-01

    This study investigated the effect of different presentation strategies (2-D static visuals, 3-D animated visuals, and 3-D interactive, animated visuals) and gender on achievement, time-spent-on visual treatment, and attitude during a computer-based science lesson about reading and interpreting topographic maps. The study also examined the relationship of spatial ability and prior knowledge to gender, achievement, and time-spent-on visual treatment. Students enrolled in high school chemistry-physics were pretested and given two spatial ability tests. They were blocked by gender and randomly assigned to one of three levels of presentation strategy or the control group. After controlling for the effects of spatial ability and prior knowledge with analysis of covariance, three significant differences were found between the versions: (a) the 2-D static treatment group scored significantly higher on the posttest than the control group; (b) the 3-D animated treatment group scored significantly higher on the posttest than the control group; and (c) the 2-D static treatment group scored significantly higher on the posttest than the 3-D interactive animated treatment group. Furthermore, the 3-D interactive animated treatment group spent significantly more time on the visual screens than the 2-D static treatment group. Analyses of student attitudes revealed that most students felt the landform visuals in the computer-based program helped them learn, but not in a way they would describe as fun. Significant differences in attitude were found by treatment and by gender. In contrast to findings from other studies, no gender differences were found on either of the two spatial tests given in this study. Cognitive load, cognitive involvement, and solution strategy are offered as three key factors that may help explain the results of this study. 
Implications for instructional design include suggestions about the use of 2-D static, 3-D animated and 3-D interactive animations as well as a recommendation about the inclusion of pretests in similar instructional programs. Areas for future research include investigating the effects of combinations of presentation strategies, continuing to examine the role of spatial ability in science achievement, and gaining cognitive insights about what it is that students do when learning to read and interpret topographic maps.

  2. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and it can be used as a remote graphics server. The software provides improved support for adding programs (shaders) at the graphics processing unit (GPU) level for better performance, and it improves the messaging interface it exposes for use as a visualization server.

  3. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high definition (HD) animations of SCCOOS sensor instruments (e.g. REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations are aimed at providing researchers with a broader context of sensor locations relative to geologic characteristics, at serving as an educational resource for informal education settings and increasing public awareness, and at aiding researchers' proposals and presentations. These visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  4. Space Partitioning for Privacy Enabled 3D City Models

    NASA Astrophysics Data System (ADS)

    Filippovska, Y.; Wichmann, A.; Kada, M.

    2016-10-01

    Due to recent technological progress, the capture and processing of highly detailed 3D data has become extensive. Yet despite all prospects of potential uses, data that includes personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security. It becomes especially critical if the data is visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can be very different for each application. As privacy is a complex and versatile topic, the focus of this work lies particularly on the visualization of 3D urban data sets. For the purpose of privacy-enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery is anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines are smoothed, and transition zones are established between privacy regions to achieve a harmonious visual appearance. We demonstrate by example how the proposed method generates privacy-enabled 3D city models.
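    The cell-merging idea can be sketched with a discrete stand-in for the straight-skeleton partition: assign each point of a raster to its nearest building, then merge cells that share a privacy level. The building coordinates and privacy levels below are made up for illustration:

```python
import numpy as np

# Hypothetical building centroids (x, y) and per-building privacy levels
# (0 = public ... 2 = private).
buildings = np.array([[2.0, 2.0], [7.0, 3.0], [4.0, 8.0]])
privacy = np.array([0, 2, 1])

# Discrete "privacy cell" partition: every grid point joins the cell of its
# nearest building -- a simplified stand-in for the straight-skeleton
# partition of the open space used in the paper.
xs, ys = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)
d = np.linalg.norm(pts[:, None, :] - buildings[None, :, :], axis=2)
cell = d.argmin(axis=1)            # per-point building index
region = privacy[cell]             # merge cells sharing a privacy level
```

    A renderer would then pick a generalization strength per `region` value; the smoothing of borderlines and transition zones from the paper are omitted here.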

  5. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology and a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
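    The "touch a point, see its isosurface" interaction described above can be sketched in a few lines: select every voxel whose value falls within a tolerance of the touched voxel's value. The volume here is synthetic, and a real implementation would extract a polygonal surface (e.g. with marching cubes) rather than a voxel mask:

```python
import numpy as np

# Synthetic 3D scalar field standing in for a tomography/convection volume.
rng = np.random.default_rng(2)
vol = rng.random((16, 16, 16))

touched = (8, 8, 8)                  # voxel the user "touches"
iso = vol[touched]                   # its data value becomes the isovalue
mask = np.abs(vol - iso) < 0.02      # voxels on the (thickened) isosurface
```

    The mask always contains the touched voxel itself, matching the described behavior that the isosurface passes through the touched point.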

  6. Biomechanical ToolKit: Open-source framework to visualize and process biomechanical data.

    PubMed

    Barre, Arnaud; Armand, Stéphane

    2014-04-01

    The C3D file format is widely used in the biomechanical field by companies and laboratories to store motion capture system data. However, few software packages can visualize and modify the entirety of the data in a C3D file. Our objective was to develop an open-source and multi-platform framework to read, write, modify and visualize data from any motion analysis system using the standard (C3D) and proprietary file formats (used by many companies producing motion capture systems). The Biomechanical ToolKit (BTK) was developed to provide cost-effective and efficient tools for the biomechanical community to easily deal with motion analysis data. A large panel of operations is available to read, modify and process data through a C++ API, bindings for high-level languages (Matlab, Octave, and Python), and a standalone application (Mokka). All these tools are open-source and cross-platform and run on all major operating systems (Windows, Linux, MacOS X). Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
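    BTK's core job is reading the binary C3D format. As a minimal illustration (this is not BTK's API), the sketch below parses a few fields of the 512-byte C3D header with Python's struct module, using the word layout from the public C3D specification and a synthetic header in place of a real capture file:

```python
import struct

def parse_c3d_header(block: bytes):
    """Parse a few fields from a 512-byte C3D header block (little-endian).

    Per the public C3D specification: byte 1 is the parameter-block pointer,
    byte 2 the 0x50 magic; words 2-5 hold the 3D point count, analog samples
    per frame, and the first/last frame numbers.
    """
    param_block, magic = block[0], block[1]
    if magic != 0x50:
        raise ValueError("not a C3D file")
    n_points, n_analog, first, last = struct.unpack_from("<4H", block, 2)
    return {"points": n_points, "analog": n_analog, "frames": (first, last)}

# Synthetic header for demonstration: 33 markers, frames 1-240, no analog.
hdr = bytes([2, 0x50]) + struct.pack("<4H", 33, 0, 1, 240) + bytes(502)
info = parse_c3d_header(hdr)
```

    Real tooling (BTK, or other C3D readers) additionally parses the parameter section and the per-frame point/analog records, which this sketch leaves out.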

  7. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' different runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  8. Nerves of Steel: a Low-Cost Method for 3D Printing the Cranial Nerves.

    PubMed

    Javan, Ramin; Davidson, Duncan; Javan, Afshin

    2017-10-01

    Steady-state free precession (SSFP) magnetic resonance imaging (MRI) can demonstrate details down to the cranial nerve (CN) level. High-resolution three-dimensional (3D) visualization can now quickly be performed at the workstation. However, we are still limited by visualization on flat screens. The emerging technologies in rapid prototyping or 3D printing overcome this limitation. It comprises a variety of automated manufacturing techniques, which use virtual 3D data sets to fabricate solid forms in a layer-by-layer technique. The complex neuroanatomy of the CNs may be better understood and depicted by the use of highly customizable advanced 3D printed models. In this technical note, after manually perfecting the segmentation of each CN and brain stem on each SSFP-MRI image, initial 3D reconstruction was performed. The bony skull base was also reconstructed from computed tomography (CT) data. Autodesk 3D Studio Max, available through freeware student/educator license, was used to three-dimensionally trace the 3D reconstructed CNs in order to create smooth graphically designed CNs and to assure proper fitting of the CNs into their respective neural foramina and fissures. This model was then 3D printed with polyamide through a commercial online service. Two different methods are discussed for the key segmentation and 3D reconstruction steps, by either using professional commercial software, i.e., Materialise Mimics, or utilizing a combination of the widely available software Adobe Photoshop, as well as a freeware software, OsiriX Lite.

  9. Visualization of Coastal Data Through KML

    NASA Astrophysics Data System (ADS)

    Damsma, T.; Baart, F.; de Boer, G.; van Koningsveld, M.; Bruens, A.

    2009-12-01

    As a country that lies mostly below sea level, the Netherlands has a long history of coastal engineering and is world renowned for its leading role in Integrated Coastal Zone Management (ICZM). Within the framework of Building with Nature (a Dutch ICZM research program), an OPeNDAP server is used to host several datasets of the Dutch coast. Among these sets are bathymetric data, cross-shore profiles, and water level time series, some of which date back to the eighteenth century. The challenge of hosting this amount of data is one of dissemination and accessibility rather than a technical one (tracing, accessing, gathering, unifying and storing). With so many data in different sets, how can one easily know when and where data is available, and of what quality it is? Recent work using Google Earth as a visual front-end for this database has proven very encouraging. Taking full advantage of its four-dimensional (3D + time) visualization capabilities allows researchers, consultants and the general public to view, access and interact with the data. Within MATLAB, a set of generic tools has been developed for the easy creation of, among others:

    • A high resolution, time animated, historic bathymetry of the entire Dutch coast.
    • 3D curvilinear computation grids.
    • A 3D contour plot of the Westerschelde Estuary.
    • Time animated wind and water flow fields, both with traditional quiver diagrams and arrows that move with the flow field.
    • Various overviews of markers containing direct web links to data and metadata on the OPeNDAP server.

    Figure captions: wind field (arrows) and water level elevation for model calculations of Katrina (animated over 14 days); coastal cross sections (with exaggerated height) and 2D positions of high and low water lines (animated over 40 years).
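    A MATLAB-to-KML tool like those described ultimately boils down to emitting KML markup for Google Earth. A minimal Python sketch of a time-animated placemark follows; the placemark name, coordinates, and dates are made up:

```python
def kml_placemark(name, lon, lat, begin, end):
    """Return a minimal KML document with one time-animated point placemark."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>{name}</name>
    <TimeSpan><begin>{begin}</begin><end>{end}</end></TimeSpan>
    <Point><coordinates>{lon},{lat},0</coordinates></Point>
  </Placemark>
</kml>"""

doc = kml_placemark("CDIP mooring", 4.1, 52.0, "2006-07-01", "2006-07-14")
```

    Google Earth's time slider animates any feature carrying a TimeSpan, which is how the bathymetry and flow-field animations above are driven.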

    • Mass data graphics requirements for symbol generators: example 2D airport navigation and 3D terrain function

      NASA Astrophysics Data System (ADS)

      Schiefele, Jens; Bader, Joachim; Kastner, S.; Wiesemann, Thorsten; von Viebahn, Harro

      2002-07-01

      Next-generation cockpit display systems will display mass data, including terrain, obstacle, and airport databases. Display formats will be two-dimensional and eventually 3D. A prerequisite for the introduction of these new functions is the availability of certified graphics hardware. The paper describes the functionality and required features of an aviation-certified 2D/3D graphics board. This graphics board should be based on low-level and high-level API calls very similar to OpenGL. All software and the API must be aviation certified. As an example application, a 2D airport navigation function and a 3D terrain visualization are presented. The airport navigation format is based on a highly precise airport database following the EUROCAE ED-99/RTCA DO-272 specifications. Terrain resolution is based on the EUROCAE ED-98/RTCA DO-276 requirements.

    • Human microbiome visualization using 3D technology.

      PubMed

      Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C

      2011-01-01

      High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.
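      The 3D-heat-map idea above, with patients on one axis, species on another, and relative abundance as bar height, can be sketched with synthetic data; the interactive rendering engine itself (a game engine in the paper) is omitted:

```python
import numpy as np

# Synthetic patients x species relative-abundance matrix.
rng = np.random.default_rng(1)
abundance = rng.random((5, 8))
abundance /= abundance.sum(axis=1, keepdims=True)   # rows sum to 1

# Turn the matrix into the (patient, species, height) triples that a 3D
# heat map renders as bars; the third dimension carries the abundance.
px, sx = np.meshgrid(np.arange(5), np.arange(8), indexing="ij")
bars = np.stack([px.ravel(), sx.ravel(), abundance.ravel()], axis=1)
```

      Each row of `bars` corresponds to one bar in the 3D heat map, which is the extra layer of information a flat 2D heat map encodes only as color.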

    • Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

      NASA Astrophysics Data System (ADS)

      Li, Hong; Luo, Ting; Xu, Haiyong

      2017-06-01

      Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
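      The regional covariance descriptor at the heart of the method can be sketched with NumPy: stack per-pixel 2D features with depth, then take the covariance of the feature vectors inside a region. The feature set and region below are illustrative, not the paper's exact choices:

```python
import numpy as np

# Synthetic luminance image and depth map standing in for an S3D pair.
rng = np.random.default_rng(0)
img = rng.random((32, 32))
depth = rng.random((32, 32))

# Per-pixel feature vectors: intensity, |grad x|, |grad y|, and 3D depth.
gy, gx = np.gradient(img)
feats = np.stack([img, np.abs(gx), np.abs(gy), depth], axis=-1)

# One rectangular region, summarized by its 4x4 feature covariance matrix,
# which captures the inter-correlation of 2D and 3D feature dimensions.
region = feats[8:24, 8:24].reshape(-1, 4)
C = np.cov(region, rowvar=False)
```

      A saliency model would then compare such covariance matrices between regions (e.g. via a matrix distance) so that 2D and 3D cues are fused nonlinearly rather than summed channel by channel.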

    • Visualization of volumetric seismic data

      NASA Astrophysics Data System (ADS)

      Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

      2015-04-01

      Mostly driven by demands of high quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods proved to be valuable for the analysis of 3D seismic data cubes - especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing a lot of diffractions. Without further preprocessing these geological structures are often hidden behind the noise in the data. In this PICO presentation we will present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo, which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable an easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created which support the spatial understanding of the data.

    • Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

      PubMed

      Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

      2018-05-10

      Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

    • CAIPIRINHA accelerated SPACE enables 10-min isotropic 3D TSE MRI of the ankle for optimized visualization of curved and oblique ligaments and tendons.

      PubMed

      Kalia, Vivek; Fritz, Benjamin; Johnson, Rory; Gilson, Wesley D; Raithel, Esther; Fritz, Jan

      2017-09-01

      To test the hypothesis that a fourfold CAIPIRINHA accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle achieves equal or better quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial resolution oblique and curved planar MRI of the ankle and visualization of ligaments, tendons and joints equally well or better than a more time-consuming anisotropic 2D TSE MRI. • High-resolution 3D TSE MRI improves visualization of ankle structures. • Limitations of current 3D TSE MRI include long scan times. • 3D CAIPIRINHA SPACE allows now a fourfold-accelerated data acquisition. • 3D CAIPIRINHA SPACE enables high-spatial-resolution ankle MRI within 10 min. • 10-min 3D CAIPIRINHA SPACE produces equal-or-better quality than 20-min 2D TSE.

    • Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

      NASA Technical Reports Server (NTRS)

      Domik, Gitta; Alam, Salim; Pinkney, Paul

      1992-01-01

      This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques, by exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), by experimenting with tools still under research at the Department of Computer Science (e.g., use of glyphs for multidimensional data visualization), and by researching current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, any project activity is driven by the need to interpret astrophysical data more effectively.

    • Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

      NASA Astrophysics Data System (ADS)

      Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

      2013-03-01

      Stereoscopic 3D is undoubtedly one of the most attractive types of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that can be generated by 3D are still not precisely known; for example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire covering the most frequent symptoms linked with 3D is used to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades), and eye blinking are explored using data provided by the eye-tracker. The statistical analysis showed an important link of blinking duration and number of saccades with visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.
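The blink and saccade measures the analysis relies on can be derived from raw eye-tracker samples. A simplified sketch, assuming a fixed sample interval and a plain velocity threshold (both the thresholds and the sample data are invented; real trackers use more robust event detection):

```python
import math

def blink_stats(valid, dt):
    """Count blinks (runs of invalid/lost samples) and total blink time [s]."""
    blinks, total, run = 0, 0.0, 0
    for v in valid:
        if not v:
            run += 1
        elif run:
            blinks, total, run = blinks + 1, total + run * dt, 0
    if run:  # trailing blink at end of recording
        blinks, total = blinks + 1, total + run * dt
    return blinks, total

def count_saccades(xs, ys, dt, vel_thresh):
    """Count saccade onsets where gaze speed first exceeds vel_thresh."""
    n, in_saccade = 0, False
    for i in range(1, len(xs)):
        speed = math.hypot(xs[i] - xs[i - 1], ys[i] - ys[i - 1]) / dt
        if speed > vel_thresh and not in_saccade:
            n, in_saccade = n + 1, True
        elif speed <= vel_thresh:
            in_saccade = False
    return n
```

Tracking these two counts over the viewing session is what lets blink duration and saccade number be correlated with the questionnaire's fatigue scores.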

    • The role of extra-foveal processing in 3D imaging

      NASA Astrophysics Data System (ADS)

      Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

      2017-03-01

      The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (the visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).

    • Scalable Multi-Platform Distribution of Spatial 3d Contents

      NASA Astrophysics Data System (ADS)

      Klimke, J.; Hagedorn, B.; Döllner, J.

      2013-09-01

      Virtual 3D city models provide powerful user interfaces for communication of 2D and 3D geoinformation. Providing high quality visualization of massive 3D geoinformation in a scalable, fast, and cost efficient manner is still a challenging task. Especially for mobile and web-based system environments, software and hardware configurations of target systems differ significantly. This makes it hard to provide fast, visually appealing renderings of 3D data throughout a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data such as triangle meshes together with textures delivered from the server to the client, which makes them severely limited in the size and complexity of the models they can handle. In this paper, we introduce a new approach for provisioning massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model by a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high quality visualization of massive 3D models. By generating image tiles using this service, the 3D rendering process is shifted from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data-transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be compactly implemented for various devices and platforms.
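The client side of such a tile-based scheme reduces to slippy-map arithmetic: decide which pre-rendered tiles cover the viewport and fetch them. A minimal sketch; the 256-px tile size and the `tiles/{zoom}/{col}/{row}.png` naming are assumptions for illustration, not the authors' actual service API:

```python
def visible_tiles(viewport, tile_size=256):
    """Return (col, row) indices of pre-rendered image tiles covering a
    pixel-space viewport given as (x, y, width, height)."""
    x, y, w, h = viewport
    c0, c1 = x // tile_size, (x + w - 1) // tile_size
    r0, r1 = y // tile_size, (y + h - 1) // tile_size
    return [(c, r) for r in range(r0, r1 + 1) for c in range(c0, c1 + 1)]

def tile_path(zoom, col, row):
    """Hypothetical naming scheme for the pre-rendered oblique image tiles."""
    return f"tiles/{zoom}/{col}/{row}.png"

tiles = visible_tiles((300, 100, 600, 300))
urls = [tile_path(2, c, r) for c, r in tiles]
```

The thin client only fetches and composites these images; all actual 3D rendering stays on the server, which is exactly what decouples model complexity from transfer complexity.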

    • Symbolic modeling of human anatomy for visualization and simulation

      NASA Astrophysics Data System (ADS)

      Pommert, Andreas; Schubert, Rainer; Riemer, Martin; Schiemann, Thomas; Tiede, Ulf; Hoehne, Karl H.

      1994-09-01

      Visualization of human anatomy in a 3D atlas requires both spatial and more abstract symbolic knowledge. Within our 'intelligent volume' model which integrates these two levels, we developed and implemented a semantic network model for describing human anatomy. Concepts for structuring (abstraction levels, domains, views, generic and case-specific modeling, inheritance) are introduced. Model, tools for generation and exploration and applications in our 3D anatomical atlas are presented and discussed.

  1. Development and Analysis of New 3D Tactile Materials for the Enhancement of STEM Education for the Blind and Visually Impaired

    NASA Astrophysics Data System (ADS)

    Gonzales, Ashleigh

    Blind and visually impaired individuals have historically demonstrated low participation in the fields of science, technology, engineering, and mathematics (STEM). This low participation is reflected in both their education and career choices. Despite the establishment of the Americans with Disabilities Act (ADA) and the Individuals with Disabilities Education Act (IDEA), blind and visually impaired (BVI) students continue to fall academically below the level of their sighted peers in the areas of science and math. Although this deficit is created by many factors, this study focuses on the lack of adequate accessible image-based materials. Traditional methods for creating accessible image materials for the vision impaired have included detailed verbal descriptions accompanying an image or conversion into a simplified tactile graphic. It is very common that no substitute materials are provided to students in STEM courses because these are image-rich disciplines that often include a large number of images, diagrams, and charts. Additionally, images that are translated into text or simplified into basic line drawings are frequently inadequate because they rely on the interpretations of resource personnel who do not have expertise in STEM. Within this study, a method to create a new type of tactile 3D image was developed using High Density Polyethylene (HDPE) and Computer Numeric Control (CNC) milling. These tactile image boards preserve high levels of detail when compared to the original print image. To determine the discernibility and effectiveness of tactile images, these customizable boards were tested in various university classrooms as well as in participation studies which included BVI and sighted students. Results from these studies indicate that tactile images are discernable and were found to improve performance in lab exercises by as much as 60% for those with visual impairment. 
Incorporating tactile HDPE 3D images into a classroom setting was shown to increase the interest, participation and performance of BVI students suggesting that this type of 3D tactile image should be incorporated into STEM classes to increase the participation of these students and improve the level of training they receive in science and math.

  2. A Case-Based Study with Radiologists Performing Diagnosis Tasks in Virtual Reality.

    PubMed

    Venson, José Eduardo; Albiero Berni, Jean Carlo; Edmilson da Silva Maia, Carlos; Marques da Silva, Ana Maria; Cordeiro d'Ornellas, Marcos; Maciel, Anderson

    2017-01-01

    In radiology diagnosis, medical images are most often visualized slice by slice. At the same time, visualization based on 3D volumetric rendering of the data is considered useful and has broadened its field of application. In this work, we present a case-based study with 16 medical specialists to assess the diagnostic effectiveness of a Virtual Reality interface in fracture identification over 3D volumetric reconstructions. We developed a VR volume viewer compatible with both the Oculus Rift and handheld-based head-mounted displays (HMDs). We then performed user experiments to validate the approach in a diagnosis environment. In addition, we assessed the subjects' perception of the 3D reconstruction quality, ease of interaction, and ergonomics, as well as the users' opinions on how VR applications can be useful in healthcare. Among other results, we found a high level of effectiveness of the VR interface in identifying superficial fractures on head CTs.

  3. 3D Visualization of Urban Area Using Lidar Technology and CityGML

    NASA Astrophysics Data System (ADS)

    Popovic, Dragana; Govedarica, Miro; Jovanovic, Dusan; Radulovic, Aleksandra; Simeunovic, Vlado

    2017-12-01

    3D models of urban areas have found use in the modern world in navigation, cartography, urban-planning visualization, construction, tourism, and even new mobile navigation applications. With the advancement of technology, there are much better solutions for mapping the Earth's surface and spatial objects. A 3D city model enables exploration, analysis, management tasks, and presentation of a city. Urban areas consist of terrain surfaces, buildings, vegetation, and other parts of city infrastructure such as city furniture. Nowadays there are many different methods for collecting, processing, and publishing 3D models of an area of interest. LIDAR technology is one of the most effective methods for collecting data, due to the large amount of data that can be obtained with high density and geometric accuracy. CityGML is an open standard data model for storing the alphanumeric and geometric attributes of a city; it defines five levels of detail (LoD0, LoD1, LoD2, LoD3, LoD4). In this study, the main aim is to represent part of the urban area of Novi Sad using LIDAR technology for data collection and different methods for information extraction, with CityGML as the standard for 3D representation. Using a series of programs, it is possible to process the collected data, transform it to CityGML, and store it in a spatial database. The final product is a CityGML 3D model that can display textures and colours to give better insight into the city. This paper shows results for the first three levels of detail. They consist of a digital terrain model and buildings with differentiated rooftops and differentiated boundary surfaces. The complete model gives a realistic view of the 3D objects.
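The LoD1 step of such a pipeline reduces buildings to flat-roofed blocks: a footprint polygon extruded by a single height. A toy sketch of that extrusion, assuming footprints and heights have already been extracted from the LIDAR point cloud (the actual CityGML encoding is XML and is not shown here):

```python
def footprint_area(pts):
    """Shoelace area of a closed 2D footprint polygon [(x, y), ...]."""
    s = 0.0
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def extrude_lod1(pts, height):
    """LoD1 block model: flat roof polygon at the given height, plus the
    solid volume of the extruded prism."""
    roof = [(x, y, height) for x, y in pts]
    return {"roof": roof, "volume": footprint_area(pts) * height}

# A 10 m x 8 m footprint extruded to 6 m:
block = extrude_lod1([(0, 0), (10, 0), (10, 8), (0, 8)], 6.0)
```

LoD2 and LoD3 then refine exactly this block with differentiated roof shapes and boundary surfaces, which is what the paper's first three levels of detail show.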

  4. Direct Visualization of Orbital Flipping in Volborthite by Charge Density Analysis Using Detwinned Data

    NASA Astrophysics Data System (ADS)

    Sugawara, Kento; Sugimoto, Kunihisa; Fujii, Tatsuya; Higuchi, Takafumi; Katayama, Naoyuki; Okamoto, Yoshihiko; Sawa, Hiroshi

    2018-02-01

    The distribution of d-orbital valence electrons in volborthite [Cu3V2O7(OH)2 • 2H2O] was investigated by charge density analysis using multipole model refinement. Diffraction data were obtained by synchrotron radiation single-crystal X-ray diffraction experiments. Data reduction by detwinning of the multiple structural domains was performed using our developed software. In this study, using high-quality data, we demonstrated that the water molecules in volborthite are located, through hydrogen bonding, in cavities that consist of Kagome lattice layers of CuO4(OH)2 and pillars of V2O7. Final multipole refinements before and after the structural phase transition directly visualized the deformation electron density of the valence electrons. We successfully visualized directly the orbital flipping of the dx2-y2 orbital, the highest-lying of the 3d orbitals occupied by the d9 electrons in volborthite. The developed techniques and software can be employed for investigations of the structural properties of systems with multiple structural domains.

  5. Three dimensional fabric evolution of sheared sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hasan, Alsidqi; Alshibli, Khalid

    2012-10-24

    Granular particles undergo translation and rolling when they are sheared. This paper presents a three-dimensional (3D) experimental assessment of fabric evolution of sheared sand at the particle level. An F-75 Ottawa sand specimen was tested under an axisymmetric triaxial loading condition. It measured 9.5 mm in diameter and 20 mm in height. The quantitative evaluation was conducted by analyzing 3D high-resolution x-ray synchrotron micro-tomography images of the specimen at eight axial strain levels. The analyses included visualization of particle translation and rotation, and quantification of fabric orientation as shearing continued. Representative individual particles were successfully tracked and visualized to assess the mode of interaction between them. This paper discusses fabric evolution and compares the evolution of particles within and outside the shear band as shearing continues. Changes in particle orientation distributions are presented using fabric histograms and fabric tensors.
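The fabric tensor used for such orientation statistics is commonly the second-order moment tensor of the particle orientation unit vectors, N = (1/n) Σ u uᵀ, whose trace is 1 by construction and whose anisotropy grows as particles align. A minimal sketch (a standard construction, not the authors' code):

```python
import math

def fabric_tensor(orientations):
    """Second-order fabric tensor: average outer product u * u^T over
    normalized particle-orientation vectors. Trace is 1 by construction."""
    n = len(orientations)
    N = [[0.0] * 3 for _ in range(3)]
    for v in orientations:
        norm = math.sqrt(sum(c * c for c in v))
        u = [c / norm for c in v]
        for i in range(3):
            for j in range(3):
                N[i][j] += u[i] * u[j] / n
    return N

# Two particles aligned with the loading (z) axis:
N = fabric_tensor([(0.0, 0.0, 1.0), (0.0, 0.0, 2.0)])
```

An isotropic assembly gives N close to I/3; deviation of the eigenvalues from 1/3 quantifies the fabric anisotropy tracked across the eight strain levels.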

  6. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    NASA Astrophysics Data System (ADS)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the levels of buildings to cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  7. AntigenMap 3D: an online antigenic cartography resource.

    PubMed

    Barnett, J Lamar; Yang, Jialiang; Cai, Zhipeng; Zhang, Tong; Wan, Xiu-Feng

    2012-05-01

    Antigenic cartography is a useful technique to visualize and minimize errors in immunological data by projecting antigens onto a 2D or 3D map. However, a 2D map may not be sufficient to capture the antigenic relationships in high-dimensional immunological data. AntigenMap 3D presents an online, interactive, and robust 3D antigenic cartography construction and visualization resource. AntigenMap 3D can be applied to identify antigenic variants and vaccine strain candidates for pathogens with rapid antigenic variations, such as influenza A virus. http://sysbio.cvm.msstate.edu/AntigenMap3D
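Antigenic cartography is typically built by multidimensional scaling: place antigens in a low-dimensional map so that pairwise map distances approximate the immunological (e.g., HI-assay-derived) distances. A pure-Python stress-minimization sketch of that idea (a generic MDS by gradient descent, not AntigenMap 3D's actual algorithm; the distance matrix is invented):

```python
import math, random

def mds(D, dim=2, iters=3000, lr=0.02, seed=1):
    """Embed len(D) points so Euclidean distances approximate matrix D,
    by gradient descent on the raw stress  sum_{i<j} (d_ij - D_ij)^2."""
    rnd = random.Random(seed)
    n = len(D)
    X = [[rnd.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        grad = [[0.0] * dim for _ in range(n)]
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                delta = [X[i][k] - X[j][k] for k in range(dim)]
                d = math.sqrt(sum(c * c for c in delta)) or 1e-9
                coef = 2.0 * (d - D[i][j]) / d  # d(stress)/d(coords)
                for k in range(dim):
                    grad[i][k] += coef * delta[k]
        for i in range(n):
            for k in range(dim):
                X[i][k] -= lr * grad[i][k]
    return X

def stress(X, D):
    """Sum of squared errors between map and target distances."""
    return sum((math.dist(X[i], X[j]) - D[i][j]) ** 2
               for i in range(len(D)) for j in range(i + 1, len(D)))

# Hypothetical antigenic distances among three strains:
D = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
coords = mds(D)
```

Setting `dim=3` yields the 3D maps the resource argues are needed when 2D cannot reproduce the high-dimensional antigenic relationships.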

  8. High-Resolution Multibeam Sonar Survey and Interactive 3-D Exploration of the D-Day Wrecks off Normandy

    NASA Astrophysics Data System (ADS)

    Mayer, L. A.; Calder, B.; Schmidt, J. S.

    2003-12-01

    Historically, archaeological investigations use sidescan sonar and marine magnetometers as initial search tools. Targets are then examined through direct observation by divers, video, or photographs. Magnetometers can demonstrate the presence, absence, and relative susceptibility of ferrous objects but provide little indication of the nature of the target. Sidescan sonar can present a clear image of the overall nature of a target and its surrounding environment, but the sidescan image is often distorted and contains little information about the true 3-D shape of the object. Optical techniques allow precise identification of objects but suffer from very limited range, even in the best of situations. Modern high-resolution multibeam sonar offers an opportunity to cover a relatively large area from a safe distance above the target, while resolving the true three-dimensional (3-D) shape of the object with centimeter-level resolution. The combination of 3-D mapping and interactive 3-D visualization techniques provides a powerful new means to explore underwater artifacts. A clear demonstration of the applicability of high-resolution multibeam sonar to wreck and artifact investigations occurred when the Naval Historical Center (NHC), the Center for Coastal and Ocean Mapping (CCOM) at the University of New Hampshire, and Reson Inc. collaborated to explore the state of preservation and impact on the surrounding environment of a series of wrecks located off the coast of Normandy, France, adjacent to the American landing sectors. The survey augmented previously collected magnetometer and high-resolution sidescan sonar data using a Reson 8125 high-resolution focused multibeam sonar with 240, 0.5° (at nadir) beams distributed over a 120° swath. 
The team investigated 21 areas in water depths ranging from about three to 30 meters (m); some areas contained individual targets such as landing craft, barges, a destroyer, a troop carrier, etc., while others contained multiple smaller targets such as tanks and trucks. Of particular interest were the well-preserved caissons and blockships of the artificial Mulberry Harbor deployed off Omaha Beach. The near-field beam-forming capability of the Reson 8125 combined with 3-D visualization techniques provided an unprecedented level of detail, including the ability to recognize individual components of the wrecks (ramps, gun turrets, hatches, etc.), the state of preservation of the wrecks, and the impact of the wrecks on the surrounding seafloor. Visualization of these data on the GeoWall allows us to share the exploration of these important historical artifacts with both experts and the general public.

  9. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. 
The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer. This should provide numerous benefits over traditional 2D display and interaction modalities, and in our analysis we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  10. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery

    PubMed Central

    Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-01

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope’s field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a “perfect” visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon’s armamentarium. PMID:29581920

  11. First Clinical Applications of a High-Definition Three-Dimensional Exoscope in Pediatric Neurosurgery.

    PubMed

    Beez, Thomas; Munoz-Bendix, Christopher; Beseoglu, Kerim; Steiger, Hans-Jakob; Ahmadi, Sebastian A

    2018-01-24

    The ideal visualization tools in microneurosurgery should provide magnification, illumination, wide fields of view, ergonomics, and unobstructed access to the surgical field. The operative microscope was the predominant innovation in modern neurosurgery. Recently, a high-definition three-dimensional (3D) exoscope was developed. We describe the first applications in pediatric neurosurgery. The VITOM 3D exoscope (Karl Storz GmbH, Tuttlingen, Germany) was used in pediatric microneurosurgical operations, along with an OPMI PENTERO operative microscope (Carl Zeiss AG, Jena, Germany). Experiences were retrospectively evaluated with five-level Likert items regarding ease of preparation, image definition, magnification, illumination, field of view, ergonomics, accessibility of the surgical field, and general user-friendliness. Three operations were performed: supratentorial open biopsy in the supine position, infratentorial brain tumor resection in the park bench position, and myelomeningocele closure in the prone position. While preparation and image definition were rated equal for microscope and exoscope, the microscope's field of view, illumination, and user-friendliness were considered superior, while the advantages of the exoscope were seen in ergonomics and the accessibility of the surgical field. No complications attributed to visualization mode occurred. In our experience, the VITOM 3D exoscope is an innovative visualization tool with advantages over the microscope in ergonomics and the accessibility of the surgical field. However, improvements were deemed necessary with regard to field of view, illumination, and user-friendliness. While the debate of a "perfect" visualization modality is influenced by personal preference, this novel visualization device has the potential to become a valuable tool in the neurosurgeon's armamentarium.

  12. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. Clinical visual fatigue characteristics caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study tried to distinguish the differential effects on visual fatigue caused by 2-D and 3-D animations. A total of 23 volunteers were subjected to accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed for accommodation and vergence parameters again, directed to watch a 5-min 3-D video program, and then assessed again for the same parameters. The results indicate that, in some specific aspects, 3-D animations produced visual fatigue characteristics similar to those caused by 2-D animations. Furthermore, 3-D animations may lead to more exhaustion of both the ciliary and extra-ocular muscles, and these differential effects were more evident under high near-vision demand. The current results suggest that a set of indices could be proposed for the design of 3-D displays and equipment.

  13. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    The construction of 3D geological visualization systems has attracted growing attention in the GIS, computer modeling, simulation, and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but can also raise the level of professional geoscience education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim is to explore a rapid and low-cost development method for constructing such a system. First, borehole data stored in Excel spreadsheets were extracted and then stored in a SQL Server database on a web server. Second, the JDBC data access component provided the capability to access the database. Third, the user interface was implemented as an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Finally, borehole data acquired from a geological survey were used to test the system, and the results show that the methods presented here have practical application value.
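The data path described above (spreadsheets extracted into a server-side database, then queried for the viewer) can be mimicked compactly with the standard library. In this sketch CSV and SQLite stand in for the paper's Excel and SQL Server/JDBC stack, and the borehole records and column names are invented:

```python
import csv, io, sqlite3

# Hypothetical borehole records, standing in for the Excel spreadsheets.
RAW = """id,x,y,depth
BH-1,4521.0,1287.5,35.2
BH-2,4610.3,1301.8,78.9
BH-3,4477.6,1255.1,52.4
"""

def load_boreholes(conn, csv_text):
    """Parse borehole rows and store them in the server-side database."""
    conn.execute("CREATE TABLE borehole (id TEXT, x REAL, y REAL, depth REAL)")
    rows = [(r["id"], float(r["x"]), float(r["y"]), float(r["depth"]))
            for r in csv.DictReader(io.StringIO(csv_text))]
    conn.executemany("INSERT INTO borehole VALUES (?, ?, ?, ?)", rows)

def deeper_than(conn, min_depth):
    """Query boreholes exceeding a depth threshold, deepest first,
    as the viewer's querying function would."""
    cur = conn.execute(
        "SELECT id FROM borehole WHERE depth > ? ORDER BY depth DESC",
        (min_depth,))
    return [row[0] for row in cur]

conn = sqlite3.connect(":memory:")
load_boreholes(conn, RAW)
```

The parameterized queries (`?` placeholders) mirror what the JDBC layer would do server-side before handing results to the Java3D applet for rendering.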

  14. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  15. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  16. Calcium dobesilate reduces endothelin-1 and high-sensitivity C-reactive protein serum levels in patients with diabetic retinopathy

    PubMed Central

    Javadzadeh, Alireza; Ghorbanihaghjo, Amir; Adl, Farzad Hami; Andalib, Dima; Khojasteh-Jafari, Hassan

    2013-01-01

    Purpose To determine the benefits of calcium dobesilate (CaD) administration on endothelial function and inflammatory status in patients with diabetic retinopathy through measurement of serum levels of endothelin-1 and high-sensitivity C-reactive protein (hsCRP). Methods In a double-blind, randomized clinical trial, 90 patients with either severe nonproliferative or proliferative diabetic retinopathy and with blood glucose level of 120–200 mg/dl were randomly allocated to treatment with either CaD tablets (500 mg daily) or placebo for 3 months. Visual acuity, intraocular pressure, and macular status were assessed before the study. The serum levels of endothelin-1 and hsCRP were evaluated in both groups before and at the third month of the trial. Results The median serum level of hsCRP significantly differed between the groups 3 months following the CaD or placebo administration (2.2 mg/l in the CaD group versus 3.7 mg/l in the placebo group, p=0.01). The mean endothelin-1 serum level was 0.69±0.32 pg/ml in the CaD group and 0.86±0.30 pg/ml in the placebo group (p=0.01). Furthermore, in the CaD group, the serum levels of both endothelin-1 and hsCRP were significantly decreased 3 months after administration of CaD (p<0.001). Conclusions Administration of CaD in patients with diabetic retinopathy may reduce the serum levels of endothelin-1 and hsCRP. This might imply amelioration of endothelial function and inflammatory status following CaD therapy in these patients. PMID:23335852

  17. The Impact of Change in Visual Field on Health-Related Quality of Life: The Los Angeles Latino Eye Study

    PubMed Central

    Patino, Cecilia M.; Varma, Rohit; Azen, Stanley P.; Conti, David V.; Nichol, Michael B.; McKean-Cowdin, Roberta

    2010-01-01

    Purpose To assess the impact of change in visual field (VF) on change in health-related quality of life (HRQoL) at the population level. Design Prospective cohort study. Participants 3,175 Los Angeles Latino Eye Study (LALES) participants. Methods Objective measures of VF and visual acuity and self-reported HRQoL were collected at baseline and at 4-year follow-up. Analysis of covariance was used to evaluate mean differences in change of HRQoL across severity levels of change in VF and to test for effect modification by covariates. Main outcome measures General and vision-specific HRQoL. Results Of 3,175 participants, 1,430 (46%) showed a change in VF (≥1 decibel [dB]) and 1,715 (54%) reported a clinically important change (≥5 points) in vision-specific HRQoL. Progressive worsening and improvement in the VF were associated with increasing losses and gains in vision-specific HRQoL for the composite score and 10 of its 11 subscales (all Ptrends<0.05). Losses in VF >5 dB and gains >3 dB were associated with clinically meaningful losses and gains in vision-specific HRQoL, respectively. The areas of vision-specific HRQoL most affected by greater losses in VF were driving, dependency, role-functioning, and mental health. The effect of change in VF (loss or gain) on mean change in vision-specific HRQoL varied by level of baseline vision loss (in visual field and/or visual acuity) and by change in visual acuity (all P-interactions<0.05). Those with moderate/severe VF loss at baseline and a >5 dB loss in visual field during the study period had a mean loss in vision-specific HRQoL of 11.3 points, while those with no VF loss at baseline had a mean loss of 0.97 points. Similarly, with a >5 dB loss in VF and baseline visual acuity impairment (mild/severe) there was a loss in vision-specific HRQoL of 10.5 points, whereas with no visual acuity impairment at baseline there was a loss of vision-specific HRQoL of 3.7 points.
Conclusion Both losses and gains in VF produce clinically meaningful changes in vision-specific HRQoL. In the presence of pre-existing vision loss (VF and visual acuity), similar levels of visual field change produce greater losses in quality of life. PMID:21458074

  18. High Performance Computing and Cutting-Edge Analysis Can Open New Realms

    Science.gov Websites

    March 1, 2018. Website snippet: two people looking at 3D interactive graphical data in the Visualization Center; capabilities to visualize complex, 3D images of the wakes from multiple wind turbines so that we can better…

  19. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion when presenting high-dimensional data/graphics, due to their lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to convey spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely on the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive capability of the human visual system, its perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying volumetric images in true 3D space. Each "voxel" of a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system, which can then truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of complex 3D objects and of the spatial relationships among them.

  20. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

    Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that distributed terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped 3D terrain models from stereo image pairs, which can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  1. Visualization of Stereoscopic Anatomic Models of the Paranasal Sinuses and Cervical Vertebrae from the Surgical and Procedural Perspective

    ERIC Educational Resources Information Center

    Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei

    2017-01-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…

  2. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform that facilitates visually annotating and exploring the 3D physical architecture of genomes. Delta takes a Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model that represents a plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool that enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus, and found experimental evidence supporting this speculation through a literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. Contact: zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
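The abstract does not detail how Delta calls topologically associating domains from the contact matrix. As an illustration of the kind of computation involved, here is a toy insulation-score boundary finder, a common TAD-boundary heuristic in the Hi-C literature; this is a sketch under that assumption, not necessarily Delta's own method, and all names are hypothetical:

```python
import numpy as np

def insulation_score(contacts: np.ndarray, window: int = 2) -> np.ndarray:
    """For each bin, sum the contacts in a square window straddling the
    diagonal; low scores mark candidate TAD boundaries."""
    n = contacts.shape[0]
    scores = np.full(n, np.nan)
    for i in range(window, n - window):
        scores[i] = contacts[i - window:i, i + 1:i + 1 + window].sum()
    return scores

# Toy matrix with two 4-bin domains: contacts are dense within each
# domain and sparse across the boundary between bins 3 and 4.
block = np.ones((4, 4)) * 10
toy = np.kron(np.eye(2), block) + 1        # 8x8, weak inter-domain signal
scores = insulation_score(toy, window=2)
boundary = int(np.nanargmin(scores))       # lands at the domain boundary
```

On real Hi-C data the window would span tens of bins and the score is usually normalized, but the principle of a diagonal-crossing contact sum is the same.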

  3. A Comparative Analysis of 2D and 3D Tasks for Virtual Reality Therapies Based on Robotic-Assisted Neurorehabilitation for Post-stroke Patients

    PubMed Central

    Lledó, Luis D.; Díez, Jorge A.; Bertomeu-Motos, Arturo; Ezquerro, Santiago; Badesa, Francisco J.; Sabater-Navarro, José M.; García-Aracil, Nicolás

    2016-01-01

    Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily-life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments provide a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects in the task. In contrast, 2D virtual environments represent the tasks with a lower degree of realism using bidimensional graphics techniques. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in kinematic movement patterns when post-stroke patients performed a reaching task while viewing a virtual therapeutic game in two different types of virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Parameters such as maximum speed, reaction time, path length, and initial movement were analyzed from data acquired objectively by the robotic device to evaluate the influence of task visualization. At the end of the study, a usability survey was given to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories improved as they completed the therapy, suggesting increased motor recovery. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were greater in the 3D task, while success rates were very similar. In conclusion, the use of 2D environments in virtual therapy may be a more appropriate and comfortable way to perform upper-limb rehabilitation tasks for post-stroke patients, in terms of the accuracy of the resulting kinematic trajectories. PMID:27616992

  4. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    For the interpretation of data and the understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D infrastructure network is an initiative of EIT Raw Materials and aims at bringing together partners with 3D/4D visualisation infrastructure and 3D/4D modelling experience. The recently formed network interlinks hardware, software and expert knowledge in model visualization and output. A special focus is the linking of research, education and industry, integrating multi-disciplinary data, and visualizing the data in three and four dimensions. Through network collaboration, we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in model visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in virtual reality visualization, partners of EIT Raw Materials, but also external parties, will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments.
The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D modelling and 3D visualization, and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan Universität Leoben, Slovenian National Building and Civil Engineering Institute, Tallinn University of Technology and Turku University. The infrastructure within the network comprises different types of capturing and visualization hardware, including high-resolution projection cubes, VR walls, VR goggle solutions, high-resolution photogrammetry, UAVs, and lidar scanners, among others.

  5. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, which together aim to provide highly accurate 3D data for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of the opportunities they offer for fast and accurate model generation. For cultural heritage preservation, the representation of objects that are important for tourism, and their interactive visualization, 3D models are highly effective and intuitive for present-day users, who have stringent requirements and high expectations. Depending on the complexity of the objects in a specific case, various technological methods can be applied. The objects selected for this particular research are located in Bulgaria, a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of these undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses of the principles and technological processes needed for 3D modelling and visualization are presented, along with recent problems, efforts and developments in the interactive representation of precious objects and places in Bulgaria. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information, such as interactive maps, satellite imagery, sound, video and object-specific information, are described.
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  6. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    NASA Astrophysics Data System (ADS)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. Sound has considerable potential both as a means to enhance the impact of the S3D visual information and as a means to expand the S3D cinematic world beyond the boundaries of the visuals. This article reports on our research into the possibility of using auditory depth cues within the soundtrack to affect the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. The results, although not conclusive, indicate that the studied auditory cues can influence the audience's judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques for enhancing S3D cinema.
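The two cues studied map naturally onto simple signal parameters. A minimal sketch: the gain follows the standard inverse-distance law (about 6 dB of attenuation per doubling of distance), while the falling lowpass cutoff is a purely hypothetical stand-in for the high-frequency loss cue, not a model taken from the article:

```python
import math

def distance_cues(distance_m: float, ref_distance_m: float = 1.0,
                  ref_cutoff_hz: float = 16000.0) -> tuple[float, float]:
    """Return (gain_db, lowpass_cutoff_hz) for a source at distance_m.

    Gain follows the inverse-distance law (-6 dB per doubling of
    distance); the cutoff is a crude, hypothetical proxy for air
    absorption, halving the bandwidth with each doubling of distance.
    """
    ratio = distance_m / ref_distance_m
    gain_db = -20.0 * math.log10(ratio)   # inverse-distance law
    cutoff_hz = ref_cutoff_hz / ratio     # illustrative high-frequency loss
    return gain_db, cutoff_hz

gain, cutoff = distance_cues(2.0)  # a source twice as far as the reference
```

A sound designer would apply `gain_db` as a fader offset and `cutoff_hz` as a lowpass filter setting to suggest that a source sits deeper in the S3D scene.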

  7. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444

  8. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    …technique to present multiple 2D views was devised by D. Asimov. He assembled multiple two-dimensional scatter plot views of the hyper-dimensional… “…Viewing Multidimensional Data”, D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6(1), pp. 128-143, 1985. [2] “High-Dimensional…

  9. Digital relief generation from 3D models

    NASA Astrophysics Data System (ADS)

    Wang, Meili; Sun, Yu; Zhang, Hongming; Qian, Kun; Chang, Jian; He, Dongjian

    2016-09-01

    It is difficult to extend image-based relief generation to high-relief generation, as the images contain insufficient height information. To generate reliefs from three-dimensional (3D) models, it is necessary to extract the height fields from the model, but this can only generate bas-reliefs. To overcome this problem, an efficient method is proposed to generate bas-reliefs and high-reliefs directly from 3D meshes. To produce relief features that are visually appropriate, the 3D meshes are first scaled. 3D unsharp masking is used to enhance the visual features in the 3D mesh, and average smoothing and Laplacian smoothing are implemented to achieve better smoothing results. A nonlinear variable scaling scheme is then employed to generate the final bas-reliefs and high-reliefs. Using the proposed method, relief models can be generated from arbitrary viewing positions with different gestures and combinations of multiple 3D models. The generated relief models can be printed by 3D printers. The proposed method provides a means of generating both high-reliefs and bas-reliefs in an efficient and effective way under the appropriate scaling factors.
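The Laplacian smoothing step mentioned above can be sketched in a few lines: each vertex is moved a fraction of the way toward the centroid of its neighbors. This is a generic toy illustration of the technique (hypothetical names, a 1D "mesh" for brevity), not the paper's actual pipeline:

```python
import numpy as np

def laplacian_smooth(verts, neighbors, lam=0.5, iterations=10):
    """Simple Laplacian smoothing: move each vertex a fraction lam
    toward the centroid of its neighbors; vertices with no listed
    neighbors stay fixed (boundary constraint)."""
    v = verts.astype(float).copy()
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                new_v[i] = (1 - lam) * v[i] + lam * v[nbrs].mean(axis=0)
        v = new_v
    return v

# Toy example: a noisy height profile whose interior vertices are each
# connected to their left and right neighbors; endpoints are fixed.
heights = np.array([[0.0], [1.0], [0.0], [1.0], [0.0]])
nbrs = [[], [0, 2], [1, 3], [2, 4], []]
smoothed = laplacian_smooth(heights, nbrs, lam=0.5, iterations=20)
```

On a real triangle mesh `neighbors` would come from the edge connectivity, and the paper combines such smoothing with 3D unsharp masking and nonlinear scaling to form the final relief.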

  10. Visualization of electronic density

    DOE PAGES

    Grosso, Bastien; Cooper, Valentino R.; Pine, Polina; ...

    2015-04-22

    An atom’s volume depends on its electronic density. Although this density can only be evaluated exactly for hydrogen-like atoms, there are many excellent numerical algorithms and packages to calculate it for other materials. 3D visualization of charge density is challenging, especially when several molecular/atomic levels are intertwined in space. We explore several approaches to 3D charge density visualization, including the extension of an anaglyphic stereo visualization application based on the AViz package to larger structures such as nanotubes. We will describe motivations and potential applications of these tools for answering interesting questions about nanotube properties.
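The hydrogen-like atom is the one case with an exact closed form, as the abstract notes. As a sketch of the kind of scalar field such visualization tools render, the 1s density can be evaluated on a regular grid (atomic units, a0 = 1; grid parameters are arbitrary choices for illustration):

```python
import numpy as np

# Hydrogen 1s probability density in atomic units: rho(r) = exp(-2r)/pi.
n, half = 64, 8.0                       # grid points per axis, box half-width
ax = np.linspace(-half, half, n)
x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
r = np.sqrt(x**2 + y**2 + z**2)
rho = np.exp(-2.0 * r) / np.pi          # the scalar field a viewer would render

# Sanity check: the density should integrate to ~1 over the box.
dv = (ax[1] - ax[0]) ** 3
total = rho.sum() * dv
```

A 3D viewer (AViz, or any isosurface/volume renderer) would then draw isosurfaces or anaglyphic point clouds of `rho`; for many-electron materials the field comes from a numerical electronic-structure package instead of a formula.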

  11. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) made up 54.8% of the sample after the 3D movie, compared with 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared with an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie, history of car sickness, and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Conclusions Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators. PMID:23418530

  12. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three-dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie viewing) on self-reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self-administered to a convenience sample of 497 healthy adult volunteers before and after viewing 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) made up 54.8% of the sample after the 3D movie, compared with 14.1% after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to the 3D movie (compared with an increase of 2 times baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as the response variable showed significant effects of exposure to the 3D movie, history of car sickness, and headache, after adjusting for gender, age, self-reported anxiety level, attention to the movie, and show time. Seeing 3D movies can increase ratings of nausea, oculomotor, and disorientation symptoms, especially in women with a susceptible visual-vestibular system. Confirmatory studies including examination of clinical signs in viewers are needed to reach conclusive evidence on the effects of 3D vision on spectators.

  13. 2D and 3D visualization methods of endoscopic panoramic bladder images

    NASA Astrophysics Data System (ADS)

    Behrens, Alexander; Heisterklaus, Iris; Müller, Yannick; Stehle, Thomas; Gross, Sebastian; Aach, Til

    2011-03-01

    While several mosaicking algorithms have been developed to compose endoscopic images of the internal urinary bladder wall into panoramic images, the quantitative evaluation of these output images in terms of geometric distortion has often not been discussed. However, visualization of the distortion level is highly desirable for objective image-based medical diagnosis. We therefore present in this paper a method to create quality maps from the characteristics of the transformation parameters that were applied to the endoscopic images during the registration step of the mosaicking algorithm. For a global first impression, the quality maps are laid over the panoramic image and highlight image regions in pseudo-colors according to their local distortion. This illustration then helps surgeons to easily identify geometrically distorted structures in the panoramic image, allowing more objective medical interpretation of the shape and size of tumor tissue. Aside from introducing quality maps in 2-D, we also discuss a visualization method that maps panoramic images onto a 3-D spherical bladder model. Reference points are manually selected by the surgeon in the panoramic image and the 3-D model. The panoramic image is then mapped onto the 3-D surface by the Hammer-Aitoff equal-area projection using texture mapping. Finally, the textured bladder model can be freely moved in a virtual environment for inspection. Using a two-hemisphere bladder representation, references between panoramic image regions and their corresponding space coordinates within the bladder model are reconstructed. This additional spatial 3-D information thus assists the surgeon in navigation, documentation, and surgical planning.
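The Hammer-Aitoff equal-area projection used for the texture mapping has a simple closed form. A minimal sketch of the forward projection (standard formula; the function name and usage are illustrative, not from the paper):

```python
import math

def hammer_aitoff(lon_rad: float, lat_rad: float) -> tuple[float, float]:
    """Hammer-Aitoff equal-area projection of a sphere point
    (longitude in [-pi, pi], latitude in [-pi/2, pi/2]) to the plane."""
    denom = math.sqrt(1.0 + math.cos(lat_rad) * math.cos(lon_rad / 2.0))
    x = 2.0 * math.sqrt(2.0) * math.cos(lat_rad) * math.sin(lon_rad / 2.0) / denom
    y = math.sqrt(2.0) * math.sin(lat_rad) / denom
    return x, y

center = hammer_aitoff(0.0, 0.0)    # sphere center point maps to the origin
edge = hammer_aitoff(math.pi, 0.0)  # equatorial edge of the map
```

Because the projection preserves area, tissue regions keep their relative sizes in the panoramic texture, which is exactly the property wanted for judging tumor extent on the bladder sphere.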

  14. Using Geowall to Promote Undergraduate Research

    NASA Astrophysics Data System (ADS)

    Scec EIT Intern Team; Perry, S.; Jordan, T.

    2003-12-01

    The principal use of our Geowall system is to showcase the 3-D visualizations created by SCEC/EITR (Southern California Earthquake Center/Earthquake Information Technology Research) interns. These visualizations, called LA3D, are devised to educate the public, assist researchers, inspire students, and attract new interns. Under the design criteria that LA3D code must be object-oriented and open-source, and that all datasets reside in internet-accessible databases, our interns have made interactive visualizations of southern California's earthquakes, faults, landforms, and other topographic features that allow unlimited additions of new datasets and map objects. The interns built our Geowall system and made a unique contribution to the Geowall consortium when they devised a simple way to use Java3D to create and send images to the Geowall's projectors. The EIT interns are enormously proud of their accomplishments, and for most, working on LA3D has been the high point of their college careers. Their efforts have become central to testbed development of the system-level science that SCEC is orchestrating in its Community Modeling Environment. In addition, SCEC's Communication, Education and Outreach Program uses LA3D on the Geowall to communicate concepts about earthquakes and earthquake processes. Projected on the Geowall, LA3D also makes it easy to show students from elementary to high school age what can be accomplished if they keep learning math and science. Finally, we bring the Geowall to undergraduate research symposia and career-day open houses to project LA3D and attract additional students to our intern program, which to date has united students in computer science, engineering, geoscience, mathematics, communication, pre-law, and cinema. (Note: distribution copies of LA3D will be available in early 2004.)
The Southern California Earthquake Center Earthquake Information Technology Intern Team on this project: Adam Bongarzone, Hunter Francoeur, Lindsay Gordon, Nitin Gupta, Vipin Gupta, Jeff Hoeft, Shalini Jhatakia, Leonard Jimenez, Gideon Juve, Douglas Lam, Jed Link, Gavin Locke, Deepak Mehtani, Bill Paetzke, Nick Palmer, Brandee Pierce, Ryan Prose, Nitin Sharma, Ghunghroo Sinha, Jeremie Smith, Brandon Teel, Robert Weekly, Channing Wong, Jeremy Zechar.

  15. Design and Implementation of High-Performance GIS Dynamic Objects Rendering Engine

    NASA Astrophysics Data System (ADS)

    Zhong, Y.; Wang, S.; Li, R.; Yun, W.; Song, G.

    2017-12-01

    Spatio-temporal dynamic visualization is more vivid than static visualization, and it is important to use dynamic visualization techniques to reveal variation processes and trends vividly and comprehensively for geographical phenomena. The challenges posed by dynamic visualization of both 2D and 3D spatial dynamic targets, especially across different spatial data types, require a high-performance GIS dynamic object rendering engine. The main approach to improving a rendering engine that handles vast numbers of dynamic targets relies on key high-performance GIS technologies, including in-memory computing, parallel computing, GPU computing and high-performance algorithms. In this study, a high-performance GIS dynamic object rendering engine is designed and implemented to solve this problem using hybrid acceleration techniques. The engine combines GPU computing, OpenGL technology, and high-performance algorithms with the advantages of 64-bit in-memory computing. It processes 2D and 3D dynamic target data efficiently and runs smoothly with vast amounts of dynamic target data. A prototype of the engine was developed based on SuperMap GIS iObjects. Experiments were designed for large-scale spatial data visualization; the results showed that the engine achieves high performance, rendering two-dimensional and three-dimensional dynamic objects 20 times faster on the GPU than on the CPU.

  16. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

    Geologic problems and datasets are often 3D or 4D in nature, yet projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" that collapsed dimension in their minds, creating a cognitive challenge for the reader, especially new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of seafloor most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015), we will assess how well 3D movies enhance learning. The class will be split into two groups, one who learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other who learns with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?" with the opportunity to further elaborate on the effectiveness of the visualization.

  17. Evaluation of perception performance in neck dissection planning using eye tracking and attention landscapes

    NASA Astrophysics Data System (ADS)

    Burgert, Oliver; Örn, Veronika; Velichkovsky, Boris M.; Gessat, Michael; Joos, Markus; Strauß, Gero; Tietjen, Christian; Preim, Bernhard; Hertel, Ilka

    2007-03-01

    Neck dissection is a surgical intervention in which cervical lymph node metastases are removed. Accurate surgical planning is of high importance because wrong judgment of the situation can cause severe harm to the patient. Diagnostic perception of radiological images by a surgeon is an acquired skill that can be enhanced by training and experience. To improve accuracy in detecting pathological lymph nodes by newcomers and less experienced professionals, it is essential to understand how surgical experts solve relevant visual and recognition tasks. By using eye tracking, and especially the newly developed "attention landscape" visualizations, it could be determined whether visualization options, for example 3D models instead of CT data, help to increase the accuracy and speed of neck dissection planning. Thirteen ORL surgeons with different levels of expertise participated in this study. They inspected different visualizations of 3D models and original CT datasets of patients. Among other methods, we used scanpath analysis and attention landscapes to interpret the inspection strategies. It was possible to distinguish different patterns of visual exploratory activity. The experienced surgeons concentrated their attention on a limited number of areas of interest and made fewer saccadic eye movements, indicating better orientation.

  18. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Development in 3D modelling has also led people to visualize the environment in 3D. Visualizing objects in a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings, and the process of modelling the resulting point cloud, are explored. TLS is used to capture all building details in order to generate multiple LODs. In previous works, this task usually involved the integration of several sensors; in this research, the point cloud from TLS alone is processed to generate the LOD3 model, from which LOD2 and LOD1 are then generalized. The result of this research is a guided process for generating multi-LOD 3D building models starting from LOD3 using TLS. Lastly, the visualization of the multi-LOD model is also shown.

  19. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although its spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography. PMID:25803318

  20. Improved visualization of intracranial vessels with intraoperative coregistration of rotational digital subtraction angiography and intraoperative 3D ultrasound.

    PubMed

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

    Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping, using rotational digital subtraction angiography as a reference. We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and successfully reconstructed during the procedure. In 7 patients, the aneurysm was only partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 patients and successfully reconstructed in 18 (69%) despite clip-related artefacts. The overlap between the 3D-ioUS aneurysm volume and the preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although its spatial resolution is lower than that of standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography.
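
    The Dice coefficient used above to quantify the overlap between ultrasound and angiography aneurysm volumes has a compact definition; a minimal NumPy sketch (function name and usage are illustrative, not taken from the paper):

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap 2|A∩B| / (|A| + |B|) between two binary volumes."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both segmentations empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

    A value of 0.71, as reported for the aneurysm volumes, means the overlapping voxels amount to 71% of the average size of the two segmentations.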

  1. High Resolution Visualization Applied to Future Heavy Airlift Concept Development and Evaluation

    NASA Technical Reports Server (NTRS)

    FordCook, A. B.; King, T.

    2012-01-01

    This paper explores the use of high resolution 3D visualization tools for exploring the feasibility and advantages of future military cargo airlift concepts and evaluating compatibility with existing and future payload requirements. Realistic 3D graphic representations of future airlifters are immersed in rich, supporting environments to demonstrate concepts of operations to key personnel for evaluation, feedback, and development of critical joint support. Accurate concept visualizations are reviewed by commanders, platform developers, loadmasters, soldiers, scientists, engineers, and key principal decision makers at various stages of development. The insight gained through the review of these physically and operationally realistic visualizations is essential to refining design concepts to meet competing requirements in a fiscally conservative defense finance environment. In addition, highly accurate 3D geometric models of existing and evolving large military vehicles are loaded into existing and proposed aircraft cargo bays. In this virtual aircraft test-loading environment, materiel developers, engineers, managers, and soldiers can realistically evaluate the compatibility of current and next-generation airlifters with proposed cargo.

  2. Procedural 3d Modelling for Traditional Settlements. The Case Study of Central Zagori

    NASA Astrophysics Data System (ADS)

    Kitsakis, D.; Tsiliakou, E.; Labropoulos, T.; Dimopoulou, E.

    2017-02-01

    Over the last decades 3D modelling has been a fast-growing field in Geographic Information Science, extensively applied in various domains including the reconstruction and visualization of cultural heritage, especially monuments and traditional settlements. Technological advances in computer graphics allow for modelling of complex 3D objects with high precision and accuracy. Procedural modelling is an effective tool and a relatively novel method, based on the concept of algorithmic modelling. It is used to generate accurate 3D models and composite facade textures from sets of rules called Computer Generated Architecture (CGA) grammars, which define the objects' detailed geometry, rather than altering or editing the model manually. In this paper, procedural modelling tools have been exploited to generate the 3D model of a traditional settlement in the region of Central Zagori in Greece. The detailed geometries of the 3D models derived from applying shape grammars to selected footprints, and the process resulted in a final 3D model, optimally describing the built environment of Central Zagori, at three levels of detail (LoD). The final 3D scene was exported and published as a 3D web scene that can be viewed with the CityEngine 3D viewer, offering a walkthrough of the whole model, as in virtual reality or game environments. This research work addresses issues regarding texture precision, LoD for 3D objects and interactive visualization within one 3D scene, as well as the effectiveness of large-scale modelling, along with the benefits and drawbacks of procedural modelling techniques in the field of cultural heritage and more specifically in 3D modelling of traditional settlements.
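
    The rule-driven derivation behind CGA grammars can be illustrated with a toy label-rewriting system (rule names and productions are invented for illustration; real CGA rules also carry split geometry, dimensions and attributes):

```python
def derive(shape, rules, depth):
    """Repeatedly rewrite shape labels using the production rules.
    Labels without a rule are terminal and pass through unchanged."""
    shapes = [shape]
    for _ in range(depth):
        shapes = [s2 for s in shapes for s2 in rules.get(s, [s])]
    return shapes

# A facade splits into floors, each floor into wall and window tiles.
rules = {"facade": ["floor", "floor", "floor"],
         "floor": ["wall", "window", "wall"]}
```

    Calling `derive("facade", rules, 2)` expands the facade into three floors and each floor into three tiles, yielding nine terminal shapes, mirroring how a CGA grammar refines a footprint into detailed geometry without manual editing.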

  3. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    NASA Astrophysics Data System (ADS)

    Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.

    1997-05-01

    We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with the volume-rendered image. This technique facilitates visualization of the 3D spatial relationship between vasculature and surrounding organs by overlapping the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlaid on a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other brain structures. The two images are fused, after adjustment of the contrast and brightness levels of each image, in such a way that both the vasculature and the brain structure are well visualized: either by selecting the per-pixel maximum of the two images or by assigning a different color table to each image. The resulting image visualizes both the brain structure and the vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
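
    The fusion step described above, taking the per-pixel maximum or assigning separate color tables, can be sketched in a few lines of NumPy; this is a minimal illustration of the general idea, not the authors' implementation:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3D volume along one axis."""
    return np.asarray(volume).max(axis=axis)

def fuse_max(vessels, anatomy):
    """Fuse two grayscale images by taking the per-pixel maximum."""
    return np.maximum(vessels, anatomy)

def fuse_color(vessels, anatomy):
    """Fuse by giving each image its own color channels:
    vasculature in red, anatomical background in green/blue."""
    return np.stack([vessels, anatomy, anatomy], axis=-1)
```

    The max-fusion keeps whichever structure is brighter at each pixel, while the color variant keeps the two sources visually separable.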

  4. Visualizing dynamic geosciences phenomena using an octree-based view-dependent LOD strategy within virtual globes

    NASA Astrophysics Data System (ADS)

    Li, Jing; Wu, Huayi; Yang, Chaowei; Wong, David W.; Xie, Jibo

    2011-09-01

    Geoscientists build dynamic models to simulate various natural phenomena for a better understanding of our planet. Interactive visualizations of these geoscience models and their outputs through virtual globes on the Internet can help the public understand the dynamic phenomena related to the Earth more intuitively. However, challenges arise when the volume of four-dimensional data (4D), 3D in space plus time, is huge for rendering. Datasets loaded from geographically distributed data servers require synchronization between ingesting and rendering data. Also the visualization capability of display clients varies significantly in such an online visualization environment; some may not have high-end graphic cards. To enhance the efficiency of visualizing dynamic volumetric data in virtual globes, this paper proposes a systematic framework, in which an octree-based multiresolution data structure is implemented to organize time series 3D geospatial data to be used in virtual globe environments. This framework includes a view-dependent continuous level of detail (LOD) strategy formulated as a synchronized part of the virtual globe rendering process. Through the octree-based data retrieval process, the LOD strategy enables the rendering of the 4D simulation at a consistent and acceptable frame rate. To demonstrate the capabilities of this framework, data of a simulated dust storm event are rendered in World Wind, an open source virtual globe. The rendering performances with and without the octree-based LOD strategy are compared. The experimental results show that using the proposed data structure and processing strategy significantly enhances the visualization performance when rendering dynamic geospatial phenomena in virtual globes.
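
    The view-dependent refinement criterion at the heart of such an LOD strategy can be sketched as follows: an octree node is subdivided while its apparent (projected) size exceeds a screen-space tolerance. This is a hypothetical sketch of the general idea, with illustrative parameter names and thresholds, not the paper's algorithm:

```python
def select_lod(node_size, viewer_distance, tau=0.01, max_depth=8):
    """Return the octree depth at which a node's apparent size
    (edge length / viewer distance) first drops below tolerance tau."""
    depth, size = 0, float(node_size)
    while depth < max_depth and size / max(viewer_distance, 1e-9) > tau:
        size /= 2.0   # each octree level halves the node edge length
        depth += 1
    return depth
```

    Nearby nodes refine to deep levels while distant nodes stay coarse, which is what keeps the rendering of time series volumes at a consistent frame rate.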

  5. 3D Imaging of Microbial Biofilms: Integration of Synchrotron Imaging and an Interactive Visualization Interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, Mathew; Marshall, Matthew J.; Miller, Erin A.

    2014-08-26

    X-ray microtomography imaging makes it possible to study the interactions of structured microbial communities known as "biofilms" within complex matrices. Feature detection and image processing for this type of data focus on efficiently identifying and segmenting biofilms and bacteria in the datasets. The datasets are very large and often require manual intervention due to low contrast between objects and high noise levels. Thus new software is required for the effective interpretation and analysis of the data. This work describes the development and application of tools to analyze and visualize high-resolution X-ray microtomography datasets.

  6. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
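
    As a rough illustration of scoring a pose by comparing image gradients between a simulated projection of the CT and the measured x-ray image, one can use a normalized gradient correlation; this sketch is a simplified stand-in for, not a reproduction of, the gradient information similarity metric used in the study:

```python
import numpy as np

def gradient_similarity(fixed, moving):
    """Normalized correlation of 2D image gradients (1.0 for identical images)."""
    gx_f, gy_f = np.gradient(np.asarray(fixed, dtype=float))
    gx_m, gy_m = np.gradient(np.asarray(moving, dtype=float))
    num = (gx_f * gx_m + gy_f * gy_m).sum()
    den = np.sqrt((gx_f**2 + gy_f**2).sum() * (gx_m**2 + gy_m**2).sum())
    return num / den if den > 0 else 0.0
```

    An optimizer such as CMA-ES would then search over the 3D patient pose, re-projecting the CT and maximizing this score against the low-dose fluoroscopic view(s).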

  7. Amazing Space: Explanations, Investigations, & 3D Visualizations

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  8. A novel approach to segmentation and measurement of medical image using level set methods.

    PubMed

    Chen, Yao-Tien

    2017-06-01

    The study proposes a novel approach for segmentation and visualization, together with surface area and volume measurements, for brain medical image analysis. The proposed method comprises edge detection and Bayesian-based level set segmentation, surface and volume rendering, and surface area and volume measurement for 3D objects of interest (i.e., brain tumor, brain tissue, or whole brain). Two extensions based on edge detection and the Bayesian level set are first used to segment 3D objects. Ray casting and a modified marching cubes algorithm are then adopted to facilitate volume and surface visualization of the medical-image dataset. To provide physicians with more useful information for diagnosis, the surface area and volume of an examined 3D object are calculated by the techniques of linear algebra and surface integration. Experimental results are reported in terms of 3D object extraction, surface and volume rendering, and surface area and volume measurements for medical image analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
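
    For a closed triangle mesh such as marching-cubes output, surface area and enclosed volume follow directly from linear algebra (cross products) and the divergence theorem. A minimal sketch, assuming a closed and consistently oriented mesh (not the paper's exact formulation):

```python
import numpy as np

def mesh_surface_area(verts, faces):
    """Sum of triangle areas via the cross product."""
    tri = verts[faces]                                  # (n_faces, 3, 3)
    a, b = tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def mesh_volume(verts, faces):
    """Enclosed volume as the sum of signed tetrahedra against the origin
    (divergence theorem); requires a closed, consistently oriented mesh."""
    tri = verts[faces]
    signed = np.einsum('ij,ij->i', tri[:, 0], np.cross(tri[:, 1], tri[:, 2]))
    return abs(signed.sum()) / 6.0
```

    Applied to the mesh of a segmented tumor, these two quantities are exactly the value-added measurements the abstract describes.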

  9. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  10. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  11. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
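
    The mapping step, data dimensions assigned to visualization attributes plus a default visibility dimension, can be sketched with a small table abstraction (all names here are illustrative, not taken from the patent):

```python
import numpy as np

def map_attributes(table, mapping):
    """table: dict of column name -> array of per-point values;
    mapping: visualization attribute -> column name.
    Returns per-object attribute arrays plus a visibility dimension."""
    n = len(next(iter(table.values())))
    attrs = {attr: np.asarray(table[col]) for attr, col in mapping.items()}
    attrs.setdefault("visible", np.ones(n, dtype=bool))  # all visible by default
    return attrs
```

    For example, `map_attributes({"mass": [...], "dist": [...]}, {"x": "mass", "color": "dist"})` binds one data dimension to spatial position and another to color, leaving visibility to be updated later from the selected mappings.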

  12. Multi-scale Visualization of Remote Sensing and Topographic Data of the Amazon Rain Forest for Environmental Monitoring of the Petroleum Industry.

    NASA Astrophysics Data System (ADS)

    Fonseca, L.; Miranda, F. P.; Beisl, C. H.; Souza-Fonseca, J.

    2002-12-01

    PETROBRAS (the Brazilian national oil company) built a pipeline to transport crude oil from the Urucu River region to a terminal in the vicinities of Coari, a city located in the right margin of the Solimoes River. The oil is then shipped by tankers to another terminal in Manaus, capital city of the Amazonas state. At the city of Coari, changes in water level between dry and wet seasons reach up to 14 meters. This strong seasonal character of the Amazonian climate gives rise to four distinct scenarios in the annual hydrological cycle: low water, high water, receding water, and rising water. These scenarios constitute the main reference for the definition of oil spill response planning in the region, since flooded forests and flooded vegetation are the most sensitive fluvial environments to oil spills. This study focuses on improving information about oil spill environmental sensitivity in Western Amazon by using 3D visualization techniques to help the analysis and interpretation of remote sensing and digital topographic data, as follows: (a) 1995 low flood and 1996 high flood JERS-1 SAR mosaics, band LHH, 100m pixel; (b) 2000 low flood and 2001 high flood RADARSAT-1 W1 images, band CHH, 30m pixel; (c) 2002 high flood airborne SAR images from the SIVAM project (System for Surveillance of the Amazon), band LHH, 3m pixel and band XHH, 6m pixel; (d) GTOPO30 digital elevation model, 30' resolution; (e) Digital elevation model derived from topographic information acquired during seismic surveys, 25m resolution; (f) panoramic views obtained from low altitude helicopter flights. The methodology applied includes image processing, cartographic conversion and generation of value-added product using 3D visualization. A semivariogram textural classification was applied to the SAR images in order to identify areas of flooded forest and flooded vegetation. The digital elevation models were color shaded to highlight subtle topographic features. 
Both datasets were then converted to the same cartographic projection and inserted into the Fledermaus 3D visualization environment. 3D visualization proved to be an important aid in understanding the spatial distribution pattern of the environmentally sensitive vegetation cover. The dynamics of the hydrological cycle were depicted at a basin-wide scale, revealing new geomorphic information relevant to assessing the environmental risk of oil spills. Results demonstrate that pipelines constitute an environmentally safer option for oil transportation in the region when compared to fluvial tanker routes.
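
    The semivariogram texture measure mentioned above compares pixel pairs at a fixed separation; a minimal sketch of the empirical semivariance for a single horizontal lag (the actual classifier combines several lags and directions):

```python
import numpy as np

def semivariance(img, lag):
    """gamma(h) = half the mean squared difference of pixels 'lag' apart
    (horizontal direction only, for simplicity)."""
    img = np.asarray(img, dtype=float)
    diff = img[:, lag:] - img[:, :-lag]
    return 0.5 * np.mean(diff ** 2)
```

    Smooth SAR regions such as open water yield low semivariance, while rough textures such as flooded forest yield high values, which is the contrast a semivariogram textural classification exploits.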

  13. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education, learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using Visualizer, software specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  14. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data from 20 cavernous regions (both sides) in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experience in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be comprehended three-dimensionally from 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high-resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves, including segments surrounded by flowing blood, as in the cavernous sinus region.

  15. Real-time three-dimensional imaging of epidermal splitting and removal by high-definition optical coherence tomography.

    PubMed

    Boone, Marc; Draye, Jean Pierre; Verween, Gunther; Pirnay, Jean-Paul; Verbeken, Gilbert; De Vos, Daniel; Rose, Thomas; Jennes, Serge; Jemec, Gregor B E; Del Marmol, Véronique

    2014-10-01

    While real-time 3-D evaluation of human skin constructs is needed, only 2-D non-invasive imaging techniques are available. The aim of this paper is to evaluate the potential of high-definition optical coherence tomography (HD-OCT) for real-time 3-D assessment of epidermal splitting and decellularization. Human skin samples were incubated with four different agents: Dispase II, NaCl 1 M, sodium dodecyl sulphate (SDS) and Triton X-100. Epidermal splitting, the dermo-epidermal junction, acellularity and the 3-D architecture of dermal matrices were evaluated by HD-OCT before and after incubation. Real-time 3-D HD-OCT assessment was compared with 2-D en face assessment by reflectance confocal microscopy (RCM). (Immuno)histopathology was used as control. HD-OCT imaging allowed real-time 3-D visualization of the impact of the selected agents on epidermal splitting, the dermo-epidermal junction, dermal architecture, vascular spaces and cellularity. RCM has better resolution (1 μm) than HD-OCT (3 μm), permitting differentiation of individual collagen fibres, but HD-OCT imaging has deeper penetration (570 μm) than RCM imaging (200 μm). Dispase II and NaCl treatments were found to be equally efficient in removing the epidermis from human split-thickness skin allografts. However, a different epidermal splitting level at the dermo-epidermal junction could be observed, and was confirmed by immunolabelling of collagen types IV and VII. Epidermal splitting occurred at the level of the lamina densa with Dispase II and above the lamina densa (in the lamina lucida) with NaCl. The 3-D architecture of the dermal papillae and dermis was more affected by Dispase II on HD-OCT, which corresponded with histopathologic (orcein staining) fragmentation of elastic fibres. With SDS treatment, epidermal removal was incomplete, as remnants of the epidermal basal cell layer remained attached to the basement membrane on the dermis. With Triton X-100 treatment, the epidermis was not removed. In conclusion, HD-OCT imaging permits real-time 3-D visualization of the impact of selected agents on human skin allografts. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Understanding the contributions of visual stimuli to contextual fear conditioning: A proof-of-concept study using LCD screens.

    PubMed

    Murawski, Nathen J; Asok, Arun

    2017-01-10

    The precise contribution of visual information to contextual fear learning and discrimination has remained elusive. To better understand this contribution, we coupled the context pre-exposure facilitation effect (CPFE) fear conditioning paradigm with presentations of distinct visual scenes displayed on 4 LCD screens surrounding a conditioning chamber. Adult male Long-Evans rats received non-reinforced context pre-exposure on Day 1, an immediate 1.5 mA foot shock on Day 2, and a non-reinforced context test on Day 3. Rats were pre-exposed to either digital Context (dCtx) A, dCtx B, a distinct Ctx C, or no context on Day 1. Digital Contexts A and B were identical except for the visual image displayed on the LCD screens. Immediate shock and retention testing occurred in dCtx A. Rats pre-exposed to dCtx A showed the CPFE, with significantly higher levels of freezing compared to controls. Rats pre-exposed to dCtx B failed to show the CPFE, with freezing that did not significantly differ from controls. These results suggest that visual information contributes to contextual fear learning and that visual components of the context can be manipulated via LCD screens. Our approach offers a simple modification to contextual fear conditioning paradigms whereby the visual features of a context can be manipulated to better understand the factors that contribute to contextual fear discrimination and generalization. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  17. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally related images). The visual texture of images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.

  18. Immersive Visual Analytics for Transformative Neutron Scattering Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Daniel, Jamison R; Drouhard, Margaret

    The ORNL Spallation Neutron Source (SNS) provides the most intense pulsed neutron beams in the world for scientific research and development across a broad range of disciplines. SNS experiments produce large volumes of complex data that are analyzed by scientists with varying degrees of experience using 3D visualization and analysis systems. However, it is notoriously difficult to achieve proficiency with 3D visualizations. Because 3D representations are key to understanding the neutron scattering data, scientists are unable to analyze their data in a timely fashion, resulting in inefficient use of the limited and expensive SNS beam time. We believe a more intuitive interface for exploring neutron scattering data can be created by combining immersive virtual reality technology with high performance data analytics and human interaction. In this paper, we present our initial investigations of immersive visualization concepts as well as our vision for an immersive visual analytics framework that could lower the barriers to 3D exploratory data analysis of neutron scattering data at the SNS.

  19. Fall Prevention Self-Assessments Via Mobile 3D Visualization Technologies: Community Dwelling Older Adults' Perceptions of Opportunities and Challenges.

    PubMed

    Hamm, Julian; Money, Arthur; Atwal, Anita

    2017-06-19

    In the field of occupational therapy, the assistive equipment provision process (AEPP) is a prominent preventive strategy used to promote independent living and to identify and alleviate fall risk factors via the provision of assistive equipment within the home environment. Current practice involves the use of paper-based forms that include 2D measurement guidance diagrams that aim to communicate the precise points and dimensions that must be measured in order to make AEPP assessments. There are, however, issues such as "poor fit" of equipment due to inaccurate measurements taken and recorded, resulting in more than 50% of equipment installed within the home being abandoned by patients. This paper presents a novel 3D measurement aid prototype (3D-MAP) that provides enhanced measurement and assessment guidance to patients via the use of 3D visualization technologies. The purpose of this study was to explore the perceptions of older adults with regard to the barriers and opportunities of using the 3D-MAP application as a tool that enables patient self-delivery of the AEPP. Thirty-three community-dwelling older adults participated in interactive sessions with a bespoke 3D-MAP application utilizing the retrospective think-aloud protocol and semistructured focus group discussions. The system usability scale (SUS) questionnaire was used to evaluate the application's usability. Thematic template analysis was carried out on the SUS item discussions, think-aloud, and semistructured focus group data. The quantitative SUS results revealed that the application may be described as having "marginal-high" and "good" levels of usability, along with strong agreement with items relating to the usability (P=.004) and learnability (P<.001) of the application. Four high-level themes emerged from the think-aloud and focus group discussions: (1) perceived usefulness (PU), (2) perceived ease of use (PEOU), (3) application use (AU), and (4) self-assessment (SA). The application was seen as a useful tool to enhance visualization of measurement guidance and also to promote independent living and ownership of care, and potentially to reduce waiting times. Several design and functionality recommendations emerged from the study, such as a need to manipulate the view and position of the 3D furniture models, and a need for clearer visual prompts and an alternative keyboard interface for measurement entry. Participants perceived the 3D-MAP application as a useful tool that has the potential to make significant improvements to the AEPP, not only in terms of accuracy of measurement, but also by potentially enabling older adult patients to carry out the data collection element of the AEPP themselves. Further research is needed to adapt the 3D-MAP application in line with the study outcomes, to establish its clinical utility with regard to the effectiveness, efficiency, accuracy, and reliability of measurements recorded using the application, and to compare it with 2D measurement guidance leaflets. ©Julian Hamm, Arthur Money, Anita Atwal. Originally published in JMIR Human Factors (http://humanfactors.jmir.org), 19.06.2017.

  20. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    PubMed

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers; however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that better match human perceptual and interaction capabilities to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  1. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not be addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605
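    The bandwidth-dependent frame rates reported above follow from a simple budget: encoded bits per frame versus link capacity. A back-of-envelope sketch (the per-pixel H.264 bit budget is an assumed typical value, not a figure from the paper):

```python
def max_frame_rate(link_mbps, width, height, stereo=True, bits_per_pixel=0.1):
    """Ceiling frame rate a link can sustain for an H.264-encoded stream.

    bits_per_pixel is the *encoded* bit budget per pixel; roughly
    0.05-0.2 is typical for H.264 at moderate quality (assumption).
    """
    pixels = width * height * (2 if stereo else 1)  # stereo doubles the payload
    bits_per_frame = pixels * bits_per_pixel
    return (link_mbps * 1e6) / bits_per_frame

# The paper's 1280 x 960 stereo stream over a 100 Mbps link:
fps_ceiling = max_frame_rate(100, 1280, 960)
```

Under these assumptions the 100 Mbps link caps out well above the observed 8–40 fps, consistent with bandwidth being only one of several limiting factors (encoding and rendering load being others).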

  2. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    The Ultrascale Visualization Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions: scatter plots and line graphs for 1D data; boxfill, meshfill, isofill and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provides a user-friendly API, the previous implementation relied on a slow vector-graphics (Cairo) backend suitable for smaller datasets and non-interactive graphics. The LLNL and Kitware teams have added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and its pipeline-processing architecture result in a high-performance VCS library, and its multitude of supported data formats and visualization algorithms makes it easy to adopt new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as planned upgrades that will improve its ease of use and reliability and extend its capabilities.

  3. Evaluation of MR scanning, image registration, and image processing methods to visualize cortical veins for neurosurgery

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; Rutten, G. J. M.; Willems, Peter W. A.; Viergever, Max A.

    2000-04-01

    The visualization of brain vessels on the cortex helps the neurosurgeon in two ways: to avoid blood vessels when specifying the trepanation entry, and to overcome errors in the surgical navigation system due to brain shift. We compared 3D T1 MR, 3D T1 MR with gadolinium contrast, and MR venography as scanning techniques; mutual information as the registration technique; and thresholding and multi-vessel enhancement as image processing techniques. We evaluated the volume-rendered results based on their quality and correspondence with photos taken during surgery. It appears that with 3D T1 MR scans, gadolinium is required to show cortical veins. The visibility of small cortical veins is strongly enhanced by subtracting a 3D T1 MR baseline scan, which should be registered to the scan with gadolinium contrast, even when the scans are made during the same session. Multi-vessel enhancement helps to clarify the view of small vessels by reducing the noise level, but strikingly does not reveal more. MR venography does show intracerebral veins in high detail, but is, as such, unsuited to show cortical veins due to the low contrast with CSF.

  4. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    PubMed

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation in image-guided surgery through a comparative study of three different segmentation methods. These methods are faster than manual segmentation of images, with the advantage of using the same patient as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D segmentation: this module compares three different methods: a region-growing algorithm, hand-assisted cubic splines, and the level set method. It then proposes a level set method based on front propagation that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgical planning.

  5. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  6. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    ERIC Educational Resources Information Center

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  7. Data Cube Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Gárate, Matías

    2017-06-01

    With the increasing data acquisition rates from observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used for both scientific presentations as well as public outreach.
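    The Voxel Data import route mentioned above relies on a very simple binary layout. A minimal writer sketch for Blender's legacy `.bvox` voxel-data format, assuming the commonly documented header of four little-endian int32 values (nx, ny, nz, frames) followed by float32 densities in [0, 1]; verify the layout against your Blender version before relying on it:

```python
import numpy as np

def write_bvox(path, cube):
    """Write a 3D numpy array as a legacy Blender voxel-data (.bvox) file.

    Assumed layout: four little-endian int32 (nx, ny, nz, nframes),
    then the densities as little-endian float32, normalized to [0, 1].
    """
    cube = np.asarray(cube, dtype=np.float32)
    nz, ny, nx = cube.shape                      # array axes -> grid dims
    lo, hi = float(cube.min()), float(cube.max())
    norm = (cube - lo) / (hi - lo) if hi > lo else np.zeros_like(cube)
    with open(path, "wb") as f:
        np.array([nx, ny, nz, 1], dtype="<i4").tofile(f)  # single-frame cube
        norm.astype("<f4").tofile(f)
```

The written file can then be pointed at from a voxel-data texture; normalizing to [0, 1] matches the density range such textures expect.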

  8. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward answer to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for compressing the tremendous amount of ray data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  9. Illustrating Surface Shape in Volume Data via Principal Direction-Driven 3D Line Integral Convolution

    NASA Technical Reports Server (NTRS)

    Interrante, Victoria

    1997-01-01

    The three-dimensional shape and relative depth of a smoothly curving layered transparent surface may be communicated particularly effectively when the surface is artistically enhanced with sparsely distributed opaque detail. This paper describes how the set of principal directions and principal curvatures specified by local geometric operators can be understood to define a natural 'flow' over the surface of an object, and can be used to guide the placement of the lines of a stroke texture that seeks to represent 3D shape information in a perceptually intuitive way. The driving application for this work is the visualization of layered isovalue surfaces in volume data, where the particular identity of an individual surface is not generally known a priori and observers will typically wish to view a variety of different level surfaces from the same distribution, superimposed over underlying opaque structures. By advecting an evenly distributed set of tiny opaque particles, and the empty space between them, via 3D line integral convolution through the vector field defined by the principal directions and principal curvatures of the level surfaces passing through each gridpoint of a 3D volume, it is possible to generate a single scan-converted solid stroke texture that may intuitively represent the essential shape information of any level surface in the volume. To generate longer strokes over more highly curved areas, where the directional information is both most stable and most relevant, and to simultaneously downplay the visual impact of directional information in the flatter regions, one may dynamically redefine the length of the filter kernel according to the magnitude of the maximum principal curvature of the level surface at the point around which it is applied.
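    The advect-and-convolve step at the heart of this technique is easiest to see in 2D. A minimal line integral convolution sketch over a generic vector field, with a fixed kernel length standing in for the curvature-dependent kernel the paper describes (illustrative only, not the author's 3D implementation):

```python
import numpy as np

def lic_2d(vx, vy, noise, length=10):
    """Minimal 2D line integral convolution: average a noise texture
    along short streamlines of the (vx, vy) field at each pixel.
    The paper instead traces principal directions in 3D and resizes
    the kernel with the local principal curvature."""
    h, w = noise.shape
    out = np.zeros_like(noise)
    for y in range(h):
        for x in range(w):
            acc, n = noise[y, x], 1
            for sgn in (1.0, -1.0):          # trace both streamline directions
                px, py = float(x), float(y)
                for _ in range(length):
                    u = vx[int(py), int(px)]
                    v = vy[int(py), int(px)]
                    mag = np.hypot(u, v)
                    if mag < 1e-9:           # stagnation point: stop tracing
                        break
                    px += sgn * u / mag      # unit step along the field
                    py += sgn * v / mag
                    if not (0 <= px < w and 0 <= py < h):
                        break
                    acc += noise[int(py), int(px)]
                    n += 1
            out[y, x] = acc / n
    return out
```

Averaging along streamlines correlates pixels along the flow while leaving them uncorrelated across it, which is what turns isotropic noise into direction-revealing strokes.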

  10. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT-vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by the CT vendor and the Mimics 3D visualization software from the third-party vendor possessed the functionality, efficiency and accuracy needed for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  11. Localization of the mRNA for the dopamine D sub 2 receptor in the rat brain by in situ hybridization histochemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mengod, G.; Martinez-Mir, M.I.; Vilaro, M.T.

    1989-11-01

    {sup 32}P-labeled oligonucleotides derived from the coding region of rat dopamine D{sub 2} receptor cDNA were used as probes to localize cells in the rat brain that contain the mRNA coding for this receptor, using in situ hybridization histochemistry. The highest level of hybridization was found in the intermediate lobe of the pituitary gland. High mRNA content was observed in the anterior lobe of the pituitary gland, the nuclei caudate-putamen and accumbens, and the olfactory tubercle. Lower levels were seen in the substantia nigra pars compacta and the ventral tegmental area, as well as in the lateral mammillary body. In these areas the distribution was comparable to that of the dopamine D{sub 2} receptor binding sites as visualized by autoradiography using ({sup 3}H)SDZ 205-502 as a ligand. However, in some areas such as the olfactory bulb, neocortex, hippocampus, superior colliculus, and cerebellum, D{sub 2} receptors have been visualized but no significant hybridization signal could be detected. The mRNA coding for these receptors in these areas could be contained in cells outside those brain regions, be different from the one recognized by our probes, or be present at levels below the detection limits of our procedure. The possibility of visualizing and quantifying the mRNA coding for the dopamine D{sub 2} receptor at the microscopic level will yield more information about the in vivo regulation of the synthesis of these receptors and their alteration following selective lesions or drug treatments.

  12. Applying Open Source Game Engine for Building Visual Simulation Training System of Fire Fighting

    NASA Astrophysics Data System (ADS)

    Yuan, Diping; Jin, Xuesheng; Zhang, Jin; Han, Dong

    There's a growing need for fire departments to adopt a safe and fair method of training to ensure that the firefighting commander is in a position to manage a fire incident. Visual simulation training systems, with their ability to replicate and interact with virtual fire scenarios through the use of computer graphics or VR, have become an effective and efficient method for fire ground education. This paper describes the system architecture and functions of a visual simulation training system for fighting oil-storage fires, which adopts Delta3D, an open source game and simulation engine, to provide realistic 3D views. It shows that using open source technology provides not only commercial-level 3D effects but also a great reduction in cost.

  13. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

    We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries like astropy. Examples showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.

  14. Visual selection and maintenance of the cell lines with high plant regeneration ability and low ploidy level in Dianthus acicularis by monitoring with flow cytometry analysis.

    PubMed

    Shiba, Tomonori; Mii, Masahiro

    2005-12-01

    An efficient plant regeneration system from cell suspension cultures was established in D. acicularis (2n=90) by monitoring ploidy level and visually selecting the cultures. The ploidy level of the cell cultures was closely related to shoot regeneration ability. The cell lines comprising the original ploidy levels (2C+4C cells, corresponding to the DNA contents of G1 and G2 cells of the diploid plant, respectively) showed high regeneration ability, whereas those containing cells with 8C or higher DNA C-values showed low or no regeneration ability. The highly regenerable cell lines thus selected consisted of compact cell clumps with a yellowish color and relatively moderate growth, suggesting that it is possible to visually select highly regenerable cell lines with the original ploidy level. All the plantlets regenerated from the highly regenerable cell cultures exhibited normal phenotypes, and no variation in ploidy level was observed by flow cytometry (FCM) analysis.

  15. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  16. Comparative evaluation of monocular augmented-reality display for surgical microscopes.

    PubMed

    Rodriguez Palma, Santiago; Becker, Brian C; Lobes, Louis A; Riviere, Cameron N

    2012-01-01

    Medical augmented reality has undergone much development recently. However, there is a lack of studies quantitatively comparing the different display options available. This paper compares the effects of different graphical overlay systems in a simple micromanipulation task with "soft" visual servoing. We compared positioning accuracy in a real-time visually-guided task using Micron, an active handheld tremor-canceling microsurgical instrument, using three different displays: 2D screen, 3D screen, and microscope with monocular image injection. Tested with novices and an experienced vitreoretinal surgeon, display of virtual cues in the microscope via an augmented reality injection system significantly decreased 3D error (p < 0.05) compared to the 2D and 3D monitors when confounding factors such as magnification level were normalized.

  17. High-resolution gadolinium-enhanced 3D MRA of the infrapopliteal arteries. Lessons for improving bolus-chase peripheral MRA.

    PubMed

    Hood, Maureen N; Ho, Vincent B; Foo, Thomas K F; Marcos, Hani B; Hess, Sandra L; Choyke, Peter L

    2002-09-01

    Peripheral magnetic resonance angiography (MRA) is growing in use. However, methods of performing peripheral MRA vary widely and continue to be optimized, especially to improve depiction of the infrapopliteal arteries. The main purpose of this project was to identify imaging factors that can improve arterial visualization in the lower leg using bolus chase peripheral MRA. Eighteen healthy adults were imaged on a 1.5T MR scanner. The calf was imaged using conventional three-station bolus chase three-dimensional (3D) MRA, two-dimensional (2D) time-of-flight (TOF) MRA, and single-station gadolinium (Gd)-enhanced 3D MRA. Observer comparisons of vessel visualization were performed, along with comparisons of signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and spatial resolution. Arterial SNR and CNR were similar for all three techniques. However, arterial visualization was dramatically improved on dedicated, arterial-phase Gd-enhanced 3D MRA compared with the multi-station bolus chase MRA and 2D TOF MRA. This improvement was related to optimization of Gd-enhanced 3D MRA parameters (fast injection rate of 2 mL/sec, high spatial resolution imaging, the use of dedicated phased array coils, elliptical centric k-space sampling, and accurate arterial phase timing for image acquisition). The visualization of the infrapopliteal arteries can be substantially improved in bolus chase peripheral MRA if voxel size, contrast delivery, and central k-space data acquisition for arterial enhancement are optimized. Improvements in peripheral MRA should be directed at these parameters.
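The SNR and CNR comparisons above follow the standard region-of-interest (ROI) definitions: mean ROI signal (or ROI mean difference) divided by the standard deviation of a background-noise ROI. A minimal Python sketch of those formulas; the ROI intensity values here are illustrative, not the study's data:

```python
import statistics

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean ROI signal over noise standard deviation."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(artery_roi, background_roi, noise_roi):
    """Contrast-to-noise ratio: difference of ROI means over noise standard deviation."""
    return (statistics.mean(artery_roi) - statistics.mean(background_roi)) / statistics.stdev(noise_roi)

# hypothetical pixel intensities from three ROIs (artery, muscle, background air)
artery = [220.0, 215.0, 225.0, 218.0]
muscle = [80.0, 82.0, 78.0, 81.0]
air = [2.0, 4.0, 3.0, 5.0]

print(round(snr(artery, air), 1))
print(round(cnr(artery, muscle, air), 1))
```

Comparing these two measures across sequences, as the study does, only makes sense when the ROIs are drawn consistently across the acquisitions.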

  18. Study of capabilities and limitations of 3D printing technology

    NASA Astrophysics Data System (ADS)

    Lemu, H. G.

    2012-04-01

    3D printing is one of the developments in rapid prototyping technology, and its inception and development have greatly assisted the product development phase of design and manufacturing. The technology is particularly valuable in teaching product design and 3D modeling because it helps students visualize their design ideas, enhances their creative design process, and enables them to touch and feel the results of their innovative work. The availability of many 3D printers on the market poses a certain challenge for the user: among other issues, the complexity of part geometry, material type, compatibility with 3D CAD models, and other technical aspects still need in-depth study. This paper presents results of experimental work on the capabilities and limitations of the Z510 3D printer from Z Corporation. Several parameters, such as dimensional and geometrical accuracy, surface quality, and strength, are closely studied as functions of model size, orientation, and file exchange format.

  19. 360-degree 3D transvaginal ultrasound system for high-dose-rate interstitial gynaecological brachytherapy needle guidance

    NASA Astrophysics Data System (ADS)

    Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron

    2017-03-01

    Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.

  20. Do's and don'ts of cryo-electron microscopy: a primer on sample preparation and high quality data collection for macromolecular 3D reconstruction.

    PubMed

    Cabra, Vanessa; Samsó, Montserrat

    2015-01-09

    Cryo-electron microscopy (cryoEM) entails flash-freezing a thin layer of sample on a support, and then visualizing the sample in its frozen hydrated state by transmission electron microscopy (TEM). This can be achieved with very low quantity of protein and in the buffer of choice, without the use of any stain, which is very useful to determine structure-function correlations of macromolecules. When combined with single-particle image processing, the technique has found widespread usefulness for 3D structural determination of purified macromolecules. The protocol presented here explains how to perform cryoEM and examines the causes of most commonly encountered problems for rational troubleshooting; following all these steps should lead to acquisition of high quality cryoEM images. The technique requires access to the electron microscope instrument and to a vitrification device. Knowledge of the 3D reconstruction concepts and software is also needed for computerized image processing. Importantly, high quality results depend on finding the right purification conditions leading to a uniform population of structurally intact macromolecules. The ability of cryoEM to visualize macromolecules combined with the versatility of single particle image processing has proven very successful for structural determination of large proteins and macromolecular machines in their near-native state, identification of their multiple components by 3D difference mapping, and creation of pseudo-atomic structures by docking of x-ray structures. The relentless development of cryoEM instrumentation and image processing techniques for the last 30 years has resulted in the possibility to generate de novo 3D reconstructions at atomic resolution level.

  1. Visual perceptual load reduces auditory detection in typically developing individuals but not in individuals with autism spectrum disorders.

    PubMed

    Tillmann, Julian; Swettenham, John

    2017-02-01

    Previous studies examining selective attention in individuals with autism spectrum disorder (ASD) have yielded conflicting results, some suggesting superior focused attention (e.g., on visual search tasks), others demonstrating greater distractibility. This pattern could be accounted for by the proposal (derived by applying the Load theory of attention, e.g., Lavie, 2005) that ASD is characterized by an increased perceptual capacity (Remington, Swettenham, Campbell, & Coleman, 2009). Recent studies in the visual domain support this proposal. Here we hypothesize that ASD involves an enhanced perceptual capacity that also operates across sensory modalities, and test this prediction for the first time using a signal detection paradigm. Seventeen neurotypical (NT) and 15 ASD adolescents performed a visual search task under varying levels of visual perceptual load while simultaneously detecting presence/absence of an auditory tone embedded in noise. Detection sensitivity (d') for the auditory stimulus was similarly high for both groups in the low visual perceptual load condition (e.g., 2 items: p = .391, d = 0.31, 95% confidence interval [CI] [-0.39, 1.00]). However, at a higher level of visual load, auditory d' reduced for the NT group but not the ASD group, leading to a group difference (p = .002, d = 1.2, 95% CI [0.44, 1.96]). As predicted, when visual perceptual load was highest, both groups then showed a similarly low auditory d' (p = .9, d = 0.05, 95% CI [-0.65, 0.74]). These findings demonstrate that increased perceptual capacity in ASD operates across modalities. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
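The detection sensitivity d' reported above is the standard signal-detection measure: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch using only Python's standard library; the trial counts below are hypothetical, not the study's data:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    # note: rates of exactly 0 or 1 need a correction (e.g. log-linear) before inv_cdf
    return z(hit_rate) - z(fa_rate)

# hypothetical observer: 45 hits / 5 misses, 10 false alarms / 40 correct rejections
print(round(d_prime(45, 5, 10, 40), 2))
```

A d' of zero means the observer cannot distinguish signal from noise; larger values mean higher sensitivity, independent of response bias.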

  2. Profile of students' comprehension of 3D molecule representation and its interconversion on chirality

    NASA Astrophysics Data System (ADS)

    Setyarini, M.; Liliasari, Kadarohman, Asep; Martoprawiro, Muhamad A.

    2016-02-01

    This study aims to describe (1) students' levels of comprehension and (2) the factors that make 3D molecule representations and their interconversion difficult to comprehend in the context of chirality. Data were collected using a multiple-choice test consisting of eight questions; participants were required to give answers along with their reasoning. The test was developed based on indicators of concept comprehension. The study was conducted with 161 college students enrolled in a stereochemistry topic in the odd semester (2014/2015) at two LPTK (teacher training institutes) in Bandar Lampung and Gorontalo, and one public university in Bandung. The results indicate that 5% of the students showed a high level of comprehension of 3D molecule representations and their interconversion, 22% a moderate level, and 73% a low level. The dominant factors identified as causes of difficulty were (i) a lack of spatial awareness, (ii) violation of the rules for determining absolute configuration, (iii) imprecise placement of the observer, (iv) a lack of rotation operations, and (v) a lack of understanding of the correlations between the representations. This study recommends that instruction include more rigorous spatial-awareness training tasks accompanied by dynamic molecular visualization media; combining these with static molecular models can also help students overcome the difficulties they encounter.

  3. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  4. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, supporting decision making and setting a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, corresponding to 10 cm, 20 cm and 40 cm GSD, with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization and animation capabilities of emerging 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model must first be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven and undulating terrain. Real-time 3D visualization and interactive exploration thus support planning processes by providing multiple stakeholders, such as decision makers, architects, urban planners, authorities, citizens or investors, with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and offer various possibilities for handling exceptional conditions through better and more advanced viewing infrastructure. Riyadh on one side lies at 5700 m above sea level while Abha city on the other lies at 2300 m; this uneven terrain represents a drastic change of surface across the Kingdom, for which 3D city models provide valuable solutions and opportunities. 
In this research paper, the influence of aerial imagery at different GSDs (Ground Sample Distances) of 7.5 cm, 10 cm, 20 cm and 40 cm, together with Aerial Triangulation, is examined for 3D visualization in different regions of the Kingdom, to determine which scale yields the best results while remaining cost-manageable. The comparison test is carried out in the Bentley environment to identify the best possible results obtained through different batch processes.

  5. Entrainment and high-density three-dimensional mapping in right atrial macroreentry provide critical complementary information: Entrainment may unmask "visual reentry" as passive.

    PubMed

    Pathik, Bhupesh; Lee, Geoffrey; Nalliah, Chrishan; Joseph, Stephen; Morton, Joseph B; Sparks, Paul B; Sanders, Prashanthan; Kistler, Peter M; Kalman, Jonathan M

    2017-10-01

    With the recent advent of high-density (HD) 3-dimensional (3D) mapping, the utility of entrainment is uncertain. However, the limitations of visual representation and interpretation of these high-resolution 3D maps are unclear. The purpose of this study was to determine the strengths and limitations of both HD 3D mapping and entrainment mapping during mapping of right atrial macroreentry. Fifteen patients were studied. The number and type of circuits accounting for ≥90% of the tachycardia cycle length using HD 3D mapping were verified using systematic entrainment mapping. Entrainment sites with an unexpectedly long postpacing interval despite proximity to the active circuit were evaluated. Based on HD 3D mapping, 27 circuits were observed: 12 peritricuspid, 2 upper loop reentry, 10 lower loop reentry, and 3 lateral wall circuits. With entrainment, 17 of the 27 circuits were active: all 12 peritricuspid and 2 upper loop reentry. However, lower loop reentry was confirmed in only 3 of 10, and none of the 3 lateral wall circuits were present. Mean percentage of tachycardia cycle length covered by active circuits was 98% ± 1% vs 97% ± 2% for passive circuits (P = .09). None of the 345 entrainment runs terminated tachycardia or changed tachycardia mechanism. In 8 of 15 patients, 13 examples of unexpectedly long postpacing interval were observed at entrainment sites located distal to localized zones of slow conduction seen on HD 3D mapping. Using HD 3D mapping, "visual reentry" may be due to passive circuitous propagation rather than a critical reentrant circuit. HD 3D mapping provides new insights into regional conduction and helps explain unusual entrainment phenomena. Copyright © 2017 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.

  6. Modeling of Aerosol Vertical Profiles Using GIS and Remote Sensing

    PubMed Central

    Wong, Man Sing; Nichol, Janet E.; Lee, Kwon Ho

    2009-01-01

    The use of Geographic Information Systems (GIS) and Remote Sensing (RS) by climatologists, environmentalists and urban planners for three dimensional modeling and visualization of the landscape is well established. However no previous study has implemented these techniques for 3D modeling of atmospheric aerosols because air quality data is traditionally measured at ground points, or from satellite images, with no vertical dimension. This study presents a prototype for modeling and visualizing aerosol vertical profiles over a 3D urban landscape in Hong Kong. The method uses a newly developed technique for the derivation of aerosol vertical profiles from AERONET sunphotometer measurements and surface visibility data, and links these to a 3D urban model. This permits automated modeling and visualization of aerosol concentrations at different atmospheric levels over the urban landscape in near-real time. Since the GIS platform permits presentation of the aerosol vertical distribution in 3D, it can be related to the built environment of the city. Examples are given of the applications of the model, including diagnosis of the relative contribution of vehicle emissions to pollution levels in the city, based on increased near-surface concentrations around weekday rush-hour times. The ability to model changes in air quality and visibility from ground level to the top of tall buildings is also demonstrated, and this has implications for energy use and environmental policies for the tall mega-cities of the future. PMID:22408531
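One common way to constrain an aerosol vertical profile from exactly the two inputs named above, column AOD from a sunphotometer and surface visibility, is to assume an exponential extinction profile whose surface value comes from the Koschmieder visibility relation and whose scale height makes the column integral equal the AOD. The sketch below is an illustrative simplification of such a retrieval, not necessarily the authors' exact method:

```python
import math

def extinction_profile(aod, visibility_km, z_km):
    """Exponential aerosol extinction profile (km^-1) at height z_km,
    constrained so the surface value matches the Koschmieder relation
    and the column integral matches the measured AOD."""
    sigma0 = 3.912 / visibility_km   # surface extinction from visibility (Koschmieder)
    scale_height = aod / sigma0      # integral of sigma0*exp(-z/H) over z is sigma0*H = AOD
    return sigma0 * math.exp(-z_km / scale_height)

# hypothetical inputs: AOD 0.5 from a sunphotometer, 10 km surface visibility
surface = extinction_profile(0.5, 10.0, 0.0)
at_1km = extinction_profile(0.5, 10.0, 1.0)
```

Evaluating the profile at the heights of a 3D urban model's building levels is what lets such a model be draped over the cityscape, as the prototype described above does with its own (more elaborate) profiles.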

  8. Overview of EVE - the event visualization environment of ROOT

    NASA Astrophysics Data System (ADS)

    Tadel, Matevž

    2010-04-01

    EVE is a high-level visualization library using ROOT's data-processing, GUI and OpenGL interfaces. It is designed as a framework for object management offering hierarchical data organization, object interaction and visualization via GUI and OpenGL representations. Automatic creation of 2D projected views is also supported. At the same time, it can serve as an event visualization toolkit satisfying most HEP requirements: visualization of geometry, simulated and reconstructed data such as hits, clusters, tracks and calorimeter information. Special classes are available for visualization of raw data. The object-interaction layer allows for easy selection and highlighting of objects and their derived representations (projections) across several views (3D, Rho-Z, R-Phi). Object-specific tooltips are provided in both GUI and GL views. The visual-configuration layer of EVE is built around a database of template objects that can be applied to specific instances of visualization objects to ensure consistent object presentation. The database can be retrieved from a file, edited during framework operation, and stored back to file. The EVE prototype was developed within the ALICE collaboration and was included in ROOT in December 2007. Since then all EVE components have reached maturity. EVE is used as the base of the AliEve visualization framework in ALICE, of the Fireworks physics-oriented event display in CMS, and as the visualization engine of FairRoot in FAIR.

  9. Fall Prevention Self-Assessments Via Mobile 3D Visualization Technologies: Community Dwelling Older Adults’ Perceptions of Opportunities and Challenges

    PubMed Central

    Hamm, Julian; Atwal, Anita

    2017-01-01

    Background In the field of occupational therapy, the assistive equipment provision process (AEPP) is a prominent preventive strategy used to promote independent living and to identify and alleviate fall risk factors via the provision of assistive equipment within the home environment. Current practice involves the use of paper-based forms that include 2D measurement guidance diagrams that aim to communicate the precise points and dimensions that must be measured in order to make AEPP assessments. There are, however, issues such as “poor fit” of equipment due to inaccurate measurements taken and recorded, resulting in more than 50% of equipment installed within the home being abandoned by patients. This paper presents a novel 3D measurement aid prototype (3D-MAP) that provides enhanced measurement and assessment guidance to patients via the use of 3D visualization technologies. Objective The purpose of this study was to explore the perceptions of older adults with regard to the barriers and opportunities of using the 3D-MAP application as a tool that enables patient self-delivery of the AEPP. Methods Thirty-three community-dwelling older adults participated in interactive sessions with a bespoke 3D-MAP application utilizing the retrospective think-aloud protocol and semistructured focus group discussions. The system usability scale (SUS) questionnaire was used to evaluate the application’s usability. Thematic template analysis was carried out on the SUS item discussions, think-aloud, and semistructured focus group data. Results The quantitative SUS results revealed that the application may be described as having “marginal-high” and “good” levels of usability, along with strong agreement with items relating to the usability (P=.004) and learnability (P<.001) of the application. 
Four high-level themes emerged from think-aloud and focus group discussions: (1) perceived usefulness (PU), (2) perceived ease of use (PEOU), (3) application use (AU), and (4) self-assessment (SA). The application was seen as a useful tool to enhance visualization of measurement guidance and also to promote independent living and ownership of care, and potentially to reduce waiting times. Several design and functionality recommendations emerged from the study, such as a need to manipulate the view and position of the 3D furniture models, and a need for clearer visual prompts and an alternative keyboard interface for measurement entry. Conclusions Participants perceived the 3D-MAP application as a useful tool that has the potential to make significant improvements to the AEPP, not only in terms of accuracy of measurement, but also by potentially enabling older adult patients to carry out the data collection element of the AEPP themselves. Further research is needed to adapt the 3D-MAP application in line with the study outcomes, to establish its clinical utility with regard to the effectiveness, efficiency, accuracy, and reliability of measurements recorded using the application, and to compare it with 2D measurement guidance leaflets. PMID:28630034

  10. Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays had long prevented large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleo-oceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.

  11. SIMPL: A Simplified Model-Based Program for the Analysis and Visualization of Groundwater Rebound in Abandoned Mines to Prevent Contamination of Water and Soils by Acid Mine Drainage

    PubMed Central

    Kim, Sung-Min

    2018-01-01

    Cessation of dewatering following underground mine closure typically results in groundwater rebound, because mine voids and surrounding strata undergo flooding up to the levels of the decant points, such as shafts and drifts. SIMPL (Simplified groundwater program In Mine workings using the Pipe equation and Lumped parameter model), a simplified lumped parameter model-based program for predicting groundwater levels in abandoned mines, is presented herein. The program comprises a simulation engine module, 3D visualization module, and graphical user interface, which aids data processing, analysis, and visualization of results. The 3D viewer facilitates effective visualization of the predicted groundwater level rebound phenomenon together with a topographic map, mine drift, goaf, and geological properties from borehole data. SIMPL is applied to data from the Dongwon coal mine and Dalsung copper mine in Korea, with strong similarities in simulated and observed results. By considering mine workings and interpond connections, SIMPL can thus be used to effectively analyze and visualize groundwater rebound. In addition, the predictions by SIMPL can be utilized to prevent the surrounding environment (water and soil) from being polluted by acid mine drainage. PMID:29747480
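At its simplest, the lumped-parameter idea behind such predictions treats the flooded workings as a single reservoir that fills until water reaches the decant elevation. The sketch below is a minimal illustration of that idea, not SIMPL's pipe-equation engine, and all parameter values are hypothetical:

```python
def simulate_rebound(h0, decant_level, inflow_m3_per_day, area_m2, days):
    """Minimal lumped-parameter groundwater rebound: one reservoir with a
    constant effective plan area fills at a constant inflow, and the water
    level is capped at the decant elevation (shaft/drift overflow point)."""
    levels = [h0]
    h = h0
    for _ in range(days):
        h = min(h + inflow_m3_per_day / area_m2, decant_level)  # daily rise, capped
        levels.append(h)
    return levels

# hypothetical mine: level starts 40 m below the decant point,
# 500 m3/day of inflow distributed over 10,000 m2 of workings
lv = simulate_rebound(h0=100.0, decant_level=140.0,
                      inflow_m3_per_day=500.0, area_m2=10_000.0, days=1000)
```

Real models like SIMPL replace the constant area with stage-dependent void storage and couple multiple ponds through pipe-flow equations, which is what lets them represent interconnected workings.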

  12. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    PubMed

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified raycasting-based, powerful 3D volume rendering software package (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with a substantial gain in information, the presented system is now an established part of routine surgical planning.

  13. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike earlier proposals to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.

  14. Volumetric visualization of multiple-return LIDAR data: Using voxels

    USGS Publications Warehouse

    Stoker, Jason M.

    2009-01-01

    Elevation data are an important component in the visualization and analysis of geographic information. The creation and display of 3D models representing bare earth, vegetation, and surface structures have become a major focus of light detection and ranging (lidar) remote sensing research in the past few years. Lidar is an active sensor that records the distance, or range, of a laser usually fired from an airplane, helicopter, or satellite. By converting the millions of 3D lidar returns from a system into bare ground, vegetation, or structural elevation information, extremely accurate, high-resolution elevation models can be derived and produced to visualize and quantify scenes in three dimensions. These data can be used to produce high-resolution bare-earth digital elevation models; quantitative estimates of vegetative features such as canopy height, canopy closure, and biomass; and models of urban areas such as building footprints and 3D city models.
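
    The core voxelization step implied above, binning 3D lidar returns into a regular grid so that per-cell counts approximate point density, can be sketched in a few lines. This is a simplified pure-Python illustration with a function name of our choosing; production code would operate on real lidar tiles with array libraries.

```python
from collections import defaultdict

def voxelize(returns, cell=1.0):
    """Bin 3D lidar returns (x, y, z) into cubic voxels of edge length `cell`.

    Returns a dict mapping integer voxel indices to the number of returns
    inside that voxel; the counts are a crude proxy for structure density.
    """
    grid = defaultdict(int)
    for x, y, z in returns:
        key = (int(x // cell), int(y // cell), int(z // cell))
        grid[key] += 1
    return dict(grid)

pts = [(0.2, 0.3, 5.1), (0.8, 0.1, 5.9), (2.5, 0.4, 0.2)]
g = voxelize(pts, cell=1.0)
# The first two returns share voxel (0, 0, 5); the third falls in (2, 0, 0).
```

    From such a grid one can, for instance, take the highest occupied voxel per (x, y) column as a canopy-height estimate.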

  15. iSee: Teaching Visual Learning in an Organic Virtual Learning Environment

    ERIC Educational Resources Information Center

    Han, Hsiao-Cheng

    2017-01-01

    This paper presents a three-year participatory action research project focusing on the graduate level course entitled Visual Learning in 3D Animated Virtual Worlds. The purpose of this research was to understand "How the virtual world processes of observing and creating can best help students learn visual theories". The first cycle of…

  16. Specialized Computer Systems for Environment Visualization

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields, as part of tasks solved by virtual and augmented reality systems as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes and intersection tests against axis-aligned bounding boxes. The proposed algorithm eliminates branching and is hence better suited to implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of high-quality visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture (CUDA) networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are also proposed. After analyzing the feasibility of realizing each stage on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient: it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame, without optimization procedures, by an average factor of 11 and 54 on the test GPUs.
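
    The slab test against an axis-aligned bounding box that such a path tracer relies on can be written without branches on the hit path, which is what makes it GPU-friendly. The sketch below is a generic textbook version in Python, not the authors' implementation; the function name is ours, and `inv_dir` is assumed to be the precomputed per-component reciprocal of the ray direction.

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Branchless slab test: does the ray origin + t*dir (t >= 0) hit the box?

    Each axis clips the valid t-interval; the ray hits iff the interval
    survives all three axes. Only min/max operations are used, no if/else
    per axis, which maps well onto SIMD/GPU execution.
    """
    tmin, tmax = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1 = (lo - o) * inv
        t2 = (hi - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    return tmin <= tmax

hit = ray_aabb((0, 0, 0), (1.0, 1.0, 1.0), (1, 1, 1), (2, 2, 2))    # ray toward the box
miss = ray_aabb((0, 0, 0), (1.0, -1.0, 1.0), (1, 1, 1), (2, 2, 2))  # ray pointing away in y
```

    In a two-level hierarchy, this test is run first against the top-level bounding volumes and then against the per-object boxes of any volume it passes.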

  17. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    PubMed

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
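
    Of the four visualization algorithms compared in this study, the maximum intensity projection is the simplest to state: each output pixel keeps the brightest voxel along its viewing ray. A minimal sketch of an axis-aligned version, assuming a dense volume indexed as volume[z][y][x] (the function name and nested-list representation are ours):

```python
def mip_along_z(volume):
    """Maximum intensity projection of a volume indexed as volume[z][y][x].

    Collapses the z axis by keeping the brightest voxel along each ray,
    producing a 2D image indexed as image[y][x].
    """
    depth, height, width = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(width)]
            for y in range(height)]

vol = [[[0, 1], [2, 3]],
       [[9, 0], [1, 8]]]   # a 2 x 2 x 2 toy volume
img = mip_along_z(vol)     # [[9, 1], [2, 8]]
```

    High-signal fluid structures such as the scalae stand out in a MIP of a T2 volume, though, as the study notes, surface and ray-casting renderings preserve depth relationships that a MIP discards.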

  18. 3D-Printing Crystallographic Unit Cells for Learning Materials Science and Engineering

    ERIC Educational Resources Information Center

    Rodenbough, Philip P.; Vanti, William B.; Chan, Siu-Wai

    2015-01-01

    Introductory materials science and engineering courses universally include the study of crystal structure and unit cells, which are by their nature highly visual 3D concepts. Traditionally, such topics are explored with 2D drawings or perhaps a limited set of difficult-to-construct 3D models. The rise of 3D printing, coupled with the wealth of…

  19. In situ flash x-ray high-speed computed tomography for the quantitative analysis of highly dynamic processes

    NASA Astrophysics Data System (ADS)

    Moser, Stefan; Nau, Siegfried; Salk, Manfred; Thoma, Klaus

    2014-02-01

    The in situ investigation of dynamic events, ranging from car crashes to ballistics, is often key to understanding dynamic material behavior. In many cases the important processes and interactions happen on the scale of milliseconds to microseconds, at speeds of 1000 m s-1 or more. Often, 3D information is necessary to fully capture and analyze all relevant effects, so high-speed 3D visualization techniques are required for in situ analysis. 3D-capable optical high-speed methods are often impaired by luminous effects and dust, while flash x-ray based methods usually deliver only 2D data. In this paper, a novel 3D-capable flash x-ray based method, in situ flash x-ray high-speed computed tomography (HSCT), is presented. The method is capable of producing 3D reconstructions of high-speed processes based on an undersampled dataset consisting of only a few (typically 3 to 6) x-ray projections. The major challenges are identified and discussed, and the chosen solution is outlined. The method is illustrated with an exemplary application to a 1000 m s-1 high-speed impact event on the scale of microseconds. A quantitative analysis of the in situ measurement of the material fragments, using a 3D reconstruction with 1 mm voxel size, is presented and the results are discussed. The results show that the HSCT method allows gaining valuable visual and quantitative mechanical information for the understanding and interpretation of high-speed events.

  20. Effect of astigmatism on visual acuity in eyes with a diffractive multifocal intraocular lens.

    PubMed

    Hayashi, Ken; Manabe, Shin-Ichi; Yoshida, Motoaki; Hayashi, Hideyuki

    2010-08-01

    To examine the effect of astigmatism on visual acuity at various distances in eyes with a diffractive multifocal intraocular lens (IOL). Hayashi Eye Hospital, Fukuoka, Japan. In this study, eyes had implantation of a diffractive multifocal IOL with a +3.00 diopter (D) addition (add) (AcrySof ReSTOR SN6AD1), a diffractive multifocal IOL with a +4.00 D add (AcrySof ReSTOR SN6AD3), or a monofocal IOL (AcrySof SN60WF). Astigmatism was simulated by adding cylindrical lenses of various diopters (0.00, 0.50, 1.00, 1.50, 2.00), after which distance-corrected acuity was measured at various distances. At most distances, the mean visual acuity in the multifocal IOL groups decreased in proportion to the added astigmatism. With astigmatism of 0.00 D and 0.50 D, distance-corrected near visual acuity (DCNVA) in the +4.00 D group and distance-corrected intermediate visual acuity (DCIVA) and DCNVA in the +3.00 D group were significantly better than in the monofocal group; the corrected distance visual acuity (CDVA) was similar. The DCNVA with astigmatism of 1.00 D was better in the 2 multifocal groups; however, with astigmatism of 1.50 D and 2.00 D, the CDVA and DCIVA at 0.5 m in the multifocal groups were significantly worse than in the monofocal group, although the DCNVA was similar. With astigmatism of 1.00 D or greater, the mean CDVA and DCNVA in the multifocal groups reached useful levels (20/40). The presence of astigmatism in eyes with a diffractive multifocal IOL compromised all distance visual acuities, suggesting the need to correct astigmatism of greater than 1.00 D. No author has a financial or proprietary interest in any material or method mentioned. Copyright 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  1. Age-Specific Prevalence of Visual Impairment and Refractive Error in Children Aged 3-10 Years in Shanghai, China.

    PubMed

    Ma, Yingyan; Qu, Xiaomei; Zhu, Xiaofeng; Xu, Xun; Zhu, Jianfeng; Sankaridurg, Padmaja; Lin, Senlin; Lu, Lina; Zhao, Rong; Wang, Ling; Shi, Huijing; Tan, Hui; You, Xiaofang; Yuan, Hong; Sun, Sifei; Wang, Mingjin; He, Xiangui; Zou, Haidong; Congdon, Nathan

    2016-11-01

    We assessed changes in age-specific prevalence of refractive error at the time of starting school, by comparing preschool and school age cohorts in Shanghai, China. A cross-sectional study was done in Jiading District, Shanghai during November and December 2013. We randomly selected 7 kindergartens and 7 primary schools, with probability proportionate to size. Chinese children (n = 8398) aged 3 to 10 years were enumerated, and 8267 (98.4%) were included. Children underwent distance visual acuity assessment and refraction measurement by cycloplegic autorefraction and subjective refraction. The prevalence of uncorrected visual acuity (UCVA), presenting visual acuity, and best-corrected visual acuity in the better eye of ≤20/40 was 19.8%, 15.5%, and 1.7%, respectively. Among those with UCVA ≤ 20/40, 93.2% could achieve visual acuity of ≥20/32 with refraction. Only 28.7% (n = 465) of children with UCVA in the better eye of ≤20/40 wore glasses. Prevalence of myopia (spherical equivalent ≤-0.5 diopters [D] in at least one eye) increased from 1.78% in 3-year-olds to 52.2% in 10-year-olds, while prevalence of hyperopia (spherical equivalent ≥+2.0 D) decreased from 17.8% among 3-year-olds to 2.6% by 10 years of age. After adjusting for age, attending elite "high-level" school was statistically associated with greater myopia prevalence. The prevalence of myopia was lower or comparable to that reported in other populations from age 3 to 5 years, but increased dramatically after 6 years, consistent with a strong environmental role of schooling on myopia development.
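
    The refraction cutoffs used in this study are stated in terms of spherical equivalent. As a small worked example of those definitions (the helper names are ours; SE = sphere + cylinder/2 is the standard formula):

```python
def spherical_equivalent(sphere, cylinder):
    """Standard summary of a refraction in diopters: SE = sphere + cylinder / 2."""
    return sphere + cylinder / 2.0

def classify(se):
    """Apply the study's cutoffs: myopia if SE <= -0.5 D, hyperopia if SE >= +2.0 D."""
    if se <= -0.5:
        return "myopia"
    if se >= 2.0:
        return "hyperopia"
    return "neither"

# A sphere of -0.25 D with -1.00 D of cylinder is myopic by this definition:
se = spherical_equivalent(-0.25, -1.00)   # -0.75 D
label = classify(se)
```
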

  2. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time-evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high-performance computer systems.

  3. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue-motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt sensor to measure the probe orientation. Real-time 3D visualization was realized by developing an extended version of a PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the most recent pulsatile tissue-motion is continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue-motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.

  4. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions with regard to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility of dynamically modifying content and of multi-user cooperation when solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is emphasized also by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. This study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies for reaching and interpreting information with regard to the specific type of visualization and different levels of immersion.

  5. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  6. A study of perceptual analysis in a high-level autistic subject with exceptional graphic abilities.

    PubMed

    Mottron, L; Belleville, S

    1993-11-01

    We report here the case study of a patient (E.C.) with Asperger syndrome, or autism with quasi-normal intelligence, who shows an outstanding ability for three-dimensional drawing of inanimate objects (savant syndrome). An assessment of the subsystems proposed in recent models of object recognition evidenced intact perceptual analysis and identification. The initial (or primal) sketch, the viewer-centered (2-1/2-D) and object-centered (3-D) representations, and the recognition and name levels were all functional. In contrast, E.C.'s pattern of performance in three different types of tasks converges to suggest an anomaly in the hierarchical organization of the local and global parts of a figure: a local interference effect with incongruent hierarchical visual stimuli, a deficit in relating local parts to global form information in impossible figures, and an absence of feature grouping in graphic recall. The results are discussed in relation to normal visual perception and to current accounts of the savant syndrome in autism.

  7. Hierarchical Task Network Prototyping In Unity3d

    DTIC Science & Technology

    2016-06-01

    Hierarchical task networks (HTNs) are difficult to visually debug. Here we present a solution for prototyping HTNs by extending an existing commercial implementation of Behavior Trees within the Unity3D game engine prior to building the HTN in COMBATXXI. Existing HTNs were emulated within this environment. Keywords: HTN, dynamic behaviors, behavior prototyping, agent-based simulation, entity-level combat model, game engine, discrete event simulation, virtual…

  8. Comparison of visual acuity of the patients on the first day after sub-Bowman keratomileusis or laser in situ keratomileusis.

    PubMed

    Zhao, Wei; Wu, Ting; Dong, Ze-Hong; Feng, Jie; Ren, Yu-Feng; Wang, Yu-Sheng

    2016-01-01

    To compare recovery of visual acuity in patients one day after sub-Bowman keratomileusis (SBK) or laser in situ keratomileusis (LASIK). Data from 5923 eyes of 2968 patients who received LASIK (2755 eyes) or SBK (3168 eyes) were retrospectively analyzed. The eyes were divided into 4 groups according to preoperative spherical equivalent: -12.00 to -9.00 D, extremely high myopia (n=396, including 192 and 204 in the SBK and LASIK groups, respectively); -9.00 to -6.00 D, high myopia (n=1822, including 991 and 831 in the SBK and LASIK groups, respectively); -6.00 to -3.00 D, moderate myopia (n=3071, including 1658 and 1413 in the SBK and LASIK groups, respectively); and -3.00 to 0.00 D, low myopia (n=634, including 327 and 307 in the SBK and LASIK groups, respectively). Uncorrected logMAR visual acuity values were assessed under standard natural light. Analysis of variance was used for comparisons among groups. Uncorrected visual acuity values were 0.0115±0.1051 and 0.0466±0.1477 at day 1 after the operation for patients receiving SBK and LASIK, respectively (P<0.01); visual acuity values of 0.1854±0.1842, 0.0615±0.1326, -0.0033±0.0978, and -0.0164±0.0972 were obtained for patients in the extremely high, high, moderate, and low myopia groups, respectively (P<0.01). In addition, significant differences in visual acuity at day 1 after the operation were found between patients receiving SBK and LASIK in each myopia subgroup. Compared with LASIK, SBK is safer and more effective, with faster recovery. Therefore, SBK is more likely to be accepted by patients than LASIK because of the better uncorrected visual acuity on the day following the operation.

  9. WarpIV: In situ visualization and analysis of ion accelerator simulations

    DOE PAGES

    Rubel, Oliver; Loring, Burlen; Vay, Jean-Luc; ...

    2016-05-09

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  10. Full-Thickness Astigmatic Keratotomy Combined With Small-Incision Lenticule Extraction to Treat High-Level and Mixed Astigmatism.

    PubMed

    Kim, Bu Ki; Mun, Su Joung; Lee, Dae Gyu; Kim, Jae Ryun; Kim, Hyun Seung; Chung, Young Taek

    2015-12-01

    To explore the clinical effects of combined full-thickness astigmatic keratotomy and small-incision lenticule extraction (SMILE) in patients who are inoperable using SMILE alone. We included 13 eyes of 9 patients with high-level or mixed astigmatism who underwent full-thickness astigmatic keratotomy followed by SMILE (secondarily) to correct the residual refractive error. Six months after SMILE, the spherical equivalent was reduced from -4.83 ± 3.26 D to -0.17 ± 0.38 D (P < 0.001), and the astigmatism was reduced from 5.12 ± 0.96 D to 0.21 ± 0.22 D (P < 0.001). The uncorrected and corrected (CDVA) distance visual acuities improved from 1.07 ± 0.62 to 0.02 ± 0.13 (P < 0.001) and from 0.08 ± 0.14 to -0.01 ± 0.14 (P = 0.002), respectively. The CDVA improved by 1 or 2 Snellen lines in 8 cases (61.5%), and there was no loss in CDVA. All procedures were completed without intraoperative or postoperative complications. This combined procedure was effective and safe for the treatment of high-level or mixed astigmatism.

  11. Visualizing Gyrokinetic Turbulence in a Tokamak

    NASA Astrophysics Data System (ADS)

    Stantchev, George

    2005-10-01

    Multi-dimensional data output from gyrokinetic microturbulence codes are often difficult to visualize, in part due to the non-trivial geometry of the underlying grids, in part due to high irregularity of the relevant scalar field structures in turbulent regions. For instance, traditional isosurface extraction methods are likely to fail for the electrostatic potential field whose level sets may exhibit various geometric pathologies. To address these issues we develop an advanced interactive 3D gyrokinetic turbulence visualization framework which we apply in the study of microtearing instabilities calculated with GS2 in the MAST and NSTX geometries. In these simulations GS2 uses field-line-following coordinates such that the computational domain maps in physical space to a long, twisting flux tube with strong cross-sectional shear. Using statistical wavelet analysis we create a sparse multiple-scale volumetric representation of the relevant scalar fields, which we visualize via a variation of the so called splatting technique. To handle the problem of highly anisotropic flux tube configurations we adapt a geometry-driven surface illumination algorithm that places local light sources for effective feature-enhanced visualization.

  12. Isolation, electron microscopic imaging, and 3-D visualization of native cardiac thin myofilaments.

    PubMed

    Spiess, M; Steinmetz, M O; Mandinova, A; Wolpensinger, B; Aebi, U; Atar, D

    1999-06-15

    An increasing number of cardiac diseases are currently pinpointed to reside at the level of the thin myofilaments (e.g., cardiomyopathies, reperfusion injury). Hence the aim of our study was to develop a new method for the isolation of mammalian thin myofilaments suitable for subsequent high-resolution electron microscopic imaging. Native cardiac thin myofilaments were extracted from glycerinated porcine myocardial tissue in the presence of protease inhibitors. Separation of thick and thin myofilaments was achieved by addition of ATP and several centrifugation steps. Negative staining and subsequent conventional and scanning transmission electron microscopy (STEM) of thin myofilaments permitted visualization of molecular details; unlike conventional preparations of thin myofilaments, our method reveals the F-actin moiety and allows direct recognition of thin myofilament-associated porcine cardiac troponin complexes. They appear as "bulges" at regular intervals of approximately 36 nm along the actin filaments. Protein analysis using SDS-polyacrylamide gel electrophoresis revealed that only approximately 20% troponin I was lost during the isolation procedure. In a further step, 3-D helical reconstructions were calculated using STEM dark-field images. These 3-D reconstructions will allow further characterization of molecular details, and they will be useful for directly visualizing molecular alterations related to diseased cardiac thin myofilaments (e.g., reperfusion injury, alterations of Ca2+-mediated tropomyosin switch). Copyright 1999 Academic Press.

  13. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control of an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing with human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use with 3-D astronomical data.

  14. Simulating 3D deformation using connected polygons

    NASA Astrophysics Data System (ADS)

    Tarigan, J. T.; Jaya, I.; Hardi, S. M.; Zamzami, E. M.

    2018-03-01

    In modern 3D applications, interaction between the user and the virtual world is an important factor in increasing realism. This interaction can be visualized in many forms, one of which is object deformation. There are many ways to simulate object deformation in a virtual 3D world, each with a different level of realism and performance. Our objective is to present a new method of simulating object deformation using graph-connected polygons. In this solution, each object contains multiple levels of polygons at different levels of volume. The proposed solution focuses on performance while maintaining an acceptable level of realism. In this paper, we present the design and implementation of our solution and show that it is usable in performance-sensitive 3D applications such as games and virtual reality.

  15. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of fracture network systems in 3D. The developed software modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, have been developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies revealed that these modules respectively support construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps and management of borehole data. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  16. A 3D particle visualization system for temperature management

    NASA Astrophysics Data System (ADS)

    Lange, B.; Rodriguez, N.; Puech, W.; Rey, H.; Vasques, X.

    2011-01-01

This paper deals with a 3D visualization technique proposed to analyze and manage the energy efficiency of a data center. Data are extracted from sensors located in the IBM Green Data Center in Montpellier, France. These sensors measure different quantities such as hygrometry, pressure and temperature. We want to visualize in real time the large amount of data produced by these sensors. A visualization engine has been designed, based on a particle system and a client-server paradigm. To solve performance problems, a level-of-detail (LOD) solution has been developed, based on the earlier work introduced by J. Clark in 1976. In this paper we introduce the particle method used for this work and subsequently explain the different simplification methods applied to improve our solution.
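Clark-style level-of-detail selection, which such simplification methods build on, amounts to choosing coarser geometry as an object's projected screen size shrinks. A minimal sketch, where the pixel thresholds and pinhole camera model are illustrative assumptions rather than the paper's values:

```python
import math

def select_lod(distance, radius, fov_deg=60.0, screen_px=1080,
               thresholds=(200.0, 50.0, 10.0)):
    """Pick a level of detail from the object's approximate on-screen size,
    in the spirit of Clark (1976): coarser geometry for smaller projections.

    thresholds are projected diameters in pixels; level 0 is the finest.
    """
    # Approximate projected diameter of the bounding sphere in pixels.
    pixels_per_radian = screen_px / math.radians(fov_deg)
    projected_px = 2.0 * math.atan2(radius, distance) * pixels_per_radian
    for level, t in enumerate(thresholds):
        if projected_px >= t:
            return level
    return len(thresholds)  # coarsest fallback for distant objects
```

Applied per particle cluster, this lets the engine spend its rendering budget on sensors the viewer is actually close to.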

  17. An integrated 3-Dimensional Genome Modeling Engine for data-driven simulation of spatial genome organization.

    PubMed

    Szałaj, Przemysław; Tang, Zhonghui; Michalski, Paul; Pietal, Michal J; Luo, Oscar J; Sadowski, Michał; Li, Xingwang; Radew, Kamen; Ruan, Yijun; Plewczynski, Dariusz

    2016-12-01

    ChIA-PET is a high-throughput mapping technology that reveals long-range chromatin interactions and provides insights into the basic principles of spatial genome organization and gene regulation mediated by specific protein factors. Recently, we showed that a single ChIA-PET experiment provides information at all genomic scales of interest, from the high-resolution locations of binding sites and enriched chromatin interactions mediated by specific protein factors, to the low resolution of nonenriched interactions that reflect topological neighborhoods of higher-order chromosome folding. This multilevel nature of ChIA-PET data offers an opportunity to use multiscale 3D models to study structural-functional relationships at multiple length scales, but doing so requires a structural modeling platform. Here, we report the development of 3D-GNOME (3-Dimensional Genome Modeling Engine), a complete computational pipeline for 3D simulation using ChIA-PET data. 3D-GNOME consists of three integrated components: a graph-distance-based heat map normalization tool, a 3D modeling platform, and an interactive 3D visualization tool. Using ChIA-PET and Hi-C data derived from human B-lymphocytes, we demonstrate the effectiveness of 3D-GNOME in building 3D genome models at multiple levels, including the entire genome, individual chromosomes, and specific segments at megabase (Mb) and kilobase (kb) resolutions of single average and ensemble structures. Further incorporation of CTCF-motif orientation and high-resolution looping patterns in 3D simulation provided additional reliability of potential biologically plausible topological structures. © 2016 Szałaj et al.; Published by Cold Spring Harbor Laboratory Press.

  18. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    NASA Technical Reports Server (NTRS)

    Bancroft, Gordon; Plessel, Todd; Merritt, Fergus; Watson, Val

    1989-01-01

Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for visualizing computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. Three visualization techniques (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that a high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers.

  19. Noise Pollution Aspects of Barge, Railroad, and Truck Transportation,

    DTIC Science & Technology

    1975-04-01

    attenuation alone. (f) Commins, et al., conclude that: (1) a lush vegetative cover will reduce dB(A) levels by a factor of 4.5-4.8 dB(A) instead of 3 dB...prevailed (Figure 10c). Visual evidence of how much lapse rates affect dB(A) levels experienced at varying distances from a line source of sound is made...possible by comparing the dB(A) levels shown in Figures 9a, b and c at varying distances that would occur based solely on geometric attenuation (Figure 11

  20. Recent developments in stereoscopic and holographic 3D display technologies

    NASA Astrophysics Data System (ADS)

    Sarma, Kalluri

    2014-06-01

    Currently, there is increasing interest in the development of high-performance 3D display technologies to support a variety of applications including medical imaging, scientific visualization, gaming, education, entertainment, air traffic control and remote operations in 3D environments. In this paper we review the attributes of the various 3D display technologies, including stereoscopic and holographic 3D; human factors issues of stereoscopic 3D; the challenges in realizing holographic 3D displays; and recent progress in these technologies.

  1. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    PubMed

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.

  2. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    NASA Astrophysics Data System (ADS)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyberspace monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validating learning and classification results and for understanding the human-autonomous system relationship. Scene recognition is performed by feeding synthetically generated data to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can support a human operator and be evaluated in a detailed and systematic way.

  3. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    NASA Technical Reports Server (NTRS)

    Blumenfeld, E. H.; Evans, C. A.; Oshel, E. R.; Liddle, D. A.; Beaulieu, K.; Zeigler, R. A.; Hanna, R. D.; Ketcham, R. A.

    2016-01-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website.
Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and exterior features, offers scientists a preliminary research tool for targeted sub-sample requests, and additionally is a visually engaging interactive tool for bringing astromaterials science to the public.

  4. An Interdisciplinary Method for the Visualization of Novel High-Resolution Precision Photography and Micro-XCT Data Sets of NASA's Apollo Lunar Samples and Antarctic Meteorite Samples to Create Combined Research-Grade 3D Virtual Samples for the Benefit of Astromaterials Collections Conservation, Curation, Scientific Research and Education

    NASA Astrophysics Data System (ADS)

    Blumenfeld, E. H.; Evans, C. A.; Zeigler, R. A.; Righter, K.; Beaulieu, K. R.; Oshel, E. R.; Liddle, D. A.; Hanna, R.; Ketcham, R. A.; Todd, N. S.

    2016-12-01

    New technologies make possible the advancement of documentation and visualization practices that can enhance conservation and curation protocols for NASA's Astromaterials Collections. With increasing demands for accessibility to updated comprehensive data, and with new sample return missions on the horizon, it is of primary importance to develop new standards for contemporary documentation and visualization methodologies. Our interdisciplinary team has expertise in the fields of heritage conservation practices, professional photography, photogrammetry, imaging science, application engineering, data curation, geoscience, and astromaterials curation. Our objective is to create virtual 3D reconstructions of Apollo Lunar and Antarctic Meteorite samples that are a fusion of two state-of-the-art data sets: the interior view of the sample by collecting Micro-XCT data and the exterior view of the sample by collecting high-resolution precision photography data. These new data provide researchers an information-rich visualization of both compositional and textural information prior to any physical sub-sampling. Since January 2013 we have developed a process that resulted in the successful creation of the first image-based 3D reconstruction of an Apollo Lunar Sample correlated to a 3D reconstruction of the same sample's Micro-XCT data, illustrating that this technique is both operationally possible and functionally beneficial. In May of 2016 we began a 3-year research period during which we aim to produce Virtual Astromaterials Samples for 60 high-priority Apollo Lunar and Antarctic Meteorite samples and serve them on NASA's Astromaterials Acquisition and Curation website. 
Our research demonstrates that research-grade Virtual Astromaterials Samples are beneficial in preserving for posterity a precise 3D reconstruction of the sample prior to sub-sampling, which greatly improves documentation practices, provides unique and novel visualization of the sample's interior and exterior features, offers scientists a preliminary research tool for targeted sub-sample requests, and additionally is a visually engaging interactive tool for bringing astromaterials science to the public.

  5. Using 3D Printers to Model Earth Surface Topography for Increased Student Understanding and Retention

    NASA Astrophysics Data System (ADS)

    Thesenga, David; Town, James

    2014-05-01

    In February 2000, the Space Shuttle Endeavour flew a specially modified radar system during an 11-day mission. The purpose of the multinational Shuttle Radar Topography Mission (SRTM) was to "obtain elevation data on a near-global scale to generate the most complete high-resolution digital topographic database of Earth" by using radar interferometry. The data and resulting products are now publicly available for download and give a view of the landscape with vegetation, buildings, and other structures removed. This new view of the Earth's topography allows us to see previously unmapped or poorly mapped regions of the Earth, as well as providing a level of detail that was previously unattainable using traditional topographic mapping techniques. Understanding and appreciating geographic terrain is a complex but necessary requirement for middle-school-aged (11-14 year old) students. Abstract in nature, topographic maps and other 2D renderings of the Earth's surface and features do not address the inherent spatial challenges faced by a concrete learner, and traditional methods of teaching can at times exacerbate the problem. Technological solutions such as 3D imaging in programs like Google Earth are effective but lack the tactile realness that can make a large difference in learning comprehension and retention for these young students. First developed in the 1980s, 3D printers were not a commercial reality until recently, and the rapid rise in interest has driven down the cost. With the advent of sub-US$1500 3D printers, this technology has moved out of the high-end marketplace and into the local office supply store. Schools across the US and elsewhere in the world are adding 3D printers to their technological workspaces, and students have begun rapid-prototyping and manufacturing a variety of projects. This project attempted to streamline the process of transforming SRTM data from GeoTIFF format by way of Python code. The resulting data were then imported into a CAD-based program for visualization and exported as a .stl file for 3D printing. A proposal for improving the method and making it more accessible to middle-school-aged students is provided. Using the SRTM data to print a hand-held visual representation of a portion of the Earth's surface would utilize existing technology in the school and alter how topography can be taught in the classroom. Combining 2D paper representations, on-screen 3D visualizations, and 3D hand-held models gives students the opportunity to truly grasp and retain the information being provided.
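The elevation-grid-to-STL step described above can be sketched in Python. This is a minimal illustration of the idea, not the project's code: real use would read the GeoTIFF with a raster library such as rasterio and add a solid base so the print is watertight.

```python
import numpy as np

def heightmap_to_stl(z, name="terrain", xy_scale=1.0, z_scale=1.0):
    """Triangulate a 2D elevation grid (e.g. SRTM data) into an ASCII STL string.

    Each grid cell becomes two triangles; slicers recompute normals, so they
    are emitted as zeros here. A sketch, not a production exporter.
    """
    z = np.asarray(z, float) * z_scale
    rows, cols = z.shape

    def vert(r, c):
        return (c * xy_scale, r * xy_scale, z[r, c])

    facets = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            a, b = vert(r, c), vert(r, c + 1)
            d, e = vert(r + 1, c), vert(r + 1, c + 1)
            facets += [(a, b, d), (b, e, d)]  # two triangles per grid cell

    lines = [f"solid {name}"]
    for tri in facets:
        lines.append("  facet normal 0 0 0")
        lines.append("    outer loop")
        for x, y, h in tri:
            lines.append(f"      vertex {x:.6f} {y:.6f} {h:.6f}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# A tiny 2x2 heightfield: one grid cell, hence two triangular facets.
stl = heightmap_to_stl([[0, 1], [1, 2]])
```

Scaling `z_scale` up exaggerates relief, which is often necessary for hand-held terrain prints to be readable.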

  6. High End Visualization of Geophysical Datasets Using Immersive Technology: The SIO Visualization Center.

    NASA Astrophysics Data System (ADS)

    Newman, R. L.

    2002-12-01

    How many images can you display at one time with Power Point without getting "postage stamps"? Do you have fantastic datasets that you cannot view because your computer is too slow/small? Do you assume a few 2-D images of a 3-D picture are sufficient? High-end visualization centers can minimize and often eliminate these problems. The new visualization center [http://siovizcenter.ucsd.edu] at Scripps Institution of Oceanography [SIO] immerses users into a virtual world by projecting 3-D images onto a Panoram GVR-120E wall-sized floor-to-ceiling curved screen [7' x 23'] that has 3.2 mega-pixels of resolution. The Infinite Reality graphics subsystem is driven by a single-pipe SGI Onyx 3400 with a system bandwidth of 44 Gbps. The Onyx is powered by 16 MIPS R12K processors and 16 GB of addressable memory. The system is also equipped with transmitters and LCD shutter glasses which permit stereographic 3-D viewing of high-resolution images. This center is ideal for groups of up to 60 people who can simultaneously view these large-format images. A wide range of hardware and software is available, giving the users a totally immersive working environment in which to display, analyze, and discuss large datasets. The system enables simultaneous display of video and audio streams from sources such as SGI megadesktop and stereo megadesktop, S-VHS video, DVD video, and video from a Macintosh or PC. For instance, one-third of the screen might be displaying S-VHS video from a remotely-operated-vehicle [ROV], while the remaining portion of the screen might be used for an interactive 3-D flight over the same parcel of seafloor. The video and audio combinations using this system are numerous, allowing users to combine and explore data and images in innovative ways, greatly enhancing scientists' ability to visualize, understand and collaborate on complex datasets. 
In the near future, with the rapid growth in networking speeds in the US, it will be possible for Earth science departments to collaborate effectively while limiting the amount of physical travel required. This includes porting visualization content to the popular, low-cost Geowall visualization systems, and providing web-based access to databanks of stock geoscience visualizations.

  7. High-contrast differentiation resolution 3D imaging of rodent brain by X-ray computed microtomography

    NASA Astrophysics Data System (ADS)

    Zikmund, T.; Novotná, M.; Kavková, M.; Tesařová, M.; Kaucká, M.; Szarowská, B.; Adameyko, I.; Hrubá, E.; Buchtová, M.; Dražanová, E.; Starčuk, Z.; Kaiser, J.

    2018-02-01

    Biomedically focused brain research is largely performed on laboratory mice, given the high homology between the human and mouse genomes. The brain has an intricate and highly complex geometrical structure that is hard to display and analyse using only 2D methods, so fast and efficient methods of 3D brain visualization will be crucial for neurobiology in the future. Post-mortem analysis of experimental animals' brains usually involves techniques such as magnetic resonance imaging and computed tomography, employed to visualize abnormalities in the brain's morphology or reparation processes. X-ray computed microtomography (micro CT) plays an important role in the 3D imaging of internal structures of a large variety of soft and hard tissues. This non-destructive technique is applied in biological studies because lab-based CT devices can achieve a resolution of several micrometers. However, the technique is always used together with visualization methods based on tissue staining, which differentiate the soft tissues in biological samples. Here, a modified chemical contrasting protocol for micro CT is introduced as the best tool for ex vivo 3D imaging of the post-mortem mouse brain. In this way, micro CT provides a high spatial resolution of the brain's microscopic anatomy together with high tissue-differentiation contrast, enabling the identification of more anatomical details in the brain. As micro CT allows subsequent reconstruction of the brain structures into a coherent 3D model, small morphological changes can be placed in the context of their mutual spatial relationships.

  8. Computer vision for RGB-D sensors: Kinect and its applications.

    PubMed

    Shao, Ling; Han, Jungong; Xu, Dong; Shotton, Jamie

    2013-10-01

    With the Kinect sensor, high-resolution depth and visual (RGB) sensing has become available for widespread use as an off-the-shelf technology. This special issue is specifically dedicated to new algorithms and/or new applications based on the Kinect (or similar RGB-D) sensors. In total, we received over ninety submissions from more than twenty countries around the world. The submissions cover a wide range of areas, including object and scene classification, 3-D pose estimation, visual tracking, data fusion, human action/activity recognition, 3-D reconstruction, mobile robotics, and so on. After two rounds of review by at least two (mostly three) expert reviewers for each paper, the Guest Editors have selected twelve high-quality papers to be included in this highly popular special issue. The papers that comprise this issue are briefly summarized.

  9. The Effect of Dioptric Blur on Reading Performance

    PubMed Central

    Chung, Susana T.L.; Jarvis, Samuel H.; Cheung, Sing-Hang

    2013-01-01

    Little is known about the systematic impact of blur on reading performance. The purpose of this study was to quantify the effect of dioptric blur on reading performance in a group of normally sighted young adults. We measured monocular reading performance and visual acuity for 19 observers with normal vision, for five levels of optical blur (no blur, 0.5, 1, 2 and 3D). Dioptric blur was induced using convex trial lenses placed in front of the testing eye, with the pupil dilated and in the presence of a 3 mm artificial pupil. Reading performance was assessed using eight versions of the MNREAD Acuity Chart. For each level of dioptric blur, observers read aloud sentences on one of these charts, from large to small print. Reading time for each sentence and the number of errors made were recorded and converted to reading speed in words per minute. Visual acuity was measured using 4-orientation Landolt C stimuli. For all levels of dioptric blur, reading speed increased with print size up to a certain print size and then remained constant at the maximum reading speed. By fitting nonlinear mixed-effects models, we found that the maximum reading speed was minimally affected by blur up to 2D, but was ~23% slower for 3D of blur. When the amount of blur increased from 0 (no blur) to 3D, the threshold print size (the print size corresponding to 80% of the maximum reading speed) increased from 0.01 to 0.88 logMAR, reading acuity worsened from −0.16 to 0.58 logMAR, and visual acuity worsened from −0.19 to 0.64 logMAR. The similar rates of change with blur for threshold print size, reading acuity and visual acuity imply that visual acuity is a good predictor of threshold print size and reading acuity. Like visual acuity, reading performance is susceptible to the degrading effect of optical blur. For increasing amounts of blur, larger print sizes are required to attain the maximum reading speed. PMID:17442363

  10. A 3D visualization of spatial relationship between geological structure and groundwater chemical profile around Iwate volcano, Japan: based on the ARCGIS 3D Analyst

    NASA Astrophysics Data System (ADS)

    Shibahara, A.; Ohwada, M.; Itoh, J.; Kazahaya, K.; Tsukamoto, H.; Takahashi, M.; Morikawa, N.; Takahashi, H.; Yasuhara, M.; Inamura, A.; Oyama, Y.

    2009-12-01

    We established a 3D geological and hydrological model around Iwate volcano to visualize the 3D relationships between subsurface structure and the groundwater profile. Iwate volcano is a typical polygenetic volcano located in NE Japan; its body is composed of two stratovolcanoes that have experienced sector collapse several times. Because of this complex structure, groundwater flow around Iwate volcano is strongly controlled by the subsurface structure. For example, Kazahaya and Yasuhara (1999) clarified that shallow groundwater on the north and east flanks of Iwate volcano is recharged at the mountaintop, and that these flow systems are confined to the north and east because of the structure left by the collapse of the younger volcanic body. In addition, Ohwada et al. (2006) found that the shallow groundwater on the north and east flanks has relatively high concentrations of major chemical components and high 3He/4He ratios. In this study, we succeeded in visualizing the spatial relationship between subsurface structure and the chemical profiles of the shallow and deep groundwater systems using a 3D model on a GIS. In the study region, a number of geological and hydrological datasets, such as boring log data and groundwater chemical profiles, have been reported. All these paper records were digitized, converted to meshed data on the GIS, and plotted in three-dimensional space to visualize their spatial distribution. We also input a digital elevation model (DEM) around Iwate volcano issued by the Geographical Survey Institute of Japan, and digital geological maps issued by the Geological Survey of Japan, AIST. All 3D models are converted into VRML format and can be used as a versatile dataset on a personal computer.

  11. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements, obtained from x-rays, to quantify spinal deformation. Working only with 2D measurements clearly limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical solution for obtaining 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with data obtained from CT scout scans. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine the 2D convex hulls of the vertebrae.
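The final step of the second method, reducing detected vertebral edge points to a 2D convex hull, can be illustrated with the textbook monotone-chain algorithm (a standard technique, not the authors' implementation):

```python
def convex_hull_2d(points):
    """Andrew's monotone-chain convex hull of a set of 2D points.

    Returns hull vertices in counter-clockwise order, collinear points
    excluded. Edge points detected in CT scout views would play the role
    of the input here.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints shared, drop duplicates

# Interior point (1, 1) is discarded; the four corners form the hull.
hull = convex_hull_2d([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```

Per-vertebra hulls like this give compact convex constraints that a reconstruction can then be fitted against.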

  12. Use of Colour and Interactive Animation in Learning 3D Vectors

    ERIC Educational Resources Information Center

    Iskander, Wejdan; Curtis, Sharon

    2005-01-01

    This study investigated the effects of two computer-implemented techniques (colour and interactive animation) on learning 3D vectors. The participants were 43 female Saudi Arabian high school students. They were pre-tested on 3D vectors using a paper questionnaire that consisted of calculation and visualization types of questions. The students…

  13. Susceptibility weighted imaging of cartilage canals in porcine epiphyseal growth cartilage ex vivo and in vivo.

    PubMed

    Nissi, Mikko J; Toth, Ferenc; Zhang, Jinjin; Schmitter, Sebastian; Benson, Michael; Carlson, Cathy S; Ellermann, Jutta M

    2014-06-01

    High-resolution visualization of cartilage canals has been restricted to histological methods and contrast-enhanced imaging. In this study, the feasibility of non-contrast-enhanced susceptibility weighted imaging (SWI) for visualization of the cartilage canals was investigated ex vivo at 9.4 T, further explored at 7 and 3 T, and demonstrated in vivo at 7 T, using a porcine animal model. SWI scans of specimens of distal femur and humerus from 1- to 8-week-old piglets were conducted at 9.4 T using a 3D GRE sequence and SWI post-processing. The stifle joints of a 2-week-old piglet were scanned ex vivo at 7 and 3 T. Finally, the same sites of a 3-week-old piglet were scanned in vivo at 7 T under general anesthesia using the vendor-provided sequences. High-contrast visualization of the cartilage canals was obtained ex vivo, especially at higher field strengths; the results were confirmed histologically. In vivo feasibility was demonstrated at 7 T, and comparison of ex vivo scans at 3 and 7 T indicated the feasibility of using SWI at 3 T. High-resolution 3D visualization of cartilage canals was thus demonstrated using SWI. This demonstration of fully noninvasive visualization opens new avenues to explore skeletal maturation and the role of the vascular supply in diseases such as osteochondrosis. Copyright © 2013 Wiley Periodicals, Inc.

  14. Spatial Skill Profile of Mathematics Pre-Service Teachers

    NASA Astrophysics Data System (ADS)

    Putri, R. O. E.

    2018-01-01

    This study aimed to investigate the spatial intelligence of mathematics pre-service teachers and find the instructional strategy that best facilitates this aspect. Data were collected from 35 mathematics pre-service teachers. The Purdue Spatial Visualization Test (PSVT) was used to identify their spatial skill. Statistical analysis indicates that more than 50% of the participants possessed an intermediate level of spatial skill, whereas the others were at high and low levels. The results also show a positive correlation between spatial skill and mathematics ability, especially in geometrical problem solving: students with high spatial skill tend to perform better mathematically than those at the other two levels. Furthermore, qualitative analysis reveals that most students have difficulty in manipulating geometrical objects mentally, a problem that appears mostly in students with intermediate and low spatial skill. The observation revealed that working with 3-D geometrical figures is the best method for overcoming the mental manipulation problem and developing spatial visualization. Computer applications can also be used to improve students' spatial skill.

  15. Trapezius muscle activity increases during near work activity regardless of accommodation/vergence demand level.

    PubMed

    Richter, H O; Zetterberg, C; Forsman, M

    2015-07-01

    To investigate if trapezius muscle activity increases over time during visually demanding near work. The vision task consisted of sustained focusing on a contrast-varying black and white Gabor grating. Sixty-six participants with a median age of 38 (range 19-47) fixated the grating from a distance of 65 cm (1.5 D) during four counterbalanced 7-min periods: binocularly through -3.5 D lenses, and monocularly through -3.5 D, 0 D and +3.5 D. Accommodation, heart rate variability and trapezius muscle activity were recorded in parallel. General estimating equation analyses showed that trapezius muscle activity increased significantly over time in all four lens conditions. A concurrent effect of accommodation response on trapezius muscle activity was observed with the minus lenses irrespective of whether incongruence between accommodation and convergence was present or not. Trapezius muscle activity increased significantly over time during the near work task. The increase in muscle activity over time may be caused by an increased need of mental effort and visual attention to maintain performance during the visual tasks to counteract mental fatigue.

  16. [Visualization of Anterolateral Ligament of the Knee Using 3D Reconstructed Variable Refocus Flip Angle-Turbo Spin Echo T2 Weighted Image].

    PubMed

    Yokosawa, Kenta; Sasaki, Kana; Muramatsu, Koichi; Ono, Tomoya; Izawa, Hiroyuki; Hachiya, Yudo

    2016-05-01

    The anterolateral ligament (ALL) is one of the lateral structures of the knee that contributes to the internal rotational stability of the tibia, and several recent reports have re-emphasized its importance. We visualized the ALL on 3D-MRI in 32 knees of 27 healthy volunteers (23 male knees, 4 female knees; mean age: 37 years). 3D-MRI was performed using a 1.5-T scanner [T2-weighted image (WI), SPACE: Sampling Perfection with Application-optimized Contrast using different flip angle Evolutions] with the knee in the extended position. The visualization rate of the ALL, its mean angle to the lateral collateral ligament (LCL), and its width and thickness at the joint level were investigated. The visualization rate was 100%. The mean angle to the LCL was 10.6 degrees. The mean width and thickness of the ALL were 6.4 mm and 1.0 mm, respectively. The ALL is a very thin ligament with a somewhat oblique course between the lateral femoral epicondyle and the mid-third area of the lateral tibial condyle; therefore, slice thickness and slice angle can easily affect its visualization. 3D-MRI enables acquisition of thin-slice imaging data over a relatively short time, and arbitrary sections aligned with the course of the ALL can be selected afterwards.

  17. Progressive 3D shape abstraction via hierarchical CSG tree

    NASA Astrophysics Data System (ADS)

    Chen, Xingyou; Tang, Jin; Li, Chenglong

    2017-06-01

    A constructive solid geometry (CSG) tree model is proposed to progressively abstract the 3D geometric shape of a general object from a 2D image. Unlike conventional approaches, our method applies to general objects without the need for massive CAD model collections, and represents object shapes in a coarse-to-fine manner that allows users to view intermediate shape representations at any time. It stands in a transitional position between 2D image features and CAD models: it benefits from state-of-the-art object detection approaches, provides a better CAD model initialization for finer fitting, and estimates the 3D shape and pose parameters of the object at different levels according to a visual perception objective. The two main contributions are the application of the CSG construction procedure to visual perception, and the extension of the object estimation result into a model more flexible and expressive than 2D/3D primitive shapes. Experimental results demonstrate the feasibility and effectiveness of the proposed approach.

  18. MRI of the hip at 7T: feasibility of bone microarchitecture, high-resolution cartilage, and clinical imaging.

    PubMed

    Chang, Gregory; Deniz, Cem M; Honig, Stephen; Egol, Kenneth; Regatte, Ravinder R; Zhu, Yudong; Sodickson, Daniel K; Brown, Ryan

    2014-06-01

    To demonstrate the feasibility of performing bone microarchitecture, high-resolution cartilage, and clinical imaging of the hip at 7T. This study had Institutional Review Board approval. Using an 8-channel coil constructed in-house, we imaged the hips of 15 subjects on a 7T magnetic resonance imaging (MRI) scanner. We applied: 1) a T1-weighted 3D fast low angle shot (3D FLASH) sequence (0.23 × 0.23 × 1-1.5 mm³) for bone microarchitecture imaging; 2) T1-weighted 3D FLASH (water excitation) and volumetric interpolated breath-hold examination (VIBE) sequences (0.23 × 0.23 × 1.5 mm³) with saturation or inversion recovery-based fat suppression for cartilage imaging; 3) 2D intermediate-weighted fast spin-echo (FSE) sequences without and with fat saturation (0.27 × 0.27 × 2 mm) for clinical imaging. Bone microarchitecture images allowed visualization of individual trabeculae within the proximal femur. Cartilage was well visualized and fat was well suppressed on FLASH and VIBE sequences. FSE sequences allowed visualization of cartilage, the labrum (including cartilage and labral pathology), joint capsule, and tendons. This is the first study to demonstrate the feasibility of performing a clinically comprehensive hip MRI protocol at 7T, including high-resolution imaging of bone microarchitecture and cartilage, as well as clinical imaging. Copyright © 2013 Wiley Periodicals, Inc.

  19. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.
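The GeoTime-style view described above maps each track's two spatial variables plus time into a single 3D polyline (the "space-time cube"), so trajectories that overlap on a 2D map separate along the time axis. A minimal sketch of that coordinate lift, assuming numpy and hypothetical track data:

```python
import numpy as np

def space_time_cube(track, t0=None):
    """Lift a 2D trajectory [(x, y, t), ...] into 3D points (x, y, z),
    where z encodes elapsed time. Paths that overlap in 2D separate in 3D."""
    pts = np.asarray(track, dtype=float)
    t0 = pts[:, 2].min() if t0 is None else t0
    return np.column_stack([pts[:, 0], pts[:, 1], pts[:, 2] - t0])

# Two tracks crossing the same (x, y) location at different times:
a = space_time_cube([(0, 0, 100), (1, 1, 110), (2, 2, 120)])
b = space_time_cube([(2, 0, 100), (1, 1, 115), (0, 2, 130)], t0=100)
# Both pass through (1, 1) in 2D, but at different heights in the cube.
```

Rendering the resulting polylines in any 3D viewer reproduces the disambiguation effect the study evaluates.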

  20. MATISSE: a web-based tool to access, visualize and analyze high-resolution minor bodies observations

    NASA Astrophysics Data System (ADS)

    Zinzi, Angelo; Capria, Maria Teresa; Palomba, Ernesto; Antonelli, Lucio Angelo; Giommi, Paolo

    2016-07-01

    In recent years, planetary exploration missions have acquired data from minor bodies (i.e., dwarf planets, asteroids and comets) at a level of detail never reached before. Since these objects often have very irregular shapes (as in the case of comet 67P/Churyumov-Gerasimenko, target of the ESA Rosetta mission), "classical" two-dimensional projections of observations are difficult to understand. With the aim of providing the scientific community with a tool to access, visualize and analyze data in a new way, the ASI Science Data Center started to develop MATISSE (Multi-purposed Advanced Tool for the Instruments for the Solar System Exploration - http://tools.asdc.asi.it/matisse.jsp) in late 2012. This tool allows 3D web-based visualization of data acquired by planetary exploration missions: the output can be either the straightforward projection of the selected observation over the shape model of the target body or the visualization of a higher-order product (average/mosaic, difference, ratio, RGB) computed directly online with MATISSE. Standard outputs of the tool also comprise downloadable files to be used with GIS software (GeoTIFF and ENVI formats) and very high-resolution 3D files to be viewed with the free software ParaView. So far, the first and most frequent use of the tool has been the visualization of data acquired by the VIRTIS-M instrument onboard Rosetta observing comet 67P. The success of this task, reflected in the good number of published works that used images made with MATISSE, confirmed the need for a different approach to correctly visualize data coming from irregularly shaped bodies. In the near future the datasets available to MATISSE will be extended, starting with the addition of VIR-Dawn observations of both Vesta and Ceres, and standard protocols will also be used to access data stored in external repositories, such as NASA ODE and the Planetary VO.
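The higher-order products mentioned (difference, ratio) reduce to per-point arithmetic on two co-registered observations of the same surface. A minimal numpy sketch, illustrative only and not MATISSE's actual implementation (function names are made up):

```python
import numpy as np

def difference(obs_a, obs_b):
    """Per-facet difference of two co-registered observation arrays."""
    return np.asarray(obs_a, float) - np.asarray(obs_b, float)

def ratio(obs_a, obs_b, eps=1e-12):
    """Per-facet ratio; eps guards against division by zero where
    the second observation has no signal."""
    b = np.asarray(obs_b, float)
    return np.asarray(obs_a, float) / np.where(np.abs(b) < eps, eps, b)

a = np.array([[0.2, 0.4], [0.6, 0.8]])
b = np.array([[0.1, 0.4], [0.3, 0.0]])
d = difference(a, b)   # element-wise a - b
r = ratio(a, b)        # finite everywhere, even where b == 0
```

In MATISSE these products are then projected over the target body's shape model rather than viewed as flat arrays.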

  1. BioSig3D: High Content Screening of Three-Dimensional Cell Culture Models

    PubMed Central

    Bilgin, Cemal Cagatay; Fontenay, Gerald; Cheng, Qingsu; Chang, Hang; Han, Ju; Parvin, Bahram

    2016-01-01

    BioSig3D is a computational platform for high-content screening of three-dimensional (3D) cell culture models that are imaged in full 3D volume. It provides an end-to-end solution for designing high content screening assays, based on colony organization that is derived from segmentation of nuclei in each colony. BioSig3D also enables visualization of raw and processed 3D volumetric data for quality control, and integrates advanced bioinformatics analysis. The system consists of multiple computational and annotation modules that are coupled together with a strong use of controlled vocabularies to reduce ambiguities between different users. It is a web-based system that allows users to: design an experiment by defining experimental variables, upload a large set of volumetric images into the system, analyze and visualize the dataset, and either display computed indices as a heatmap, or phenotypic subtypes for heterogeneity analysis, or download computed indices for statistical analysis or integrative biology. BioSig3D has been used to profile baseline colony formations with two experiments: (i) morphogenesis of a panel of human mammary epithelial cell lines (HMEC), and (ii) heterogeneity in colony formation using an immortalized non-transformed cell line. These experiments reveal intrinsic growth properties of well-characterized cell lines that are routinely used for biological studies. BioSig3D is being released with seed datasets and video-based documentation. PMID:26978075

  2. Visualization of stereoscopic anatomic models of the paranasal sinuses and cervical vertebrae from the surgical and procedural perspective.

    PubMed

    Chen, Jian; Smith, Andrew D; Khan, Majid A; Sinning, Allan R; Conway, Marianne L; Cui, Dongmei

    2017-11-01

    Recent improvements in three-dimensional (3D) virtual modeling software allow anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including the facial skull, nasal cavity, septum, turbinates, paranasal sinuses, optic nerve, pituitary gland, carotid artery, cervical vertebrae, atlanto-axial joint, cervical spinal cord, cervical nerve roots, and vertebral artery, which can be used to teach clinical trainees (students, residents, and fellows) approaches for trans-sphenoidal pituitary surgery and cervical spine injection procedures. Volume rendering, surface rendering, and a new rendering technique, semi-auto-combined, were applied in the study. These models enable visualization, manipulation, and interaction on a computer and can be presented in a stereoscopic 3D virtual environment, which makes users feel as if they are inside the model. Anat Sci Educ 10: 598-606. © 2017 American Association of Anatomists.

  3. Mental visualization of objects from cross-sectional images

    PubMed Central

    Wu, Bing; Klatzky, Roberta L.; Stetten, George D.

    2011-01-01

    We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in-situ vs. ex-situ, which differed in whether cross sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex-situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level. PMID:22217386

  4. The Auckland Cataract Study: 2 year postoperative assessment of aspects of clinical, visual, corneal topographic and satisfaction outcomes

    PubMed Central

    Thompson, A M; Sachdev, N; Wong, T; Riley, A F; Grupcheva, C N; McGhee, C N

    2004-01-01

    Aim: To assess clinical, visual, computerised corneal topographic, and subjective satisfaction with visual acuity, in a cohort of subjects 2 years after phacoemulsification surgery in a public hospital in New Zealand. Methods: Prospective study of a representative sample of 97 subjects (20%) randomly selected from 480 subjects in the original Auckland Cataract Study (ACS) cohort. The clinical assessment protocol was identical to the ACS and included an extensive questionnaire to enable direct comparisons to be made between the two groups. Results: The study population was predominantly female (66%) with a mean age of 76.3 (SD 9.9) years. New systemic and ocular disease affected 18.4% and 10.3% of subjects respectively, and 10.3% required referral to either a general practitioner (2.1%) or ophthalmologist (8.2%). Mean best spectacle corrected visual acuity (BSCVA) was 0.2 (0.2) logMAR units (6/9 Snellen equivalent), with mean spherical equivalent −0.37 (1.01) dioptres (D) and astigmatism −1.07 (0.70) D 2 years postoperatively, compared to mean BSCVA 0.1 (0.2) logMAR units (6/7.5 Snellen equivalent), spherical equivalent −0.59 (1.07) D, and astigmatism −1.14 (0.77) D 4 weeks after surgery. 94.9% of subjects retained a BSCVA of 6/12 or better, irrespective of pre-existing ocular disease. The overall posterior capsule opacification (PCO) rate was 20.4% and this was visually insignificant in all but 3.1% of eyes that had already undergone Nd:YAG posterior capsulotomy. Orbscan II elevation technology demonstrated corneal stability 2 years after uncomplicated phacoemulsification. Although corneal astigmatism was eliminated in approximately half of the subjects 1 month postoperatively, astigmatism showed a tendency to regress towards the preoperative level with local corneal thickening at the site of incision 2 years after cataract surgery. Of fellow eyes, 61.2% had undergone cataract surgery. 
Overall, 75.3% of subjects were moderately to very satisfied with their current level of visual acuity. Conclusion: Two years after cataract surgery subjects are generally satisfied with their current level of vision, and distance BSCVA is 6/12 or better in the majority of eyes. Although only a minority of eyes develop sufficient PCO to require capsulotomy, 10.3% of eyes develop new vision-threatening ocular pathology. PMID:15258022

  5. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    NASA Astrophysics Data System (ADS)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

    3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia still relies very much on conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies for converting 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV-based automated image-based open-source software and web services to reconstruct and replicate cultural assets. Taking an intricately carved wooden boat, the Petalaindera, as its subject, the study evaluates the efficiency of CV systems and compares it with 3D laser scanning, which is known for its accuracy, efficiency and high cost. The final aim of this study is to compare the visual accuracy of 3D models generated by a CV system with that of 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera. The ultimate objective is to explore cost-effective methods that could provide fundamental guidelines on best-practice approaches for digital heritage in Malaysia.

  6. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  7. Molecular Eigensolution Symmetry Analysis and Fine Structure

    PubMed Central

    Harter, William G.; Mitchell, Justin C.

    2013-01-01

    Spectra of high-symmetry molecules contain fine and superfine level cluster structure related to J-tunneling between hills and valleys on rovibronic energy surfaces (RES). Such graphic visualizations help disentangle multi-level dynamics, selection rules, and state mixing effects including widespread violation of nuclear spin symmetry species. A review of RES analysis compares it to that of potential energy surfaces (PES) used in Born–Oppenheimer approximations. Both take advantage of adiabatic coupling in order to visualize Hamiltonian eigensolutions. RES of symmetric and D2 asymmetric top rank-2-tensor Hamiltonians are compared with Oh spherical top rank-4-tensor fine-structure clusters of 6-fold and 8-fold tunneling multiplets. Then extreme 12-fold and 24-fold multiplets are analyzed by RES plots of higher rank tensor Hamiltonians. Such extreme clustering is rare in fundamental bands but prevalent in hot bands, and analysis of its superfine structure requires more efficient labeling and a more powerful group theory. This is introduced using elementary examples involving two groups of order-6 (C6 and D3~C3v), then applied to families of Oh clusters in SF6 spectra and to extreme clusters. PMID:23344041

  8. Evaluation of stereoscopic display with visual function and interview

    NASA Astrophysics Data System (ADS)

    Okuyama, Fumio

    1999-05-01

    The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function tests and interviews. A 40-inch double-lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function were characterized by prolongation of the near vision point, a decrease in accommodation, and an increase in phoria. The 3D-viewing interview results show much more visual fatigue than the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function tests and interviews proved very satisfactory for analyzing the influence of stereoscopic displays on the human eye.

  9. Visual Search Load Effects on Age-Related Cognitive Decline: Evidence From the Yakumo Longitudinal Study.

    PubMed

    Hatta, Takeshi; Kato, Kimiko; Hotta, Chie; Higashikawa, Mari; Iwahara, Akihiko; Hatta, Taketoshi; Hatta, Junko; Fujiwara, Kazumi; Nagahara, Naoko; Ito, Emi; Hamajima, Nobuyuki

    2017-01-01

    The validity of Bucur and Madden's (2010) proposal that age-related decline is particularly pronounced in executive function measures rather than in elementary perceptual speed measures was examined via the Yakumo Study longitudinal database. Their proposal suggests that cognitive load differentially affects cognitive abilities in older adults. To address it, linear regression coefficients of 104 participants were calculated individually for the digit cancellation task 1 (D-CAT1), where participants search for a given single digit, and for the D-CAT3, where they search for 3 digits simultaneously. It can therefore be conjectured that the D-CAT1 primarily represents elementary perceptual speed and a low visual-search-load task, whereas the D-CAT3 primarily represents executive function and a high visual-search-load task. Regression coefficients from age 65 to 75 for the D-CAT3 showed a significantly steeper decline than those for the D-CAT1, and a large number of participants showed this tendency. These results support Bucur and Madden's (2010) proposal and suggest that the degree of cognitive load affects age-related cognitive decline.
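The analysis described (an individual regression coefficient per participant per task) amounts to fitting a least-squares slope of score against age. A minimal sketch, assuming numpy; the scores below are fabricated for illustration, not Yakumo data:

```python
import numpy as np

def decline_slope(ages, scores):
    """Least-squares slope of task score vs. age for one participant
    (a negative slope indicates decline over the follow-up period)."""
    return np.polyfit(ages, scores, 1)[0]

ages = [65, 67, 69, 71, 73, 75]
dcat1 = [50, 49, 49, 48, 47, 47]   # shallow decline (perceptual speed task)
dcat3 = [40, 37, 35, 32, 30, 27]   # steeper decline (high search load task)
s1 = decline_slope(ages, dcat1)
s3 = decline_slope(ages, dcat3)
# s3 is more negative than s1, mirroring the reported pattern.
```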

  10. Experimental Investigation of the Near Wall Flow Structure of a Low Reynolds Number 3-D Turbulent Boundary Layer

    NASA Technical Reports Server (NTRS)

    Fleming, J. L.; Simpson, R. L.

    1997-01-01

    Laser Doppler velocimetry (LDV) measurements and hydrogen bubble flow visualization techniques were used to examine the near-wall flow structure of 2D and 3D turbulent boundary layers (TBLs) over a range of low Reynolds numbers. The goals of this research were (1) to increase understanding of the flow physics in the near-wall region of turbulent boundary layers, (2) to observe and quantify differences between 2D and 3D TBL flow structures, and (3) to document Reynolds number effects for 3D TBLs. The LDV data have provided results detailing the turbulence structure of the 2D and 3D TBLs, including mean Reynolds stress distributions, flow skewing results, and U and V spectra. Effects of Reynolds number for the 3D flow were also examined: comparison with results for the same 3D flow geometry at a significantly higher Reynolds number provided unique insight into the structure of 3D TBLs. While the 3D mean and fluctuating velocities were found to be highly dependent on Reynolds number, a previously defined shear stress parameter was found to be invariant with Reynolds number. The hydrogen bubble technique was used as a flow visualization tool to examine the near-wall flow structure of 2D and 3D TBLs; both the quantitative and qualitative results displayed larger turbulent fluctuations, with more highly concentrated vorticity regions, for the 2D flow.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rubel, Oliver; Loring, Burlen; Vay, Jean -Luc

    The generation of short pulses of ion beams through the interaction of an intense laser with a plasma sheath offers the possibility of compact and cheaper ion sources for many applications--from fast ignition and radiography of dense targets to hadron therapy and injection into conventional accelerators. To enable the efficient analysis of large-scale, high-fidelity particle accelerator simulations using the Warp simulation suite, the authors introduce the Warp In situ Visualization Toolkit (WarpIV). WarpIV integrates state-of-the-art in situ visualization and analysis using VisIt with Warp, supports management and control of complex in situ visualization and analysis workflows, and implements integrated analytics to facilitate query- and feature-based data analytics and efficient large-scale data analysis. WarpIV enables for the first time distributed parallel, in situ visualization of the full simulation data using high-performance compute resources as the data is being generated by Warp. The authors describe the application of WarpIV to study and compare large 2D and 3D ion accelerator simulations, demonstrating significant differences in the acceleration process in 2D and 3D simulations. WarpIV is available to the public via https://bitbucket.org/berkeleylab/warpiv. The Warp In situ Visualization Toolkit (WarpIV) supports large-scale, parallel, in situ visualization and analysis and facilitates query- and feature-based analytics, enabling for the first time high-performance analysis of large-scale, high-fidelity particle accelerator simulations while the data is being generated by the Warp simulation suite. Furthermore, this supplemental material https://extras.computer.org/extra/mcg2016030022s1.pdf provides more details regarding the memory profiling and optimization and the Yee grid recentering optimization results discussed in the main article.

  13. Interactive displays in medical art

    NASA Technical Reports Server (NTRS)

    Mcconathy, Deirdre Alla; Doyle, Michael

    1989-01-01

    Medical illustration is a field of visual communication with a long history. Traditional medical illustrations are static, 2-D, printed images; highly realistic depictions of the gross morphology of anatomical structures. Today medicine requires the visualization of structures and processes that have never before been seen. Complex 3-D spatial relationships require interpretation from 2-D diagnostic imagery. Pictures that move in real time have become clinical and research tools for physicians. Medical illustrators are involved with the development of interactive visual displays for three different, but not discrete, functions: as educational materials, as clinical and research tools, and as data bases of standard imagery used to produce visuals. The production of interactive displays in the medical arts is examined.

  14. Design and Implementation of a Lunar Communications Satellite and Server for the 2012 SISO Smackdown

    NASA Technical Reports Server (NTRS)

    Bulgatz, Dennis; Heater, Daniel; O'Neal, Daniel A.; Norris, Bryan; Schricker, Bradley C.

    2012-01-01

    Last year, the Simulation Interoperability Standards Organization (SISO) inaugurated the now-annual High Level Architecture (HLA) Smackdown at the Spring Simulation Interoperability Workshop (SIW). A primary objective of the Smackdown event is to provide college students with hands-on experience in HLA. The University of Alabama in Huntsville (UAHuntsville) fielded teams in 2011 and 2012; both the 2011 and 2012 Smackdown scenarios were a lunar resupply mission. In 2012, UAHuntsville fielded four federates: a communications network federate called Lunar Communications and Navigation Satellite Service (LCANServ) for sending and receiving messages; a Lunar Satellite Constellation (LCANSat) to put in place the radios needed by the communications network for line-of-sight communication calculations; a 3D graphical display of the orbiting satellites; and a 3D visualization of the lunar surface activities. This paper concentrates on the first two federates, describing their functions and algorithms, the modular FOM, experiences, lessons learned, and recommendations for future Smackdown events.

  15. GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.

    PubMed

    Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L

    2015-10-15

    A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands alongside other genome annotation data. However, its features make it more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU-GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.

  16. Multimodality optical imaging of embryonic heart microstructure

    PubMed Central

    Yelin, Ronit; Yelin, Dvir; Oh, Wang-Yuhl; Yun, Seok H.; Boudoux, Caroline; Vakoc, Benjamin J.; Bouma, Brett E.; Tearney, Guillermo J.

    2009-01-01

    Study of developmental heart defects requires the visualization of the microstructure and function of the embryonic myocardium, ideally with minimal alterations to the specimen. We demonstrate multiple endogenous contrast optical techniques for imaging the Xenopus laevis tadpole heart. Each technique provides distinct and complementary imaging capabilities, including: 1. 3-D coherence microscopy with subcellular (1 to 2 µm) resolution in fixed embryos, 2. real-time reflectance confocal microscopy with large penetration depth in vivo, and 3. ultra-high speed (up to 900 frames per second) that enables real-time 4-D high resolution imaging in vivo. These imaging modalities can provide a comprehensive picture of the morphologic and dynamic phenotype of the embryonic heart. The potential of endogenous-contrast optical microscopy is demonstrated for investigation of the teratogenic effects of ethanol. Microstructural abnormalities associated with high levels of ethanol exposure are observed, including compromised heart looping and loss of ventricular trabecular mass. PMID:18163837

  17. Multimodality optical imaging of embryonic heart microstructure.

    PubMed

    Yelin, Ronit; Yelin, Dvir; Oh, Wang-Yuhl; Yun, Seok H; Boudoux, Caroline; Vakoc, Benjamin J; Bouma, Brett E; Tearney, Guillermo J

    2007-01-01

    Study of developmental heart defects requires the visualization of the microstructure and function of the embryonic myocardium, ideally with minimal alterations to the specimen. We demonstrate multiple endogenous contrast optical techniques for imaging the Xenopus laevis tadpole heart. Each technique provides distinct and complementary imaging capabilities, including: 1. 3-D coherence microscopy with subcellular (1 to 2 µm) resolution in fixed embryos, 2. real-time reflectance confocal microscopy with large penetration depth in vivo, and 3. ultra-high speed (up to 900 frames per second) that enables real-time 4-D high resolution imaging in vivo. These imaging modalities can provide a comprehensive picture of the morphologic and dynamic phenotype of the embryonic heart. The potential of endogenous-contrast optical microscopy is demonstrated for investigation of the teratogenic effects of ethanol. Microstructural abnormalities associated with high levels of ethanol exposure are observed, including compromised heart looping and loss of ventricular trabecular mass.

  18. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
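    The core idea behind VF-style point pinpointing can be illustrated with a minimal sketch: a 2D click on a projection image is mapped to a 3D location by scanning the viewing ray for the strongest signal. This is a deliberate simplification (the published method uses mean-shift refinement and also generates curves and regions); the function name and toy volume below are illustrative assumptions.

```python
import numpy as np

def pinpoint_3d(volume, x, y):
    """Map a 2D click (x, y) on an XY projection to a 3D point by
    selecting the brightest voxel along the viewing (z) ray.
    Simplified sketch of a virtual-finger-style operation."""
    ray = volume[:, y, x]       # intensities along z at the clicked pixel
    z = int(np.argmax(ray))     # depth of the strongest signal
    return (x, y, z)

# Toy volume (z, y, x) with a single bright voxel at (x=3, y=2, z=5)
vol = np.zeros((10, 8, 8))
vol[5, 2, 3] = 1.0
print(pinpoint_3d(vol, 3, 2))   # -> (3, 2, 5)
```

    A real implementation would additionally suppress noise along the ray and refine the estimate, but the single-interaction 2D-to-3D mapping is the essential step.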

  19. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain vast amounts of data, which makes 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). A common workaround is to crop a small region of interest (ROI) from the input image. Although this reduces cost and time, it has drawbacks at the image processing level: the selected ROI depends strongly on the user, and information from the original image is lost. To mitigate these problems, we developed a 3D microscopy image processing tool that runs on a graphics processing unit (GPU). Our tool provides a variety of efficient automatic thresholding methods for intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. The tool also visualizes the segmented volume data and supports setting the scale, translation, etc. using the keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to provide information for biologists, which requires quantitative data about the images. We therefore label the segmented 3D objects within the 3D microscopy images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select an object to be analyzed, and our tool displays the selected object in a new window so that more of its details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
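    A minimal CPU-side sketch of the pipeline described above: automatic thresholding (here Otsu's method, one of the standard choices for intensity-based segmentation) followed by connected-component labeling and per-object quantification. The function names and the toy volume are illustrative; the actual tool implements these steps on the GPU.

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(volume):
    """Otsu's threshold: maximize between-class variance over the histogram."""
    hist, edges = np.histogram(volume, bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    total = hist.sum()
    sum_all = (hist * centers).sum()
    best_t, best_var, w0, sum0 = centers[0], -1.0, 0.0, 0.0
    for i in range(255):
        w0 += hist[i]
        if w0 == 0 or w0 == total:
            continue
        sum0 += hist[i] * centers[i]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[i]
    return best_t

# Toy 3D stack with two bright "cells" on a dark background
vol = np.zeros((20, 20, 20))
vol[5:10, 5:10, 5:10] = 1.0
vol[12:15, 12:15, 12:15] = 1.0
t = otsu_threshold(vol)
labels, n = ndimage.label(vol > t)                    # connected components
sizes = ndimage.sum(vol > t, labels, range(1, n + 1)) # voxels per object
print(n, sizes)   # 2 objects of 125 and 27 voxels
```

    The per-object measurements (here just voxel counts) are the kind of quantitative features the abstract mentions for classification.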

  20. A survey of visually induced symptoms and associated factors in spectators of three dimensional stereoscopic movies

    PubMed Central

    2012-01-01

    Background The increasing popularity of commercial movies showing three dimensional (3D) computer generated images has raised concern about image safety and possible side effects on population health. This study aims to (1) quantify the occurrence of visually induced symptoms suffered by spectators during and after viewing a commercial 3D movie and (2) assess individual and environmental factors associated with those symptoms. Methods A cross-sectional survey was carried out using a paper-based, self-administered questionnaire. The questionnaire covers individual and movie characteristics and selected visually induced symptoms (tired eyes, double vision, headache, dizziness, nausea and palpitations). Symptoms were queried at 3 different times: during the movie, right after, and 2 hours after the movie. Results We collected 953 questionnaires. In our sample, 539 (60.4%) individuals reported 1 or more symptoms during the movie, 392 (43.2%) right after and 139 (15.3%) 2 hours after the movie. The most frequently reported symptoms were tired eyes (during the movie by 34.8%, right after by 24.0%, after 2 hours by 5.7% of individuals) and headache (during the movie by 13.7%, right after by 16.8%, after 2 hours by 8.3% of individuals). An individual history of frequent headache was associated with tired eyes (OR = 1.34, 95%CI = 1.01-1.79), double vision (OR = 1.96; 95%CI = 1.13-3.41) and headache (OR = 2.09; 95%CI = 1.41-3.10) during the movie, and with headache after the movie (OR = 1.64; 95%CI = 1.16-2.32). Individual susceptibility to car sickness, dizziness, anxiety level, movie show time and animated 3D movies were also associated with several other symptoms. Conclusions The high occurrence of visually induced symptoms found in this survey suggests the need to raise public awareness of the possible discomfort that susceptible individuals may suffer during and after viewing 3D movies. PMID:22974235
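    The odds ratios and 95% confidence intervals reported above follow the standard 2×2-table computation. A small sketch with hypothetical counts; the Woolf/logit form of the confidence interval is an assumption, as the abstract does not state the exact method used.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with 95% CI (Woolf/logit method) for a 2x2 table:
       a = exposed cases,   b = exposed non-cases
       c = unexposed cases, d = unexposed non-cases"""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 40 of 200 headache-prone viewers report a symptom,
# versus 60 of 600 other viewers
print(odds_ratio_ci(40, 160, 60, 540))   # OR = 2.25, CI roughly 1.45-3.48
```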

  1. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    This contribution promotes 3D photography as an important tool for obtaining objective object information. With World Heritage documentation and protection mainly in mind, the paper also aims to stimulate interest in applications of 3D photography among professionals as well as amateurs. In addition, it serves as an activity report of the international CIPA task group 3. The main part of the paper begins with "Digging the treasure of existing international 3D photography", which concerns not only tangible but also intangible Cultural Heritage. 3D photography clearly supports the recording, visualization, preservation and restoration of architectural and archaeological objects, and its use in Cultural Heritage should therefore increase at an international level. The samples presented in 3D represent a voluminous, partly "forgotten treasure" held in international archives of 3D photography. The next chapter, "Promoting new 3D photography in Cultural Heritage", notes that although 3D photographs are a well-established photographic and photogrammetric tool, well suited to providing "near real" documentation, they remain a subject of research and improvement. Besides dedicated 3D cameras, even single-lens cameras are well suited for photographic 3D documentation in Cultural Heritage. Currently, at the Faculty of Civil Engineering of the University of Applied Sciences Magdeburg-Stendal, low-altitude aerial photographs are exposed from a maximum height of 13 m using a hand-held carbon telescope rod. The use of this "huge selfie stick" is also recommended internationally for exposing high-resolution 3D photography of monuments under expedition conditions. In addition to the carbon rod, a captive balloon and a hexacopter UAV platform have recently been put into use, mainly to take better synoptic (extremely low-altitude, ground-truth) aerial photographs.
    Additional experiments on "easy geometry" and on multistage concepts of 3D photographs in Cultural Heritage have just started. Furthermore, a revised list of 3D visualization principles, aiming at completeness, has been compiled. The outlook includes the following recommendations. First, every historical and current stereo view of relevance to Cultural Heritage should be listed in a global Monument Information System (MIS), similar to Google Earth. Second, 3D photographs appear well suited to complement, or at least partly replace, manual archaeological sketches; here the still underestimated 3D effect will be demonstrated, which even allows the spatial perception of extremely small scratches. Third, consistent engagement with 3D technology suggests that we are experiencing the beginning of a new age of "real 3D PC screens", which could at least complement or even partly replace conventional 2D screens; here spatial visualization is achieved without glasses in an all-around vitreous body. In this respect, the now widespread lasered crystals showing monuments can be identified as "early bird" 3D products which, owing to their low resolution and contrast and their lack of color, are reminiscent of the state of photography at its invention by Niépce (1827), but which promise a great future in 3D Cultural Heritage documentation. Finally, 3D printers appear more and more to be conquering the IT market, with evident international competition.

  2. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In recent years, the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in meteorological data visualization. We deal with Numerical Weather Prediction (NWP) model data and observational data, i.e. satellite images, precipitation radar, and ground and airborne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a visualization framework for processing, interactive exploration and visualization using Virtual Reality (VR) technology, and have had considerable success with research studies of extreme weather situations. In this paper we elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We explain how important it is to control the degrees of freedom given to the users (forecasters and scientists) during interaction: 3D camera and 3D slicing-plane navigation turn out to be rather difficult for users when not implemented properly. We present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacle of, and the time usually needed for, setting up the visualization parameters and an appropriate camera view of a given atmospheric phenomenon. We found our inspiration in the way our operational forecasters work in the weather room and decided to build a bridge between 2D visualization images and interactive 3D exploration. Our method combines web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we present the first user experiences with this approach.

  3. Investigation of MR scanning, image registration, and image processing techniques to visualize cortical veins for neurosurgery

    NASA Astrophysics Data System (ADS)

    Noordmans, Herke J.; Rutten, G. J. M.; Willems, Peter W. A.; Hoogduin, J.; Viergever, Max A.

    2001-01-01

    The visualization of brain vessels on the cortex helps the neurosurgeon in two ways: to avoid blood vessels when planning the trepanation entry, and to overcome errors in the surgical navigation system due to brain shift. We compared 3D T1 MR, 3D T1 MR with gadolinium contrast, MR venography and MR phase-contrast angiography as scanning techniques; mutual information as the registration technique; and thresholding and multi-vessel enhancement as image processing techniques. We evaluated the volume-rendered results on their quality and their correspondence with photographs taken during surgery. It appears that with 3D T1 MR scans, gadolinium is required to show cortical veins. The visibility of small cortical veins is strongly enhanced by subtracting a 3D T1 MR baseline scan, which should be registered to the scan with gadolinium contrast even when both scans are made during the same session. Multi-vessel enhancement clarifies the view of small vessels by reducing the noise level but, strikingly, does not reveal more vessels. MR venography does show intracerebral veins in high detail, but is, as such, unsuited to showing cortical veins owing to the low contrast with CSF. MR phase-contrast angiography can perform as well as the subtraction technique, but its quality appears to show more inter-patient variability.

  4. Analysis of thin baked-on silicone layers by FTIR and 3D-Laser Scanning Microscopy.

    PubMed

    Funke, Stefanie; Matilainen, Julia; Nalenz, Heiko; Bechtold-Peters, Karoline; Mahler, Hanns-Christian; Friess, Wolfgang

    2015-10-01

    Pre-filled syringes (PFS) and auto-injection devices with cartridges are increasingly used for parenteral administration. To assure functionality, silicone oil is applied to the inner surface of the glass barrel. Silicone oil migration into the product can be minimized by applying a thin but sufficient layer of silicone oil emulsion followed by thermal bake-on, rather than spraying on silicone oil. Silicone layers thicker than 100 nm resulting from regular spray-on siliconization can be characterized using interferometric profilometers; the analysis of the thinner layers generated by bake-on siliconization is more challenging. In this paper, we evaluated Fourier transform infrared (FTIR) spectroscopy after solvent extraction and 3D Laser Scanning Microscopy (3D-LSM) to overcome this challenge. A multi-step solvent extraction with subsequent FTIR spectroscopy enabled quantification of baked-on silicone levels as low as 21-325 μg per 5 mL cartridge. 3D-LSM was successfully established to visualize and measure baked-on silicone layers as thin as 10 nm, and was additionally used to analyze the silicone oil distribution within cartridges at such low levels. Both methods provided new, highly valuable insights for characterizing siliconization after processing in order to achieve functionality.

  5. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495

  6. Does 3D produce more symptoms of visually induced motion sickness?

    PubMed

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology, with high-quality images and depth perception, provides entertainment to its viewers. However, the technology is not yet mature and may have adverse effects on viewers; some have reported discomfort when watching videos with 3D technology. In this study we performed an experiment showing a movie to participants in 2D and 3D conditions. Subjective and objective data were recorded and compared for both conditions. Subjective reports show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For the objective measurement, ECG data were recorded to derive Heart Rate Variability (HRV); the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' state over time. The average scores for nausea and disorientation and the total SSQ score show a significant difference between the 3D and 2D conditions; the LF/HF ratio, however, shows no significant difference throughout the experiment.
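    A hedged sketch of how an LF/HF ratio can be derived from RR intervals: resample the beat series onto an even time grid, estimate the power spectral density with Welch's method, and compare power in the LF (0.04-0.15 Hz) and HF (0.15-0.40 Hz) bands. The band limits follow the common HRV convention; the paper's exact processing chain is not specified, so this is an illustrative reconstruction.

```python
import numpy as np
from scipy.signal import welch

def lf_hf_ratio(rr_s, fs=4.0):
    """LF/HF ratio from RR intervals given in seconds."""
    t = np.cumsum(rr_s)                       # beat times
    grid = np.arange(t[0], t[-1], 1.0 / fs)   # even sampling grid
    rr_even = np.interp(grid, t, rr_s)        # evenly resampled RR series
    f, psd = welch(rr_even - rr_even.mean(), fs=fs,
                   nperseg=min(256, len(rr_even)))
    lf = psd[(f >= 0.04) & (f < 0.15)].sum()  # low-frequency power
    hf = psd[(f >= 0.15) & (f < 0.40)].sum()  # high-frequency power
    return lf / hf

# Synthetic RR series with a strong 0.25 Hz (respiratory, HF-band) modulation
beats = np.arange(300)
rr = 0.8 + 0.05 * np.sin(2 * np.pi * 0.25 * beats * 0.8)
print(lf_hf_ratio(rr))   # well below 1: HF power dominates
```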

  7. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software, FROMS3D, is presented for visualizing fracture network systems in 3-D. The software consists of several modules that handle management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were used as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems and can provide useful information for tackling engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  8. Ray Casting of Large Multi-Resolution Volume Datasets

    NASA Astrophysics Data System (ADS)

    Lux, C.; Fröhlich, B.

    2009-04-01

    High quality volume visualization through ray casting on graphics processing units (GPUs) has become an important approach for many application domains. We present a GPU-based, multi-resolution ray casting technique for the interactive visualization of the massive volume data sets commonly found in the oil and gas industry. Large volume data sets are represented as a multi-resolution hierarchy based on an octree data structure. The original volume data is decomposed into small bricks of a fixed size which act as the leaf nodes of the octree and represent the highest resolution of the volume. Coarser resolutions are represented by inner nodes of the hierarchy, which are generated by downsampling eight neighboring nodes on the next finer level. Due to the limited memory resources of current desktop workstations and graphics hardware, only a limited working set of bricks can be maintained locally for a frame to be displayed. This working set is chosen to represent the whole volume at different local resolution levels depending on the current viewer position, the transfer function and distinct areas of interest. During runtime, the working set of bricks is maintained in CPU and GPU memory and is adaptively updated by asynchronously fetching data from external sources such as hard drives or a network. The CPU memory thereby acts as a second-level cache for these sources, from which the GPU representation is updated. Our volume ray casting algorithm is based on a 3D texture atlas in GPU memory that contains the complete working set of bricks of the current multi-resolution representation of the volume. This enables the ray casting algorithm to access the whole working set of bricks through a single 3D texture. For traversing rays through the volume, information about the locations and resolution levels of visited bricks is required for correct compositing computations. We encode this information into a small 3D index texture which represents the current octree subdivision on its finest level and spatially organizes the bricked data. This approach allows us to render a bricked multi-resolution volume data set in a single rendering pass with no loss of compositing precision. In contrast, most state-of-the-art volume rendering systems handle the bricked data as individual 3D textures, which are rendered one at a time while the results are composited into a lower-precision frame buffer. Furthermore, our method enables us to integrate advanced volume rendering techniques such as empty-space skipping, adaptive sampling and pre-integrated transfer functions in a very straightforward manner with virtually no extra cost. Our interactive implementation allows high quality visualization of massive volume data sets tens of gigabytes in size on standard desktop workstations.
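    The index-texture lookup at the heart of the single-pass scheme can be sketched as follows: each finest-level cell of the index texture maps a normalized sample position to a brick's atlas offset plus a resolution scale. The exact layout below (offset triple plus scale) is an illustrative assumption, not the authors' encoding, and the sketch is in Python rather than shader code for readability.

```python
import numpy as np

BRICK = 8  # voxels per brick edge in the atlas

# Hypothetical index "texture": for each finest-level cell, store
# (atlas_x, atlas_y, atlas_z, scale); scale > 1 would mean the cell is
# covered by a coarser-resolution brick shared with its neighbors.
index = np.zeros((4, 4, 4, 4), dtype=np.float32)
index[..., 3] = 1.0  # default: every cell maps to its own full-res brick
for z in range(4):
    for y in range(4):
        for x in range(4):
            index[z, y, x, :3] = (x * BRICK, y * BRICK, z * BRICK)

def atlas_coord(p):
    """Map a normalized volume position p in [0,1)^3 to an atlas texel."""
    cell = np.floor(np.asarray(p) * 4).astype(int)   # finest-level cell (x,y,z)
    ax, ay, az, scale = index[cell[2], cell[1], cell[0]]
    local = (np.asarray(p) * 4 - cell) / scale       # position inside the brick
    return np.array([ax, ay, az]) + local * BRICK

print(atlas_coord((0.5, 0.25, 0.0)))   # lands at the origin of brick (2,1,0)
```

    In the real renderer this lookup happens per ray sample on the GPU, so one 3D texture fetch replaces the per-brick texture binds used by multi-pass systems.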

  9. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but how the third dimension interacts with human viewers is not yet clear. It has previously been found that any increased load on the visual system, such as prolonged TV watching, computer work or video gaming, can create visual fatigue. Watching S3D, however, can cause visual fatigue of a different nature, since all S3D technologies create the illusion of a third dimension based on the characteristics of binocular vision. In this work we evaluate and compare the visual fatigue induced by watching 2D and S3D content, showing the difference in the accumulation of visual fatigue and in its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. The visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  10. Three-dimensional visualization system as an aid for facial surgical planning

    NASA Astrophysics Data System (ADS)

    Barre, Sebastien; Fernandez-Maloigne, Christine; Paume, Patricia; Subrenat, Gilles

    2001-05-01

    We present an aid for the treatment of facial deformities: a system for surgical planning and prediction of the human facial aspect after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. The reconstructed objects are inserted into object-oriented, portable and scriptable visualization software that allows the practitioner to manipulate and visualize them interactively. Several level-of-detail (LOD) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft-tissue deformations by creating a physically based spring model between the two tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information, such as participation hints, at the vertex level between a warped generic model and the facial mesh.
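    The equilibrium computation described above (minimizing spring energy while bone-attached nodes are held at their surgeon-prescribed positions) can be sketched by gradient descent on a toy spring network. The function, parameters and network below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relax(points, springs, fixed, rest, k=1.0, steps=2000, lr=0.01):
    """Relax a mass-spring network toward equilibrium by gradient descent
    on the total spring energy E = sum 0.5*k*(|pi - pj| - rest_ij)^2.
    Vertices in `fixed` (e.g. nodes attached to repositioned bone) stay put."""
    p = points.copy()
    for _ in range(steps):
        grad = np.zeros_like(p)
        for (i, j), r0 in zip(springs, rest):
            d = p[i] - p[j]
            L = np.linalg.norm(d)
            g = k * (L - r0) * d / L      # dE/dpi for this spring
            grad[i] += g
            grad[j] -= g
        grad[fixed] = 0.0                 # constrained nodes do not move
        p -= lr * grad
    return p

# Two fixed endpoints pulled apart; the free middle node settles midway.
pts = np.array([[0.0, 0.0, 0.0], [0.4, 0.0, 0.0], [2.0, 0.0, 0.0]])
springs, rest = [(0, 1), (1, 2)], [1.0, 1.0]
print(relax(pts, springs, fixed=[0, 2], rest=rest)[1])   # close to [1, 0, 0]
```

    A production system would use a sparse solver (or implicit integration) over thousands of vertices, but the energy-minimization principle is the same.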

  11. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

    We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ~ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ~ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown `finger-like' structures at orbital phases shortly after periastron (φ ~ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s⁻¹), adiabatic post-shock companion-star wind. The success of our work and easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.

  12. 3-D visualisation and interpretation of seismic attributes extracted from large 3-D seismic datasets: Subregional and prospect evaluation, deepwater Nigeria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sola, M.; Haakon Nordby, L.; Dailey, D.V.

    High resolution 3-D visualization of horizon interpretation and seismic attributes from large 3-D seismic surveys in deepwater Nigeria has greatly enhanced the exploration team's ability to quickly recognize prospective segments of subregional and prospect-specific scale areas. Integrated workstation-generated structure, isopach and extracted horizon-consistent, interval and windowed attributes are particularly useful in illustrating the complex structural and stratigraphic prospectivity of deepwater Nigeria. Large 3-D seismic volumes acquired over 750 square kilometers can be manipulated within the visualization system, with attribute tracking capability that allows for real-time data interrogation and interpretation. As in classical seismic stratigraphic studies, pattern recognition is fundamental to effective depositional facies interpretation and reservoir model construction. The 3-D perspective enhances data interpretation through clear representation of the relative scale, spatial distribution and magnitude of attributes. In deepwater Nigeria, many prospective traps rely on an interplay between syndepositional structure and slope turbidite depositional systems. Reservoir systems in many prospects appear to be dominated by unconfined to moderately focused slope feeder channel facies. These units have spatially complex facies architecture, with feeder channel axes separated by extensive interchannel areas. Structural culminations generally have a history of initial compressional folding with late-stage extensional collapse and accommodation faulting. The resulting complex trap configurations often have stacked reservoirs over intervals as thick as 1500 meters. Exploration, appraisal and development scenarios in these settings can be optimized by taking full advantage of integrated high-resolution 3-D visualization and seismic workstation interpretation.

  14. Effects of refractive errors on visual evoked magnetic fields.

    PubMed

    Suzuki, Masaya; Nagae, Mizuki; Nagata, Yuko; Kumagai, Naoya; Inui, Koji; Kakigi, Ryusuke

    2015-11-09

    The latency and amplitude of visual evoked cortical responses are known to be affected by refractive state, suggesting that they may be used as an objective index of refractive errors. To establish an easy and reliable method for this purpose, we examined the effects of refractive errors on visual evoked magnetic fields (VEFs). Binocular VEFs following the presentation of a simple grating of 0.16 cd/m² in the lower visual field were recorded in 12 healthy volunteers and compared among four refractive states, 0D, +1D, +2D and +4D, induced by plus lenses. The low-luminance visual stimulus evoked a main MEG response at approximately 120 ms (M100) that reversed its polarity between upper and lower visual field stimulation and originated from the occipital midline area. When refractive errors were induced by plus lenses, the latency of M100 increased while its amplitude decreased with increasing lens power; differences from the control condition (0D) were significant for all three lenses examined. Dipole analyses showed that the evoked fields for the control (0D) condition were explained by a single dipole in the primary visual cortex (V1), while other sources, presumably in V3 or V6, contributed slightly to shaping M100 in the +2D and +4D conditions. These results show that the latency and amplitude of M100 are both useful indicators for assessing refractive state; the contribution of neural sources other than V1 to M100 was modest under the 0D and +1D conditions. Given the nature of the M100 activity, including its high sensitivity to spatial frequency and its lower visual field dominance, a simple low-luminance grating stimulus at an optimal spatial frequency in the lower visual field appears appropriate for obtaining data with high S/N ratios while reducing the load on subjects.

  15. The design of 3D scaffold for tissue engineering using automated scaffold design algorithm.

    PubMed

    Mahmoud, Shahenda; Eldeib, Ayman; Samy, Sherif

    2015-06-01

Several advances have been made in the field of bone regenerative medicine, giving rise to the field of tissue engineering (TE). In TE, a highly porous artificial extracellular matrix, or scaffold, is required to accommodate cells and guide their growth in three dimensions. Designing scaffolds with the desired internal and external structure remains a challenge for TE. In this paper, we introduce a new method, automated scaffold design (ASD), for designing a 3D scaffold with minimal mismatch in its geometrical parameters. The method uses a k-means clustering algorithm to separate the different tissues and thereby detect the defective bone portions. The segmented portions of different slices are registered to construct the 3D volume from the data. It also uses an isosurface rendering technique for 3D visualization of the scaffold and bones, and provides the ability to visualize the transplanted as well as the normal bone portions. The proposed system shows good performance in both segmentation and visualization.
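The clustering step above can be sketched with a minimal 1D k-means over voxel intensities; this is an illustration of the general technique, not the authors' ASD implementation, and the intensity values, cluster count, and initialization scheme are all assumptions.

```python
# Minimal 1D k-means over voxel intensities (synthetic CT-like values).
# Initial centres are spread deterministically over the sorted values.
def kmeans_1d(values, k, iters=50):
    srt = sorted(values)
    centers = [srt[round(j * (len(srt) - 1) / (k - 1))] for j in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[nearest].append(v)
        new_centers = [sum(c) / len(c) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:
            break
        centers = new_centers
    labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
    return centers, labels

# Synthetic intensities: soft tissue (~40) and bone (~700).
voxels = [38, 42, 45, 40, 690, 710, 705, 41, 698, 44]
centers, labels = kmeans_1d(voxels, k=2)  # two tissue classes
```

With two well-separated intensity populations, the cluster labels cleanly split the soft-tissue voxels from the bone voxels, which is the separation the segmentation stage relies on.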

  16. Exploring direct 3D interaction for full horizontal parallax light field displays using leap motion controller.

    PubMed

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-04-14

This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization than other 3D display technologies. We propose an interaction setup combining the visualization of objects within the Field Of View (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed a high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations of the proposed setup and adjustments to be addressed in future work.

  17. Terrestrial laser scanning and a degenerated cylinder model to determine gross morphological change of cadavers under conditions of natural decomposition.

    PubMed

    Zhang, Xiao; Glennie, Craig L; Bucheli, Sibyl R; Lindgren, Natalie K; Lynne, Aaron M

    2014-08-01

Decomposition can be a highly variable process, with stages that are difficult to quantify. Using high-accuracy terrestrial laser scanning, repeated three-dimensional (3D) documentation of the volumetric changes of a human body during early decomposition was recorded. To determine temporal volumetric variations, as well as the 3D distribution of the changed locations in the body over time, this paper introduces the use of multiple degenerated cylinder models to provide a reasonable approximation of body parts against which 3D change can be measured and visualized. An iterative closest point algorithm is used for 3D registration, and a method for determining volumetric change is presented. Comparison of the laser scanning estimates of volumetric change shows good agreement with repeated in-situ measurements of abdomen and limb circumference that were taken diurnally. The 3D visualizations of volumetric changes demonstrate that bloat is a process with a beginning, middle, and end rather than a state of presence or absence. Additionally, the 3D visualizations show conclusively that cadaver bloat is not isolated to the abdominal cavity but also occurs in the limbs. Detailed quantification of the bloat stage of decay has the potential to alter how the beginning and end of bloat are determined by researchers and can provide further insight into the effects of the ecosystem on decomposition. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
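The volumetric idea behind a degenerated cylinder model can be sketched by approximating a body part as stacked elliptical cross-sections; the slice dimensions below are hypothetical, and this is a sketch of the geometric idea, not the paper's implementation.

```python
# Volume of a limb segment modelled as stacked elliptical cross-sections
# ("degenerated cylinders"): V = sum over slices of pi * a * b * dz.
import math

def limb_volume(semi_axes, dz):
    """semi_axes: (a, b) semi-axis pairs, one per slice; dz: slice thickness."""
    return sum(math.pi * a * b * dz for a, b in semi_axes)

# Hypothetical scans of the same segment on two days (semi-axes in cm).
day1 = [(4.0, 3.5), (4.1, 3.6), (4.0, 3.5)]
day2 = [(4.4, 3.8), (4.5, 3.9), (4.4, 3.8)]  # bloat widens cross-sections
dz = 5.0  # cm between slices

change = limb_volume(day2, dz) - limb_volume(day1, dz)  # positive => swelling
```

Differencing the fitted model volumes between scans gives the per-region volumetric change that the paper measures and visualizes over time.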

  18. Chemistry of wood in 3D: new infrared imaging

    Treesearch

    Barbara L. Illman; Julia Sedlmair; Miriam Unger; Casey Crooks; Marli Oliveira; Carol Hirschmugl

    2015-01-01

    Chemical detection, mapping and imaging in three dimensions will help refine our understanding of wood properties and durability. We describe here a pioneering infrared method to create visual 3D images of the chemicals in wood, providing for the first time, spatial and architectural information at the cellular level without liquid extraction or prior fixation....

  19. How visual attention is modified by disparities and textures changes?

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérome; Wyckens, Emmanuel; Le Meur, Olivier

    2013-03-01

The 3D image/video quality of experience is a multidimensional concept that depends on 2D image quality, depth quantity, and visual comfort. The relationship between these parameters is not yet clearly defined. From this perspective, we aim to understand how texture complexity, depth quantity, and visual comfort influence the way people observe 3D content in comparison with 2D. Six scenes with different structural parameters were generated using Blender software. For these six scenes, the following parameters were modified: texture complexity and the amount of depth, by changing the camera baseline and the convergence distance at the shooting side. Our study was conducted using an eye-tracker and a 3DTV display. During the eye-tracking experiment, each observer freely examined images with different depth levels and texture complexities. To avoid memory bias, we ensured that each observer saw each scene's content only once. Collected fixation data were used to build saliency maps and to analyze differences between 2D and 3D conditions. Our results show that the introduction of disparity shortened saccade length; however, fixation durations remained unaffected. An analysis of the saliency maps did not reveal any differences between 2D and 3D conditions over the full viewing duration of 20 s. When the whole period was divided into smaller intervals, we found that for the first 4 s the introduced disparity was conducive to the selection of salient regions. However, this contribution is quite minimal when the correlation between saliency maps is analyzed. Nevertheless, we did not find that discomfort (or comfort) had any influence on visual attention. We believe that existing metrics and methods are depth-insensitive and do not reveal such differences. Based on the analysis of heat maps and paired t-tests of inter-observer visual congruency values, we deduced that the selected areas of interest depend on texture complexities.
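A minimal sketch of the saliency-map analysis described above: fixation points are smoothed into a heat map with a Gaussian kernel, and two maps are compared by Pearson correlation. The grid size, sigma, and fixation coordinates are arbitrary assumptions, not the study's data.

```python
# Build a fixation heat map by Gaussian smoothing, then correlate two maps.
import math

def saliency_map(fixations, w, h, sigma=2.0):
    m = [[0.0] * w for _ in range(h)]
    for fx, fy in fixations:
        for y in range(h):
            for x in range(w):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                m[y][x] += math.exp(-d2 / (2 * sigma ** 2))
    return m

def pearson(m1, m2):
    a = [v for row in m1 for v in row]
    b = [v for row in m2 for v in row]
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

# Hypothetical fixations for the same scene viewed in 2D and in 3D.
map_2d = saliency_map([(5, 5), (6, 5), (12, 8)], 20, 15)
map_3d = saliency_map([(5, 6), (6, 5), (11, 8)], 20, 15)
r = pearson(map_2d, map_3d)  # high r => similar attention distributions
```

A correlation near 1 indicates that the 2D and 3D viewing conditions produced similar attention distributions, which is how the minimal contribution of disparity was quantified.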

  20. Contextual cueing in 3D visual search depends on representations in planar-, not depth-defined space.

    PubMed

    Zang, Xuelian; Shi, Zhuanghua; Müller, Hermann J; Conci, Markus

    2017-05-01

    Learning of spatial inter-item associations can speed up visual search in everyday life, an effect referred to as contextual cueing (Chun & Jiang, 1998). Whereas previous studies investigated contextual cueing primarily using 2D layouts, the current study examined how 3D depth influences contextual learning in visual search. In two experiments, the search items were presented evenly distributed across front and back planes in an initial training session. In the subsequent test session, the search items were either swapped between the front and back planes (Experiment 1) or between the left and right halves (Experiment 2) of the displays. The results showed that repeated spatial contexts were learned efficiently under 3D viewing conditions, facilitating search in the training sessions, in both experiments. Importantly, contextual cueing remained robust and virtually unaffected following the swap of depth planes in Experiment 1, but it was substantially reduced (to nonsignificant levels) following the left-right side swap in Experiment 2. This result pattern indicates that spatial, but not depth, inter-item variations limit effective contextual guidance. Restated, contextual cueing (even under 3D viewing conditions) is primarily based on 2D inter-item associations, while depth-defined spatial regularities are probably not encoded during contextual learning. Hence, changing the depth relations does not impact the cueing effect.

  1. Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

The Rover Sequencing and Visualization Program (RSVP) is the software tool for use in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight-code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover-predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (IDD) operations. Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules.
The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities. Thus, RSVP, being highly data driven, may be tailored to other missions with minimal effort. In addition, RSVP uses a distributed, message-passing architecture to allow multitasking, and collaborative visualization and sequence development by scattered team members.

  2. 3D second harmonic generation imaging tomography by multi-view excitation

    PubMed Central

    Campbell, Kirby R.; Wen, Bruce; Shelton, Emily M.; Swader, Robert; Cox, Benjamin L.; Eliceiri, Kevin; Campagnola, Paul J.

    2018-01-01

    Biological tissues have complex 3D collagen fiber architecture that cannot be fully visualized by conventional second harmonic generation (SHG) microscopy due to electric dipole considerations. We have developed a multi-view SHG imaging platform that successfully visualizes all orientations of collagen fibers. This is achieved by rotating tissues relative to the excitation laser plane of incidence, where the complete fibrillar structure is then visualized following registration and reconstruction. We evaluated high frequency and Gaussian weighted fusion reconstruction algorithms, and found the former approach performs better in terms of the resulting resolution. The new approach is a first step toward SHG tomography. PMID:29541654

  3. [The relationship between eyeball structure and visual acuity in high myopia].

    PubMed

    Liu, Yi-Chang; Xia, Wen-Tao; Zhu, Guang-You; Zhou, Xing-Tao; Fan, Li-Hua; Liu, Rui-Jue; Chen, Jie-Min

    2010-06-01

To explore the relationship between eyeball structure and visual acuity in high myopia. In total, 152 people (283 eyeballs) with different levels of myopia were tested for visual acuity, axial length, and fundus. All cases were classified according to diopter, axial length, and fundus. The relationships between diopter, axial length, fundus, and visual acuity were studied, and mathematical models were established relating visual acuity to eyeball structure markers. Visual acuity showed a moderate correlation with fundus class, conus, axial length, and diopter (|r| > 0.4, P < 0.0001). Visual acuity in people with an axial length longer than 30.00 mm, diopter above -20.00 D, and fundus in the 4th class was mostly below 0.5. Visual acuity declines with axial length extension, diopter deepening, and pathological deterioration of the fundus. Detecting structural changes by combining different kinds of objective methods can help to assess and judge vision in high myopia.

  4. A systematized WYSIWYG pipeline for digital stereoscopic 3D filmmaking

    NASA Astrophysics Data System (ADS)

    Mueller, Robert; Ward, Chris; Hušák, Michal

    2008-02-01

Digital tools are transforming stereoscopic 3D content creation and delivery, creating an opportunity for the broad acceptance and success of stereoscopic 3D films. Beginning in late 2005, a series of mostly CGI features successfully introduced the public to this new generation of highly comfortable, artifact-free digital 3D. While the response has been decidedly favorable, a lack of high-quality live-action films could hinder long-term success. Live-action stereoscopic films have historically been more time-consuming, costly, and creatively limiting than 2D films - thus a need arises for a live-action 3D filmmaking process which minimizes such limitations. A unique 'systematized' what-you-see-is-what-you-get (WYSIWYG) pipeline is described which allows the efficient, intuitive, and accurate capture and integration of 3D and 2D elements from multiple shoots and sources - both live-action and CGI. Throughout this pipeline, digital tools utilize a consistent algorithm to provide meaningful and accurate visual depth references with respect to the viewing audience in the target theater environment. This intuitive, visual approach introduces efficiency and creativity to the 3D filmmaking process by eliminating both the need for a 'mathematician mentality' of spreadsheets and calculators, as well as any trial-and-error guesswork, while enabling the most comfortable, 'pixel-perfect', artifact-free 3D product possible.

  5. Comparison of visual acuity of the patients on the first day after sub-Bowman keratomileusis or laser in situ keratomileusis

    PubMed Central

    Zhao, Wei; Wu, Ting; Dong, Ze-Hong; Feng, Jie; Ren, Yu-Feng; Wang, Yu-Sheng

    2016-01-01

AIM To compare recovery of the visual acuity in patients one day after sub-Bowman keratomileusis (SBK) or laser in situ keratomileusis (LASIK). METHODS Data from 5923 eyes in 2968 patients that received LASIK (2755 eyes) or SBK (3168 eyes) were retrospectively analyzed. The eyes were divided into 4 groups according to preoperative spherical equivalent: between -12.00 and -9.00 D, extremely high myopia (n=396, including 192 and 204 in the SBK and LASIK groups, respectively); -9.00 to -6.00 D, high myopia (n=1822, including 991 and 831 in the SBK and LASIK groups, respectively); -6.00 to -3.00 D, moderate myopia (n=3071, including 1658 and 1413 in the SBK and LASIK groups, respectively); and -3.00 to 0.00 D, low myopia (n=634, including 327 and 307 in the SBK and LASIK groups, respectively). Uncorrected logMAR visual acuity values of patients were assessed under standard natural light. Analysis of variance was used for comparisons among the different groups. RESULTS Uncorrected visual acuity values were 0.0115±0.1051 and 0.0466±0.1477 at day 1 after operation for patients receiving SBK and LASIK, respectively (P<0.01); visual acuity values of 0.1854±0.1842, 0.0615±0.1326, -0.0033±0.0978, and -0.0164±0.0972 were obtained for patients in the extremely high, high, moderate, and low myopia groups, respectively (P<0.01). In addition, significant differences in visual acuity at day 1 after operation were found between patients receiving SBK and LASIK in each myopia subgroup. CONCLUSION Compared with LASIK, SBK is safer and more effective, with faster recovery. Therefore, SBK is more likely to be accepted by patients than LASIK for better uncorrected visual acuity the day following operation. PMID:27158619

  6. Proof of concept : examining characteristics of roadway infrastructure in various 3D visualization modes.

    DOT National Transportation Integrated Search

    2015-02-01

Utilizing enhanced visualization in transportation planning and design has gained popularity in the last decade. This work aimed at demonstrating the concept of utilizing a highly immersive, virtual reality simulation engine for creating dynamic, inter...

  7. Planning, implementation and optimization of future space missions using an immersive visualization environment (IVE) machine

    NASA Astrophysics Data System (ADS)

    Nathan Harris, E.; Morgenthaler, George W.

    2004-07-01

Beginning in 1995, a team of 3-D engineering visualization experts assembled at the Lockheed Martin Space Systems Company and began to develop innovative virtual prototyping simulation tools for performing ground processing and real-time visualization of design and planning of aerospace missions. At the University of Colorado (CU), a team of 3-D visualization experts also began developing the science of 3-D visualization and immersive visualization at the newly founded British Petroleum (BP) Center for Visualization, which began operations in October 2001. BP acquired ARCO in the year 2000 and awarded the 3-D flexible IVE developed by ARCO (beginning in 1990) to the University of Colorado, the winner of a competition among six universities. CU then hired Dr. G. Dorn, the leader of the ARCO team, as Center Director, along with the other experts, to apply 3-D immersive visualization to aerospace and other university research fields, while continuing research on surface interpretation of seismic data and 3-D volumes. This paper recounts further progress and outlines plans in aerospace applications at Lockheed Martin and CU.

  8. Software applications to three-dimensional visualization of forest landscapes -- A case study demonstrating the use of visual nature studio (VNS) in visualizing fire spread in forest landscapes

    Treesearch

    Brian J. Williams; Bo Song; Chou Chiao-Ying; Thomas M. Williams; John Hom

    2010-01-01

Three-dimensional (3D) visualization is a useful tool that depicts virtual forest landscapes on a computer. Previous studies in visualization have required high-end computer hardware and specialized technical skills. A virtual forest landscape can be used to show different effects of disturbances and management scenarios on a computer, which allows observation of forest...

  9. Fast 3D Surface Extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sewell, Christopher Meyer; Patchett, John M.; Ahrens, James P.

Ocean scientists searching for isosurfaces and/or thresholds of interest in high-resolution 3D datasets faced a tedious and time-consuming interactive exploration experience. PISTON research and development activities are enabling ocean scientists to rapidly and interactively explore isosurfaces and thresholds in their large data sets using a simple slider, with real-time calculation and visualization of these features. Ocean scientists can now visualize more features in less time, helping them gain a better understanding of the high-resolution data sets they work with on a daily basis. Isosurface timings (512³ grid): VTK 7.7 s, Parallel VTK (48-core) 1.3 s, PISTON OpenMP (48-core) 0.2 s, PISTON CUDA (Quadro 6000) 0.1 s.
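The core of an isosurface pass can be sketched as the active-cell test used by marching-cubes-style algorithms: a cell contributes surface geometry only if its corner values straddle the isovalue. This pure-Python toy on a tiny synthetic field illustrates the idea only; PISTON's data-parallel implementation differs.

```python
# Active-cell test: count grid cells whose corner values cross the isovalue.
# A 4x4x4 synthetic field stands in for a 512^3 ocean grid.
def active_cells(field, iso):
    """field: 3D nested list of scalars; returns count of crossing cells."""
    nz, ny, nx = len(field), len(field[0]), len(field[0][0])
    count = 0
    for z in range(nz - 1):
        for y in range(ny - 1):
            for x in range(nx - 1):
                corners = [field[z + dz][y + dy][x + dx]
                           for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
                if min(corners) < iso <= max(corners):
                    count += 1
    return count

# Synthetic field: distance of each grid point from the grid centre.
n = 4
field = [[[((x - 1.5) ** 2 + (y - 1.5) ** 2 + (z - 1.5) ** 2) ** 0.5
           for x in range(n)] for y in range(n)] for z in range(n)]
crossings = active_cells(field, iso=1.5)
```

Sweeping `iso` with a slider amounts to re-running this test (and the subsequent triangle generation) per value, which is the operation the GPU timings above accelerate.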

  10. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed using a human-operated mobile cart equipped with a laser scanner, CCD camera, omnidirectional camera, and a computer. By using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  11. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
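The displacement search at the heart of NCC-based elastography can be sketched in 1D: slide a pre-compression window over the post-compression signal and keep the shift that maximizes the normalized cross-correlation. The signals and window parameters below are synthetic; this is a sketch of the core operation, not the authors' GPU kernel.

```python
# 1D normalized cross-correlation (NCC) displacement search.
import math

def ncc(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_shift(pre, post, win, start, max_shift):
    """Find the shift of a post-compression window maximizing NCC."""
    ref = pre[start:start + win]
    scores = {s: ncc(ref, post[start + s:start + s + win])
              for s in range(-max_shift, max_shift + 1)}
    return max(scores, key=scores.get)

# Synthetic RF-like signal; the "post" scan is the same pattern shifted by 3.
pre = [math.sin(0.7 * i) + 0.3 * math.sin(2.3 * i) for i in range(100)]
post = [0.0] * 3 + pre[:-3]
shift = best_shift(pre, post, win=20, start=40, max_shift=8)
```

Repeating this search per window across a 3D volume yields the displacement (and hence strain) field; that per-window independence is what makes the computation map well onto a GPU.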

  12. Integral modeling of human eyes: from anatomy to visual response

    NASA Astrophysics Data System (ADS)

    Navarro, Rafael

    2006-02-01

Three basic stages towards the global modeling of the eye are presented. In the first stage, an adequate choice of the basic geometrical model, a general ellipsoid in this case, permits fitting in a natural way the typical "melon" shape of the cornea with minimum complexity. In addition, it facilitates extracting most of its optically relevant parameters, such as the position and orientation of its optical axis in 3D space, the paraxial and overall refractive power, the amount and axis of astigmatism, etc. At the second level, this geometrical model, along with optical design and optimization tools, is applied to build customized optical models of individual eyes, able to reproduce the measured wave aberration with high fidelity. Finally, we put together a sequence of schematic but functionally realistic models of the different stages of image acquisition, coding, and analysis in the visual system, along with a probabilistic Bayesian maximum a posteriori identification approach. This permitted us to build a realistic simulation of all the essential processes involved in a visual acuity clinical exam. It is remarkable that at all three levels it has been possible for the models to predict the experimental data with high accuracy.

  13. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

Geoscientists strive to discover the principles and patterns hidden inside ever-growing Big Data for scientific discovery. To better achieve this objective, more capable computing resources are required to process, analyze, and visualize Big Data (Ferreira de Oliveira & Levkowitz, 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges caused by the increasing amount of datasets from different domains, such as social media, earth observation, and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, with GPU-based technology maturing in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) for each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build up a virtual supercomputer to support CPU-based and GPU-based computing in a distributed computing environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve the rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.

  14. 3D Visualization as a Communicative Aid in Pharmaceutical Advice-Giving over Distance

    PubMed Central

    Dahlbäck, Nils; Petersson, Göran Ingemar

    2011-01-01

    Background Medication misuse results in considerable problems for both patient and society. It is a complex problem with many contributing factors, including timely access to product information. Objective To investigate the value of 3-dimensional (3D) visualization paired with video conferencing as a tool for pharmaceutical advice over distance in terms of accessibility and ease of use for the advice seeker. Methods We created a Web-based communication service called AssistancePlus that allows an advisor to demonstrate the physical handling of a complex pharmaceutical product to an advice seeker with the aid of 3D visualization and audio/video conferencing. AssistancePlus was tested in 2 separate user studies performed in a usability lab, under realistic settings and emulating a real usage situation. In the first study, 10 pharmacy students were assisted by 2 advisors from the Swedish National Co-operation of Pharmacies’ call centre on the use of an asthma inhaler. The student-advisor interview sessions were filmed on video to qualitatively explore their experience of giving and receiving advice with the aid of 3D visualization. In the second study, 3 advisors from the same call centre instructed 23 participants recruited from the general public on the use of 2 products: (1) an insulin injection pen, and (2) a growth hormone injection syringe. First, participants received advice on one product in an audio-recorded telephone call and for the other product in a video-recorded AssistancePlus session (product order balanced). In conjunction with the AssistancePlus session, participants answered a questionnaire regarding accessibility, perceived expressiveness, and general usefulness of 3D visualization for advice-giving over distance compared with the telephone and were given a short interview focusing on their experience of the 3D features. Results In both studies, participants found the AssistancePlus service helpful in providing clear and exact instructions. 
In the second study, directly comparing AssistancePlus and the telephone, AssistancePlus was judged positively for ease of communication (P = .001), personal contact (P = .001), explanatory power (P < .001), and efficiency (P < .001). Participants in both studies said that they would welcome this type of service as an alternative to the telephone and to face-to-face interaction when a physical meeting is not possible or not convenient. However, although AssistancePlus was considered as easy to use as the telephone, they would choose AssistancePlus over the telephone only when the complexity of the question demanded the higher level of expressiveness it offers. For simpler questions, a simpler service was preferred. Conclusions 3D visualization paired with video conferencing can be useful for advice-giving over distance, specifically for issues that require a higher level of communicative expressiveness than the telephone can offer. 3D-supported advice-giving can increase the range of issues that can be handled over distance and thus improve access to product information. PMID:21771714

  15. Update on Rover Sequencing and Visualization Program

    NASA Technical Reports Server (NTRS)

    Cooper, Brian; Hartman, Frank; Maxwell, Scott; Yen, Jeng; Wright, John; Balacuit, Carlos

    2005-01-01

The Rover Sequencing and Visualization Program (RSVP) has been updated. RSVP was reported in Rover Sequencing and Visualization Program (NPO-30845), NASA Tech Briefs, Vol. 29, No. 4 (April 2005), page 38. To recapitulate: The Rover Sequencing and Visualization Program (RSVP) is the software tool to be used in the Mars Exploration Rover (MER) mission for planning rover operations and generating command sequences for accomplishing those operations. RSVP combines three-dimensional (3D) visualization for immersive exploration of the operations area, stereoscopic image display for high-resolution examination of the downlinked imagery, and a sophisticated command-sequence editing tool for analysis and completion of the sequences. RSVP is linked with actual flight code modules for operations rehearsal to provide feedback on the expected behavior of the rover prior to committing to a particular sequence. Playback tools allow for review of both rehearsed rover behavior and downlinked results of actual rover operations. These can be displayed simultaneously for comparison of rehearsed and actual activities for verification. The primary inputs to RSVP are downlink data products from the Operations Storage Server (OSS) and activity plans generated by the science team. The activity plans are high-level goals for the next day's activities. The downlink data products include imagery, terrain models, and telemetered engineering data on rover activities and state. The Rover Sequence Editor (RoSE) component of RSVP performs activity expansion to command sequences, command creation and editing with setting of command parameters, and viewing and management of rover resources. The HyperDrive component of RSVP performs 2D and 3D visualization of the rover's environment, graphical and animated review of rover-predicted and telemetered state, and creation and editing of command sequences related to mobility and Instrument Deployment Device (robotic arm) operations.
Additionally, RoSE and HyperDrive together evaluate command sequences for potential violations of flight and safety rules. The products of RSVP include command sequences for uplink that are stored in the Distributed Object Manager (DOM) and predicted rover state histories stored in the OSS for comparison and validation of downlinked telemetry. The majority of components comprising RSVP utilize the MER command and activity dictionaries to automatically customize the system for MER activities.
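
As a toy illustration of the kind of flight- and safety-rule evaluation described above, the sketch below checks a command sequence against one invented rule. All command names and the rule itself are hypothetical; the actual RoSE/HyperDrive rule engine is far more extensive and driven by the MER command and activity dictionaries.

```python
# Hypothetical sketch of command-sequence rule checking (not actual RSVP code).
# A "sequence" is a list of (command, params) tuples; a rule scans the
# sequence and reports any violations it finds.

def check_arm_stowed_before_drive(sequence):
    """Flag any DRIVE command issued while the robotic arm is unstowed."""
    violations = []
    arm_stowed = True
    for i, (cmd, params) in enumerate(sequence):
        if cmd == "ARM_DEPLOY":
            arm_stowed = False
        elif cmd == "ARM_STOW":
            arm_stowed = True
        elif cmd == "DRIVE" and not arm_stowed:
            violations.append((i, "DRIVE with arm deployed"))
    return violations

seq = [("ARM_DEPLOY", {}), ("DRIVE", {"m": 2.0}),
       ("ARM_STOW", {}), ("DRIVE", {"m": 5.0})]
print(check_arm_stowed_before_drive(seq))  # one violation, at index 1
```

A real checker would evaluate many such rules over resource states (power, time, thermal) before a sequence is approved for uplink.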

  16. Cognitive/emotional models for human behavior representation in 3D avatar simulations

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-08-01

    Simplified models of human cognition and emotional response are presented which are based on models of auditory/visual polymodal fusion. At the core of these models is a computational model of Area 37 of the temporal cortex, which is based on new isocortex models presented recently by Grossberg. These models are trained using carefully chosen auditory (musical sequences), visual (paintings) and higher-level abstract (meta-level) data obtained from studies of how optimization strategies are chosen in response to outside managerial inputs. The software modules developed are then used as inputs to character generation codes in standard 3D virtual world simulations. The auditory and visual training data also enable the development of simple music and painting composition generators which significantly enhance one's ability to validate the cognitive model. The cognitive models are handled as interacting software agents implemented as CORBA objects to allow the use of multiple language coding choices (C++, Java, Python, etc.) and efficient use of legacy code.

  17. Scalable large format 3D displays

    NASA Astrophysics Data System (ADS)

    Chang, Nelson L.; Damera-Venkata, Niranjan

    2010-02-01

    We present a general framework for the modeling and optimization of scalable large format 3-D displays using multiple projectors. Based on this framework, we derive algorithms that can robustly optimize the visual quality of an arbitrary combination of projectors (e.g. tiled, superimposed, combinations of the two) without manual adjustment. The framework creates for the first time a new unified paradigm that is agnostic to a particular configuration of projectors yet robustly optimizes for the brightness, contrast, and resolution of that configuration. In addition, we demonstrate that our algorithms support high resolution stereoscopic video at real-time interactive frame rates achieved on commodity graphics hardware. Through complementary polarization, the framework creates high quality multi-projector 3-D displays at low hardware and operational cost for a variety of applications including digital cinema, visualization, and command-and-control walls.

  18. Using Molecular Visualization to Explore Protein Structure and Function and Enhance Student Facility with Computational Tools

    ERIC Educational Resources Information Center

    Terrell, Cassidy R.; Listenberger, Laura L.

    2017-01-01

    Recognizing that undergraduate students can benefit from analysis of 3D protein structure and function, we have developed a multiweek, inquiry-based molecular visualization project for Biochemistry I students. This project uses a virtual model of cyclooxygenase-1 (COX-1) to guide students through multiple levels of protein structure analysis. The…

  19. [3D-TOF MR-angiography with high spatial resolution for surgical planning in insular lobe gliomas].

    PubMed

    Bykanov, A E; Pitskhelauri, D I; Pronin, I N; Tonoyan, A S; Kornienko, V N; Zakharova, N E; Turkin, A M; Sanikidze, A Z; Shkarubo, M A; Shkatova, A M; Shults, E I

    2015-01-01

    Despite the obvious progress in modern neurosurgery, surgery for glial tumors of the insular lobe is often associated with a high risk of postoperative neurological deficit, which is primarily caused by damage to perforating arteries of the M1 segment of the middle cerebral artery. The work is aimed at evaluating the effectiveness of high-resolution time-of-flight (3D-TOF) MR angiography in imaging of the medial and lateral lenticulostriate arteries and determining their relationship to the tumor edge in patients with gliomas of the insula. 3D-TOF MR angiography data were analyzed in 20 patients with newly diagnosed cerebral gliomas involving the insula. All patients underwent non-contrast-enhanced 3D-TOF MR angiography. In 6 cases, 3D-TOF MRA was performed before and after contrast enhancement. 3D-TOF angiography before intravenous contrast injection was capable of visualizing the medial lenticulostriate arteries in 19 patients (95% of all cases) and the lateral lenticulostriate arteries in 18 patients (90% of all cases). Contrast-enhanced 3D-TOF angiography allows for better visualization of both the proximal and distal segments of the lenticulostriate arteries. Three variants of the relationship between the tumor and the lenticulostriate arteries were identified. Variant I: the tumor grew over the arteries without displacing them, in 2 cases (10% of the total number of observations); variant II: the tumor caused medial displacement of the arteries without growing over them, in 11 cases (55%); variant III: the tumor partially grew over and displaced the arteries, in 2 cases (10%). In 25% of cases (5 patients), the tumor was poorly visualized on 3D-TOF MR angiograms because its signal characteristics did not differ from those of the brain matter (the tumor tissue was T1-isointense). As a result, it was impossible to determine the relationship between the tumor and the lenticulostriate arteries.
High spatial resolution time-of-flight MR angiography can be recommended for preoperative imaging of lenticulostriate arteries to plan the extent of neurosurgical resection in patients with glial tumors of the insular lobe.

  20. Comparison of Object Recognition Behavior in Human and Monkey

    PubMed Central

    Rajalingham, Rishi; Schmidt, Kailyn

    2015-01-01

    Although the rhesus monkey is used widely as an animal model of human visual processing, it is not known whether invariant visual object recognition behavior is quantitatively comparable across monkeys and humans. To address this question, we systematically compared the core object recognition behavior of two monkeys with that of human subjects. To test true object recognition behavior (rather than image matching), we generated several thousand naturalistic synthetic images of 24 basic-level objects with high variation in viewing parameters and image background. Monkeys were trained to perform binary object recognition tasks using a match-to-sample paradigm. Data from 605 human subjects performing the same tasks on Mechanical Turk were aggregated to characterize “pooled human” object recognition behavior, as well as 33 separate Mechanical Turk subjects to characterize individual human subject behavior. Our results show that monkeys learn each new object in a few days, after which they not only match mean human performance but show a pattern of object confusion that is highly correlated with pooled human confusion patterns and is statistically indistinguishable from individual human subjects. Importantly, this shared human and monkey pattern of 3D object confusion is not shared with low-level visual representations (pixels, V1+; models of the retina and primary visual cortex) but is shared with a state-of-the-art computer vision feature representation. Together, these results are consistent with the hypothesis that rhesus monkeys and humans share a common neural shape representation that directly supports object perception. SIGNIFICANCE STATEMENT To date, several mammalian species have shown promise as animal models for studying the neural mechanisms underlying high-level visual processing in humans.
In light of this diversity, making tight comparisons between nonhuman and human primates is particularly critical in determining the best use of nonhuman primates to further the goal of the field of translating knowledge gained from animal models to humans. To the best of our knowledge, this study is the first systematic attempt at comparing a high-level visual behavior of humans and macaque monkeys. PMID:26338324
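
The confusion-pattern comparison described above can be sketched in miniature: correlate the off-diagonal (confusion) entries of two confusion matrices. The matrices below are invented toy values for illustration, not the study's data, and the study's actual metrics differ.

```python
# Toy sketch: Pearson correlation of the off-diagonal confusion entries of
# two species' confusion matrices (values invented for illustration).
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def off_diagonal(matrix):
    """Flatten the off-diagonal (confusion) entries of a square matrix."""
    n = len(matrix)
    return [matrix[i][j] for i in range(n) for j in range(n) if i != j]

monkey = [[0.90, 0.08, 0.02], [0.05, 0.90, 0.05], [0.01, 0.09, 0.90]]
human  = [[0.92, 0.06, 0.02], [0.04, 0.91, 0.05], [0.02, 0.08, 0.90]]
r = pearson(off_diagonal(monkey), off_diagonal(human))
print(round(r, 3))  # close to 1: similar confusion patterns
```

Restricting the comparison to off-diagonal entries avoids inflating the correlation with the (trivially shared) high accuracy on the diagonal.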

  1. 3D curvature of muscle fascicles in triceps surae

    PubMed Central

    Hamarneh, Ghassan; Wakeling, James M.

    2014-01-01

    Muscle fascicles curve along their length; these curvatures occur around regions of high intramuscular pressure and are necessary for mechanical stability. Fascicles are typically considered to lie in fascicle planes, the planes visualized during dissection or two-dimensional (2D) ultrasound scans. However, it has previously been predicted that fascicles must curve in three dimensions (3D), and thus the fascicle planes may actually exist as 3D sheets. 3D fascicle curvatures have not been explored in human musculature. Furthermore, if the fascicles do not lie in 2D planes, then this has implications for architectural measures that are derived from 2D ultrasound scans. The purpose of this study was to quantify the 3D curvatures of the muscle fascicles and fascicle sheets within the triceps surae muscles and to test whether these curvatures varied among different contraction levels, muscle lengths, and regions within the muscle. Six male subjects were tested at three torque levels (0, 30, and 60% of maximal voluntary contraction) and four ankle angles (−15, 0, 15, and 30° plantar flexion), and fascicles were imaged using 3D ultrasound techniques. The fascicle curvatures significantly increased at higher ankle torques and shorter muscle lengths. The fascicle sheet curvatures were of similar magnitude to the fascicle curvatures but did not vary between contractions. Fascicle curvatures were regionalized within each muscle with the curvature facing the deeper aponeuroses, indicating a greater intramuscular pressure in the deeper layers of muscles. Muscle architectural measures may be in error when 2D images are used for complex geometries such as the soleus. PMID:25324510
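
Quantifying 3D curvature along a sampled curve, as in this study, typically uses the standard formula κ = |r′ × r″| / |r′|³ evaluated with finite differences. The sketch below applies it to a helix (whose curvature is known analytically), not to muscle data; it shows only the generic computation, not the study's pipeline.

```python
# Sketch: discrete 3D curvature via kappa = |r' x r''| / |r'|^3, using
# central finite differences on equally spaced samples. The helix is an
# illustrative test curve with known constant curvature a / (a^2 + c^2).
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def curvature(points, h):
    """Curvature at interior samples of a curve given as equally spaced 3D points."""
    ks = []
    for i in range(1, len(points) - 1):
        p0, p1, p2 = points[i - 1], points[i], points[i + 1]
        d1 = tuple((p2[c] - p0[c]) / (2 * h) for c in range(3))          # r'
        d2 = tuple((p2[c] - 2 * p1[c] + p0[c]) / h ** 2 for c in range(3))  # r''
        ks.append(norm(cross(d1, d2)) / norm(d1) ** 3)
    return ks

a, c, h = 2.0, 1.0, 0.01  # helix radius, pitch parameter, sample spacing
pts = [(a * math.cos(t), a * math.sin(t), c * t)
       for t in (i * h for i in range(200))]
ks = curvature(pts, h)
expected = a / (a * a + c * c)  # analytic curvature: 0.4
```

Real fascicle paths are noisy, so in practice the sampled points are usually smoothed (e.g. spline-fitted) before differentiation.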

  2. Improving Semantic Updating Method on 3D City Models Using Hybrid Semantic-Geometric 3D Segmentation Technique

    NASA Astrophysics Data System (ADS)

    Sharkawi, K.-H.; Abdul-Rahman, A.

    2013-09-01

    Entities in cities and urban areas, such as building structures, are becoming more complex as modern human civilization continues to evolve. The ability to plan and manage every territory, especially urban areas, is very important to every government in the world. Planning and managing cities and urban areas based on printed maps and 2D data is becoming insufficient and inefficient for coping with the complexity of new developments in big cities. The emergence of 3D city models has boosted efficiency in analysing and managing urban areas, as 3D data are proven to represent real-world objects more accurately. They have since been adopted as the new trend in building and urban management and planning applications. Nowadays, many countries around the world are generating virtual 3D representations of their major cities. The growing interest in improving the usability of 3D city models has resulted in the development of various analysis tools based on them. Today, 3D city models are generated for various purposes such as tourism, location-based services, disaster management and urban planning. Meanwhile, modelling 3D objects is getting easier with the emergence of user-friendly 3D modelling tools available in the market. Generating 3D buildings with high accuracy has also become easier with the availability of airborne Lidar and terrestrial laser scanning equipment. The availability of and accessibility to this technology make it more sensible to analyse buildings in urban areas using 3D data, as such data accurately represent real-world objects. The Open Geospatial Consortium (OGC) has accepted the CityGML specification as one of the international standards for representing and exchanging spatial data, making it easier to visualize, store and manage 3D city model data efficiently.
CityGML is able to represent the semantics, geometry, topology and appearance of 3D city models at five well-defined Levels of Detail (LoD), namely LoD0 to LoD4. The accuracy and structural complexity of the 3D objects increase with the LoD, where LoD0 is the simplest (2.5D; Digital Terrain Model (DTM) + building or roof print) while LoD4 is the most complex (architectural details with interior structures). Semantic information is one of the main components of CityGML and 3D city models, and provides important input for any analysis. However, more often than not, semantic information is not available for a 3D city model due to an unstandardized modelling process. One example is where a building is generated as a single object (without specific feature layers such as Roof, Ground floor, Level 1, Level 2, Block A, Block B, etc.). This research attempts to develop a method to improve the semantic data updating process by segmenting the 3D building into simpler parts, which makes it easier for users to select and update the semantic information. The methodology is implemented for 3D buildings in LoD2, where the buildings are generated without architectural details but with distinct roof structures. This paper also introduces a hybrid semantic-geometric 3D segmentation method that performs hierarchical segmentation of a 3D building based on its semantic value and surface characteristics, fitted by one of the predefined primitives. For future work, the segmentation method will be implemented as part of a change detection module that can detect changes to the 3D buildings, store and retrieve semantic information of the changed structure, automatically update the 3D models, and visualize the results in a user-friendly graphical user interface (GUI).
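
The benefit of segmenting a building into semantically labeled parts can be sketched with a minimal data model: once parts carry surface types (as CityGML thematic surfaces such as RoofSurface and WallSurface do), semantic attributes can be updated per part instead of per whole building. The class and attribute names below are invented for illustration; real CityGML data would go through a proper library and schema.

```python
# Illustrative sketch of semantic updating on a segmented LoD2 building.
# Class and attribute names are hypothetical, not a real CityGML API.

class BuildingPart:
    def __init__(self, name, surface_type, attributes=None):
        self.name = name
        self.surface_type = surface_type   # e.g. "RoofSurface", "WallSurface"
        self.attributes = attributes or {}

class Building:
    def __init__(self, parts):
        self.parts = parts

    def update_semantics(self, surface_type, **attrs):
        """Attach attributes to every part of the given surface type."""
        updated = 0
        for part in self.parts:
            if part.surface_type == surface_type:
                part.attributes.update(attrs)
                updated += 1
        return updated

b = Building([BuildingPart("roof_a", "RoofSurface"),
              BuildingPart("wall_n", "WallSurface"),
              BuildingPart("roof_b", "RoofSurface")])
n = b.update_semantics("RoofSurface", material="tile", year_renovated=2012)
```

With the building stored as one unsegmented object, the same update would require the user to locate and edit the surfaces manually.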

  3. AstroCloud: An Agile platform for data visualization and specific analyses in 2D and 3D

    NASA Astrophysics Data System (ADS)

    Molina, F. Z.; Salgado, R.; Bergel, A.; Infante, A.

    2017-07-01

    Nowadays, astronomers commonly run their own tools, or distributed computational packages, for data analysis and then visualize the results with generic applications. This chain of processes comes at high cost: (a) analyses are applied manually and are therefore difficult to automate, and (b) data have to be serialized, increasing the cost of parsing and saving intermediary data. We are developing AstroCloud, an agile multipurpose visualization platform intended for specific analyses of astronomical images (https://astrocloudy.wordpress.com). This platform incorporates domain-specific languages which make it easily extensible. AstroCloud supports customized plug-ins, which reduces the time spent on data analysis. Moreover, it also supports 2D and 3D rendering, including interactive features in real time. AstroCloud is under development; we are currently implementing different choices for data reduction and physical analyses.

  4. 5D Modelling: An Efficient Approach for Creating Spatiotemporal Predictive 3D Maps of Large-Scale Cultural Resources

    NASA Astrophysics Data System (ADS)

    Doulamis, A.; Doulamis, N.; Ioannidis, C.; Chrysouli, C.; Grammalidis, N.; Dimitropoulos, K.; Potsiou, C.; Stathopoulou, E.-K.; Ioannides, M.

    2015-08-01

    Outdoor large-scale cultural sites are highly sensitive to environmental, natural and human-made factors, implying an imminent need for spatio-temporal assessment to identify regions of potential cultural interest (material degradation, structuring, conservation). On the other hand, quite different actors are involved in Cultural Heritage research (archaeologists, curators, conservators, simple users), each with diverse needs. All these statements advocate that 5D modelling (3D geometry plus time plus levels of detail) is ideally required for the preservation and assessment of outdoor large-scale cultural sites, which is currently implemented as a simple aggregation of 3D digital models at different times and levels of detail. The main bottleneck of such an approach is its complexity, making 5D modelling impossible to validate in real-life conditions. In this paper, a cost-effective and affordable framework for 5D modelling is proposed based on a spatial-temporal dependent aggregation of 3D digital models, incorporating a predictive assessment procedure to indicate which regions (surfaces) of an object should be reconstructed at higher levels of detail at subsequent time instances and which at lower ones. In this way, dynamic change history maps are created, indicating spatial probabilities of regions needing further 3D modelling at forthcoming instances. Using these maps, a predictive assessment can be made, that is, surfaces can be localized within the objects where a high-accuracy reconstruction process needs to be activated at forthcoming time instances. The proposed 5D Digital Cultural Heritage Model (5D-DCHM) is implemented using open interoperable standards based on the CityGML framework, which also allows the description of additional semantic metadata information. Visualization aspects are also supported to allow easy manipulation, interaction and representation of the 5D-DCHM geometry and the respective semantic information.
The open source 3DCityDB incorporating a PostgreSQL geo-database is used to manage and manipulate 3D data and their semantics.

  5. Augmented Reality in Scientific Publications-Taking the Visualization of 3D Structures to the Next Level.

    PubMed

    Wolle, Patrik; Müller, Matthias P; Rauh, Daniel

    2018-03-16

    The examination of three-dimensional structural models in scientific publications allows the reader to validate or invalidate conclusions drawn by the authors. However, whether due to a (temporary) lack of access to proper visualization software or a lack of proficiency, this information is not necessarily available to every reader. As the digital revolution quickly progresses, technologies have become widely available that overcome these limitations and offer everyone the opportunity to appreciate models not only in 2D, but also in 3D. Additionally, mobile devices such as smartphones and tablets allow access to this information almost anywhere, at any time. Since access to such information has only recently become standard practice, we want to outline straightforward ways to incorporate 3D models in augmented reality into scientific publications, books, posters, and presentations, and suggest that this should become general practice.

  6. Stereoscopic-3D display design: a new paradigm with Intel Adaptive Stable Image Technology (IA-SIT)

    NASA Astrophysics Data System (ADS)

    Jain, Sunil

    2012-03-01

    Stereoscopic-3D (S3D) proliferation on personal computers (PC) is mired by several technical and business challenges: a) viewing discomfort due to cross-talk amongst stereo images; b) high system cost; and c) restricted content availability. Users expect S3D visual quality to be better than, or at least equal to, what they are used to enjoying on 2D in terms of resolution, pixel density, color, and interactivity. Intel Adaptive Stable Image Technology (IA-SIT) is a foundational technology, successfully developed to resolve S3D system design challenges and deliver high-quality 3D visualization at PC price points. Combined optimizations in the display driver, panel timing firmware, backlight hardware, eyewear optical stack, and sync mechanism can help accomplish this goal. Agnostic to refresh rate, IA-SIT will scale with the shrinking of display transistors and improvements in liquid crystal and LED materials. Industry could profusely benefit from the following calls to action: 1) Adopt 'IA-SIT S3D Mode' in panel specs (via VESA) to help panel makers monetize S3D; 2) Adopt 'IA-SIT Eyewear Universal Optical Stack' and algorithm (via CEA) to help PC peripheral makers develop stylish glasses; 3) Adopt 'IA-SIT Real Time Profile' for sub-100uS latency control (via BT Sig) to extend BT into S3D; and 4) Adopt 'IA-SIT Architecture' for monitors and TVs to monetize via PC attach.

  7. Radiological assessment of breast density by visual classification (BI-RADS) compared to automated volumetric digital software (Quantra): implications for clinical practice.

    PubMed

    Regini, Elisa; Mariscotti, Giovanna; Durando, Manuela; Ghione, Gianluca; Luparia, Andrea; Campanino, Pier Paolo; Bianchi, Caterina Chiara; Bergamasco, Laura; Fonio, Paolo; Gandini, Giovanni

    2014-10-01

    This study was done to assess breast density on digital mammography and digital breast tomosynthesis according to the visual Breast Imaging Reporting and Data System (BI-RADS) classification, to compare visual assessment with Quantra software for automated density measurement, and to establish the role of the software in clinical practice. We analysed 200 digital mammograms performed in 2D and 3D modality, 100 of which positive for breast cancer and 100 negative. Radiological density was assessed with the BI-RADS classification; a Quantra density cut-off value was sought on the 2D images only to discriminate between BI-RADS categories 1-2 and BI-RADS 3-4. Breast density was correlated with age, use of hormone therapy, and increased risk of disease. The agreement between the 2D and 3D assessments of BI-RADS density was high (K 0.96). A cut-off value of 21% is that which allows us to best discriminate between BI-RADS categories 1-2 and 3-4. Breast density was negatively correlated to age (r = -0.44) and positively to use of hormone therapy (p = 0.0004). Quantra density was higher in breasts with cancer than in healthy breasts. There is no clear difference between the visual assessments of density on 2D and 3D images. Use of the automated system requires the adoption of a cut-off value (set at 21%) to effectively discriminate BI-RADS 1-2 and 3-4, and could be useful in clinical practice.
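
The decision rule the study reports, a 21% Quantra volumetric density cut-off discriminating BI-RADS 1-2 from 3-4, is simple enough to state directly in code. This is a sketch of that rule only; clinical use would of course involve the full validated workflow, not a bare threshold.

```python
# Sketch of the reported decision rule: an automated Quantra density above
# the 21% cut-off maps to the denser BI-RADS group (3-4), otherwise to 1-2.

CUTOFF = 0.21  # the 21% cut-off value reported in the study

def birads_group(quantra_density):
    """Map an automated density fraction (0..1) to a coarse BI-RADS group."""
    if not 0.0 <= quantra_density <= 1.0:
        raise ValueError("density must be a fraction between 0 and 1")
    return "BI-RADS 3-4" if quantra_density > CUTOFF else "BI-RADS 1-2"

print(birads_group(0.15))  # below the cut-off
print(birads_group(0.30))  # above the cut-off
```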

  8. Laboratory and in-flight experiments to evaluate 3-D audio display technology

    NASA Technical Reports Server (NTRS)

    Ericson, Mark; Mckinley, Richard; Kibbe, Marion; Francis, Daniel

    1994-01-01

    Laboratory and in-flight experiments were conducted to evaluate 3-D audio display technology for cockpit applications. A 3-D audio display generator was developed which digitally encodes naturally occurring direction information onto any audio signal and presents the binaural sound over headphones. The acoustic image is stabilized for head movement by use of an electromagnetic head-tracking device. In the laboratory, a 3-D audio display generator was used to spatially separate competing speech messages to improve the intelligibility of each message. Up to a 25 percent improvement in intelligibility was measured for spatially separated speech at high ambient noise levels (115 dB SPL). During the in-flight experiments, pilots reported that spatial separation of speech communications provided a noticeable improvement in intelligibility. The use of 3-D audio for target acquisition was also investigated. In the laboratory, 3-D audio enabled the acquisition of visual targets in about two seconds average response time at 17 degrees accuracy. During the in-flight experiments, pilots correctly identified ground targets 50, 75, and 100 percent of the time at separation angles of 12, 20, and 35 degrees, respectively. In general, pilot performance in the field with the 3-D audio display generator was as expected, based on data from laboratory experiments.
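
The 3-D audio generator described above encodes measured directional cues onto the signal; as a much simpler illustration of one such cue, the classical Woodworth approximation gives the interaural time difference (ITD) for a spherical head. This formula is a textbook approximation, not the actual encoding used by the display generator.

```python
# Woodworth approximation for interaural time difference (ITD):
# ITD = (a / c) * (theta + sin(theta)) for a spherical head of radius a,
# speed of sound c, and source azimuth theta (0 = straight ahead).
import math

def itd_woodworth(azimuth_deg, head_radius=0.0875, speed_of_sound=343.0):
    """ITD in seconds for a source at the given azimuth angle in degrees."""
    theta = math.radians(azimuth_deg)
    return (head_radius / speed_of_sound) * (theta + math.sin(theta))

# A source 90 degrees to the side yields roughly 0.66 ms of interaural delay,
# one of the cues that lets spatially separated voices be told apart.
print(itd_woodworth(90.0) * 1000.0)
```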

  9. Oscillatory network with self-organized dynamical connections for synchronization-based image segmentation.

    PubMed

    Kuzmina, Margarita; Manykin, Eduard; Surina, Irina

    2004-01-01

    An oscillatory network of columnar architecture located in a 3D spatial lattice was recently designed by the authors as an oscillatory model of the brain's visual cortex. A single network oscillator is a relaxational neural oscillator with internal dynamics tunable by visual image characteristics - local brightness and elementary bar orientation. It is able to demonstrate either an activity state (stable undamped oscillations) or "silence" (quickly damped oscillations). Self-organized nonlocal dynamical connections of oscillators depend on oscillator activity levels and orientations of cortical receptive fields. Network performance consists in a transfer into a state of clusterized synchronization. At the current stage, grey-level image segmentation tasks are carried out by a 2D oscillatory network obtained as a limit version of the source model. Owing to the supplemented network coupling strength control, the reduced 2D network provides synchronization-based image segmentation. New results on segmentation of brightness and texture images presented in the paper demonstrate accurate network performance and informative visualization of segmentation results, inherent in the model.
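
The paper's network uses relaxation oscillators with image-tuned dynamics; as a generic illustration of the synchronization principle behind such segmentation, two coupled Kuramoto phase oscillators pull into a common rhythm when coupling is strong relative to their frequency mismatch, and drift apart otherwise. This standard phase model is a stand-in, not the authors' oscillator.

```python
# Generic illustration of coupling-controlled synchronization using the
# Kuramoto phase model (not the relaxation oscillators of the paper).
import math

def simulate(w1, w2, k, steps=20000, dt=0.001):
    """Return |wrapped phase difference| after integrating two coupled oscillators."""
    p1, p2 = 0.0, 2.0  # initial phases
    for _ in range(steps):
        d1 = w1 + k * math.sin(p2 - p1)
        d2 = w2 + k * math.sin(p1 - p2)
        p1, p2 = p1 + d1 * dt, p2 + d2 * dt
    return abs((p1 - p2 + math.pi) % (2 * math.pi) - math.pi)

locked = simulate(1.0, 1.2, k=1.0)   # strong coupling: phases lock
drifting = simulate(1.0, 1.2, k=0.0) # uncoupled: phases drift freely
```

With strong coupling the phase difference settles near asin(Δω/2k) ≈ 0.1 rad; "same cluster" oscillators in a segmentation network behave like the locked pair, and oscillators for different image regions like the drifting one.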

  10. Separating the Representation from the Science: Training Students in Comprehending 3D Diagrams

    NASA Astrophysics Data System (ADS)

    Bemis, K. G.; Silver, D.; Chiang, J.; Halpern, D.; Oh, K.; Tremaine, M.

    2011-12-01

    Studies of students taking first year geology and earth science courses at universities find that a remarkable number of them are confused by the three-dimensional representations used to explain the science [1]. Comprehension of these 3D representations has been found to be related to an individual's spatial ability [2]. A variety of interactive programs and animations have been created to help explain the diagrams to beginning students [3, 4]. This work has demonstrated comprehension improvement and removed a gender gap between male (high spatial) and female (low spatial) students [5]. However, not much research has examined what makes the 3D diagrams so hard to understand or attempted to build a theory for creating training designed to remove these difficulties. Our work has separated the science labeling and comprehension of the diagrams from the visualizations to examine how individuals mentally see the visualizations alone. In particular, we asked subjects to create a cross-sectional drawing of the internal structure of various 3D diagrams. We found that viewing planes (the coordinate system the designer applies to the diagram), cutting planes (the planes formed by the requested cross sections) and visual property planes (the planes formed by the prominent features of the diagram, e.g., a layer at an angle of 30 degrees to the top surface of the diagram) that deviated from a Cartesian coordinate system imposed by the viewer caused significant problems for subjects, in part because these deviations forced them to mentally re-orient their viewing perspective. Problems with deviations in all three types of plane were significantly harder than those deviating on one or two planes. 
Our results suggest training that does not focus on showing how the components of various 3D geologic formations are put together, but rather training that guides students in re-orienting themselves to deviations that differ from their right-angle view of the world, e.g., by showing how a particular 3D visualization evolves from their Cartesian representation of the world. 1. Y. Kali and N. Orion, Spatial abilities of high-school students in the perception of geologic structures, Journal of Research in Science Teaching, 33, 4, 369-391, 1996. 2. A. Black, Spatial ability and earth science conceptual understanding, Journal of Geoscience Education, 53, 402-414, 2005. 3. S. A. Sorby and B. J. Baartmans, The development and assessment of a course for enhancing the 3-D spatial visualization skills of first-year engineering students, Journal of Engineering Education Washington, 89, 301-308, 2000. 4. Y. Kali, N. Orion and E. Mazor, Software for assisting high-school students in the spatial perception of geological structures, Journal of Geoscience Education, 45, 10-20, 1997. 5. D. Ben-Chaim, G. Lappan, and R. T. Houang, The effect of instruction on spatial visualization skills of middle school boys and girls, American Educational Research Journal, 25, 1, 51-71, 1988.

  11. On the use of orientation filters for 3D reconstruction in event-driven stereo vision

    PubMed Central

    Camuñas-Mesa, Luis A.; Serrano-Gotarredona, Teresa; Ieng, Sio H.; Benosman, Ryad B.; Linares-Barranco, Bernabe

    2014-01-01

    The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, thereby increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction. PMID:24744694
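
The role of the orientation constraint can be sketched in simplified form. In the paper, edge orientation comes from banks of Gabor filters applied to the event stream; here each event is assumed to already carry an orientation estimate, and candidate matches across the two sensors must agree in row (epipolar constraint for rectified sensors), time, and orientation. This is an illustrative reduction, not the paper's algorithm.

```python
# Simplified sketch of orientation-constrained stereo event matching.
# Each event is (timestamp_s, row, col, orientation_deg); orientation is
# assumed precomputed (in the paper, by Gabor filter banks).

def match_events(left, right, dt_max=1e-3, dtheta_max=15.0):
    """Pair each left event with the temporally closest compatible right event."""
    pairs = []
    for le in left:
        candidates = [re for re in right
                      if re[1] == le[1]                      # same row (epipolar)
                      and abs(re[0] - le[0]) <= dt_max        # close in time
                      and abs(re[3] - le[3]) <= dtheta_max]   # similar edge angle
        if candidates:
            best = min(candidates, key=lambda re: abs(re[0] - le[0]))
            pairs.append((le, best))
    return pairs

left  = [(0.0100, 5, 40, 45.0), (0.0102, 7, 41, 90.0)]
right = [(0.0101, 5, 32, 50.0),   # same row, similar angle -> matches
         (0.0101, 5, 33, 135.0)]  # same row and time, wrong angle -> rejected
print(len(match_events(left, right)))
```

Without the orientation test, both right events would be candidates for the first left event; the extra constraint is what prunes the false pairing.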

  12. Volumetric 3D Display System with Static Screen

    NASA Technical Reports Server (NTRS)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots is created by using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen.
This design eliminates the moving screen seen in previous approaches, so there is no image jitter, and it provides an inherent parallel mechanism for 3D voxel addressing. High spatial resolution is possible, and a full-color display is easy to implement. The system is low-cost and low-maintenance.

  13. Teaching Motion with the Global Positioning System

    ERIC Educational Resources Information Center

    Budisa, Marko; Planinsic, Gorazd

    2003-01-01

    We have used the GPS receiver and a PC interface to track different types of motion. Various hands-on experiments that enlighten the physics of motion at the secondary school level are suggested (visualization of 2D and 3D motion, measuring car drag coefficient and fuel consumption). (Contains 8 figures.)

  14. 3D Visualization of Mangrove and Aquaculture Conversion in Banate Bay, Iloilo

    NASA Astrophysics Data System (ADS)

    Domingo, G. A.; Mallillin, M. M.; Perez, A. M. C.; Claridades, A. R. C.; Tamondong, A. M.

    2017-10-01

    Studies have shown that mangrove forests in the Philippines have been drastically reduced by conversion to fishponds, salt ponds, reclamation, and other forms of industrial development; as of 2011, 95% of Iloilo's mangrove forest had been converted to fishponds. In this research, six (6) Landsat images, acquired in 1973, 1976, 2000, 2006, 2010, and 2016, were classified using Support Vector Machine (SVM) Classification to determine land cover changes, particularly the area change of mangrove and aquaculture from 1976 to 2016. The results of the classification were used as layers for the generation of 3D visualization models using four (4) platforms, namely Google Earth, ArcScene, Virtual Terrain Project, and Terragen. A perception survey was conducted among respondents with different levels of expertise in spatial analysis, 3D visualization, as well as in forestry, fisheries, and aquatic resources, to assess the usability, effectiveness, and potential of the various platforms used. Change detection showed that the largest negative change for mangrove areas happened from 1976 to 2000, with the mangrove area decreasing from 545.374 hectares to 286.935 hectares. The highest increase in fishpond area occurred from 1973 to 1976, rising from 2,930.67 hectares to 3,441.51 hectares. Results of the perception survey showed that ArcScene is preferred for spatial analysis, while respondents favored Terragen for 3D visualization and for forestry, fishery, and aquatic resources applications.
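
    The reported area figures translate into change statistics with simple arithmetic. The sketch below reproduces the 1976 to 2000 mangrove change from the numbers in the abstract; the SVM classification step itself is not shown.

```python
def area_change(old_ha, new_ha):
    """Absolute (ha) and percent change between two classified cover areas."""
    delta = new_ha - old_ha
    return delta, 100.0 * delta / old_ha

# Mangrove cover in Banate Bay, 1976 -> 2000 (figures from the study):
delta, pct = area_change(545.374, 286.935)
print(round(delta, 3), round(pct, 1))  # -258.439 -47.4
```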

  15. You Can Touch This! Bringing HST images to life as 3-D models

    NASA Astrophysics Data System (ADS)

    Christian, Carol A.; Nota, A.; Grice, N. A.; Sabbi, E.; Shaheen, N.; Greenfield, P.; Hurst, A.; Kane, S.; Rao, R.; Dutterer, J.; de Mink, S. E.

    2014-01-01

    We present the very first results of an innovative process to transform Hubble images into tactile 3-D models of astronomical objects. We have created a very new, unique tool for understanding astronomical phenomena, especially designed to make astronomy accessible to visually impaired children and adults. From the multicolor images of stellar clusters, we construct 3-D computer models that are digitally sliced into layers, each featuring touchable patterning and Braille characters, and are printed on a 3-D printer. The slices are then fitted together, so that the user can explore the structure of the cluster environment with their fingertips, slice-by-slice, analogous to a visual fly-through. Students will be able to identify and spatially locate the different components of these complex astronomical objects, namely gas, dust and stars, and will learn about the formation and composition of stellar clusters. The primary audiences for the 3D models are middle school and high school blind students and, secondarily, blind adults. However, we believe that the final materials will address a broad range of individuals with varied and multi-sensory learning styles, and will be interesting and visually appealing to the public at large.

  16. Research on Visualization of Ground Laser Radar Data Based on Osg

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is a new advanced technology integrating optical, mechanical, electronic, and computer technologies. It can scan the whole shape and form of spatial objects with high precision, directly collecting the point cloud data of a ground object and creating its structure for rendering. A capable 3D rendering engine is needed to optimize and display the 3D model in order to meet the demands of real-time realistic rendering of complex scenes. OpenSceneGraph (OSG) is an open-source 3D graphics engine. Compared with the current mainstream 3D rendering engines, OSG is practical, economical, and easy to extend, and is therefore widely used in the fields of virtual simulation, virtual reality, and science and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is built on OSG and the cross-platform C++ application development framework Qt. For point cloud data in .txt format and triangulated mesh data in .obj format, the platform implements display of 3D laser point clouds and triangulated meshes. Experiments show that the platform has strong practical value, as it is easy to operate and provides good interaction.
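
    As a rough illustration of the data-loading step, the sketch below parses a whitespace-delimited .txt point cloud into vertex and color lists. The column layout is an assumption, and the actual OSG/Qt rendering code (C++) is omitted.

```python
def parse_point_cloud(text):
    """Parse a whitespace-delimited .txt point cloud: 'x y z [r g b]' per line.
    (The exact column layout is an assumption; adapt to the scanner's export.)"""
    points, colors = [], []
    for line in text.strip().splitlines():
        vals = line.split()
        if len(vals) < 3:
            continue  # skip malformed lines
        points.append(tuple(float(v) for v in vals[:3]))
        colors.append(tuple(int(v) for v in vals[3:6]) if len(vals) >= 6 else None)
    return points, colors

pts, cols = parse_point_cloud("1.0 2.0 3.0 255 0 0\n4.0 5.0 6.0")
print(len(pts), cols[0])  # 2 (255, 0, 0)
```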

  17. Improving physics teaching materials on sound for visually impaired students in high school

    NASA Astrophysics Data System (ADS)

    Toenders, Frank G. C.; de Putter-Smits, Lesley G. A.; Sanders, Wendy T. M.; den Brok, Perry

    2017-09-01

    When visually impaired students attend regular high school, additional materials are necessary to help them understand physics concepts. The time teachers have to develop teaching materials for such students is scarce. Visually impaired students in regular high school physics classes often use a braille version of the physics textbook. Previously, we evaluated the physics learning environment of a blind high school student in a regular Dutch high school. In this research we evaluate the use of a revised braille textbook, relief drawings, and 3D models. The research focused on the topic of sound in grade 10.

  18. [Three-dimensional morphological modeling and visualization of wheat root system].

    PubMed

    Tan, Feng; Tang, Liang; Hu, Jun-Cheng; Jiang, Hai-Yan; Cao, Wei-Xing; Zhu, Yan

    2011-01-01

    Crop three-dimensional (3D) morphological modeling and visualization is an important part of digital plant research. This paper aimed to develop a 3D morphological model of the wheat root system based on the parameters of wheat root morphological features, and to realize the visualization of wheat root growth. Following the framework of visualization technology for wheat root growth, a 3D visualization model of the wheat root axis, comprising a root axis growth model, a branch geometric model, and a root axis curve model, was developed first. Then, by integrating root topology, the corresponding positions were determined, and the whole wheat root system was reconstructed three-dimensionally using the morphological feature parameters in the root morphological model. Finally, on the OpenGL platform, and by integrating the techniques of texture mapping, lighting rendering, and collision detection, the 3D visualization of wheat root growth was realized. The 3D output of the wheat root system from the model was vivid, and could realize 3D root system visualization of different wheat cultivars under different water regimes and nitrogen application rates. This study lays a technical foundation for further development of an integral visualization system of the wheat plant.
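
    A root axis growth model of the general kind described can be sketched as a parametric polyline generator. The parameters and update rule below are illustrative inventions, not the published wheat model.

```python
import math, random

def grow_root_axis(n_seg, seg_len, droop=0.08, wobble=0.15, seed=0):
    """Generate one root axis as a 3D polyline: each segment continues the
    previous heading, bent further downward ('droop', a toy gravitropism)
    plus random azimuthal wobble. Illustrative parameters only."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0, 0.0)]
    theta, phi = 2.6, 0.0  # polar angle near straight down (-z), azimuth
    for _ in range(n_seg):
        phi += rng.uniform(-wobble, wobble)
        theta = min(math.pi, theta + droop * rng.random())
        x, y, z = pts[-1]
        pts.append((x + seg_len * math.sin(theta) * math.cos(phi),
                    y + seg_len * math.sin(theta) * math.sin(phi),
                    z + seg_len * math.cos(theta)))
    return pts

axis = grow_root_axis(20, 0.5)
print(len(axis), axis[-1][2] < 0)  # 21 True (tip below the soil surface)
```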

  19. Quantitative visualization of synchronized insulin secretion from 3D-cultured cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Takahiro; Kanamori, Takao; Inouye, Satoshi

    Quantitative visualization of synchronized insulin secretion was performed in an isolated rat pancreatic islet and a spheroid of a rat pancreatic beta cell line using a method of video-rate bioluminescence imaging. Video-rate images of insulin secretion from 3D-cultured cells were obtained by expressing the fusion protein of insulin and Gaussia luciferase (Insulin-GLase). A subclonal rat INS-1E cell line stably expressing Insulin-GLase, named iGL, was established, and a cluster of iGL cells showed oscillatory insulin secretion that was completely synchronized in response to high glucose. Furthermore, we demonstrated the effect of an antidiabetic drug, glibenclamide, on synchronized insulin secretion from 2D- and 3D-cultured iGL cells. The amount of secreted Insulin-GLase from iGL cells was also determined by a luminometer. Thus, our bioluminescence imaging method could generally be used for investigating protein secretion from living 3D-cultured cells. In addition, the iGL cell line would be valuable for evaluating antidiabetic drugs. - Highlights: • An imaging method for protein secretion from 3D-cultured cells was established. • The fused protein of insulin to GLase, Insulin-GLase, was used as a reporter. • Synchronous insulin secretion was visualized in rat islets and spheroidal beta cells. • A rat beta cell line stably expressing Insulin-GLase, named iGL, was established. • Effect of an antidiabetic drug on insulin secretion was visualized in iGL cells.
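
    Synchronization between two intensity traces, such as those extracted from a bioluminescence image series, is commonly quantified by correlation. The sketch below is a generic illustration of that idea, not the authors' analysis pipeline.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length intensity traces."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Two in-phase oscillatory traces correlate near +1.
t = [i / 10 for i in range(100)]
s1 = [math.sin(x) for x in t]
s2 = [0.8 * math.sin(x) + 0.1 for x in t]
print(round(pearson(s1, s2), 3))  # 1.0
```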

  20. Young children's coding and storage of visual and verbal material.

    PubMed

    Perlmutter, M; Myers, N A

    1975-03-01

    36 preschool children (mean age 4.2 years) were each tested on 3 recognition memory lists differing in test mode (visual only, verbal only, combined visual-verbal). For one-third of the children, original list presentation was visual only, for another third, presentation was verbal only, and the final third received combined visual-verbal presentation. The subjects generally performed at a high level of correct responding. Verbal-only presentation resulted in less correct recognition than did either visual-only or combined visual-verbal presentation. However, because performances under both visual-only and combined visual-verbal presentation were statistically comparable, and a high level of spontaneous labeling was observed when items were presented only visually, a dual-processing conceptualization of memory in 4-year-olds was suggested.

  1. Accommodation in Astigmatic Children During Visual Task Performance

    PubMed Central

    Harvey, Erin M.; Miller, Joseph M.; Apple, Howard P.; Parashar, Pavan; Twelker, J. Daniel; Crescioni, Mabel; Davis, Amy L.; Leonard-Green, Tina K.; Campus, Irene; Sherrill, Duane L.

    2014-01-01

    Purpose. To determine the accuracy and stability of accommodation in uncorrected children during visual task performance. Methods. Subjects were second- to seventh-grade children from a highly astigmatic population. Measurements of noncycloplegic right eye spherical equivalent (Mnc) were obtained while uncorrected subjects performed three visual tasks at near (40 cm) and distance (2 m). Tasks included reading sentences with stimulus letter size near acuity threshold and an age-appropriate letter size (high task demands) and viewing a video (low task demand). Repeated measures ANOVA assessed the influence of astigmatism, task demand, and accommodative demand on accuracy (mean Mnc) and variability (mean SD of Mnc) of accommodation. Results. For near and distance analyses, respectively, sample size was 321 and 247, mean age was 10.37 (SD 1.77) and 10.30 (SD 1.74) years, mean cycloplegic M was 0.48 (SD 1.10) and 0.79 diopters (D) (SD 1.00), and mean astigmatism was 0.99 (SD 1.15) and 0.75 D (SD 0.96). Poor accommodative accuracy was associated with high astigmatism, low task demand (video viewing), and high accommodative demand. The negative effect of accommodative demand on accuracy increased with increasing astigmatism, with the poorest accommodative accuracy observed in high astigmats (≥3.00 D) with high accommodative demand/high hyperopia (1.53 D and 2.05 D of underaccommodation for near and distant stimuli, respectively). Accommodative variability was greatest in high astigmats and was uniformly high across task condition. No/low and moderate astigmats showed higher variability for the video task than the reading tasks. Conclusions. Accuracy of accommodation is reduced in uncorrected children with high astigmatism and high accommodative demand/high hyperopia, but improves with increased visual task demand (reading). High astigmats showed the greatest variability in accommodation. PMID:25103265

  2. Models of Speed Discrimination

    NASA Technical Reports Server (NTRS)

    1997-01-01

    The prime purpose of this project was to investigate various theoretical issues concerning the integration of information across visual space. To date, most of the research efforts in the study of the visual system have been focused in two almost non-overlapping directions. One research focus has been low-level perception as studied by psychophysics. The other has been the study of high-level vision, exemplified by the study of object perception. Most of the effort in psychophysics has been devoted to the search for the fundamental "features" of perception. The general idea is that the most peripheral processes of the visual system decompose the input into features that are then used for classification and recognition. The experimental and theoretical focus has been on finding and describing the analyzers that decompose images into useful components. Various models are then compared to physiological measurements performed on neurons in the sensory systems. In the study of higher-level perception, work has focused on the representation of objects and on the connections between various physical effects and object perception. In this category we find the perception of 3D from a variety of physical measurements, including motion, shading, and other physical phenomena. With few exceptions, there has been very limited development of theories describing how the visual system might combine the outputs of the analyzers to form the representation of visual objects. The processes underlying the integration of information over space therefore represent critical aspects of the visual system. Understanding these processes will have implications for our expectations of the underlying physiological mechanisms, as well as for our models of the internal representation of visual percepts. In this project, we explored several mechanisms related to spatial summation, attention, and eye movements. The project comprised three components: 1. Modeling visual search for the detection of speed deviation. 2. Perception of moving objects. 3. Exploring the role of eye movements in various visual tasks.

  3. Experimental investigations into visual and electronic tooth color measurement.

    PubMed

    Ratzmann, Anja; Treichel, Anja; Langforth, Gabriele; Gedrange, Tomasz; Welk, Alexander

    2011-04-01

    The present study aimed to examine the validity of visual color assessment and of an electronic tooth color measurement system, the Shade Inspector™, in comparison with a gold standard. Additionally, the reproducibility of electronic measurements was demonstrated by means of two reference systems. Ceramic specimens of two thicknesses (h=1.6 mm, h=2.6 mm) were used. Three experienced dental technicians using the VITAPAN Classical(®) color scale carried out all visual tests. Validity of the visual assessment and the electronic measurements was confirmed separately for both thicknesses by means of lightness and hue of the VITAPAN Classical(®) color scale. Reproducibility of electronic measurements was confirmed by means of the VITAPAN Classical(®) and 3D-Master(®). The 3D-Master(®) data were calculated according to lightness, hue, and chroma. The intraclass correlation coefficient (ICC) was used to assess validity/reproducibility for lightness and chroma; Kappa statistics were used for hue. A level ≥0.75 was pre-established for the ICC and ≥0.60 for the Kappa index. RESULTS OF VISUAL COLOR ASSESSMENT: Validity for lightness was good for both thicknesses; agreement rates for hue were inconsistent. ELECTRONIC MEASUREMENT: Validity for lightness was fair to good; hue values were below 0.60. Reproducibility of lightness was good to very good for both reference systems. Hue values (VITAPAN Classical(®)) were above 0.60 for the 1.6 mm test specimens and below 0.60 for the 2.6 mm specimens; Kappa values for 3D-Master(®) were ≥0.60 for all measurements, and reproducibility of chroma was very good. Validity was better for visual than for electronic color assessment. Reproducibility of the electronic device, the Shade Inspector™, was given for the VITAPAN Classical(®) and 3D-Master(®) systems.
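
    For reference, a Kappa index of the kind used here for hue agreement can be computed as in the following sketch (generic unweighted Cohen's kappa; the study's exact procedure is not specified in the abstract, and the rating values are invented).

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical judgements (e.g. hue codes):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (po - pe) / (1 - pe)

print(cohens_kappa(["A", "A", "B", "B"], ["A", "A", "B", "A"]))  # 0.5
```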

  4. Short Term Reproducibility of a High Contrast 3-D Isotropic Optic Nerve Imaging Sequence in Healthy Controls.

    PubMed

    Harrigan, Robert L; Smith, Alex K; Mawn, Louise A; Smith, Seth A; Landman, Bennett A

    2016-02-27

    The optic nerve (ON) plays a crucial role in human vision transporting all visual information from the retina to the brain for higher order processing. There are many diseases that affect the ON structure such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high contrast and high-resolution 3-D acquired isotropic imaging sequence optimized for ON imaging. We have acquired scan-rescan data using the optimized sequence and a current standard of care protocol for 10 subjects. We show that this sequence has superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have lower short-term reproducibility than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.

  5. Short term reproducibility of a high contrast 3-D isotropic optic nerve imaging sequence in healthy controls

    NASA Astrophysics Data System (ADS)

    Harrigan, Robert L.; Smith, Alex K.; Mawn, Louise A.; Smith, Seth A.; Landman, Bennett A.

    2016-03-01

    The optic nerve (ON) plays a crucial role in human vision transporting all visual information from the retina to the brain for higher order processing. There are many diseases that affect the ON structure such as optic neuritis, anterior ischemic optic neuropathy and multiple sclerosis. Because the ON is the sole pathway for visual information from the retina to areas of higher level processing, measures of ON damage have been shown to correlate well with visual deficits. Increased intracranial pressure has been shown to correlate with the size of the cerebrospinal fluid (CSF) surrounding the ON. These measures are generally taken at an arbitrary point along the nerve and do not account for changes along the length of the ON. We propose a high contrast and high-resolution 3-D acquired isotropic imaging sequence optimized for ON imaging. We have acquired scan-rescan data using the optimized sequence and a current standard of care protocol for 10 subjects. We show that this sequence has superior contrast-to-noise ratio to the current standard of care while achieving a factor of 11 higher resolution. We apply a previously published automatic pipeline to segment the ON and CSF sheath and measure the size of each individually. We show that these measures of ON size have lower short-term reproducibility than the population variance and the variability along the length of the nerve. We find that the proposed imaging protocol is (1) useful in detecting population differences and local changes and (2) a promising tool for investigating biomarkers related to structural changes of the ON.

  6. An integrative view of storage of low- and high-level visual dimensions in visual short-term memory.

    PubMed

    Magen, Hagit

    2017-03-01

    Efficient performance in an environment filled with complex objects is often achieved through the temporal maintenance of conjunctions of features from multiple dimensions. The most striking finding in the study of binding in visual short-term memory (VSTM) is equal memory performance for single features and for integrated multi-feature objects, a finding that has been central to several theories of VSTM. Nevertheless, research on binding in VSTM focused almost exclusively on low-level features, and little is known about how items from low- and high-level visual dimensions (e.g., colored manmade objects) are maintained simultaneously in VSTM. The present study tested memory for combinations of low-level features and high-level representations. In agreement with previous findings, Experiments 1 and 2 showed decrements in memory performance when non-integrated low- and high-level stimuli were maintained simultaneously compared to maintaining each dimension in isolation. However, contrary to previous findings the results of Experiments 3 and 4 showed decrements in memory performance even when integrated objects of low- and high-level stimuli were maintained in memory, compared to maintaining single-dimension objects. Overall, the results demonstrate that low- and high-level visual dimensions compete for the same limited memory capacity, and offer a more comprehensive view of VSTM.

  7. ePlant: Visualizing and Exploring Multiple Levels of Data for Hypothesis Generation in Plant Biology

    PubMed Central

    Waese, Jamie; Fan, Jim; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Town, Chris; Stuerzlinger, Wolfgang

    2017-01-01

    A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized with separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools presented in a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an “app” on Araport. Building on readily available web services, the code for ePlant is freely available for adaptation to research on any other biological species. PMID:28808136

  8. Educating the blind brain: a panorama of neural bases of vision and of training programs in organic neurovisual deficits

    PubMed Central

    Coubard, Olivier A.; Urbanski, Marika; Bourlon, Clémence; Gaumet, Marie

    2014-01-01

    Vision is a complex function, which is achieved by movements of the eyes to properly foveate targets at any location in 3D space and to continuously refresh neural information in the different visual pathways. The visual system involves five main routes originating in the retinas but varying in their destination within the brain: the occipital cortex, but also the superior colliculus (SC), the pretectum, the supra-chiasmatic nucleus, the nucleus of the optic tract and terminal dorsal, medial and lateral nuclei. Visual pathway architecture obeys systematization in sagittal and transversal planes so that visual information from left/right and upper/lower hemi-retinas, corresponding respectively to right/left and lower/upper visual fields, is processed ipsilaterally and ipsialtitudinally to hemi-retinas in left/right hemispheres and upper/lower fibers. Organic neurovisual deficits may occur at any level of this circuitry from the optic nerve to subcortical and cortical destinations, resulting in low or high-level visual deficits. In this didactic review article, we provide a panorama of the neural bases of eye movements and visual systems, and of related neurovisual deficits. Additionally, we briefly review the different schools of rehabilitation of organic neurovisual deficits, and show that whatever the emphasis is put on action or perception, benefits may be observed at both motor and perceptual levels. Given the extent of its neural bases in the brain, vision in its motor and perceptual aspects is also a useful tool to assess and modulate central nervous system (CNS) in general. PMID:25538575

  9. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling), and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built, and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build 3D models of the objects, and display the models on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  10. Geometric quantification of features in large flow fields.

    PubMed

    Kendall, Wesley; Huang, Jian; Peterka, Tom

    2012-01-01

    Interactive exploration of flow features in large-scale 3D unsteady-flow data is one of the most challenging visualization problems today. To comprehensively explore the complex feature spaces in these datasets, a proposed system employs a scalable framework for investigating a multitude of characteristics from traced field lines. This capability supports the examination of various neighborhood-based geometric attributes in concert with other scalar quantities. Such an analysis wasn't previously possible because of the large computational overhead and I/O requirements. The system integrates visual analytics methods by letting users procedurally and interactively describe and extract high-level flow features. An exploration of various phenomena in a large global ocean-modeling simulation demonstrates the approach's generality and expressiveness as well as its efficacy.
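
    One example of a neighborhood-based geometric attribute of a traced field line is its discrete curvature. The Menger-curvature sketch below is our illustration, not necessarily among the attributes the described system computes.

```python
import math

def menger_curvature(p, q, r):
    """Discrete curvature at q from three consecutive field-line points:
    4 * triangle area / product of side lengths (Menger curvature)."""
    ab, bc, ca = math.dist(p, q), math.dist(q, r), math.dist(r, p)
    s = (ab + bc + ca) / 2  # Heron's formula for the triangle area
    area_sq = max(s * (s - ab) * (s - bc) * (s - ca), 0.0)
    return 4 * math.sqrt(area_sq) / (ab * bc * ca)

# Points sampled from a unit circle have curvature ~= 1.
pts = [(math.cos(a), math.sin(a)) for a in (0.0, 0.1, 0.2)]
print(round(menger_curvature(*pts), 4))  # 1.0
```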

  11. A 3D Model Based Indoor Navigation System for Hubei Provincial Museum

    NASA Astrophysics Data System (ADS)

    Xu, W.; Kruminaite, M.; Onrust, B.; Liu, H.; Xiong, Q.; Zlatanova, S.

    2013-11-01

    3D models are more powerful than 2D maps for indoor navigation in a complicated space like the Hubei Provincial Museum, because they can provide accurate descriptions of the locations of indoor objects (e.g., doors, windows, tables) and the context information of these objects. In addition, according to our survey, the 3D model is the navigation environment preferred by users. Therefore, a 3D model based indoor navigation system was developed for the Hubei Provincial Museum to guide its visitors. The system consists of three layers: application, web service, and navigation, which together support the localization, navigation, and visualization functions of the system. The system has three main strengths: it stores all required data in one database and performs most calculations on the web server, which makes the mobile client very lightweight; the network used for navigation is extracted semi-automatically and can be renewed; and the graphic user interface (GUI), which is based on a game engine, achieves high performance in visualizing the 3D model on a mobile display.
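
    Routing over a semi-automatically extracted navigation network is typically a shortest-path computation. The sketch below uses Dijkstra's algorithm over a made-up graph; the node names and distances are invented, not the museum's actual network.

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a navigation network: graph[node] = [(neighbor, metres)]."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1], dist[goal]

g = {"entrance": [("hall", 20.0), ("stairs", 5.0)],
     "stairs": [("hall", 8.0)],
     "hall": [("exhibit", 10.0)]}
print(shortest_path(g, "entrance", "exhibit"))
# (['entrance', 'stairs', 'hall', 'exhibit'], 23.0)
```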

  12. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE PAGES

    Steed, Chad A.; Halsey, William; Dehoff, Ryan; ...

    2017-02-16

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.
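
    A basic ingredient of segmented time series views is splitting an irregularly sampled stream wherever the sampling gap grows too large. The sketch below is a generic stand-in for that preprocessing idea, not Falcon's implementation.

```python
def segment_on_gaps(timestamps, values, max_gap):
    """Split an irregularly sampled series into segments wherever the
    interval between consecutive samples exceeds max_gap."""
    segments, current = [], [(timestamps[0], values[0])]
    for t, v in zip(timestamps[1:], values[1:]):
        if t - current[-1][0] > max_gap:
            segments.append(current)  # close the segment at the gap
            current = []
        current.append((t, v))
    segments.append(current)
    return segments

segs = segment_on_gaps([0, 1, 2, 50, 51], [9, 8, 7, 3, 2], max_gap=10)
print(len(segs), [len(s) for s in segs])  # 2 [3, 2]
```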

  13. Falcon: Visual analysis of large, irregularly sampled, and multivariate time series data in additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A.; Halsey, William; Dehoff, Ryan

    Flexible visual analysis of long, high-resolution, and irregularly sampled time series data from multiple sensor streams is a challenge in several domains. In the field of additive manufacturing, this capability is critical for realizing the full potential of large-scale 3D printers. Here, we propose a visual analytics approach that helps additive manufacturing researchers acquire a deep understanding of patterns in log and imagery data collected by 3D printers. Our specific goals include discovering patterns related to defects and system performance issues, optimizing build configurations to avoid defects, and increasing production efficiency. We introduce Falcon, a new visual analytics system that allows users to interactively explore large, time-oriented data sets from multiple linked perspectives. Falcon provides overviews, detailed views, and unique segmented time series visualizations, all with adjustable scale options. To illustrate the effectiveness of Falcon at providing thorough and efficient knowledge discovery, we present a practical case study involving experts in additive manufacturing and data from a large-scale 3D printer. The techniques described are applicable to the analysis of any quantitative time series, though the focus of this paper is on additive manufacturing.

  14. Cancer-Related Analysis of Variants Toolkit (CRAVAT) | Informatics Technology for Cancer Research (ITCR)

    Cancer.gov

    CRAVAT is an easy to use web-based tool for analysis of cancer variants (missense, nonsense, in-frame indel, frameshift indel, splice site). CRAVAT provides scores and a variety of annotations that assist in identification of important variants. Results are provided in an interactive, highly graphical webpage and include annotated 3D structure visualization. CRAVAT is also available for local or cloud-based installation as a Docker container. MuPIT provides 3D visualization of mutation clusters and functional annotation and is now integrated with CRAVAT.

  15. Exploring Direct 3D Interaction for Full Horizontal Parallax Light Field Displays Using Leap Motion Controller

    PubMed Central

    Adhikarla, Vamsi Kiran; Sodnik, Jaka; Szolgay, Peter; Jakus, Grega

    2015-01-01

    This paper reports on the design and evaluation of direct 3D gesture interaction with a full horizontal parallax light field display. A light field display defines a visual scene using directional light beams emitted from multiple light sources as if they were emitted from scene points. Each scene point is rendered individually, resulting in more realistic and accurate 3D visualization compared with other 3D display technologies. We propose an interaction setup combining the visualization of objects within the field of view (FOV) of a light field display and their selection through freehand gestures tracked by the Leap Motion Controller. The accuracy and usefulness of the proposed interaction setup were also evaluated in a user study with test subjects. The results of the study revealed high user preference for freehand interaction with the light field display, as well as the relatively low cognitive demand of this technique. Further, our results revealed some limitations of the proposed setup and adjustments to be addressed in future work. PMID:25875189

  16. High-immersion three-dimensional display of the numerical computer model

    NASA Astrophysics Data System (ADS)

    Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu

    2013-08-01

    High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military applications and so on. However, most technologies present the 3D display in front of screens that are parallel with the walls, and the sense of immersion is decreased. To obtain correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the common focus plane, and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is common to use virtual cameras, which are ideal pinhole cameras, to display a 3D model in a computer system. Virtual cameras can simulate this shooting method for multi-view, ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of the virtual camera is determined by the observer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras: the near clip plane setting is the main point of the first method, while the rotation angle of the virtual cameras is the main point of the second. To validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with real objects in the real world.
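The offset perspective projection described above amounts to an asymmetric viewing frustum whose extents depend on the observer's eye position relative to the display surface. A minimal sketch of computing those extents (function names and parameters are illustrative, not taken from the paper):

```python
def offaxis_frustum(eye, screen_halfwidth, screen_halfheight, screen_dist, near):
    """Near-plane extents (left, right, bottom, top) for an off-axis
    (offset) perspective projection.

    The display rectangle lies at distance `screen_dist` in front of the eye
    along the view normal; `eye` = (ex, ey) is the eye's offset from the
    screen center in the screen plane. Extents are mapped onto the near
    plane by similar triangles.
    """
    ex, ey = eye
    s = near / screen_dist  # similar-triangle scale from screen plane to near plane
    left   = (-screen_halfwidth  - ex) * s
    right  = ( screen_halfwidth  - ex) * s
    bottom = (-screen_halfheight - ey) * s
    top    = ( screen_halfheight - ey) * s
    return left, right, bottom, top

# A centered eye yields a symmetric frustum; an offset eye skews it.
print(offaxis_frustum((0.0, 0.0), 1.0, 0.75, 2.0, 0.1))
print(offaxis_frustum((0.5, 0.0), 1.0, 0.75, 2.0, 0.1))
```

These extents are exactly what an asymmetric projection call such as OpenGL's `glFrustum(left, right, bottom, top, near, far)` expects.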

  17. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  18. See-Through Imaging of Laser-Scanned 3D Cultural Heritage Objects Based on Stochastic Rendering of Large-Scale Point Clouds

    NASA Astrophysics Data System (ADS)

    Tanaka, S.; Hasegawa, K.; Okamoto, N.; Umegaki, R.; Wang, S.; Uemura, M.; Okamoto, A.; Koyamada, K.

    2016-06-01

    We propose a method for the precise 3D see-through imaging, or transparent visualization, of the large-scale and complex point clouds acquired via the laser scanning of 3D cultural heritage objects. Our method is based on a stochastic algorithm and directly uses the 3D points, which are acquired using a laser scanner, as the rendering primitives. This method achieves the correct depth feel without requiring depth sorting of the rendering primitives along the line of sight. Eliminating this need allows us to avoid long computation times when creating natural and precise 3D see-through views of laser-scanned cultural heritage objects. The opacity of each laser-scanned object is also flexibly controllable. For a laser-scanned point cloud consisting of more than 10^7 or 10^8 3D points, the pre-processing requires only a few minutes, and the rendering can be executed at interactive frame rates. Our method enables the creation of cumulative 3D see-through images of time-series laser-scanned data. It also offers the possibility of fused visualization for observing a laser-scanned object behind a transparent high-quality photographic image placed in the 3D scene. We demonstrate the effectiveness of our method by applying it to festival floats of high cultural value. These festival floats have complex outer and inner 3D structures and are suitable for see-through imaging.
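The stochastic idea can be modeled as drawing each point opaquely but keeping only a fraction p of the points: if L thinned points overlap a pixel, the chance the pixel is covered is 1 - (1 - p)^L, so the keep-probability for a target opacity α has a closed form. An illustrative sketch of that relationship (not the authors' actual code):

```python
def keep_probability(alpha, overlap):
    """Per-point keep probability p so that `overlap` stochastically thinned,
    opaque points yield pixel coverage (perceived opacity) alpha:
        alpha = 1 - (1 - p) ** overlap
        =>  p = 1 - (1 - alpha) ** (1 / overlap)
    No depth sorting is needed because each kept point is fully opaque."""
    return 1.0 - (1.0 - alpha) ** (1.0 / overlap)

p = keep_probability(0.5, overlap=100)
coverage = 1.0 - (1.0 - p) ** 100  # round-trip check against the target opacity
print(p, coverage)
```

Because every rendered primitive is opaque, the z-buffer alone resolves visibility, which is what removes the costly sort along the line of sight.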

  19. A method to determine the impact of reduced visual function on nodule detection performance.

    PubMed

    Thompson, J D; Lança, C; Lança, L; Hogg, P

    2017-02-01

    In this study we aim to validate a method to assess the impact of reduced visual function on observer performance concurrently with a nodule detection task. Three consultant radiologists completed a nodule detection task under three conditions: without visual defocus (0.00 dioptres; D), and with two different magnitudes of visual defocus (-1.00 D and -2.00 D). Defocus was applied with lenses, and visual function was assessed prior to each image evaluation. Observers evaluated the same cases on each occasion; these comprised 50 abnormal cases containing 1-4 simulated nodules (5, 8, 10 and 12 mm spherical diameter, 100 HU) placed within a phantom, and 25 normal cases (images containing no nodules). Data were collected under the free-response paradigm and analysed using RJafroc. A difference in nodule detection performance would be considered significant at p < 0.05. All observers had acceptable visual function prior to beginning the nodule detection task. Visual acuity was reduced to an unacceptable level for two observers when defocussed to -1.00 D and for one observer when defocussed to -2.00 D. Stereoacuity was unacceptable for one observer when defocussed to -2.00 D. Despite unsatisfactory visual function in the presence of defocus, we were unable to find a statistically significant difference in nodule detection performance (F(2,4) = 3.55, p = 0.130). A method to assess visual function and observer performance concurrently is proposed. In this pilot evaluation we were unable to detect any difference in nodule detection performance when using lenses to reduce visual function. Copyright © 2016 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  20. Convergence Insufficiency, Accommodative Insufficiency, Visual Symptoms, and Astigmatism in Tohono O'odham Students

    PubMed Central

    Twelker, J. Daniel; Miller, Joseph M.; Campus, Irene

    2016-01-01

    Purpose. To determine the rates of convergence insufficiency (CI) and accommodative insufficiency (AI) and assess the relation between CI, AI, visual symptoms, and astigmatism in school-age children. Methods. 3rd-8th-grade students completed the Convergence Insufficiency Symptom Survey (CISS) and binocular vision testing with correction, if prescribed. Students were categorized by astigmatism magnitude (no/low: <1.00 D, moderate: 1.00 D to <3.00 D, and high: ≥3.00 D), presence/absence of clinical signs of CI and AI, and presence of symptoms. Analyses determined the rates of clinical CI and AI and of symptomatic CI and AI, and assessed the relation between CI, AI, visual symptoms, and astigmatism. Results. In the sample of 484 students (11.67 ± 1.81 years of age), the rate of symptomatic CI was 6.2% and of symptomatic AI 18.2%. AI was more common in students with CI than without CI. Students with AI only (p = 0.02) and with CI and AI (p = 0.001) had higher symptom scores than students with neither CI nor AI. Moderate and high astigmats were not at increased risk for CI or AI. Conclusions. With-the-rule astigmats are not at increased risk for CI or AI. High comorbidity rates of CI and AI and higher symptom scores with AI suggest that research is needed to determine symptomatology specific to CI. PMID:27525112

  1. OpenWebGlobe 2: Visualization of Complex 3D Geodata in the (Mobile) Web Browser

    NASA Astrophysics Data System (ADS)

    Christen, M.

    2016-06-01

    Providing worldwide high-resolution data for virtual globes involves compute- and storage-intensive data processing tasks. Furthermore, rendering complex 3D geodata, such as 3D city models with an extremely high polygon count and a vast number of textures, at interactive frame rates is still a very challenging task, especially on mobile devices. This paper presents an approach for processing, caching and serving massive geospatial data in a cloud-based environment for large-scale, out-of-core, highly scalable 3D scene rendering on a web-based virtual globe. Cloud computing is used for processing large amounts of geospatial data and for providing 2D and 3D map data to a large number of (mobile) web clients. In this paper, the approach for processing, rendering and caching very large datasets in the currently developed virtual globe "OpenWebGlobe 2" is shown, which displays 3D geodata on nearly every device.
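Serving global map data to many web clients typically relies on a pre-cut tile pyramid; one common scheme maps longitude/latitude to Web Mercator ("slippy map") tile indices at a given zoom level. A generic sketch of that addressing scheme (this is standard tiling math, not OpenWebGlobe's actual code):

```python
import math

def lonlat_to_tile(lon_deg, lat_deg, zoom):
    """Convert WGS84 lon/lat (degrees) to Web Mercator tile indices (x, y)
    at the given zoom level; there are 2**zoom tiles per axis."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat) + 1.0 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

print(lonlat_to_tile(0.0, 0.0, 1))   # equator / prime meridian → (1, 1)
print(lonlat_to_tile(7.6, 47.5, 10)) # a point in north-western Switzerland
```

Each tile key (zoom, x, y) then doubles as a cache key, which is what makes cloud-side caching and delivery to many clients straightforward.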

  2. In situ Compressive Loading and Correlative Noninvasive Imaging of the Bone-periodontal Ligament-tooth Fibrous Joint

    PubMed Central

    Jang, Andrew T.; Lin, Jeremy D.; Seo, Youngho; Etchin, Sergey; Merkle, Arno; Fahey, Kevin; Ho, Sunita P.

    2014-01-01

    This study demonstrates a novel biomechanics testing protocol. The advantages of this protocol include the use of an in situ loading device coupled to a high-resolution X-ray microscope, thus enabling visualization of internal structural elements under simulated physiological loads and wet conditions. Experimental specimens will include intact bone-periodontal ligament (PDL)-tooth fibrous joints. Results will illustrate three important features of the protocol as they can be applied to organ-level biomechanics: 1) reactionary force vs. displacement: tooth displacement within the alveolar socket and its reactionary response to loading; 2) three-dimensional (3D) spatial configuration and morphometrics: geometric relationship of the tooth with the alveolar socket; and 3) changes in readouts 1 and 2 due to a change in loading axis, i.e. from concentric to eccentric loads. Efficacy of the proposed protocol will be evaluated by coupling mechanical testing readouts to 3D morphometrics and overall biomechanics of the joint. In addition, this technique will emphasize the need to equilibrate experimental conditions, specifically reactionary loads, prior to acquiring tomograms of fibrous joints. It should be noted that the proposed protocol is limited to testing specimens under ex vivo conditions, and that the use of contrast agents to visualize soft tissue mechanical response could lead to erroneous conclusions about tissue- and organ-level biomechanics. PMID:24638035

  3. Visualizing Sea Level Rise with Augmented Reality

    NASA Astrophysics Data System (ADS)

    Kintisch, E. S.

    2013-12-01

    Looking Glass is an iPhone application that visualizes future sea level rise scenarios in 3D, overlaid on live camera imagery in situ. Using a technology known as augmented reality, the app allows a layperson to explore various scenarios of sea level rise through a visual interface. The user can then see, in an immersive, dynamic way, how those scenarios would affect a real place. The first part of the experience activates users' cognitive, quantitative thinking process, teaching them how global sea level rise, tides and storm surge contribute to flooding; the second allows an emotional response to a striking visual depiction of possible future catastrophe. This project represents a partnership between a science journalist, MIT, and the Rhode Island School of Design, and the talk will touch on lessons this project provides on structuring and executing such multidisciplinary efforts in future design projects.

  4. Mathematical visualization process of junior high school students in solving a contextual problem based on cognitive style

    NASA Astrophysics Data System (ADS)

    Utomo, Edy Setiyo; Juniati, Dwi; Siswono, Tatag Yuli Eko

    2017-08-01

    The aim of this research was to describe the mathematical visualization process of junior high school students in solving contextual problems based on cognitive style. The mathematical visualization process in this research was examined through the aspects of image generation, image inspection, image scanning, and image transformation. The research subjects were eighth-grade students, categorized by the Group Embedded Figures Test (GEFT), adapted from Witkin, as either field independent or field dependent, and selected for being communicative. Data were collected through a visualization test on a contextual problem and interviews. Validity was established through time triangulation. The data analysis addressed the aspects of mathematical visualization through the steps of categorization, reduction, discussion, and conclusion. The results showed that the field-independent and field-dependent subjects differed in their responses to contextual problems. The field-independent subject presented representations in both 2D and 3D, while the field-dependent subject presented only 3D. The two subjects also perceived the swimming pool differently: the field-independent subject viewed it from the top, while the field-dependent subject viewed it from the side. The field-independent subject chose a partition-object strategy, while the field-dependent subject chose a general-object strategy. Both subjects performed a transformation, rotating the object to obtain the solution. This research serves as a reference for mathematical curriculum developers for junior high schools in Indonesia. In addition, teachers could develop students' mathematical visualization skills using technology or software such as GeoGebra or Portable Cabri.

  5. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools based on Java 3D™ are compatible across different systems and can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are their slow interaction speed and their inability to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  6. Synthesizing 3D Surfaces from Parameterized Strip Charts

    NASA Technical Reports Server (NTRS)

    Robinson, Peter I.; Gomez, Julian; Morehouse, Michael; Gawdiak, Yuri

    2004-01-01

    We believe 3D information visualization has the power to unlock new levels of productivity in the monitoring and control of complex processes. Our goal is to provide visual methods that allow for rapid human insight into systems consisting of thousands to millions of parameters. We explore this hypothesis in two complex domains: NASA program management and NASA International Space Station (ISS) spacecraft computer operations. We seek to extend a common form of visualization called the strip chart from 2D to 3D. A strip chart can display the time series progression of a parameter and allows trends and events to be identified. Strip charts can be overlaid when multiple parameters need to be visualized in order to correlate their events. When many parameters are involved, the direct overlaying of strip charts can become confusing and may not fully utilize the graphing area to convey the relationships between the parameters. We provide a solution to this problem by generating 3D surfaces from parameterized strip charts. The 3D surface utilizes significantly more screen area to illustrate the differences between the parameters and the overlaid strip charts, and it can rapidly be scanned by humans to gain insight. The selection of the third dimension must be a parallel or parameterized homogeneous resource in the target domain, defined using a finite, ordered, enumerated type, not a heterogeneous type. We demonstrate our concepts with examples from the NASA program management domain (assessing the state of many plans) and the computers of the ISS (assessing the state of many computers). We identify 2D strip charts in each domain and show how to construct the corresponding 3D surfaces. The user can navigate the surface, zooming in on regions of interest, setting a mark and drilling down to the source documents from which the data points have been derived. We close by discussing design issues, related work, and implementation challenges.
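Generating such a surface amounts to stacking the per-parameter time series into a 2D height grid whose rows follow the enumerated ordering of the third dimension. A minimal illustrative sketch (names and data are made up, not from the paper):

```python
def strip_charts_to_surface(charts):
    """Stack equally sampled strip charts (name -> list of samples) into a
    height grid: rows follow the ordering of the enumerated parameter
    (here, sorted names), columns are time steps."""
    names = sorted(charts)  # the finite, ordered, enumerated type
    length = len(charts[names[0]])
    assert all(len(charts[n]) == length for n in names), "charts must align in time"
    return names, [charts[n] for n in names]

# Two hypothetical ISS computers' parameter streams over three time steps.
names, surface = strip_charts_to_surface({
    "node-2": [1, 2, 3],
    "node-1": [0, 1, 0],
})
print(names)    # row ordering along the third dimension
print(surface)  # 2 x 3 height grid, ready for surface rendering
```

A plotting layer (e.g. a 3D surface plot) would then render `surface` with row index as the third axis, column index as time, and the sample value as height.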

  7. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    PubMed Central

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    Abstract To compare diagnostic performance and confidence of a standard visual reading and combined 3-dimensional stereotactic surface projection (3D-SSP) results to discriminate between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) who were clinically confirmed over 2 years follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice; once by standard visual methods, and once by adding of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, positive, and negative predictive values to discriminate different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and appropriate treatment. PMID:27930593

  8. Cube search, revisited.

    PubMed

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-03-16

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with "equivalent" 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. © 2015 ARVO.

  9. Cube search, revisited

    PubMed Central

    Zhang, Xuetao; Huang, Jie; Yigit-Elliott, Serap; Rosenholtz, Ruth

    2015-01-01

    Observers can quickly search among shaded cubes for one lit from a unique direction. However, replace the cubes with similar 2-D patterns that do not appear to have a 3-D shape, and search difficulty increases. These results have challenged models of visual search and attention. We demonstrate that cube search displays differ from those with “equivalent” 2-D search items in terms of the informativeness of fairly low-level image statistics. This informativeness predicts peripheral discriminability of target-present from target-absent patches, which in turn predicts visual search performance, across a wide range of conditions. Comparing model performance on a number of classic search tasks, cube search does not appear unexpectedly easy. Easy cube search, per se, does not provide evidence for preattentive computation of 3-D scene properties. However, search asymmetries derived from rotating and/or flipping the cube search displays cannot be explained by the information in our current set of image statistics. This may merely suggest a need to modify the model's set of 2-D image statistics. Alternatively, it may be difficult cube search that provides evidence for preattentive computation of 3-D scene properties. By attributing 2-D luminance variations to a shaded 3-D shape, 3-D scene understanding may slow search for 2-D features of the target. PMID:25780063

  10. Clinical evaluation of a new pupil independent diffractive multifocal intraocular lens with a +2.75 D near addition: a European multicentre study.

    PubMed

    Kretz, Florian T A; Gerl, Matthias; Gerl, Ralf; Müller, Matthias; Auffarth, Gerd U

    2015-12-01

    To evaluate the clinical outcomes after cataract surgery with implantation of a new diffractive multifocal intraocular lens (IOL) with a lower near addition (+2.75 D). 143 eyes of 85 patients aged between 40 and 83 years that underwent cataract surgery with implantation of the multifocal IOL (MIOL) Tecnis ZKB00 (Abbott Medical Optics, Santa Ana, California, USA) were evaluated. Changes in uncorrected (uncorrected distance visual acuity, uncorrected intermediate visual acuity, uncorrected near visual acuity) and corrected (corrected distance visual acuity, corrected near visual acuity) logMAR distance, intermediate and near visual acuity, as well as manifest refraction, were evaluated during a 3-month follow-up. Additionally, patients were asked about photic phenomena and spectacle dependence. Postoperative spherical equivalent was within ±0.50 D and ±1.00 D of emmetropia in 78.1% and 98.4% of eyes, respectively. Postoperative mean monocular uncorrected distance, near and intermediate visual acuity was 0.20 logMAR or better in 73.7%, 81.1% and 83.9% of eyes, respectively. All eyes achieved monocular corrected distance visual acuity of 0.30 logMAR or better. All patients reported being at least moderately happy with the outcomes of the surgery. Only 15.3% of patients required spectacles for some daily activities postoperatively. The introduction of low-add MIOLs follows a trend towards increasing intermediate visual acuity. In this study, a near addition of +2.75 D still achieved satisfying near results and led to high patient satisfaction for intermediate visual acuity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
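The refractive outcome above is reported as spherical equivalent, conventionally computed as the sphere power plus half the cylinder power. A small worked sketch of that convention (the refraction values are made up for illustration, not the study's data):

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent in dioptres: SE = sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

# Hypothetical postoperative (sphere, cylinder) refractions for four eyes.
refractions = [(0.25, -0.50), (0.75, -0.75), (-0.25, 0.00), (1.25, -0.50)]
ses = [spherical_equivalent(s, c) for s, c in refractions]
within_half = sum(abs(se) <= 0.50 for se in ses) / len(ses)
print(ses)                                              # → [0.0, 0.375, -0.25, 1.0]
print(f"{within_half:.0%} of eyes within ±0.50 D")      # → 75% of eyes within ±0.50 D
```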

  11. The role of three-dimensional visualization in robotics-assisted cardiac surgery

    NASA Astrophysics Data System (ADS)

    Currie, Maria; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W. A.; Patel, Rajni; Peters, Terry; Kiaii, Bob

    2012-02-01

    Objectives: The purpose of this study was to determine the effect of three-dimensional (3D) versus two-dimensional (2D) visualization on the amount of force applied to mitral valve tissue during robotics-assisted mitral valve annuloplasty, and on the time to perform the procedure, in an ex vivo animal model. In addition, we examined whether these effects are consistent between novices and experts in robotics-assisted cardiac surgery. Methods: A cardiac surgery test-bed was constructed to measure forces applied by the da Vinci surgical system (Intuitive Surgical, Sunnyvale, CA) during mitral valve annuloplasty. Both experts and novices completed robotics-assisted mitral valve annuloplasty with 2D and 3D visualization. Results: The mean time for both experts and novices to suture the mitral valve annulus and to tie sutures using 3D visualization was significantly less than that required using 2D vision (p < 0.01). However, there was no significant difference in the maximum force applied by novices to the mitral valve during suturing (p = 0.3) and suture tying (p = 0.6) using either 2D or 3D visualization. Conclusion: This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery. Keywords: Robotics-assisted surgery, visualization, cardiac surgery

  12. Using higher-level inquiry to improve spatial ability in an introductory geology course

    NASA Astrophysics Data System (ADS)

    Stevens, Lacey A.

    Visuo-spatial skills, the ability to visually take in information and create a mental image, are crucial for success in fields involving science, technology, engineering, and math (STEM), as well as fine arts. Unfortunately, due to a lack of curriculum focused on developing spatial skills, students enrolled in introductory college-level science courses tend to have difficulty with spatially related activities. One of the best ways to engage students in science activities is through a learning and teaching strategy called inquiry. There are lower levels of inquiry, wherein learning and problem-solving are guided by instructions, and higher levels of inquiry, wherein students have a greater degree of autonomy in learning and creating their own problem-solving strategy. A study involving 112 participants was conducted during the fall semester of 2014 at Bowling Green State University (BGSU) in a 1040 Introductory Geology Lab to determine whether a new, high-level, inquiry-based lab would increase participants' spatial skills more than the traditional, low-level inquiry lab. The study also evaluated whether a higher level of inquiry differentially affected low versus high spatial ability participants. Participants were evaluated using a spatial ability assessment and pre- and post-tests. The results of this study show that for 3D-to-2D visualization, the higher-level inquiry lab increased participants' spatial ability more than the lower-level inquiry lab. For spatial rotation skills, all participants' spatial ability scores improved, regardless of the level of inquiry to which they were exposed. Low and high spatial ability participants were not differentially affected. This study demonstrates that a lab designed with a higher level of inquiry can increase students' spatial ability more than a lab with a low level of inquiry, and that it helped all participants regardless of their initial spatial ability level. These findings show that curriculum incorporating a high level of inquiry that integrates practice of spatial skills can increase students' spatial abilities in geology-related coursework.

  13. Workflows and performances in the ranking prediction of 2016 D3R Grand Challenge 2: lessons learned from a collaborative effort.

    PubMed

    Gao, Ying-Duo; Hu, Yuan; Crespo, Alejandro; Wang, Deping; Armacost, Kira A; Fells, James I; Fradera, Xavier; Wang, Hongwu; Wang, Huijun; Sherborne, Brad; Verras, Andreas; Peng, Zhengwei

    2018-01-01

    The 2016 D3R Grand Challenge 2 includes both pose and affinity or ranking predictions. This article is focused exclusively on affinity predictions submitted to the D3R challenge from a collaborative effort of the modeling and informatics group. Our submissions include ranking of 102 ligands covering 4 different chemotypes against the FXR ligand binding domain structure, and the relative binding affinity predictions of the two designated free energy subsets of 15 and 18 compounds. Using all the complex structures prepared in the same way allowed us to cover many types of workflows and compare their performances effectively. We evaluated typical workflows used in our daily structure-based design modeling support, which include docking scores, force field-based scores, QM/MM, MMGBSA, MD-MMGBSA, and MacroModel interaction energy estimations. The best performing methods for the two free energy subsets are discussed. Our results suggest that affinity ranking still remains very challenging; that knowledge of more structural information does not necessarily yield more accurate predictions; and that visual inspection and human intervention are considerably important for ranking. Knowledge of the mode of action and protein flexibility, along with visualization tools that depict polar and hydrophobic maps, are very useful for visual inspection. QM/MM-based workflows were found to be powerful in affinity ranking and are encouraged to be applied more often. The standardized input and output enable systematic analysis and support methodology development and improvement for high-level blinded predictions.
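Ranking submissions such as these are conventionally scored with rank-correlation statistics against the experimental affinities. As an illustrative sketch only (not the authors' evaluation code), a minimal Kendall's τ over hypothetical predicted and measured values:

```python
# Minimal Kendall's tau (tau-a) between predicted and experimental
# affinity rankings -- an illustrative sketch of how ranking
# submissions are commonly scored, not the D3R evaluation pipeline.

def kendall_tau(pred, expt):
    """Rank correlation in [-1, 1]; 1 means identical ordering."""
    assert len(pred) == len(expt) and len(pred) > 1
    n = len(pred)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (pred[i] - pred[j]) * (expt[i] - expt[j])
            if s > 0:
                concordant += 1
            elif s < 0:
                discordant += 1
            # ties contribute to neither count (tau-a ignores them)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical scores for five ligands (higher = tighter predicted binding)
predicted = [9.1, 7.4, 8.2, 5.0, 6.3]
measured  = [9.0, 7.0, 8.5, 4.8, 6.1]  # e.g. experimental pIC50
print(kendall_tau(predicted, measured))  # 1.0: same ordering
```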

  15. Evaluating the Use of Cleft Lip and Palate 3D-Printed Models as a Teaching Aid.

    PubMed

    AlAli, Ahmad B; Griffin, Michelle F; Calonge, Wenceslao M; Butler, Peter E

    Visualization tools are essential for effective medical education, to aid students' understanding of complex anatomical systems. Three-dimensional (3D) printed models are showing a wide-reaching potential in the field of medical education, to aid the interpretation of 2D imaging. This study investigates the use of 3D-printed models in educational seminars on cleft lip and palate, by comparing integrated "hands-on" student seminars with 2D presentation seminar methods. Cleft lip and palate models were manufactured using 3D-printing technology at the medical school. Sixty-seven students from two medical schools participated in the study. The students were randomly allocated to 2 groups. Knowledge was compared between the groups using a multiple-choice question test before and after the teaching intervention. Group 1 was the control group with a PowerPoint presentation-based educational seminar, and group 2 was the test group, with the same PowerPoint presentation but with the addition of a physical demonstration using 3D-printed models of unilateral and bilateral cleft lips and palate. The level of knowledge gained was established using a preseminar and postseminar assessment, in 2 different institutions, where the addition of the 3D-printed model resulted in a significant improvement in the mean percentage of knowledge gained (44.65% test group vs. 32.16% control group; p = 0.038). Student experience was assessed using a postseminar survey, where students felt the 3D-printed model significantly improved the learning experience (p = 0.005) and their visualization (p = 0.001). This study highlights the benefits of the use of 3D-printed models as visualization tools in medical education and the potential of 3D-printing technology to become a standard and effective tool in the interpretation of 2D imaging. Copyright © 2018 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  16. Detailed 3D representations for object recognition and modeling.

    PubMed

    Zia, M Zeeshan; Stark, Michael; Schiele, Bernt; Schindler, Konrad

    2013-11-01

    Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching.

  17. Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.

    PubMed

    Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce

    2017-10-01

    Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. 
Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.

  18. Three-dimensional versus two-dimensional ultrasound for assessing levonorgestrel intrauterine device location: A pilot study.

    PubMed

    Andrade, Carla Maria Araujo; Araujo Júnior, Edward; Torloni, Maria Regina; Moron, Antonio Fernandes; Guazzelli, Cristina Aparecida Falbo

    2016-02-01

    To compare the rates of success of two-dimensional (2D) and three-dimensional (3D) sonographic (US) examinations in locating and adequately visualizing levonorgestrel intrauterine devices (IUDs) and to explore factors associated with the unsuccessful viewing on 2D US. Transvaginal 2D and 3D US examinations were performed on all patients 1 month after insertion of levonorgestrel IUDs. The devices were considered adequately visualized on 2D US if both the vertical (shadow, upper and lower extremities) and the horizontal (two echogenic lines) shafts were identified. 3D volumes were also captured to assess the location of levonorgestrel IUDs on 3D US. Thirty women were included. The rates of adequate device visualization were 40% on 2D US (95% confidence interval [CI], 24.6; 57.7) and 100% on 3D US (95% CI, 88.6; 100.0). The device was not adequately visualized in all six women who had a retroflexed uterus, but it was adequately visualized in 12 of the 24 women (50%) who had a nonretroflexed uterus (95% CI, -68.6; -6.8). We found that 3D US is better than 2D US for locating and adequately visualizing levonorgestrel IUDs. Other well-designed studies with adequate power should be conducted to confirm this finding. © 2015 Wiley Periodicals, Inc.
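The reported 2D US rate, 40% of 30 women with a 95% CI of 24.6 to 57.7, is numerically consistent with a Wilson score interval, though the abstract does not name the method. A sketch of that computation under this assumption:

```python
# Wilson score 95% confidence interval for a binomial proportion.
# The reported interval for 12/30 (24.6%, 57.7%) matches this method,
# though the abstract does not state which CI was actually used.
import math

def wilson_ci(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

lo, hi = wilson_ci(12, 30)   # 12 of 30 devices adequately seen on 2D US
print(round(100 * lo, 1), round(100 * hi, 1))  # 24.6 57.7
```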

  19. SSVEP-based BCI for manipulating three-dimensional contents and devices

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Cho, Sungjin; Whang, Mincheol; Ju, Byeong-Kwon; Park, Min-Chul

    2012-06-01

    Brain Computer Interface (BCI) studies have been done to help people manipulate electronic devices in a 2D space, but less has been done for a vigorous 3D environment. The purpose of this study was to investigate the possibility of applying Steady State Visual Evoked Potentials (SSVEPs) to a 3D LCD display. Eight subjects (4 females) ranging in age from 20 to 26 years participated in the experiment. They performed simple navigation tasks on a simple 2D space and a virtual environment with/without 3D flickers generated by a Film-Type Patterned Retarder (FPR). The experiments were conducted in a counterbalanced order. The results showed that 3D stimuli enhanced BCI performance, but no significant effects were found due to the small number of subjects. Visual fatigue that might be evoked by 3D stimuli was negligible in this study. The proposed SSVEP BCI combined with 3D flickers can allow people to control home appliances and other equipment such as wheelchairs, prosthetics, and orthotics without encountering the dangerous situations that may arise when using BCIs in the real world. 3D stimuli-based SSVEP BCI would motivate people to use 3D displays and vitalize the 3D-related industry due to its entertainment value and high performance.
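Detecting an SSVEP response amounts to deciding which candidate flicker frequency dominates the recorded EEG spectrum. The sketch below is a deliberately minimal single-channel illustration on a synthetic signal; practical SSVEP BCIs (presumably including this one) use more robust multichannel methods such as canonical correlation analysis:

```python
# Toy SSVEP classifier: pick the candidate flicker frequency with the
# most spectral power in a single EEG channel. Real BCIs typically use
# CCA over several channels; this is only a minimal illustration on a
# synthetic signal.
import math

def band_power(signal, fs, freq):
    """Power of `signal` at `freq` Hz via a single DFT bin."""
    n = len(signal)
    c = sum(x * math.cos(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    s = sum(x * math.sin(2 * math.pi * freq * i / fs) for i, x in enumerate(signal))
    return (c * c + s * s) / n

def detect_ssvep(signal, fs, candidates):
    """Return the candidate flicker frequency with maximum power."""
    return max(candidates, key=lambda f: band_power(signal, fs, f))

# Synthetic 1-second "EEG" flicker response at 12 Hz, 256 Hz sampling
fs = 256
sig = [math.sin(2 * math.pi * 12 * i / fs) for i in range(fs)]
print(detect_ssvep(sig, fs, [10.0, 12.0, 15.0]))  # 12.0
```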

  20. Volumetric imaging of supersonic boundary layers using filtered Rayleigh scattering background suppression

    NASA Technical Reports Server (NTRS)

    Forkey, Joseph N.; Lempert, Walter R.; Bogdonoff, Seymour M.; Miles, Richard B.; Russell, G.

    1995-01-01

    We demonstrate the use of Filtered Rayleigh Scattering and a 3D reconstruction technique to interrogate the highly three-dimensional flow field inside a supersonic inlet model. A 3 inch by 3 inch by 2.5 inch volume is reconstructed, yielding 3D visualizations of the crossing shock waves and of the boundary layer. In this paper we discuss the details of the techniques used, and present the reconstructed 3D images.

  1. Three-dimensional image technology in forensic anthropology: Assessing the validity of biological profiles derived from CT-3D images of the skeleton

    NASA Astrophysics Data System (ADS)

    Garcia de Leon Valenzuela, Maria Julia

    This project explores the reliability of building a biological profile for an unknown individual based on three-dimensional (3D) images of the individual's skeleton. 3D imaging technology has been widely researched for medical and engineering applications, and it is increasingly being used as a tool for anthropological inquiry. While the question of whether a biological profile can be derived from 3D images of a skeleton with the same accuracy as achieved when using dry bones has been explored, bigger sample sizes, a standardized scanning protocol and more interobserver error data are needed before 3D methods can become widely and confidently used in forensic anthropology. 3D images of Computed Tomography (CT) scans were obtained from 130 innominate bones from Boston University's skeletal collection (School of Medicine). For each bone, both 3D images and original bones were assessed using the Phenice and Suchey-Brooks methods. Statistical analysis was used to determine the agreement between 3D image assessment versus traditional assessment. A pool of six individuals with varying experience in the field of forensic anthropology scored a subsample (n = 20) to explore interobserver error. While a high agreement was found for age and sex estimation for specimens scored by the author, the interobserver study shows that observers found it difficult to apply standard methods to 3D images. Higher levels of experience did not result in higher agreement between observers, as would be expected. Thus, a need for training in 3D visualization before applying anthropological methods to 3D bones is suggested. Future research should explore interobserver error using a larger sample size in order to test the hypothesis that training in 3D visualization will result in a higher agreement between scores. The need for the development of a standard scanning protocol focusing on the optimization of 3D image resolution is highlighted. 
Applications for this research include the possibility of digitizing skeletal collections in order to expand their use and for deriving skeletal collections from living populations and creating population-specific standards. Further research for the development of a standard scanning and processing protocol is needed before 3D methods in forensic anthropology are considered as reliable tools for generating biological profiles.

  2. Using Saliency-Weighted Disparity Statistics for Objective Visual Comfort Assessment of Stereoscopic Images

    NASA Astrophysics Data System (ADS)

    Zhang, Wenlan; Luo, Ting; Jiang, Gangyi; Jiang, Qiuping; Ying, Hongwei; Lu, Jing

    2016-06-01

    Visual comfort assessment (VCA) for stereoscopic images is a particularly significant yet challenging task in the 3D quality-of-experience research field. Although the subjective assessment given by human observers is known as the most reliable way to evaluate the experienced visual discomfort, it is time-consuming and non-systematic. Therefore, it is of great importance to develop objective VCA approaches that can faithfully predict the degree of visual discomfort as human beings do. In this paper, a novel two-stage objective VCA framework is proposed. The main contribution of this study is that the important visual attention mechanism of the human visual system is incorporated for visual comfort-aware feature extraction. Specifically, in the first stage, we construct an adaptive 3D visual saliency detection model to derive the saliency map of a stereoscopic image, and then a set of saliency-weighted disparity statistics are computed and combined to form a single feature vector that represents the stereoscopic image in terms of visual comfort. In the second stage, this high-dimensional feature vector is fused into a single visual comfort score by a random forest algorithm. Experimental results on two benchmark databases confirm the superior performance of the proposed approach.
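The first-stage features described above are saliency-weighted statistics of the disparity map. A minimal sketch of two such statistics, a weighted mean and standard deviation, on hypothetical toy maps (the paper's actual feature set is richer):

```python
# Saliency-weighted disparity statistics of the kind used as visual
# comfort features: each disparity value is weighted by the saliency
# of its pixel. Sketch only -- the paper combines several such
# statistics into a feature vector for a random forest.
import math

def weighted_disparity_stats(disparity, saliency):
    """Return (weighted mean, weighted std) over flattened 2D maps."""
    d = [v for row in disparity for v in row]
    w = [v for row in saliency for v in row]
    total = sum(w)
    mean = sum(di * wi for di, wi in zip(d, w)) / total
    var = sum(wi * (di - mean) ** 2 for di, wi in zip(d, w)) / total
    return mean, math.sqrt(var)

# Toy 2x2 disparity map; saliency concentrated on the bottom-right pixel
disp = [[1.0, 2.0], [3.0, 8.0]]
sal  = [[0.1, 0.1], [0.1, 0.7]]
mean, std = weighted_disparity_stats(disp, sal)
print(round(mean, 2))  # 6.2 -- the salient large disparity dominates
```

Because the weights are concentrated on the high-disparity pixel, the weighted mean (6.2) sits far above the unweighted mean (3.5), which is exactly the attention effect the feature is meant to capture.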

  3. 3D Printing Meets Astrophysics: A New Way to Visualize and Communicate Science

    NASA Astrophysics Data System (ADS)

    Madura, Thomas Ignatius; Steffen, Wolfgang; Clementel, Nicola; Gull, Theodore R.

    2015-08-01

    3D printing has the potential to improve the astronomy community’s ability to visualize, understand, interpret, and communicate important scientific results. I summarize recent efforts to use 3D printing to understand in detail the 3D structure of a complex astrophysical system, the supermassive binary star Eta Carinae and its surrounding bipolar ‘Homunculus’ nebula. Using mapping observations of molecular hydrogen line emission obtained with the ESO Very Large Telescope, we obtained a full 3D model of the Homunculus, allowing us to 3D print, for the first time, a detailed replica of a nebula (Steffen et al. 2014, MNRAS, 442, 3316). I also present 3D prints of output from supercomputer simulations of the colliding stellar winds in the highly eccentric binary located near the center of the Homunculus (Madura et al. 2015, arXiv:1503.00716). These 3D prints, the first of their kind, reveal previously unknown ‘finger-like’ structures at orbital phases shortly after periastron (when the two stars are closest to each other) that protrude outward from the spiral wind-wind collision region. The results of both efforts have received significant media attention in recent months, including two NASA press releases (http://www.nasa.gov/content/goddard/astronomers-bring-the-third-dimension-to-a-doomed-stars-outburst/ and http://www.nasa.gov/content/goddard/nasa-observatories-take-an-unprecedented-look-into-superstar-eta-carinae/), demonstrating the potential of using 3D printing for astronomy outreach and education. Perhaps more importantly, 3D printing makes it possible to bring the wonders of astronomy to new, often neglected, audiences, i.e. the blind and visually impaired.

  4. High-Fidelity Roadway Modeling and Simulation

    NASA Technical Reports Server (NTRS)

    Wang, Jie; Papelis, Yiannis; Shen, Yuzhong; Unal, Ozhan; Cetin, Mecit

    2010-01-01

    Roads are an essential feature in our daily lives. With the advances in computing technologies, 2D and 3D road models are employed in many applications, such as computer games and virtual environments. Traditional road models were generated manually by professional artists using modeling software tools such as Maya and 3ds Max. This approach requires both highly specialized and sophisticated skills and massive manual labor. Automatic road generation based on procedural modeling can create road models using specially designed computer algorithms or procedures, dramatically reducing the tedious manual editing needed for road modeling. But most existing procedural modeling methods for road generation put emphasis on the visual effects of the generated roads, not their geometrical and architectural fidelity. This limitation seriously restricts the applicability of the generated road models. To address this problem, this paper proposes a high-fidelity roadway generation method that takes into account road design principles practiced by civil engineering professionals; as a result, the generated roads can support not only general applications such as games and simulations, in which roads are used as 3D assets, but also demanding civil engineering applications, which require accurate geometrical models of roads. The inputs to the proposed method include road specifications, civil engineering road design rules, terrain information, and the surrounding environment. The proposed method then generates, in real time, 3D roads that have both high visual and geometrical fidelity. This paper discusses in detail the procedures that convert 2D roads specified in shapefiles into 3D roads, and the civil engineering road design principles involved. The proposed method can be used in many applications that have stringent requirements on high-precision 3D models, such as driving simulations and road design prototyping. Preliminary results demonstrate the effectiveness of the proposed method.
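The core geometric step in procedural road generation of this kind is sweeping a cross-section along a 2D centerline and draping it onto the terrain. A heavily simplified sketch with a flat cross-section and a hypothetical elevation function (real road design adds superelevation, transition curves, and the design rules discussed above):

```python
# Sweep a flat road cross-section along a 2D centerline: offset each
# centerline point perpendicular to the local direction by half the
# road width, then sample terrain elevation for the z coordinate.
# A much-simplified sketch of the 2D-to-3D road conversion step.
import math

def road_edges(centerline, width, elevation):
    """Return (left, right) 3D edge vertices for each centerline point."""
    edges = []
    n = len(centerline)
    for i, (x, y) in enumerate(centerline):
        # Central difference for local direction (one-sided at the ends)
        x0, y0 = centerline[max(i - 1, 0)]
        x1, y1 = centerline[min(i + 1, n - 1)]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy)
        nx, ny = -dy / length, dx / length      # unit left normal
        h = width / 2
        left  = (x + nx * h, y + ny * h, elevation(x + nx * h, y + ny * h))
        right = (x - nx * h, y - ny * h, elevation(x - nx * h, y - ny * h))
        edges.append((left, right))
    return edges

flat = lambda x, y: 0.0                         # hypothetical flat terrain
e = road_edges([(0, 0), (1, 0), (2, 0)], 2.0, flat)
print(e[0])  # ((0.0, 1.0, 0.0), (0.0, -1.0, 0.0))
```

Triangulating consecutive left/right pairs yields the road surface mesh; a geometrically faithful generator would additionally enforce curvature and grade limits from the design rules.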

  5. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing … scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual … and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user …

  6. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and the movements of the Earth and Moon, alternation of day and night, the occurrence of the seasons, and Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; through its reality effect, 3D visualization enabled them to observe the space objects that move in the virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using 3D visualization in combination with other teaching methods-literacy experiences, videos and photos, simulations, discussions, and presentations-supported student learning. The teachers and the students still experienced challenges using 3D visualization due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically be observed through direct, long-term observations in outer space. Results imply a reconsideration of assumed capabilities of young children to understand astronomical phenomena.
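Modules like the Moon-phases simulation reduce to simple geometry. As an illustration of what such software computes (standard astronomy, not the module's actual code), the illuminated fraction of the lunar disc as a function of Sun-Moon elongation:

```python
# Illuminated fraction of the Moon as seen from Earth, from the
# elongation angle between Sun and Moon: k = (1 - cos(e)) / 2.
# 0 rad = new moon, pi rad = full moon. Standard geometry, offered
# only to illustrate what a Moon-phase simulation module computes.
import math

def illuminated_fraction(elongation_rad):
    return (1 - math.cos(elongation_rad)) / 2

print(round(illuminated_fraction(0.0), 6))          # 0.0 (new moon)
print(round(illuminated_fraction(math.pi / 2), 6))  # 0.5 (quarter)
print(round(illuminated_fraction(math.pi), 6))      # 1.0 (full moon)
```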

  7. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  8. Experimental Evidence for Improved Neuroimaging Interpretation Using Three-Dimensional Graphic Models

    ERIC Educational Resources Information Center

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more…

  9. GIS-based interactive tool to map the advent of world conquerors

    NASA Astrophysics Data System (ADS)

    Lakkaraju, Mahesh

    The objective of this thesis is to show the scale and extent of some of the greatest empires the world has ever seen. This is a hybrid project between the GIS based interactive tool and the web-based JavaScript tool. This approach lets the students learn effectively about the emperors themselves while understanding how long and far their empires spread. In the GIS based tool, a map is displayed with various points on it, and when a user clicks on one point, the relevant information of what happened at that particular place is displayed. Apart from this information, users can also select the interactive animation button and can walk through a set of battles in chronological order. As mentioned, this uses Java as the main programming language, and MOJO (Map Objects Java Objects) provided by ESRI. MOJO is very effective as its GIS related features can be included in the application itself. This app. is a simple tool and has been developed for university or high school level students. D3.js is an interactive animation and visualization platform built on the Javascript framework. Though HTML5, CSS3, Javascript and SVG animations can be used to derive custom animations, this tool can help bring out results with less effort and more ease of use. Hence, it has become the most sought after visualization tool for multiple applications. D3.js has provided a map-based visualization feature so that we can easily display text-based data in a map-based interface. To draw the map and the points on it, D3.js uses data rendered in TOPO JSON format. The latitudes and longitudes can be provided, which are interpolated into the Map svg. One of the main advantages of doing it this way is that more information is retained when we use a visual medium.

  10. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, a general framework for depth mapping to optimize visual comfort on S3D displays is proposed. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range that is suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
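The global stage of such a depth-mapping framework is, at its simplest, a linear remap of the source depth range into the display's comfort range. A minimal sketch of that idea (linear remap only; the paper's local optimization stage is omitted, and the target range here is arbitrary):

```python
# Global linear depth remapping: squeeze the source depth range of a
# color-plus-depth signal into a target range that the display can
# show comfortably. Sketch of the global stage only; the paper follows
# this with zero-disparity-plane adjustment and local optimization.

def remap_depth(depth_map, target_min, target_max):
    flat = [v for row in depth_map for v in row]
    lo, hi = min(flat), max(flat)
    span = target_max - target_min
    return [[target_min + (v - lo) * span / (hi - lo) for v in row]
            for row in depth_map]

# Toy 8-bit depth map remapped to an arbitrary [-1, 1] comfort range
depth = [[0, 128], [192, 255]]
out = remap_depth(depth, -1.0, 1.0)
print(out[0][0], out[1][1])  # -1.0 1.0
```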

  11. Perceived crosstalk assessment on patterned retarder 3D display

    NASA Astrophysics Data System (ADS)

    Zou, Bochao; Liu, Yue; Huang, Yi; Wang, Yongtian

    2014-03-01

    CONTEXT: Nowadays, almost all stereoscopic displays suffer from crosstalk, which is one of the most dominant degradation factors of image quality and visual comfort for 3D display devices. To deal with such problems, it is worthwhile to quantify the amount of perceived crosstalk. OBJECTIVE: Crosstalk measurements are usually based on certain test patterns, but scene content effects are ignored. To evaluate the perceived crosstalk level for various scenes, subjective testing may give a more correct evaluation. However, it is a time-consuming approach and is unsuitable for real-time applications. Therefore, an objective metric that can reliably predict the perceived crosstalk is needed. A correct objective assessment of crosstalk for different scene contents would benefit the development of crosstalk minimization and cancellation algorithms, which could be used to bring a good quality of experience to viewers. METHOD: A patterned retarder 3D display is used to present 3D images in our experiment. By considering the mechanism of this kind of device, an appropriate simulation of crosstalk is realized by image processing techniques to assign different levels of crosstalk between image pairs. It can be seen from the literature that the structures of scenes have a significant impact on the perceived crosstalk, so we first extract the differences in structural information between original and distorted image pairs through the Structural SIMilarity (SSIM) algorithm, which can directly evaluate the structural changes between two complex-structured signals. Then the structural changes of the left view and right view are computed respectively and combined into an overall distortion map. Under 3D viewing conditions, because of the added value of depth, the crosstalk of pop-out objects may be more perceptible. To model this effect, the depth map of a stereo pair is generated and the depth information is filtered by the distortion map. 
Moreover, human attention is one of the important factors for crosstalk assessment, due to the fact that when viewing 3D content, perceptually salient regions are highly likely to be a major contributor to the quality of experience. To take this into account, perceptually significant regions are extracted, and a spatial pooling technique is used to combine the structural distortion map, depth map, and visual salience map together to predict the perceived crosstalk more precisely. To verify the performance of the proposed crosstalk assessment metric, subjective experiments were conducted with 24 participants viewing and rating 60 stimuli (5 scenes × 4 crosstalk levels × 3 camera distances). After outlier removal and statistical processing, the correlation with the subjective test is examined using the Pearson and Spearman rank-order correlation coefficients. Furthermore, the proposed method is also compared with two traditional 2D metrics, PSNR and SSIM. The objective score is mapped to the subjective scale using a nonlinear fitting function to directly evaluate the performance of the metric. RESULTS: The evaluation results demonstrate that the proposed metric is highly correlated with the subjective scores when compared with the existing approaches. Because the Pearson coefficient of the proposed metric is 90.3%, it is promising for objective evaluation of the perceived crosstalk. NOVELTY: The main goal of our paper is to introduce an objective metric for stereo crosstalk assessment. The novel contributions are twofold. First, an appropriate simulation of crosstalk that considers the characteristics of patterned retarder 3D displays is developed. Second, an objective crosstalk metric based on a visual attention model is introduced.
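The spatial pooling step described above can be sketched as a saliency- and depth-weighted average of the structural distortion map. The 2×2 maps below are toy values; the actual metric derives them from SSIM, a rendered depth map, and a visual attention model:

```python
# Saliency- and depth-weighted spatial pooling of a structural
# distortion map into a single crosstalk score, mirroring the pooling
# step described in the abstract. The 2x2 "maps" here are toy values;
# the real metric builds them from SSIM, disparity, and saliency.

def pool_crosstalk(distortion, depth, saliency):
    """Weighted mean of distortion, with weights = depth * saliency."""
    num = den = 0.0
    for drow, zrow, srow in zip(distortion, depth, saliency):
        for d, z, s in zip(drow, zrow, srow):
            w = z * s          # pop-out, salient regions weigh more
            num += w * d
            den += w
    return num / den

distortion = [[0.2, 0.4], [0.6, 0.8]]   # structural change per region
depth      = [[1.0, 1.0], [1.0, 2.0]]   # pop-out region gets depth > 1
saliency   = [[0.1, 0.1], [0.1, 0.7]]   # attention on bottom-right
print(round(pool_crosstalk(distortion, depth, saliency), 3))  # 0.729
```

The salient, pop-out region dominates the pooled score (0.729, versus an unweighted mean of 0.5), matching the intuition that crosstalk on attended foreground objects drives the perceived quality.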

  12. Intracranial MRA: single volume vs. multiple thin slab 3D time-of-flight acquisition.

    PubMed

    Davis, W L; Warnock, S H; Harnsberger, H R; Parker, D L; Chen, C X

    1993-01-01

    Single volume three-dimensional (3D) time-of-flight (TOF) MR angiography is the most commonly used noninvasive method for evaluating the intracranial vasculature. The sensitivity of this technique to signal loss from flow saturation limits its utility. A recently developed multislab 3D TOF technique, MOTSA, is less affected by flow saturation and would therefore be expected to yield improved vessel visualization. To study this hypothesis, intracranial MR angiograms were obtained on 10 volunteers using three techniques: MOTSA, single volume 3D TOF using a standard 4.9 ms TE (3D TOFA), and single volume 3D TOF using a 6.8 ms TE (3D TOFB). All three sets of axial source images and maximum intensity projection (MIP) images were reviewed. Each exam was evaluated for the number of intracranial vessels visualized. A total of 502 vessel segments were studied with each technique. With use of the MIP images, 86% of selected vessels were visualized with MOTSA, 64% with 3D TOFA (TE = 4.9 ms), and 67% with 3D TOFB (TE = 6.8 ms). Similarly, with the axial source images, 91% of selected vessels were visualized with MOTSA, 77% with 3D TOFA (TE = 4.9 ms), and 82% with 3D TOFB (TE = 6.8 ms). There is improved visualization of selected intracranial vessels in normal volunteers with MOTSA as compared with single volume 3D TOF. These improvements are believed to be primarily a result of the decreased sensitivity to flow saturation seen with the MOTSA technique. No difference in overall vessel visualization was noted for the two single volume 3D TOF techniques.

  13. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  14. Effects of VR system fidelity on analyzing isosurface visualization of volume datasets.

    PubMed

    Laha, Bireswar; Bowman, Doug A; Socha, John J

    2014-04-01

    Volume visualization is an important technique for analyzing datasets from a variety of different scientific domains. Volume data analysis is inherently difficult because volumes are three-dimensional, dense, and unfamiliar, requiring scientists to precisely control the viewpoint and to make precise spatial judgments. Researchers have proposed that more immersive (higher fidelity) VR systems might improve task performance with volume datasets, and significant results tied to different components of display fidelity have been reported. However, more information is needed to generalize these results to different task types, domains, and rendering styles. We visualized isosurfaces extracted from synchrotron microscopic computed tomography (SR-μCT) scans of beetles, in a CAVE-like display. We ran a controlled experiment evaluating the effects of three components of system fidelity (field of regard, stereoscopy, and head tracking) on a variety of abstract task categories that are applicable to various scientific domains, and also compared our results with those from our prior experiment using 3D texture-based rendering. We report many significant findings. For example, for search and spatial judgment tasks with isosurface visualization, a stereoscopic display provides better performance, but for tasks with 3D texture-based rendering, displays with higher field of regard were more effective, independent of the levels of the other display components. We also found that systems with high field of regard and head tracking improve performance in spatial judgment tasks. Our results extend existing knowledge and produce new guidelines for designing VR systems to improve the effectiveness of volume data analysis.

  15. Visualising Earth's Mantle based on Global Adjoint Tomography

    NASA Astrophysics Data System (ADS)

    Bozdag, E.; Pugmire, D.; Lefebvre, M. P.; Hill, J.; Komatitsch, D.; Peter, D. B.; Podhorszki, N.; Tromp, J.

    2017-12-01

    Recent advances in 3D wave propagation solvers and high-performance computing have enabled regional and global full-waveform inversions. Interpretation of tomographic models is often done visually. Robust and efficient visualization tools are necessary to thoroughly investigate large model files, particularly at the global scale. In collaboration with Oak Ridge National Laboratory (ORNL), we have developed effective visualization tools and used them to visualize our first-generation global model, GLAD-M15 (Bozdag et al. 2016). VisIt (https://wci.llnl.gov/simulation/computer-codes/visit/) is used for initial exploration of the models and for extraction of seismological features. The broad capability of VisIt and its demonstrated scalability proved valuable for experimenting with different visualization techniques and in the creation of timely results. Utilizing VisIt's plugin architecture, a data reader plugin was developed which reads the ADIOS (https://www.olcf.ornl.gov/center-projects/adios/) format of our model files. Blender (https://www.blender.org) is used for the setup of lighting, materials, camera paths and the rendering of geometry. Python scripting was used to control the orchestration of different geometries, as well as camera animation for 3D movies. While we continue producing 3D contour plots and movies for various seismic parameters to better visualize plume- and slab-like features as well as anisotropy throughout the mantle, our aim is to make visualization an integral part of our global adjoint tomography workflow, routinely producing various 2D cross-sections to facilitate examination of our models after each iteration. This will ultimately form the basis for the use of pattern recognition techniques in our investigations. Simulations for global adjoint tomography are performed on ORNL's Titan system, and visualization is done in parallel on ORNL's post-processing cluster Rhea.

  16. Visualizing the 3D Architecture of Multiple Erythrocytes Infected with Plasmodium at Nanoscale by Focused Ion Beam-Scanning Electron Microscopy

    PubMed Central

    Soares Medeiros, Lia Carolina; De Souza, Wanderley; Jiao, Chengge; Barrabin, Hector; Miranda, Kildare

    2012-01-01

    Different methods for three-dimensional visualization of biological structures have been developed and extensively applied by different research groups. In the field of electron microscopy, a technique that has emerged is the use of a focused ion beam and scanning electron microscopy (FIB-SEM) for 3D reconstruction at nanoscale resolution. The larger volume that can be reconstructed with this instrument represents one of the main benefits of the technique, which can provide statistically relevant 3D morphometrical data. As the life cycle of Plasmodium species is a process that involves several structurally complex developmental stages, responsible for a series of modifications in the erythrocyte surface and cytoplasm, a high number of features within the parasites and the host cells has to be sampled for the correct interpretation of their 3D organization. Here, we used FIB-SEM to visualize the 3D architecture of multiple erythrocytes infected with Plasmodium chabaudi and analyzed their morphometrical parameters in 3D space. We analyzed and quantified alterations of the host cells, such as the variety of shapes and sizes of their membrane profiles, and parasite internal structures such as a polymorphic organization of hemoglobin-filled tubules. The results show the complex 3D organization of Plasmodium and the infected erythrocyte, and demonstrate the contribution of FIB-SEM to obtaining statistical data for the accurate interpretation of complex biological structures. PMID:22432024

  17. 3-D Teaching Models for All

    ERIC Educational Resources Information Center

    Bradley, Joan; Farland-Smith, Donna

    2010-01-01

    Allowing a student to "see" through touch what other students see through a microscope can be a challenging task. Therefore, author Joan Bradley created three-dimensional (3-D) models with one student's visual impairment in mind. They are meant to benefit all students and can be used to teach common high school biology topics, including the…

  18. Who Benefits from Learning with 3D Models?: The Case of Spatial Ability

    ERIC Educational Resources Information Center

    Huk, T.

    2006-01-01

    Empirical studies that focus on the impact of three-dimensional (3D) visualizations on learning are to date rare and inconsistent. According to the ability-as-enhancer hypothesis, high spatial ability learners should benefit particularly as they have enough cognitive capacity left for mental model construction. In contrast, the…

  19. Blonanserin Ameliorates Phencyclidine-Induced Visual-Recognition Memory Deficits: the Complex Mechanism of Blonanserin Action Involving D3-5-HT2A and D1-NMDA Receptors in the mPFC

    PubMed Central

    Hida, Hirotake; Mouri, Akihiro; Mori, Kentaro; Matsumoto, Yurie; Seki, Takeshi; Taniguchi, Masayuki; Yamada, Kiyofumi; Iwamoto, Kunihiro; Ozaki, Norio; Nabeshima, Toshitaka; Noda, Yukihiro

    2015-01-01

    Blonanserin differs from currently used serotonin 5-HT2A/dopamine-D2 receptor antagonists in that it exhibits higher affinity for dopamine-D2/3 receptors than for serotonin 5-HT2A receptors. We investigated the involvement of dopamine-D3 receptors in the effects of blonanserin on cognitive impairment in an animal model of schizophrenia. We also sought to elucidate the molecular mechanism underlying this involvement. Blonanserin, as well as olanzapine, significantly ameliorated phencyclidine (PCP)-induced impairment of visual-recognition memory, as demonstrated by the novel-object recognition test (NORT) and increased extracellular dopamine levels in the medial prefrontal cortex (mPFC). With blonanserin, both of these effects were antagonized by DOI (a serotonin 5-HT2A receptor agonist) and 7-OH-DPAT (a dopamine-D3 receptor agonist), whereas the effects of olanzapine were antagonized by DOI but not by 7-OH-DPAT. The ameliorating effect was also antagonized by SCH23390 (a dopamine-D1 receptor antagonist) and H-89 (a protein kinase A (PKA) inhibitor). Blonanserin significantly remediated the decrease in phosphorylation levels of PKA at Thr197 and of NR1 (an essential subunit of N-methyl-D-aspartate (NMDA) receptors) at Ser897 by PKA in the mPFC after a NORT training session in the PCP-administered mice. There were no differences in the levels of NR1 phosphorylated at Ser896 by PKC in any group. These results suggest that the ameliorating effect of blonanserin on PCP-induced cognitive impairment is associated with indirect functional stimulation of the dopamine-D1-PKA-NMDA receptor pathway following augmentation of dopaminergic neurotransmission due to inhibition of both dopamine-D3 and serotonin 5-HT2A receptors in the mPFC. PMID:25120077

  20. Screw Placement Accuracy for Minimally Invasive Transforaminal Lumbar Interbody Fusion Surgery: A Study on 3-D Neuronavigation-Guided Surgery

    PubMed Central

    Torres, Jorge; James, Andrew R.; Alimi, Marjan; Tsiouris, Apostolos John; Geannette, Christian; Härtl, Roger

    2012-01-01

    Purpose The aim of this study was to assess the impact of 3-D navigation on pedicle screw placement accuracy in minimally invasive transforaminal lumbar interbody fusion (MIS-TLIF). Methods A retrospective review of 52 patients who had MIS-TLIF assisted with 3D navigation is presented. Clinical outcomes were assessed with the Oswestry Disability Index (ODI), Visual Analog Scales (VAS), and MacNab scores. Radiographic outcomes were assessed using X-rays and thin-slice computed tomography. Results The mean age was 56.5 years, and 172 screws were implanted with 16 pedicle breaches (91.0% accuracy rate). The radiographic fusion rate at a mean follow-up of 15.6 months was 87.23%. No revision surgeries were required. The mean improvement in the VAS back pain, VAS leg pain, and ODI at 11.3 months follow-up was 4.3, 4.5, and 26.8 points, respectively. At last follow-up the mean postoperative disc height gain was 4.92 mm and the mean postoperative disc angle gain was 2.79 degrees. At the L5–S1 level, there was a significant correlation between a greater disc space height gain and a lower VAS leg score. Conclusion Our data support that application of 3-D navigation in MIS-TLIF is associated with a high level of accuracy in pedicle screw placement. PMID:24353961

  1. Interrogation of patient data delivered to the operating theatre during hepato-pancreatic surgery using high-performance computing.

    PubMed

    John, Nigel W; McCloy, Rory F; Herrman, Simone

    2004-01-01

    The Op3D visualization system allows, for the first time, a surgeon in the operating theatre to interrogate patient-specific medical data sets rendered in three dimensions using high-performance computing. The hypothesis of this research is that the success rate of hepato-pancreatic surgical resections can be improved by replacing the light box with an interactive 3D representation of the medical data in the operating theatre. A laptop serves as the client computer and an easy-to-use interface has been developed for the surgeon to interact with and interrogate the patient data. To date, 16 patients have had 3D reconstructions of their DICOM data sets, including preoperative interrogation and planning of surgery. Interrogation of the 3D images live in theatre and comparison with the surgeons' operative findings (including intraoperative ultrasound) led to the operation being abandoned in 25% of cases, adoption of an alternative surgical approach in 25% of cases, and helpful image guidance for successful resection in 50% of cases. The clinical value of the latest generation of scanners and digital imaging techniques cannot be realized unless appropriate dissemination of the images takes place. This project has succeeded in translating the image technology into a user-friendly form that delivers 3D reconstructions of patient-specific data to the "sharp end": the surgeon undertaking the tumor resection in theatre, in a manner that allows interaction and interpretation. More time interrogating the 3D data sets preoperatively would help reduce the incidence of abandoned operations; this is part of the surgeons' learning curve. We have developed one of the first practical applications to benefit from remote visualization, and certainly the first medical visualization application of this kind.

  2. 3D documentation and visualization of external injury findings by integration of simple photography in CT/MRI data sets (IprojeCT).

    PubMed

    Campana, Lorenzo; Breitbeck, Robert; Bauer-Kreuz, Regula; Buck, Ursula

    2016-05-01

    This study evaluated the feasibility of documenting patterned injury in three dimensions and true colour photography without complex 3D surface documentation methods. The method is based on a 3D surface model generated from radiologic slice images (CT), while the colour information is derived from photographs taken with commercially available cameras. The external patterned injuries were documented in 16 cases using digital photography as well as highly precise photogrammetry-supported 3D structured light scanning. The internal findings of the deceased were recorded using CT and MRI. For registration of the internal with the external data, two different types of radiographic markers were used and compared. The 3D surface model generated from CT slice images was linked with the photographs, and thereby digital true-colour 3D models of the patterned injuries could be created (image projection onto CT/IprojeCT). In addition, these external models were merged with the models of the somatic interior. We demonstrated that 3D documentation and visualization of external injury findings by integration of digital photography in CT/MRI data sets is suitable for the 3D documentation of individual patterned injuries to a body. Nevertheless, this documentation method is not a substitute for photogrammetry and surface scanning, especially when the entire bodily surface is to be recorded in three dimensions including all external findings, and when precise data is required for comparing highly detailed injury features with the injury-inflicting tool.

  3. Hierarchical streamline bundles.

    PubMed

    Yu, Hongfeng; Wang, Chaoli; Shene, Ching-Kuang; Chen, Jacqueline H

    2012-08-01

    Effective 3D streamline placement and visualization play an essential role in many science and engineering disciplines. The main challenge for effective streamline visualization lies in seed placement, i.e., where to drop seeds and how many seeds should be placed. Seeding too many or too few streamlines may not reveal flow features and patterns either because it easily leads to visual clutter in rendering or it conveys little information about the flow field. Not only does the number of streamlines placed matter, their spatial relationships also play a key role in understanding the flow field. Therefore, effective flow visualization requires the streamlines to be placed in the right place and in the right amount. This paper introduces hierarchical streamline bundles, a novel approach to simplifying and visualizing 3D flow fields defined on regular grids. By placing seeds and generating streamlines according to flow saliency, we produce a set of streamlines that captures important flow features near critical points without enforcing the dense seeding condition. We group spatially neighboring and geometrically similar streamlines to construct a hierarchy from which we extract streamline bundles at different levels of detail. Streamline bundles highlight multiscale flow features and patterns through clustered yet not cluttered display. This selective visualization strategy effectively reduces visual clutter while accentuating visual foci, and therefore is able to convey the desired insight into the flow data.
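The grouping step, collecting spatially neighboring and geometrically similar streamlines into bundles, can be illustrated with a small sketch. This is not the paper's algorithm (which seeds by flow saliency and builds a multilevel hierarchy); it simply resamples each polyline to a fixed-length feature vector and applies average-linkage agglomerative clustering. All names are chosen for illustration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def resample(line, n=16):
    # Resample a polyline to n points by arc length, so every
    # streamline yields a feature vector of the same length.
    line = np.asarray(line, float)
    seg = np.linalg.norm(np.diff(line, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t /= t[-1]
    u = np.linspace(0.0, 1.0, n)
    return np.column_stack(
        [np.interp(u, t, line[:, d]) for d in range(line.shape[1])])

def bundle(streamlines, n_bundles=2):
    # Average-linkage agglomerative clustering of the resampled
    # streamlines; cutting the dendrogram at different heights would
    # give bundles at different levels of detail.
    feats = np.array([resample(s).ravel() for s in streamlines])
    Z = linkage(feats, method="average")
    return fcluster(Z, t=n_bundles, criterion="maxclust")
```

Because the dendrogram built by `linkage` encodes the full merge hierarchy, requesting more or fewer clusters from `fcluster` corresponds to extracting bundles at finer or coarser levels of detail.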

  4. Tension and compression fatigue response of unnotched 3D braided composites

    NASA Technical Reports Server (NTRS)

    Portanova, M. A.

    1992-01-01

    The unnotched compression and tension fatigue response of a 3-D braided composite was measured. Both gross compressive stress and tensile stress were plotted against cycles to failure to evaluate the fatigue life of these materials. Damage initiation and growth were monitored visually and by tracking compliance change during cyclic loading. The intent was to establish by what means the strength of a 3-D architecture will start to degrade, at what point it will degrade beyond an acceptable level, and how this material will typically fail.

  5. Cosmic sculpture: a new way to visualise the cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Clements, D. L.; Sato, S.; Portela Fonseca, A.

    2017-01-01

    3D printing presents an attractive alternative means of representing physical datasets such as astronomical images that can be used for research, outreach or teaching purposes, and is especially relevant to people with a visual disability. We here report the use of 3D printing technology to produce a representation of the all-sky cosmic microwave background (CMB) intensity anisotropy maps produced by the Planck mission. The success of this work in representing key features of the CMB is discussed, as is the potential of this approach for representing other astrophysical data sets. 3D printing such datasets represents a highly complementary approach to the usual 2D projections used in teaching and outreach work, and can also form the basis of undergraduate projects. The CAD files used to produce the models discussed in this paper are made available.

  6. Laser-assisted in situ keratomileusis with optimized, fast-repetition, and cyclotorsion control excimer laser to treat hyperopic astigmatism with high cylinder.

    PubMed

    Alió Del Barrio, Jorge L; Tiveron, Mauro; Plaza-Puche, Ana B; Amesty, María A; Casanova, Laura; García, María J; Alió, Jorge L

    2017-10-18

    To evaluate the visual outcomes after femtosecond laser-assisted laser in situ keratomileusis (LASIK) surgery to correct primary compound hyperopic astigmatism with high cylinder using a fast repetition rate excimer laser platform with optimized aspheric profiles and cyclotorsion control. Eyes with primary simple or compound hyperopic astigmatism and a cylinder power ≥3.00 D had uneventful femtosecond laser-assisted LASIK with a fast repetition rate excimer laser ablation, aspheric profiles, and cyclotorsion control. Visual, refractive, and aberrometric results were evaluated at the 3- and 6-month follow-up. The astigmatic outcome was evaluated using the Alpins method and ASSORT software. This study enrolled 80 eyes at 3 months and 50 eyes at 6 months. The significant reduction in refractive sphere and cylinder 3 and 6 months postoperatively (p<0.01) was associated with an improved uncorrected distance visual acuity (p<0.01). A total of 23.75% required retreatment 3 months after surgery. Efficacy and safety indices at 6 months were 0.90 and 1.00, respectively. At 6 months, 80% of eyes had an SE within ±0.50 D and 96% within ±1.00 D. No significant differences were detected between the third and the sixth postoperative months in refractive parameters. A significant increase in the spherical aberration was detected, but not in coma. The correction index was 0.94 at 3 months. Laser in situ keratomileusis for primary compound hyperopic astigmatism with high cylinder (>3.00 D) using the latest excimer platforms with cyclotorsion control, fast repetition rate, and optimized aspheric profiles is safe, moderately effective, and predictable.

  7. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine E. N. Harris, Lockheed Martin Space Systems, Denver, CO and George W. Morgenthaler, U. of Colorado at Boulder History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center using virtual simulation for clearance checks, collision detection, ergonomics and reach-ability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. 
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.

  8. Lidar Investigation of Infiltration Water Heterogeneity in the Tamala Limestone, SW WA

    NASA Astrophysics Data System (ADS)

    Mahmud, K.; Mariethoz, G.; Treble, P. C.; Baker, A.

    2014-12-01

    To better manage groundwater resources in carbonate areas and improve our understanding of speleothem archives, it is important to understand and predict unsaturated zone hydrology in karst. The high level of complexity and spatial heterogeneity of such systems is challenging and requires knowledge of the typical geometry of karstic features. We present an exhaustive characterization of Golgotha Cave, SW Western Australia, based on an extensive LIDAR measurement campaign. The cave is developed in Quaternary age aeolianite (dune limestone) and contains speleothem records. We collected 30 representative 3D scan images from this site using a FARO Focus3D, a high-speed 3D laser scanner, to visualize, study and extract 2D and 3D information from various points of view and at different scales. In addition to the LIDAR data, 32 automatic drip loggers were installed at this site to measure the distribution and volume of water flow. We perform mathematical morphological analyses on the cave ceiling to determine statistical information on stalactite widths, lengths and spatial distribution, and we determine a relationship between stalactite diameter and length. We perform tests for randomness to investigate the relationship between stalactite distribution and ceiling features such as fractures, and apply this to identify different types of possible flow patterns such as fracture flow, solution pipe flow and primary matrix flow. We also relate stalactite density variation to the topography of the cave ceiling, which reflects hydraulic gradient deviations. Finally, we use Image Quilting, a recently developed multiple-point geostatistics method, with training images derived from the LIDAR data to create a larger cave system representing not only the caves that are visible, but the entire system, which is inaccessible. 
As a result, an integral geological model is generated which may allow other scientists and geologists to work on two different levels, integrating different speleothem datasets: (1) a basic level based on the accurate and metric support provided by the laser scanner; and (2) an advanced level using image-based modelling.
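The "tests for randomness" on stalactite positions can be illustrated with the classic Clark-Evans nearest-neighbor index, a plausible choice for this kind of analysis though the abstract does not name the specific test used. The sketch below, with names chosen for illustration, compares the mean observed nearest-neighbor distance to its expectation under complete spatial randomness.

```python
import numpy as np

def clark_evans(points, area):
    # Clark-Evans aggregation index R: the mean observed nearest-neighbor
    # distance divided by the value expected for a random (Poisson)
    # pattern of the same density. R < 1 suggests clustering (e.g.
    # stalactites concentrated along fractures); R > 1 suggests
    # regular spacing; R near 1 suggests spatial randomness.
    pts = np.asarray(points, float)
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # ignore self-distances
    mean_nn = d.min(axis=1).mean()       # mean nearest-neighbor distance
    expected = 0.5 / np.sqrt(n / area)   # CSR expectation
    return mean_nn / expected
```

A regular grid of points yields R = 2, while points piled into a small patch of a large ceiling yield R well below 1, which is the signature one would look for when relating stalactite positions to fractures.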

  9. First responder tracking and visualization for command and control toolkit

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Petrov, Plamen; Meisinger, Roger

    2010-04-01

    In order for First Responder Command and Control personnel to visualize incidents at urban building locations, DHS sponsored a small business research program to develop a tool to visualize 3D building interiors and movement of First Responders on site. 21st Century Systems, Inc. (21CSI), has developed a toolkit called Hierarchical Grid Referenced Normalized Display (HiGRND). HiGRND utilizes three components to provide a full spectrum of visualization tools to the First Responder. First, HiGRND visualizes the structure in 3D. Utilities in the 3D environment allow the user to switch between views (2D floor plans, 3D spatial, evacuation routes, etc.) and manually edit fast changing environments. HiGRND accepts CAD drawings and 3D digital objects and renders these in the 3D space. Second, HiGRND has a First Responder tracker that uses the transponder signals from First Responders to locate them in the virtual space. We use the movements of the First Responder to map the interior of structures. Finally, HiGRND can turn 2D blueprints into 3D objects. The 3D extruder extracts walls, symbols, and text from scanned blueprints to create the 3D mesh of the building. HiGRND increases the situational awareness of First Responders and allows them to make better, faster decisions in critical urban situations.
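The "3D extruder" idea, turning 2D blueprint geometry into 3D building structure, reduces to a simple geometric operation once walls have been extracted as 2D segments. The sketch below is a minimal illustration of that final step only (extraction of walls, symbols, and text from scanned blueprints is the hard part and is not shown); the function name and data layout are assumptions.

```python
import numpy as np

def extrude_walls(segments, height):
    # Turn 2D wall segments (as extracted from a parsed blueprint)
    # into 3D quads: each segment (p0, p1) becomes a vertical
    # rectangle of the given height. Returns an array of shape
    # (n_walls, 4, 3) listing each quad's corners in order.
    quads = []
    for (x0, y0), (x1, y1) in segments:
        quads.append([(x0, y0, 0.0), (x1, y1, 0.0),
                      (x1, y1, height), (x0, y0, height)])
    return np.array(quads)
```

Feeding such quads to a renderer as two triangles each produces the kind of 3D mesh a tool like HiGRND could display alongside CAD-sourced geometry.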

  10. Shadow-driven 4D haptic visualization.

    PubMed

    Zhang, Hui; Hanson, Andrew

    2007-01-01

    Just as we can work with two-dimensional floor plans to communicate 3D architectural design, we can exploit reduced-dimension shadows to manipulate the higher-dimensional objects generating the shadows. In particular, by taking advantage of physically reactive 3D shadow-space controllers, we can transform the task of interacting with 4D objects to a new level of physical reality. We begin with a teaching tool that uses 2D knot diagrams to manipulate the geometry of 3D mathematical knots via their projections; our unique 2D haptic interface allows the user to become familiar with sketching, editing, exploration, and manipulation of 3D knots rendered as projected images on a 2D shadow space. By combining graphics and collision-sensing haptics, we can enhance the 2D shadow-driven editing protocol to successfully leverage 2D pen-and-paper or blackboard skills. Building on the reduced-dimension 2D editing tool for manipulating 3D shapes, we develop the natural analogy to produce a reduced-dimension 3D tool for manipulating 4D shapes. By physically modeling the correct properties of 4D surfaces, their bending forces, and their collisions in the 3D haptic controller interface, we can support full-featured physical exploration of 4D mathematical objects in a manner that is otherwise far beyond the experience accessible to human beings. As far as we are aware, this paper reports the first interactive system with force-feedback that provides "4D haptic visualization" permitting the user to model and interact with 4D cloth-like objects.
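The dimensional analogy at the heart of this work (a 3D "shadow" of a 4D object, just as a 2D image is a shadow of a 3D scene) can be made concrete with a few lines of code. The sketch below is a standard perspective projection from 4D to 3D applied to a tesseract's vertices; it illustrates the shadow-space idea only and is not the paper's haptic system.

```python
import numpy as np

def project_4d_to_3d(points, eye_w=3.0):
    # Perspective "shadow" of 4D points into 3D, directly analogous to
    # projecting a 3D scene onto a 2D image plane: divide x, y, z by
    # the distance from a viewpoint sitting on the w-axis.
    p = np.asarray(points, float)
    scale = 1.0 / (eye_w - p[:, 3])
    return p[:, :3] * scale[:, None]

def tesseract_vertices():
    # The 16 vertices of a unit tesseract centered at the origin.
    g = np.array(np.meshgrid(*[[-1, 1]] * 4)).reshape(4, -1).T
    return g * 0.5

shadow = project_4d_to_3d(tesseract_vertices())  # 16 points in 3D
```

Vertices nearer the 4D viewpoint project larger, producing the familiar cube-within-a-cube shadow; a haptic controller operating on that 3D shadow can then drive edits of the underlying 4D object, which is the protocol the abstract describes.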

  11. Influence of Eddy Current, Maxwell and Gradient Field Corrections on 3D Flow Visualization of 3D CINE PC-MRI Data

    PubMed Central

    Lorenz, R.; Bock, J.; Snyder, J.; Korvink, J.G.; Jung, B.A.; Markl, M.

    2013-01-01

    Purpose The measurement of velocities based on PC-MRI can be subject to different phase offset errors which can affect the accuracy of velocity data. The purpose of this study was to determine the impact of these inaccuracies and to evaluate different correction strategies for 3D visualization. Methods PC-MRI was performed on a 3 T system (Siemens Trio) for in vitro (curved/straight tube models; venc: 0.3 m/s) and in vivo (aorta/intracranial vasculature; venc: 1.5/0.4 m/s) data. For comparison of the impact of different magnetic field gradient designs, in vitro data was additionally acquired on a wide bore 1.5 T system (Siemens Espree). Different correction methods were applied to correct for eddy currents, Maxwell terms and gradient field inhomogeneities. Results The application of phase offset correction methods led to an improvement in 3D particle trace visualization and count. The most pronounced differences were found for in vivo/in vitro data (68%/82% more particle traces) acquired with a low venc (0.3 m/s/0.4 m/s, respectively). In vivo data acquired with a high venc (1.5 m/s) showed noticeable but only minor improvement. Conclusion This study suggests that the correction of phase offset errors can be important for a more reliable visualization of particle traces but is strongly dependent on the velocity sensitivity, object geometry, and gradient coil design. PMID:24006013
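One widely used eddy-current correction for PC-MRI, fitting a low-order background phase to static tissue and subtracting it, can be sketched in a few lines. This is a generic first-order (planar) fit for illustration, not the specific correction pipeline evaluated in the study; the function name and mask convention are assumptions.

```python
import numpy as np

def correct_phase_offset(phase, static_mask):
    # Fit a planar background phase a*x + b*y + c to static-tissue
    # pixels by least squares and subtract it everywhere; residual
    # phase in moving tissue is then attributed to true velocity.
    ny, nx = phase.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx[static_mask], yy[static_mask],
                         np.ones(int(static_mask.sum()))])
    coef, *_ = np.linalg.lstsq(A, phase[static_mask], rcond=None)
    background = coef[0] * xx + coef[1] * yy + coef[2]
    return phase - background
```

After correction, static regions sit near zero phase, so particle traces seeded from the velocity field no longer drift with the spurious background offset, which is the effect the study quantifies.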

  12. Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Baek, Sangwook; Lee, Chulhee

    2015-03-01

    In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.

  13. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the idea that the human brain develops depth perception by retrieving information from the two eyes: the brain combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from viewpoints a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision compared with a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.
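    The depth-from-disparity principle this abstract describes can be written down directly with the classic pinhole-stereo relation Z = f·B/d. The sketch below is a hypothetical illustration with made-up numbers, not part of the study:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.
    focal_px: focal length in pixels, baseline_m: camera separation in
    metres, disparity_px: horizontal offset of a feature between the
    left and right images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# A feature shifted 40 px between cameras 6.5 cm apart (roughly eye
# separation) seen through an 800 px focal length lens:
z = depth_from_disparity(800, 0.065, 40)
print(round(z, 3))  # 1.3 (metres)
```

Halving the disparity doubles the estimated depth, which is why stereo depth resolution degrades rapidly for distant objects.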

  14. Analysis, Mining and Visualization Service at NCSA

    NASA Astrophysics Data System (ADS)

    Wilhelmson, R.; Cox, D.; Welge, M.

    2004-12-01

    NCSA's goal is to create a balanced system that fully supports high-end computing as well as: 1) high-end data management and analysis; 2) visualization of massive, highly complex data collections; 3) large databases; 4) geographically distributed Grid computing; and 5) collaboratories, all based on a secure computational environment and driven with workflow-based services. To this end NCSA has defined a new technology path that includes the integration and provision of cyberservices in support of data analysis, mining, and visualization. NCSA has begun to develop and apply a data mining system-NCSA Data-to-Knowledge (D2K)-in conjunction with both the application and research communities. NCSA D2K will enable the formation of model-based application workflows and visual programming interfaces for rapid data analysis. The Java-based D2K framework, which integrates analytical data mining methods with data management, data transformation, and information visualization tools, will be configurable from the cyberservices (web and grid services, tools, etc.) viewpoint to solve a wide range of important data mining problems. This effort will use modules, such as new classification methods for the detection of high-risk geoscience events, and existing D2K data management, machine learning, and information visualization modules. A D2K cyberservices interface will be developed to seamlessly connect client applications with remote back-end D2K servers, providing computational resources for data mining and integration with local or remote data stores. This work is being coordinated with SDSC's data and services efforts. The new NCSA Visualization embedded workflow environment (NVIEW) will be integrated with D2K functionality to tightly couple informatics and scientific visualization with the data analysis and management services.
Visualization services will access and filter disparate data sources, simplifying tasks such as fusing related data from distinct sources into a coherent visual representation. This approach enables collaboration among geographically dispersed researchers via portals and front-end clients, and the coupling with data management services enables recording associations among datasets and building annotation systems into visualization tools and portals, giving scientists a persistent, shareable, virtual lab notebook. To facilitate provision of these cyberservices to the national community, NCSA will be providing a computational environment for large-scale data assimilation, analysis, mining, and visualization. This will be initially implemented on the new 512-processor shared memory SGI's recently purchased by NCSA. In addition to standard batch capabilities, NCSA will provide on-demand capabilities for those projects requiring rapid response (e.g., development of severe weather, earthquake events) for decision makers. It will also be used for non-sequential interactive analysis of data sets where it is important to have access to large data volumes over space and time.

  15. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image where newly-exposed areas appear. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to improve the performance of depth map preprocessing in an implicit way. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and to enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method simultaneously achieves hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with low computational complexity for high-quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027
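    The domain transform that inspires this framework can be illustrated in one dimension: inter-sample distances are inflated across edges of a guide signal, so a recursive filter smooths flat regions while stopping at depth discontinuities. This is a simplified sketch after Gastal & Oliveira's recursive formulation; the parameter values and the step-signal example are illustrative only, not the paper's actual pipeline:

```python
import math

def domain_transform_smooth_1d(signal, guide, sigma_s=10.0, sigma_r=0.3,
                               iterations=3):
    """1D edge-aware smoothing in the transformed domain: distances
    between samples are stretched across guide edges, so the recursive
    filter smooths flat regions but stops at discontinuities."""
    n = len(signal)
    # Domain-transform distance between neighbouring samples
    dct = [1.0 + (sigma_s / sigma_r) * abs(guide[i] - guide[i - 1])
           for i in range(1, n)]
    out = list(signal)
    for it in range(iterations):
        # Variance is split across iterations (as in the original paper)
        sigma_h = (sigma_s * math.sqrt(3) * 2 ** (iterations - it - 1)
                   / math.sqrt(4 ** iterations - 1))
        a = math.exp(-math.sqrt(2) / sigma_h)
        # Left-to-right, then right-to-left recursive pass
        for i in range(1, n):
            out[i] += a ** dct[i - 1] * (out[i - 1] - out[i])
        for i in range(n - 2, -1, -1):
            out[i] += a ** dct[i] * (out[i + 1] - out[i])
    return out

# A noisy step: smoothing flattens each side but preserves the edge
sig = [0.0, 0.1, -0.1, 0.05, 1.0, 0.9, 1.1, 0.95]
guide = [0, 0, 0, 0, 1, 1, 1, 1]   # the "depth edge" sits at index 4
sm = domain_transform_smooth_1d(sig, guide)
print(max(sm[:4]) < 0.5 and min(sm[4:]) > 0.5)  # True: edge preserved
```

Because the cross-edge distance is inflated by sigma_s/sigma_r, the recursive weight across the step is effectively zero, which is exactly the behaviour depth map preprocessing needs at object boundaries.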

  16. 3D visualization of ultra-fine ICON climate simulation data

    NASA Astrophysics Data System (ADS)

    Röber, Niklas; Spickermann, Dela; Böttinger, Michael

    2016-04-01

    Advances in high performance computing and model development allow the simulation of finer and more detailed climate experiments. The new ICON model is based on an unstructured triangular grid and can be used for a wide range of applications, ranging from global coupled climate simulations down to very detailed and high resolution regional experiments. It consists of an atmospheric and an oceanic component and scales very well to high numbers of cores. This allows us to conduct very detailed climate experiments with ultra-fine resolutions. ICON is jointly developed in partnership with DKRZ by the Max Planck Institute for Meteorology and the German Weather Service. This presentation discusses our current workflow for analyzing and visualizing this high resolution data. The ICON model has been used for eddy-resolving (<10 km) ocean simulations, as well as for ultra-fine cloud-resolving (120 m) atmospheric simulations. This results in very large 3D time-dependent multi-variate data that need to be displayed and analyzed. We have developed specific plugins for the freely available visualization packages ParaView and Vapor, which allow us to read and handle such large data volumes. Within ParaView, we can additionally compare prognostic variables with performance data side by side to investigate the performance and scalability of the model. With the simulation running in parallel on several hundred nodes, an equal load balance is imperative. In our presentation we show visualizations of high-resolution ICON oceanographic and HDCP2 atmospheric simulations that were created using ParaView and Vapor. Furthermore we discuss our current efforts to improve our visualization capabilities, thereby exploring the potential of regular in-situ visualization, as well as of in-situ compression / post visualization.

  17. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    NASA Astrophysics Data System (ADS)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches lead to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion in the process of mapping 2D texture onto the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured without resampling. Experimental results show that our method effectively mitigates texture fragmentation and demonstrate that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  18. LIME: 3D visualisation and interpretation of virtual geoscience models

    NASA Astrophysics Data System (ADS)

    Buckley, Simon; Ringdal, Kari; Dolva, Benjamin; Naumann, Nicole; Kurz, Tobias

    2017-04-01

    Three-dimensional and photorealistic acquisition of surface topography, using methods such as laser scanning and photogrammetry, has become widespread across the geosciences over the last decade. With recent innovations in photogrammetric processing software, robust and automated data capture hardware, and novel sensor platforms, including unmanned aerial vehicles, obtaining 3D representations of exposed topography has never been easier. In addition to 3D datasets, fusion of surface geometry with imaging sensors, such as multi/hyperspectral, thermal and ground-based InSAR, and geophysical methods, creates novel and highly visual datasets that provide a fundamental spatial framework to address open geoscience research questions. Although data capture and processing routines are becoming well-established and widely reported in the scientific literature, challenges remain related to the analysis, co-visualisation and presentation of 3D photorealistic models, especially for new users (e.g. students and scientists new to geomatics methods). Interpretation and measurement is essential for quantitative analysis of 3D datasets, and qualitative methods are valuable for presentation purposes, for planning and in education. Motivated by this background, the current contribution presents LIME, a lightweight, high-performance 3D software package for interpreting and co-visualising 3D models and related image data in geoscience applications. The software focuses on novel data integration and visualisation of 3D topography with image sources such as hyperspectral imagery, logs and interpretation panels, geophysical datasets and georeferenced maps and images. High quality visual output can be generated for dissemination purposes, to aid researchers with communication of their research results.
The background of the software is described, and case studies from outcrop geology, hyperspectral mineral mapping and geophysical-geospatial data integration are used to showcase the novel methods developed.

  19. Ammonia Levels and Urine-Spot Characteristics as Cage-Change Indicators for High-Density Individually Ventilated Mouse Cages

    PubMed Central

    Washington, Ida M; Payton, Mark E

    2016-01-01

    Mouse cage and bedding changes are potentially stressful to mice and are also labor- and resource-intensive. These changes are often performed on a calendar-based schedule to maintain a clean microenvironment and limit the concentrations of ammonia to which mice and workers are exposed. The current study sought to establish a performance-based approach to mouse cage-changing that uses urine spot characteristics as visual indicators of intracage ammonia levels. Colorimetric ammonia indicators were used to measure ammonia levels in individually-ventilated cages (IVC) housing male or female mice (n = 5 per cage) of various strains at 1 to 16 d after cage change. Urine spot characteristics were correlated with ammonia levels to create a visual indicator of the cage-change criterion of 25 ppm ammonia. Results demonstrated a consistent increase in ammonia levels with days since cage change, with cages reaching the cage-change criterion at approximately 10 d for IVC containing male mice and 16 d for those with female mice. Ammonia levels were higher for male than female mice but were not correlated with mouse age. However, urine spot diameter, color, and edge characteristics were strongly correlated with ammonia levels. Husbandry practices based on using urine spot characteristics as indicators of ammonia levels led to fewer weekly cage changes and concomitant savings in labor and resources. Therefore, urine spot characteristics can be used as visual indicators of intracage ammonia levels for use of a performance (urine spot)-based approach to cage-changing frequency that maintains animal health and wellbeing. PMID:27177558

  20. Effect of Myopic Defocus on Visual Acuity after Phakic Intraocular Lens Implantation and Wavefront-guided Laser in Situ Keratomileusis

    PubMed Central

    Kamiya, Kazutaka; Shimizu, Kimiya; Igarashi, Akihito; Kawamorita, Takushi

    2015-01-01

    This study aimed to investigate the effect of myopic defocus on visual acuity after phakic intraocular lens (IOL) implantation and wavefront-guided laser in situ keratomileusis (wfg-LASIK). Our prospective study comprised thirty eyes undergoing posterior chamber phakic IOL implantation and 30 eyes undergoing wfg-LASIK. We randomly measured visual acuity under myopic defocus after cycloplegic and non-cycloplegic correction. We also calculated the modulation transfer function by optical simulation and estimated visual acuity from Campbell & Green’s retinal threshold curve. Visual acuity in the phakic IOL group was significantly better than that in the wfg-LASIK group at myopic defocus levels of 0, –1, and –2 D (p < 0.001, p < 0.001, and p = 0.02, Mann-Whitney U-test), but not at a defocus of –3 D (p = 0.30). Similar results were also obtained in a cycloplegic condition. Decimal visual acuity values at a myopic defocus of 0, −1, −2, and -3 D by optical simulation were estimated to be 1.95, 1.21, 0.97, and 0.75 in the phakic IOL group, and 1.39, 1.11, 0.94, and 0.71 in the wfg-LASIK group, respectively. From clinical and optical viewpoints, phakic IOL implantation was superior to wfg-LASIK in terms of the postoperative visual performance, even in the presence of low to moderate myopic regression. PMID:25994984

  1. The next evolution in radioguided surgery: breast cancer related sentinel node localization using a freehandSPECT-mobile gamma camera combination

    PubMed Central

    Engelen, Thijs; Winkel, Beatrice MF; Rietbergen, Daphne DD; KleinJan, Gijs H; Vidal-Sicart, Sergi; Olmos, Renato A Valdés; van den Berg, Nynke S; van Leeuwen, Fijs WB

    2015-01-01

    Accurate pre- and intraoperative identification of the sentinel node (SN) forms the basis of the SN biopsy procedure. Gamma tracing technologies such as a gamma probe (GP), a 2D mobile gamma camera (MGC) or 3D freehandSPECT (FHS) can be used to provide the surgeon with radioguidance to the SN(s). We reasoned that integrated use of these technologies results in the generation of a “hybrid” modality that combines the best that the individual radioguidance technologies have to offer. The sensitivity and resolvability of both 2D-MGC and 3D-FHS-MGC were studied in a phantom setup (at various source-detector depths and using varying injection site-to-SN distances), and in ten breast cancer patients scheduled for SN biopsy. Acquired 3D-FHS-MGC images were overlaid with the position of the phantom/patient. This augmented-reality overview image was then used for navigation to the hotspot/SN in virtual-reality using the GP. Obtained results were compared to conventional gamma camera lymphoscintigrams. Resolution of 3D-FHS-MGC allowed identification of the SNs at a minimum injection site (100 MBq)-to-node (1 MBq; 1%) distance of 20 mm, up to a source-detector depth of 36 mm in 2D-MGC and up to 24 mm in 3D-FHS-MGC. A clinically relevant dose of approximately 1 MBq was clearly detectable up to a depth of 60 mm in 2D-MGC and 48 mm in 3D-FHS-MGC. In all ten patients at least one SN was visualized on the lymphoscintigrams with a total of 12 SNs visualized. 3D-FHS-MGC identified 11 of 12 SNs and allowed navigation to all these visualized SNs; in one patient with two axillary SNs located close to each other (11 mm), 3D-FHS-MGC was not able to distinguish the two SNs. In conclusion, high sensitivity detection of SNs at an injection site-to-node distance of 20 mm and up was possible using 3D-FHS-MGC. In patients, 3D-FHS-MGC showed highly reproducible images as compared to the conventional lymphoscintigrams. PMID:26069857

  2. 3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges

    NASA Astrophysics Data System (ADS)

    Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.

    2017-11-01

    3D city modelling is increasingly popular and is becoming a valuable tool in managing big cities. Urban and energy planning, landscape, noise and sewage modelling, underground mapping and navigation are among the applications/fields that depend on 3D modelling for effective operation. Several research projects and implementations have been carried out to establish the most reliable 3D data format for sharing and functionality, as well as platforms for visualization and analysis. For instance, the BIMTAS company has recently completed a project to estimate potential solar energy on 3D buildings for the whole of Istanbul and is now focussing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (the 3D data sharing and visualization schema) is based on CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration and analysis platform (the Unity3D game engine), as highlighted in this paper.

  3. Rapid fusion of 2D X-ray fluoroscopy with 3D multislice CT for image-guided electrophysiology procedures

    NASA Astrophysics Data System (ADS)

    Zagorchev, Lyubomir; Manzke, Robert; Cury, Ricardo; Reddy, Vivek Y.; Chan, Raymond C.

    2007-03-01

    Interventional cardiac electrophysiology (EP) procedures are typically performed under X-ray fluoroscopy for visualizing catheters and EP devices relative to other highly-attenuating structures such as the thoracic spine and ribs. These projections do not however contain information about soft-tissue anatomy and there is a recognized need for fusion of conventional fluoroscopy with pre-operatively acquired cardiac multislice computed tomography (MSCT) volumes. Rapid 2D-3D integration in this application would allow for real-time visualization of all catheters present within the thorax in relation to the cardiovascular anatomy visible in MSCT. We present a method for rapid fusion of 2D X-ray fluoroscopy with 3D MSCT that can facilitate EP mapping and interventional procedures by reducing the need for intra-operative contrast injections to visualize heart chambers and specialized systems to track catheters within the cardiovascular anatomy. We use hardware-accelerated ray-casting to compute digitally reconstructed radiographs (DRRs) from the MSCT volume and iteratively optimize the rigid-body pose of the volumetric data to maximize the similarity between the MSCT-derived DRR and the intra-operative X-ray projection data.
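    The DRR-based registration loop described above can be caricatured in a few lines: simulate a radiograph from the current pose of the CT volume, score it against the intra-operative X-ray with a similarity measure, and keep the pose that maximizes it. The toy below uses parallel projection, integer lateral shifts instead of a full rigid-body pose, and normalized cross-correlation; all of this is a drastic simplification of the hardware-accelerated ray-casting in the paper:

```python
import numpy as np

def drr(volume, shift=0):
    """Toy digitally reconstructed radiograph: parallel-projection
    line integrals (sum along one axis) of a laterally shifted volume.
    Real systems cast perspective rays on the GPU, but the principle
    is the same: simulate the X-ray from the current pose estimate."""
    return np.roll(volume, shift, axis=2).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation similarity between two images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return (a * b).mean()

# Build a small volume with an off-centre high-attenuation block
vol = np.zeros((16, 16, 16))
vol[4:12, 6:10, 3:7] = 1.0

# Pretend the intra-operative X-ray was taken with the anatomy shifted
# 4 voxels laterally; recover that shift by maximizing the similarity
target = drr(vol, shift=4)
best = max(range(-6, 7), key=lambda s: ncc(drr(vol, s), target))
print(best)  # 4
```

In the real 2D-3D problem the search space is a six-parameter rigid-body pose and the optimizer is iterative rather than exhaustive, but the simulate-compare-update structure is the same.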

  4. Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach.

    PubMed

    Hadwiger, M; Beyer, J; Jeong, Won-Ki; Pfister, H

    2012-12-01

    This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
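    The visualization-driven idea, that 3D blocks are constructed only when ray-casting actually samples them, can be sketched with a tiny block cache. The class and tile-source below are hypothetical stand-ins for illustration, not the paper's architecture:

```python
class VirtualVolume:
    """Sketch of visualization-driven paging: 3D blocks are built from
    2D source tiles only when a sample hits them (a cache miss),
    rather than precomputing an octree for the whole volume."""

    def __init__(self, tile_source, block_size=4):
        self.tile_source = tile_source      # z -> 2D slice (rows of values)
        self.block_size = block_size
        self.cache = {}                     # block index -> 3D block

    def sample(self, x, y, z):
        b = self.block_size
        key = (x // b, y // b, z // b)
        if key not in self.cache:           # miss: build block on demand
            self.cache[key] = self._build_block(key)
        block = self.cache[key]
        return block[z % b][y % b][x % b]

    def _build_block(self, key):
        bx, by, bz = key
        b = self.block_size
        return [[[self.tile_source(bz * b + dz)[by * b + dy][bx * b + dx]
                  for dx in range(b)] for dy in range(b)]
                for dz in range(b)]

def tiles(z):
    # Stand-in for a microscope tile stream: value encodes position
    return [[z * 10000 + y * 100 + x for x in range(16)] for y in range(16)]

vol = VirtualVolume(tiles)
ray = [(i, i, i) for i in range(8)]        # a diagonal ray through the data
values = [vol.sample(*p) for p in ray]
print(len(vol.cache))  # 2 blocks built, out of 64 possible
```

Only the two blocks the ray touches ever get assembled from tiles; in the paper the same miss-and-build signal is propagated backwards from GPU ray-casting to out-of-core processing.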

  5. Laser in situ keratomileusis using optimized aspheric profiles and cyclotorsion control to treat compound myopic astigmatism with high cylinder.

    PubMed

    Alió, Jorge L; Plaza-Puche, Ana B; Martinez, Lorena M; Torky, Magda; Brenner, Luis F

    2013-01-01

    To evaluate the visual outcomes after laser in situ keratomileusis (LASIK) surgery to correct primary compound myopic astigmatism with high cylinder performed using a fast-repetition-rate excimer laser platform with optimized aspheric profiles and cyclotorsion control. Vissum Corporation and Division of Ophthalmology, Universidad Miguel Hernández, Alicante, Spain. Retrospective consecutive observational nonrandomized noncomparative case series. Eyes with primary compound myopic astigmatism and a cylinder power over 3.00 diopters (D) had uneventful LASIK with femtosecond flap creation and fast-repetition-rate excimer laser ablation with aspheric profiles and cyclotorsion control. Visual, refractive, and aberrometric outcomes were evaluated at the 6-month follow-up. The astigmatic correction was evaluated using the Alpins method and Assort software. The study enrolled 37 eyes (29 patients; age range 19 to 55 years). The significant reduction in refractive sphere and cylinder 3 months and 6 months postoperatively (P<.01) was associated with improved uncorrected distance visual acuity (P<.01). Eighty-seven percent of eyes had a spherical equivalent within ±0.50 D; 7.5% of eyes were retreated. There was no significant induction of higher-order aberrations (HOAs). The targeted and surgically induced astigmatism magnitudes were 3.23 D and 2.96 D, respectively, and the correction index was 0.91. The safety and efficacy indices were 1.05 and 0.95, respectively. Laser in situ keratomileusis for primary compound myopic astigmatism with high cylinder (>3.00 D) performed using a fast-repetition-rate excimer laser with optimized aspheric profiles and cyclotorsion control was safe, effective, and predictable and did not cause significant induction of HOAs. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  6. DSSR-enhanced visualization of nucleic acid structures in Jmol

    PubMed Central

    Hanson, Robert M.

    2017-01-01

    Abstract Sophisticated and interactive visualizations are essential for making sense of the intricate 3D structures of macromolecules. For proteins, secondary structural components are routinely featured in molecular graphics visualizations. However, the field of RNA structural bioinformatics is still lagging behind; for example, current molecular graphics tools lack built-in support even for base pairs, double helices, or hairpin loops. DSSR (Dissecting the Spatial Structure of RNA) is an integrated and automated command-line tool for the analysis and annotation of RNA tertiary structures. It calculates a comprehensive and unique set of features for characterizing RNA, as well as DNA structures. Jmol is a widely used, open-source Java viewer for 3D structures, with a powerful scripting language. JSmol, its reincarnation based on native JavaScript, has a predominant position in the post Java-applet era for web-based visualization of molecular structures. The DSSR-Jmol integration presented here makes salient features of DSSR readily accessible, either via the Java-based Jmol application itself, or its HTML5-based equivalent, JSmol. The DSSR web service accepts 3D coordinate files (in mmCIF or PDB format) initiated from a Jmol or JSmol session and returns DSSR-derived structural features in JSON format. This seamless combination of DSSR and Jmol/JSmol brings the molecular graphics of 3D RNA structures to a similar level as that for proteins, and enables a much deeper analysis of structural characteristics. It fills a gap in RNA structural bioinformatics, and is freely accessible (via the Jmol application or the JSmol-based website http://jmol.x3dna.org). PMID:28472503

  7. On Wings: Aerodynamics of Eagles.

    ERIC Educational Resources Information Center

    Millson, David

    2000-01-01

    The Aerodynamics Wing Curriculum is a high school program that combines basic physics, aerodynamics, pre-engineering, 3D visualization, computer-assisted drafting, computer-assisted manufacturing, production, reengineering, and success in a 15-hour, 3-week classroom module. (JOW)

  8. Visual discomfort while watching stereoscopic three-dimensional movies at the cinema.

    PubMed

    Zeri, Fabrizio; Livi, Stefano

    2015-05-01

    This study investigates discomfort symptoms while watching stereoscopic three-dimensional (S3D) movies in the 'real' condition of a cinema. In particular, it had two main objectives: to evaluate the presence and nature of visual discomfort while watching S3D movies, and to compare visual symptoms during S3D and 2D viewing. Cinema spectators of S3D or 2D films were interviewed by questionnaire at the theatre exit of different multiplex cinemas immediately after viewing a movie. A total of 854 subjects were interviewed (mean age 23.7 ± 10.9 years; range 8-81 years; 392 females and 462 males). Five hundred and ninety-nine of them viewed different S3D movies, and 255 subjects viewed a 2D version of a film seen in S3D by 251 subjects from the S3D group for a between-subjects design for that comparison. Exploratory factor analysis revealed two factors underlying symptoms: External Symptoms Factors (ESF) with a mean ± S.D. symptom score of 1.51 ± 0.58 comprised of eye burning, eye ache, eye strain, eye irritation and tearing; and Internal Symptoms Factors (ISF) with a mean ± S.D. symptom score of 1.38 ± 0.51 comprised of blur, double vision, headache, dizziness and nausea. ISF and ESF were significantly correlated (Spearman r = 0.55; p = 0.001) but with external symptoms significantly higher than internal ones (Wilcoxon Signed-ranks test; p = 0.001). The age of participants did not significantly affect symptoms. However, females had higher scores than males for both ESF and ISF, and myopes had higher ISF scores than hyperopes. Newly released movies provided lower ESF scores than older movies, while the seat position of spectators had minimal effect. Symptoms while viewing S3D movies were significantly and negatively correlated to the duration of wearing S3D glasses. Kruskal-Wallis results showed that symptoms were significantly greater for S3D compared to those of 2D movies, both for ISF (p = 0.001) and for ESF (p = 0.001).
In short, the analysis of the symptoms experienced by S3D movie spectators based on retrospective visual comfort assessments, showed a higher level of external symptoms (eye burning, eye ache, tearing, etc.) when compared to the internal ones that are typically more perceptual (blurred vision, double vision, headache, etc.). Furthermore, spectators of S3D movies reported statistically higher symptoms when compared to 2D spectators. © 2015 The Authors Ophthalmic & Physiological Optics © 2015 The College of Optometrists.

  9. A software tool for automatic classification and segmentation of 2D/3D medical images

    NASA Astrophysics Data System (ADS)

    Strzelecki, Michal; Szczypinski, Piotr; Materka, Andrzej; Klepaczko, Artur

    2013-02-01

    Modern medical diagnosis utilizes techniques of visualization of human internal organs (CT, MRI) or of its metabolism (PET). However, evaluation of acquired images made by human experts is usually subjective and qualitative only. Quantitative analysis of MR data, including tissue classification and segmentation, is necessary to perform e.g. attenuation compensation, motion detection, and correction of partial volume effect in PET images, acquired with PET/MR scanners. This article presents briefly a MaZda software package, which supports 2D and 3D medical image analysis aiming at quantification of image texture. MaZda implements procedures for evaluation, selection and extraction of highly discriminative texture attributes combined with various classification, visualization and segmentation tools. Examples of MaZda application in medical studies are also provided.

  10. Visualization of femtosecond laser pulse-induced microincisions inside crystalline lens tissue.

    PubMed

    Stachs, Oliver; Schumacher, Silvia; Hovakimyan, Marine; Fromm, Michael; Heisterkamp, Alexander; Lubatschowski, Holger; Guthoff, Rudolf

    2009-11-01

    To evaluate a new method for visualizing femtosecond laser pulse-induced microincisions inside crystalline lens tissue. Laser Zentrum Hannover e.V., Hannover, Germany. Lenses removed from porcine eyes were modified ex vivo by femtosecond laser pulses (wavelength 1040 nm, pulse duration 306 femtoseconds, pulse energy 1.0 to 2.5 microJ, repetition rate 100 kHz) to create defined planes at which lens fibers separate. The femtosecond laser pulses were delivered by a three-dimensional (3-D) scanning unit and transmitted by focusing optics (numerical aperture 0.18) into the lens tissue. Lens fiber orientation and femtosecond laser-induced microincisions were examined using a confocal laser scanning microscope (CLSM) based on a Rostock Cornea Module attached to a Heidelberg Retina Tomograph II. Optical sections were analyzed in 3-D using Amira software (version 4.1.1). Normal lens fibers showed a parallel pattern with diameters between 3 microm and 9 microm, depending on scanning location. Microincision visualization showed different cutting effects depending on the pulse energy of the femtosecond laser. The effects ranged from altered tissue-scattering properties with all fibers intact to definite fiber separation by a wide gap. Pulse energies that were too high, or pulses that overlapped too tightly, produced an incomplete cutting plane due to extensive microbubble generation. The 3-D CLSM method permitted visualization and analysis of femtosecond laser pulse-induced microincisions inside crystalline lens tissue. Thus, 3-D CLSM may help optimize femtosecond laser-based procedures in the treatment of presbyopia.

  11. Pollen structure visualization using high-resolution laboratory-based hard X-ray tomography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Qiong; Gluch, Jürgen; Krüger, Peter

    A laboratory-based X-ray microscope is used to investigate the 3D structure of unstained whole pollen grains. For the first time, high-resolution laboratory-based hard X-ray microscopy is applied to study pollen grains. Based on the efficient acquisition of statistically relevant information-rich images using Zernike phase contrast, both surface and internal structures of pine pollen - including exine, intine and cellular structures - are clearly visualized. The specific volumes of these structures are calculated from the tomographic data. The systematic three-dimensional study of pollen grains provides morphological and structural information about taxonomic characters that are essential in palynology. Such studies have a direct impact on disciplines such as forestry, agriculture, horticulture, plant breeding and biodiversity. - Highlights: • The unstained whole pine pollen was visualized by high-resolution laboratory-based HXRM for the first time. • Comparative study of pollen grains by LM, SEM and high-resolution laboratory-based HXRM. • Phase contrast imaging provides significantly higher contrast in the raw images compared to absorption contrast imaging. • Surface and internal structures of the pine pollen, including exine, intine and cellular structures, are clearly visualized. • 3D volume data of unstained whole pollen grains are acquired and the specific volumes of the different layers are calculated.

  12. Amira: Multi-Dimensional Scientific Visualization for the GeoSciences in the 21st Century

    NASA Astrophysics Data System (ADS)

    Bartsch, H.; Erlebacher, G.

    2003-12-01

    Amira (www.amiravis.com) is a general-purpose framework for 3D scientific visualization that meets the needs of the non-programmer, the script writer, and the advanced programmer alike. Provided modules may be assembled visually and interactively to create complex visual displays. These modules and their associated user interfaces are controlled either through the mouse or via an interactive scripting mechanism based on Tcl. We provide interactive demonstrations of the various features of Amira and explain how these may be used to enhance the comprehension of datasets in use in the Earth Sciences community. Its features will be illustrated on scalar and vector fields on grid types ranging from Cartesian to fully unstructured. Specialized extension modules developed by some of our collaborators will be illustrated [1]. These include a module to automatically choose values for salient isosurface identification and extraction, and color maps suitable for volume rendering. During the session, we will present several demonstrations of remote networking, processing of very large spatio-temporal datasets, and various other projects that are underway. In particular, we will demonstrate WEB-IS, a Java-applet interface to Amira that allows script editing via the web and selected data analysis [2]. [1] G. Erlebacher, D. A. Yuen, F. Dubuffet, "Case Study: Visualization and Analysis of High Rayleigh Number -- 3D Convection in the Earth's Mantle", Proceedings of Visualization 2002, pp. 529-532. [2] Y. Wang, G. Erlebacher, Z. A. Garbow, D. A. Yuen, "Web-Based Service of a Visualization Package 'amira' for the Geosciences", Visual Geosciences, 2003.

  13. Segmentation of real-time three-dimensional ultrasound for quantification of ventricular function: a clinical study on right and left ventricles.

    PubMed

    Angelini, Elsa D; Homma, Shunichi; Pearson, Gregory; Holmes, Jeffrey W; Laine, Andrew F

    2005-09-01

    Among screening modalities, echocardiography is the fastest, least expensive and least invasive method for imaging the heart. A new generation of three-dimensional (3-D) ultrasound (US) technology has been developed with real-time 3-D (RT3-D) matrix phased-array transducers. These transducers allow interactive 3-D visualization of cardiac anatomy and fast ventricular volume estimation without the tomographic interpolation required with earlier 3-D US acquisition systems. However, real-time acquisition speed comes at the cost of decreased spatial resolution, leading to echocardiographic data with poor definition of anatomical structures and high levels of speckle noise. The poor quality of the US signal has limited the acceptance of RT3-D US technology in clinical practice, despite the wealth of information acquired by this system, far greater than with any other existing echocardiography screening modality. We present, in this work, a clinical study for segmentation of right and left ventricular volumes using RT3-D US. Preprocessing of the volumetric data sets was performed using spatiotemporal brushlet denoising, as presented in previous articles. Two deformable-model segmentation methods were implemented, in 2-D using a parametric formulation and in 3-D using an implicit formulation with a level set implementation, for extraction of endocardial surfaces on denoised RT3-D US data. A complete and rigorous validation of the segmentation methods was carried out for quantification of left and right ventricular volumes and ejection fraction, including comparison of measurements with cardiac magnetic resonance imaging as the reference. Volume and ejection fraction measurements show good performance of quantification of cardiac function on RT3-D data compared with magnetic resonance imaging, with semiautomatic segmentation methods performing better than manual tracing on the US data.

  14. Stereoscopic 3D entertainment and its effect on viewing comfort: comparison of children and adults.

    PubMed

    Pölönen, Monika; Järvenpää, Toni; Bilcu, Beatrice

    2013-01-01

    Children's and adults' viewing comfort during stereoscopic three-dimensional film viewing and computer game playing was studied. Certain mild changes in visual function (heterophoria and near point of accommodation values), as well as in eyestrain and visually induced motion sickness levels, were found when individual setups were compared. The viewing system had an influence on viewing comfort, in particular on eyestrain levels, but no clear difference between two- and three-dimensional systems was found. Additionally, certain mild differences in visual functions and visually induced motion sickness levels between adults and children were found. In general, all of the system-task combinations caused mild eyestrain and possible changes in visual functions, but the magnitude of these changes was small. Subjective opinions, which further support these measurements, indicated that using a stereoscopic three-dimensional system for up to 2 h was acceptable for most users regardless of their age. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  15. National Laboratory for Advanced Scientific Visualization at UNAM - Mexico

    NASA Astrophysics Data System (ADS)

    Manea, Marina; Constantin Manea, Vlad; Varela, Alfredo

    2016-04-01

    In 2015, the National Autonomous University of Mexico (UNAM) joined the family of universities and research centers where advanced visualization and computing play a key role in promoting and advancing missions in research, education, community outreach, and business-oriented consulting. This initiative provides access to a great variety of advanced hardware and software resources and offers a range of consulting services spanning many areas related to scientific visualization, among which are: neuroanatomy, embryonic development, genome-related studies, geosciences, geography, physics and mathematics-related disciplines. The National Laboratory for Advanced Scientific Visualization delivers services through three main infrastructure environments: the fully immersive 3D display system Cave, the high-resolution parallel visualization system Powerwall, and the high-resolution spherical display Earth Simulator. The entire visualization infrastructure is interconnected to a high-performance computing cluster (HPCC) called ADA, in honor of Ada Lovelace, considered to be the first computer programmer. The Cave is an extra-large, 3.6 m wide room with images projected on the front, left and right walls, as well as the floor. Specialized crystal-eyes LCD-shutter glasses provide strong stereo depth perception, and a variety of tracking devices allow software to track the position of a user's hand, head and wand. The Powerwall is designed to bring large amounts of complex data together through parallel computing for team interaction and collaboration. This system is composed of 24 (6x4) high-resolution ultra-thin (2 mm) bezel monitors connected to a high-performance GPU cluster. The Earth Simulator is a large (60") high-resolution spherical display used for global-scale data visualization, such as geophysical, meteorological, climate and ecology data. 
The HPCC-ADA is a 1000+ computing core system, which offers parallel computing resources to applications that require large quantities of memory as well as large and fast parallel storage systems. The entire system temperature is controlled by an energy- and space-efficient cooling solution, based on large rear-door liquid-cooled heat exchangers. This state-of-the-art infrastructure will boost research activities in the region, offer a powerful scientific tool for teaching at undergraduate and graduate levels, and enhance association and cooperation with business-oriented organizations.

  16. Tangible Models and Haptic Representations Aid Learning of Molecular Biology Concepts

    ERIC Educational Resources Information Center

    Johannes, Kristen; Powers, Jacklyn; Couper, Lisa; Silberglitt, Matt; Davenport, Jodi

    2016-01-01

    Can novel 3D models help students develop a deeper understanding of core concepts in molecular biology? We adapted 3D molecular models, developed by scientists, for use in high school science classrooms. The models accurately represent the structural and functional properties of complex DNA and virus molecules, and provide visual and haptic…

  17. Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    available SfM application known as VisualSFM .6,7 VisualSFM is an end-user, “off-the-shelf” implementation of SfM that is easy to configure and used for...most 3D model generation applications from imagery. While the usual interface with VisualSFM is through their graphical user interface (GUI), we will be...of our system.5 There are two types of 3D model generation available within VisualSFM ; sparse and dense reconstruction. Sparse reconstruction begins

  18. Mobile Virtual Reality : A Solution for Big Data Visualization

    NASA Astrophysics Data System (ADS)

    Marshall, E.; Seichter, N. D.; D'sa, A.; Werner, L. A.; Yuen, D. A.

    2015-12-01

    Pursuits in geological sciences and other branches of quantitative science often require data visualization frameworks, which are in continual need of improvement and new ideas. Virtual reality is a visualization medium with a large audience, originally designed for gaming. Virtual reality can be delivered in Cave-like environments, but these are unwieldy and expensive to maintain. Recent efforts by major companies such as Facebook have focused on the larger consumer market; the Oculus Rift is the first mobile device of this kind. The Unity engine makes it possible to convert data files into a mesh of isosurfaces rendered in 3D. A user is immersed inside the virtual reality and is able to move within and around the data using arrow keys and other steering devices, similar to those employed on an Xbox. With the introduction of products like the Oculus Rift and HoloLens, combined with ever-increasing mobile computing strength, mobile virtual reality data visualization can be implemented for better analysis of 3D geological and mineralogical data sets. As new products like the Surface Pro 4 and other high-powered yet very mobile computers reach the market, the RAM and graphics card capacity necessary to run these models is increasingly available, opening doors to this new reality. The computing requirements needed to run these models are a mere 8 GB of RAM and 2 GHz of CPU speed, which many mobile computers now exceed. Using the Unity 3D software to create a virtual environment containing a visual representation of the data, any data set converted into FBX or OBJ format can be traversed by wearing the Oculus Rift device. This new method for analysis, in conjunction with 3D scanning, has potential applications in many fields, including the analysis of precious stones and jewelry. 
Using hologram technology to capture in high resolution the 3D shape, color, and imperfections of minerals and stones, detailed review and analysis of a stone can be done remotely, without ever seeing the real thing. This strategy could be a game-changer for shoppers, who would no longer need to visit a store.

  19. ePlant: Visualizing and Exploring Multiple Levels of Data for Hypothesis Generation in Plant Biology.

    PubMed

    Waese, Jamie; Fan, Jim; Pasha, Asher; Yu, Hans; Fucile, Geoffrey; Shi, Ruian; Cumming, Matthew; Kelley, Lawrence A; Sternberg, Michael J; Krishnakumar, Vivek; Ferlanti, Erik; Miller, Jason; Town, Chris; Stuerzlinger, Wolfgang; Provart, Nicholas J

    2017-08-01

    A big challenge in current systems biology research arises when different types of data must be accessed from separate sources and visualized using separate tools. The high cognitive load required to navigate such a workflow is detrimental to hypothesis generation. Accordingly, there is a need for a robust research platform that incorporates all data and provides integrated search, analysis, and visualization features through a single portal. Here, we present ePlant (http://bar.utoronto.ca/eplant), a visual analytic tool for exploring multiple levels of Arabidopsis thaliana data through a zoomable user interface. ePlant connects to several publicly available web services to download genome, proteome, interactome, transcriptome, and 3D molecular structure data for one or more genes or gene products of interest. Data are displayed with a set of visualization tools that are presented using a conceptual hierarchy from big to small, and many of the tools combine information from more than one data type. We describe the development of ePlant in this article and present several examples illustrating its integrative features for hypothesis generation. We also describe the process of deploying ePlant as an "app" on Araport. Built on readily available web services, the code for ePlant is freely available and can be adapted for research on other biological species. © 2017 American Society of Plant Biologists. All rights reserved.

  20. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications pointed to the fact that 3D visualizations have potential advantages compared to conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures, which also allows for automatic pullback calibration. Then, according to the segmentation results, the different structures are depicted in different colors to visualize the vessel wall, the stent and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, the 3D rendering was compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computational time for the visualization of an entire data set was ~74 sec. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578
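The Bland-Altman agreement statistics used in validations like the one above can be sketched as follows. The lumen-area values are invented for illustration and are not from the study.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement between two
    measurement series (e.g. automatic vs. manual segmentations)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()                   # mean difference between methods
    sd = diff.std(ddof=1)                # spread of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(2)
manual = rng.uniform(4.0, 9.0, 250)           # hypothetical mm^2 lumen areas
auto = manual + rng.normal(0.05, 0.2, 250)    # small bias plus noise
bias, (lo, hi) = bland_altman(auto, manual)
```

Narrow limits of agreement around a near-zero bias are what "good limits of agreement" means in the abstract.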

  1. High-resolution T1-weighted 3D real IR imaging of the temporal bone using triple-dose contrast material.

    PubMed

    Naganawa, Shinji; Koshikawa, Tokiko; Nakamura, Tatsuya; Fukatsu, Hiroshi; Ishigaki, Takeo; Aoki, Ikuo

    2003-12-01

    The small structures in the temporal bone are surrounded by bone and air. The objectives of this study were (a) to compare contrast-enhanced T1-weighted images acquired by fast spin-echo-based three-dimensional real inversion recovery (3D rIR) against those acquired by gradient echo-based 3D SPGR in the visualization of the enhancement of small structures in the temporal bone, and (b) to determine whether either 3D rIR or 3D SPGR is useful for visualizing enhancement of the cochlear lymph fluid. Seven healthy men (age range 27-46 years) volunteered to participate in this study. All MR imaging was performed using a dedicated bilateral quadrature surface phased-array coil for temporal bone imaging at 1.5 T (Visart EX, Toshiba, Tokyo, Japan). The 3D rIR images (TR/TE/TI: 1800 ms/10 ms/500 ms) and flow-compensated 3D SPGR images (TR/TE/FA: 23 ms/10 ms/25 degrees) were obtained with a reconstructed voxel size of 0.6 x 0.7 x 0.8 mm3. Images were acquired before and 1, 90, 180, and 270 min after the administration of triple-dose Gd-DTPA-BMA (0.3 mmol/kg). In post-contrast MR images, the degree of enhancement of the cochlear aqueduct, endolymphatic sac, subarcuate artery, geniculate ganglion of the facial nerve, and cochlear lymph fluid space was assessed by two radiologists. The degree of enhancement was scored as follows: 0 (no enhancement); 1 (slight enhancement); 2 (intermediate between 1 and 3); and 3 (enhancement similar to that of vessels). Enhancement scores for the endolymphatic sac, subarcuate artery, and geniculate ganglion were higher in 3D rIR than in 3D SPGR. Washout of enhancement in the endolymphatic sac appeared to be delayed compared with that in the subarcuate artery, suggesting that the enhancement in the endolymphatic sac may have been due in part to non-vascular tissue enhancement. Enhancement of the cochlear lymph space was not observed in any of the subjects in 3D rIR and 3D SPGR. 
The 3D rIR sequence may be more sensitive than the 3D SPGR sequence in visualizing the enhancement of small structures in the temporal bone; however, enhancement of the cochlear fluid space could not be visualized even with 3D rIR, triple-dose contrast, and dedicated coils at 1.5 T.

  2. Sensor-Based Electromagnetic Navigation (Mediguide®): How Accurate Is It? A Phantom Model Study.

    PubMed

    Bourier, Felix; Reents, Tilko; Ammar-Busch, Sonia; Buiatti, Alessandra; Grebmer, Christian; Telishevska, Marta; Brkic, Amir; Semmler, Verena; Lennerz, Carsten; Kaess, Bernhard; Kottmaier, Marc; Kolb, Christof; Deisenhofer, Isabel; Hessling, Gabriele

    2015-10-01

    Data about localization reproducibility as well as spatial and visual accuracy of the new MediGuide® sensor-based electroanatomic navigation technology are scarce. We therefore sought to quantify these parameters in phantom experiments. A realistic heart phantom was generated with a 3D printer. A CT scan was performed on the phantom. The phantom itself served as ground-truth reference to ensure exact and reproducible catheter placement. A MediGuide® catheter was repeatedly tagged at selected positions to assess the accuracy of point localization. The catheter was also used to acquire a MediGuide®-scaled geometry in the EnSite Velocity® electroanatomic mapping system. The acquired geometries (MediGuide®-scaled and EnSite Velocity®-scaled) were compared to a CT segmentation of the phantom to quantify concordance. Distances between landmarks were measured in the EnSite Velocity®- and MediGuide®-scaled geometries and the CT dataset for Bland-Altman comparison. The visualization of virtual MediGuide® catheter tips was compared to their corresponding representation on fluoroscopic cine-loops. Point localization accuracy was 0.5 ± 0.3 mm for MediGuide® and 1.4 ± 0.7 mm for EnSite Velocity®. The 3D accuracy of the geometries was 1.1 ± 1.4 mm (MediGuide®-scaled) and 3.2 ± 1.6 mm (not MediGuide®-scaled). The offset between virtual MediGuide® catheter visualization and catheter representation on corresponding fluoroscopic cine-loops was 0.4 ± 0.1 mm. The MediGuide® system shows a very high level of accuracy regarding localization reproducibility as well as spatial and visual accuracy, which can be ascribed to its magnetic field localization technology. The observed offsets between the geometry visualization and the real phantom are below a clinically relevant threshold. © 2015 Wiley Periodicals, Inc.
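Point-localization accuracy of the kind reported above (mean ± SD of 3D error against a ground-truth landmark) reduces to a Euclidean-distance computation. The tag positions below are simulated, not MediGuide® measurements.

```python
import numpy as np

# Hypothetical repeated catheter tags around a known phantom landmark
# (all coordinates in mm; values simulated for illustration).
truth = np.array([10.0, 20.0, 30.0])                       # landmark position
tags = truth + np.random.default_rng(3).normal(0, 0.4, size=(50, 3))

errors = np.linalg.norm(tags - truth, axis=1)              # per-tag 3D error
accuracy = (errors.mean(), errors.std(ddof=1))             # mean ± SD, mm
```

Reported figures such as "0.5 ± 0.3 mm" are exactly this pair, computed over many repeated tag placements.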

  3. Visual Performance of a Quadrifocal (Trifocal) Intraocular Lens Following Removal of the Crystalline Lens.

    PubMed

    Kohnen, Thomas; Herzog, Michael; Hemkeppler, Eva; Schönbrunn, Sabrina; De Lorenzo, Nina; Petermann, Kerstin; Böhm, Myriam

    2017-12-01

    To evaluate visual performance after implantation of a quadrifocal intraocular lens (IOL). Setting: Department of Ophthalmology, Goethe University, Frankfurt, Germany. Twenty-seven patients (54 eyes) received bilateral implantation of the PanOptix IOL (AcrySof IQ PanOptix™; Alcon Research, Fort Worth, Texas, USA) pre-enrollment. Exclusion criteria were previous ocular surgeries, corneal astigmatism of >1.5 diopters (D), ocular pathologies, or corneal abnormalities. Intervention or Observational Procedure(s): Postoperative examination at 3 months including manifest refraction; uncorrected visual acuity (UCVA) and distance-corrected visual acuity (DCVA) at 4 m, 80 cm, 60 cm, and 40 cm; slit-lamp examination; defocus testing; contrast sensitivity (CS) under photopic and mesopic conditions; and a questionnaire on subjective quality of vision, optical phenomena, and spectacle independence. Main outcome measures at 3 months postoperatively were UCVA and DCVA at 4 m, 80 cm, 60 cm, and 40 cm (logMAR), defocus curves, CS, and quality-of-vision questionnaire results. Mean spherical equivalent was -0.04 ± 0.321 D 3 months postoperatively. Binocular UCVA at distance, intermediate (80 cm, 60 cm), and near was 0.00 ± 0.094 logMAR, 0.09 ± 0.107 logMAR, 0.00 ± 0.111 logMAR, and 0.01 ± 0.087 logMAR, respectively. The binocular defocus curve showed peaks with best visual acuity (VA) at 0.00 D (-0.07 logMAR) and -2.00 D (-0.02 logMAR). Visual performance of the PanOptix IOL showed good VA at all distances; particularly good intermediate VA (logMAR > 0.1), with best VA at 60 cm; and high patient satisfaction and spectacle independence 3 months postoperatively. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  5. Preoperative simulation for the planning of microsurgical clipping of intracranial aneurysms.

    PubMed

    Marinho, Paulo; Vermandel, Maximilien; Bourgeois, Philippe; Lejeune, Jean-Paul; Mordon, Serge; Thines, Laurent

    2014-12-01

    The safety and success of intracranial aneurysm (IA) surgery could be improved through the dedicated application of simulation covering the procedure from the 3-dimensional (3D) description of the surgical scene to the visual representation of the clip application. We aimed in this study to validate the technical feasibility and clinical relevance of such a protocol. All patients preoperatively underwent 3D magnetic resonance imaging and 3D computed tomography angiography to build 3D reconstructions of the brain, cerebral arteries, and surrounding cranial bone. These 3D models were segmented and merged using Osirix, a DICOM image processing application. This provided the surgical scene that was subsequently imported into Blender, a modeling platform for 3D animation. Digitized clips and appliers could then be manipulated in the virtual operative environment, allowing the visual simulation of clipping. This simulation protocol was assessed in a series of 10 IAs by 2 neurosurgeons. The protocol was feasible in all patients. The visual similarity between the surgical scene and the operative view was excellent in 100% of the cases, and the identification of the vascular structures was accurate in 90% of the cases. The neurosurgeons found the simulation helpful for planning the surgical approach (ie, the bone flap, cisternal opening, and arterial tree exposure) in 100% of the cases. The correct number of final clip(s) needed was predicted from the simulation in 90% of the cases. The preoperatively expected characteristics of the optimal clip(s) (ie, their number, shape, size, and orientation) were validated during surgery in 80% of the cases. This study confirmed that visual simulation of IA clipping based on the processing of high-resolution 3D imaging can be effective. This is a new and important step toward the development of a more sophisticated integrated simulation platform dedicated to cerebrovascular surgery.

  6. A two-level generative model for cloth representation and shape from shading.

    PubMed

    Han, Feng; Zhu, Song-Chun

    2007-07-01

    In this paper, we present a two-level generative model for representing the images and surface depth maps of drapery and clothes. The upper level consists of a number of folds which generate the high-contrast (ridge) areas with a dictionary of shading primitives (for 2D images) and fold primitives (for 3D depth maps). These primitives are represented in parametric forms and are learned in a supervised learning phase using 3D surfaces of clothes acquired through photometric stereo. The lower level consists of the remaining flat areas which fill between the folds with a smoothness prior (Markov random field). We show that the classical ill-posed problem of shape from shading (SFS) can be much improved by this two-level model, owing to its reduced dimensionality and incorporation of middle-level visual knowledge, i.e., the dictionary of primitives. Given an input image, we first infer the folds and compute a sketch graph using a sketch pursuit algorithm as in the primal sketch [10], [11]. The 3D folds are estimated by parameter fitting using the fold dictionary, and they form the "skeleton" of the drapery/cloth surfaces. Then, the lower level is computed by a conventional SFS method using the fold areas as boundary conditions. The two levels interact at the final stage by optimizing a joint Bayesian posterior probability on the depth map. We show a number of experiments which demonstrate more robust results in comparison with state-of-the-art work. In a broader scope, our representation can be viewed as a two-level inhomogeneous MRF model which is applicable to general shape-from-X problems. Our study is an attempt to revisit Marr's idea [23] of computing the 2(1/2)D sketch from the primal sketch. In a companion paper [2], we study shape from stereo based on a similar two-level generative sketch representation.

  7. Femtosecond (FS) laser vision correction procedure for moderate to high myopia: a prospective study of ReLEx(®) flex and comparison with a retrospective study of FS-laser in situ keratomileusis.

    PubMed

    Vestergaard, Anders; Ivarsen, Anders; Asp, Sven; Hjortdal, Jesper Ø

    2013-06-01

    To present our initial clinical experience with ReLEx(®) flex (ReLEx) for moderate to high myopia. We compare efficacy, safety and corneal higher-order aberrations after ReLEx with femtosecond laser in situ keratomileusis (FS-LASIK). Prospective study of ReLEx compared with a retrospective study of FS-LASIK. ReLEx is a new keratorefractive procedure, where a stromal lenticule is cut by a femtosecond laser and manually extracted. Forty patients were treated with ReLEx on both eyes. A comparable group of 41 FS-LASIK patients were retrospectively identified. Visual acuity, spherical equivalent (SE) and corneal tomography were measured before and 3 months after surgery. Preoperative SE averaged -7.50 ± 1.16 D (ReLEx) and -7.32 ± 1.09 D (FS-LASIK). For all eyes, mean corrected distance visual acuity remained unchanged in both groups. For eyes with emmetropia as target refraction, 41% of ReLEx and 61% of FS-LASIK eyes had an uncorrected distance visual acuity of logMAR ≤ 0.10 at day 1 after surgery, increasing to, respectively, 88% and 69% at 3 months. Mean SE was -0.06 ± 0.35 D 3 months after ReLEx and -0.53 ± 0.60 D after FS-LASIK. The proportion of eyes within ±1.00 D after 3 months was 100% (ReLEx) and 85% (FS-LASIK). For a 6.0-mm pupil, corneal spherical aberrations increased significantly less in ReLEx than FS-LASIK eyes. ReLEx is an all-in-one femtosecond laser refractive procedure, and in this study, results were comparable to FS-LASIK. Refractive predictability and corneal aberrations at 3 months seemed better than or equal to FS-LASIK, whereas visual recovery after ReLEx was slower. © 2012 The Authors. Acta Ophthalmologica © 2012 Acta Ophthalmologica Scandinavica Foundation.

  8. Relationship of Transportation Noise and Annoyance for Two Metropolitan Cities in Korea: Population Based Study

    PubMed Central

    Sung, Joo Hyun; Lee, Jiho; Park, Sang Jin; Sim, Chang Sun

    2016-01-01

    Transportation noise is known to have a negative impact on both public health and quality of life. This study evaluated the relationship between transportation noise and annoyance levels, as well as the difference in annoyance levels between two metropolitan cities, based on epidemiologic surveys. Two thousand adult subjects living in Seoul and Ulsan were enrolled by stratified random sampling on the basis of noise maps from July 2015 to January 2016. Individual annoyance in relation to transportation noise levels in the two cities was surveyed using an 11-point visual analog scale questionnaire. The results show that transportation noise level was significantly correlated with annoyance in both cities. Logistic regression analysis revealed that the risk of being ‘highly annoyed’ increased with noise level (Ldn, day-night average sound level) in both cities. After adjusting for age, residence period, sociodemographic factors (sex, education, marriage, income, alcohol, smoking, and exercise) and noise sensitivity, the risk of being ‘highly annoyed’ still increased with noise levels in both cities. In comparison to areas with noise levels below 55 dBA, the adjusted odds ratios of being ‘highly annoyed’ for areas with 55–65 dBA and over 65 dBA were 2.056 (95% confidence interval [CI] 1.225–3.450) and 3.519 (95% CI 1.982–6.246) in Seoul, and 1.022 (95% CI 0.585–1.785) and 1.704 (95% CI 1.005–2.889) in Ulsan, respectively. Based on the results of a population study, we showed that transportation noise levels were significantly associated with annoyance in adults. However, the character of the transportation noise differed between the two cities: Seoul is exposed to complex noise (road traffic and aircraft), compared with single road traffic noise in Ulsan. Therefore, single and complex transportation noise may have different effects on annoyance levels. PMID:28005976
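    The adjusted odds ratios above come from covariate-adjusted logistic regression. As a rough illustration of where such figures originate, the unadjusted odds ratio and its Woolf-type 95% confidence interval can be computed directly from a 2×2 exposure/annoyance table; the counts below are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with Woolf 95% CI from a 2x2 table:
    a/b = highly annoyed / not annoyed in the exposed (noisy) area;
    c/d = highly annoyed / not annoyed in the reference (<55 dBA) area."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for a >65 dBA area vs. a <55 dBA reference area
or_, lo, hi = odds_ratio_ci(60, 140, 30, 170)
print(f"OR = {or_:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

    In the adjusted analysis actually used in the study, the reported ORs are instead exp(β) for the noise-category coefficients of the fitted logistic model.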

  9. Applications of Java and Vector Graphics to Astrophysical Visualization

    NASA Astrophysics Data System (ADS)

    Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.

    2002-12-01

    We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.

  10. 3D Visualization for Planetary Missions

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  11. 3D PATTERN OF BRAIN ABNORMALITIES IN FRAGILE X SYNDROME VISUALIZED USING TENSOR-BASED MORPHOMETRY

    PubMed Central

    Lee, Agatha D.; Leow, Alex D.; Lu, Allen; Reiss, Allan L.; Hall, Scott; Chiang, Ming-Chang; Toga, Arthur W.; Thompson, Paul M.

    2007-01-01

    Fragile X syndrome (FraX), a genetic neurodevelopmental disorder, results in impaired cognition with particular deficits in executive function and visuo-spatial skills. Here we report the first detailed 3D maps of the effects of the Fragile X mutation on brain structure, using tensor-based morphometry. TBM visualizes structural brain deficits automatically, without time-consuming specification of regions-of-interest. We compared 36 subjects with FraX (age: 14.66 ± 1.58 SD, 18 females/18 males), and 33 age-matched healthy controls (age: 14.67 ± 2.2 SD, 17 females/16 males), using high-dimensional elastic image registration. All 69 subjects' 3D T1-weighted brain MRIs were spatially deformed to match a high-resolution single-subject average MRI scan in ICBM space, whose geometry was optimized to produce a minimal deformation target. Maps of the local Jacobian determinant (expansion factor) were computed from the deformation fields. Statistical maps showed increased caudate (10% higher; p=0.001) and lateral ventricle volumes (19% higher; p=0.003), and trend-level parietal and temporal white matter excesses (10% higher locally; p=0.04). In affected females, volume abnormalities correlated with reduction in systemically measured levels of the fragile X mental retardation protein (FMRP; Spearman's r < −0.5 locally). Decreased FMRP correlated with ventricular expansion (p=0.042; permutation test), and anterior cingulate tissue reductions (p=0.0026; permutation test) supporting theories that FMRP is required for normal dendritic pruning in fronto-striatal-limbic pathways. No sex differences were found; findings were confirmed using traditional volumetric measures in regions of interest. Deficit patterns were replicated using Lie group statistics optimized for tensor-valued data. Investigation of how these anomalies emerge over time will accelerate our understanding of FraX and its treatment. PMID:17161622
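    The "local Jacobian determinant (expansion factor)" maps described above can be sketched with finite differences. This is a generic 2D illustration of the quantity on a synthetic displacement field, not the authors' registration pipeline: for a deformation x + u(x), the Jacobian is J = I + ∇u, and det(J) > 1 marks local expansion, det(J) < 1 local contraction.

```python
import numpy as np

def jacobian_determinant(disp):
    """Local Jacobian determinant of a 2D displacement field disp
    with shape (H, W, 2), where disp[..., 0] is u_x and disp[..., 1]
    is u_y. Derivatives are taken with central finite differences."""
    du_dy, du_dx = np.gradient(disp[..., 0])   # d(u_x)/dy, d(u_x)/dx
    dv_dy, dv_dx = np.gradient(disp[..., 1])   # d(u_y)/dy, d(u_y)/dx
    j11 = 1.0 + du_dx                          # J = I + grad(u)
    j12 = du_dy
    j21 = dv_dx
    j22 = 1.0 + dv_dy
    return j11 * j22 - j12 * j21

# A uniform 10% dilation u(x) = 0.1 * x, so det(J) = 1.1**2 = 1.21 everywhere
H = W = 32
yy, xx = np.mgrid[0:H, 0:W].astype(float)
disp = np.stack([0.1 * xx, 0.1 * yy], axis=-1)
detj = jacobian_determinant(disp)
print(detj.mean())
```

    In TBM this map is computed voxel-wise in 3D from the registration's deformation field and then tested statistically across subjects.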

  12. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
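    The programming pattern described, a purely computational update loop with rendering handled separately, can be sketched as follows. The `Ball` class here is a hypothetical stand-in for a VPython `sphere`; in real VPython the Visual module, running in its own thread, renders the object while a loop like this updates its position.

```python
from dataclasses import dataclass

@dataclass
class Ball:
    # Hypothetical stand-in for a vpython sphere; in real VPython the
    # Visual module redraws the object many times per second on its own.
    pos: tuple
    vel: tuple

def step(ball, dt=0.01, g=-9.8):
    """One semi-implicit Euler step: the kind of graphics-free
    computational update a VPython physics program performs."""
    x, y, z = ball.pos
    vx, vy, vz = ball.vel
    vy += g * dt                      # gravity changes the y velocity
    ball.pos = (x + vx * dt, y + vy * dt, z + vz * dt)
    ball.vel = (vx, vy, vz)

ball = Ball(pos=(0.0, 0.0, 0.0), vel=(1.0, 5.0, 0.0))
for _ in range(100):                  # one second of simulated time;
    step(ball)                        # real VPython would call rate(100) here
print(ball.pos)
```

    In an actual VPython program, `Ball` would be a `sphere` with `vector`-valued attributes and the display would update itself; the physics loop itself is unchanged.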

  13. Sensor-enhanced 3D conformal cueing for safe and reliable HC operation in DVE in all flight phases

    NASA Astrophysics Data System (ADS)

    Münsterer, Thomas; Schafhitzel, Tobias; Strobel, Michael; Völschow, Philipp; Klasen, Stephanus; Eisenkeil, Ferdinand

    2014-06-01

    Low-level helicopter operations in Degraded Visual Environment (DVE) remain a major challenge and carry the risk of potentially fatal accidents. DVE generally encompasses all degradations of the pilot's visual perception, ranging from night conditions via rain and snowfall to fog, and may even include blinding sunlight or unstructured outside scenery. Each of these conditions reduces the pilot's ability to perceive visual cues in the outside world, degrading performance and ultimately increasing the risk of mission failure and accidents such as Controlled Flight Into Terrain (CFIT). The basis of the presented solution is a fusion of processed and classified high-resolution ladar data with database information, with the potential to also include other sensor data such as forward-looking or 360° radar data. This paper reports on a pilot assistance system that aims to give the essential visual cues back to the pilot by displaying 3D conformal cues and symbols in a head-tracked Helmet Mounted Display (HMD), combined with a synthetic view on a head-down Multi-Function Display (MFD). Each flight phase and each flight envelope requires different symbology sets and different options for the pilots to select specific support functions. Several functionalities have been implemented and tested in a simulator as well as in flight. The symbology ranges from obstacle warning symbology via terrain enhancements through grids or ridge lines to different waypoint symbols supporting navigation. While some adaptations can be automated, it emerged as essential that symbology characteristics and completeness can be selected by the pilot to match the relevant flight envelope and outside visual conditions.

  14. Virtual reality and 3D visualizations in heart surgery education.

    PubMed

    Friedl, Reinhard; Preisack, Melitta B; Klas, Wolfgang; Rose, Thomas; Stracke, Sylvia; Quast, Klaus J; Hannekum, Andreas; Gödje, Oliver

    2002-01-01

    Computer assisted teaching plays an increasing role in surgical education. The presented paper describes the development of virtual reality (VR) and 3D visualizations for educational purposes concerning aortocoronary bypass grafting and their prototypical implementation into a database-driven and internet-based educational system in heart surgery. A multimedia storyboard has been written and digital video has been encoded. Understanding of these videos was not always satisfactory; therefore, additional 3D and VR visualizations have been modelled as VRML, QuickTime, QuickTime Virtual Reality and MPEG-1 applications. An authoring process in terms of integration and orchestration of different multimedia components into educational units has been started. A virtual model of the heart has been designed. It is highly interactive, and the user is able to rotate it, move it, zoom in for details or even fly through. It can be explored during the cardiac cycle, and a transparency mode demonstrates coronary arteries, movement of the heart valves, and simultaneous blood-flow. Myocardial ischemia and the effect of an IMA-Graft on myocardial perfusion are simulated. Coronary artery stenoses and bypass-grafts can be interactively added. 3D models of anastomotic techniques and closed thrombendarterectomy have been developed. Different visualizations have been prototypically implemented into a teaching application about operative techniques. Interactive virtual reality and 3D teaching applications can be used and distributed via the World Wide Web and have the power to describe surgical anatomy and principles of surgical techniques, where temporal and spatial events play an important role, in a way superior to traditional teaching methods.

  15. Myopic Maculopathy and Optic Disc Changes in Highly Myopic Young Asian Eyes and Impact on Visual Acuity.

    PubMed

    Koh, Victor; Tan, Colin; Tan, Pei Ting; Tan, Marcus; Balla, Vinay; Nah, Gerard; Cheng, Ching-Yu; Ohno-Matsui, Kyoko; Tan, Mellisa M H; Yang, Adeline; Zhao, Paul; Wong, Tien Yin; Saw, Seang-Mei

    2016-04-01

    To determine the prevalence and risk factors of myopic maculopathy and specific optic disc and macular changes in highly myopic eyes of young Asian adults and their impact on visual acuity. Prospective cross-sectional study. In total, 593 highly myopic (spherical equivalent refraction [SER] less than -6.00 diopters [D]) and 156 emmetropic (SER between -1.00 and +1.00 D) male participants from a population-based survey were included. All participants underwent standardized medical interviews, ophthalmic examination, and color fundus photographs. These photographs were graded systematically to determine the presence of optic disc and macular lesions. Myopic maculopathy was classified based on the International Classification of Myopic Maculopathy. The mean age was 21.1 ± 1.2 years. The mean SER for the highly myopic and emmetropic group was -8.87 ± 2.11 D and 0.40 ± 0.39 D, respectively (P < .001). Compared to emmetropic eyes, highly myopic eyes were significantly more likely to have optic disc tilt, peripapillary atrophy (PPA), posterior staphyloma, chorioretinal atrophy, and myopic maculopathy (all P < .001). The main findings included PPA (98.3%), disc tilt (22.0%), posterior staphyloma (32.0%), and chorioretinal atrophy (8.3%). Myopic maculopathy was present in 8.3% of highly myopic eyes and was associated with older age (odds ratio [OR] 1.66; 95% CI: 1.22, 2.26), reduced choroidal thickness (OR 0.99; 95% CI: 0.98, 0.99), and increased axial length (AL) (OR 1.52; 95% CI: 1.06, 2.19). The presence of disc tilt, posterior staphyloma, and chorioretinal atrophy were associated with reduced visual acuity. Our study showed that myopia-related changes of the optic disc and macula were common in highly myopic eyes even at a young age. The risk factors for myopic maculopathy include increased age, longer AL, and reduced choroidal thickness. Some of these changes were associated with reduced central visual function. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  17. Figure-ground organization and the emergence of proto-objects in the visual cortex.

    PubMed

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a 'figure' relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations ('proto-objects'). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex.

  18. Figure–ground organization and the emergence of proto-objects in the visual cortex

    PubMed Central

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields (CRF), but in addition their responses are modulated (enhanced or suppressed) depending on the location of a ‘figure’ relative to the edge in their receptive field. Each neuron has a fixed preference for location on one side or the other. This selectivity is derived from the image context far beyond the CRF. This paper reviews evidence indicating that border ownership selectivity reflects the formation of early object representations (‘proto-objects’). The evidence includes experiments showing (1) reversal of border ownership signals with change of perceived object structure, (2) border ownership specific enhancement of responses in object-based selective attention, (3) persistence of border ownership signals in accordance with continuity of object perception, and (4) remapping of border ownership signals across saccades and object movements. Findings 1 and 2 can be explained by hypothetical grouping circuits that sum contour feature signals in search of objectness, and, via recurrent projections, enhance the corresponding low-level feature signals. Findings 3 and 4 might be explained by assuming that the activity of grouping circuits persists and can be remapped. Grouping, persistence, and remapping are fundamental operations of vision. Finding these operations manifest in low-level visual areas challenges traditional views of visual processing. New computational models need to be developed for a comprehensive understanding of the function of the visual cortex. PMID:26579062

  19. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure, due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm that utilizes IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing natural cubic spline interpolation to account for the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated that the shape-based nonlinear interpolation algorithm was more robust in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
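    The natural cubic spline interpolation used to generate intermediary slices can be sketched per pixel along the pullback axis. This is a generic textbook implementation assuming unit spacing between slices, with hypothetical intensity values, not the authors' code: the natural boundary conditions set the spline's second derivative to zero at the first and last slice.

```python
def natural_cubic_spline(ys):
    """Second derivatives M[i] of the natural cubic spline through
    (i, ys[i]) with unit spacing, via the Thomas tridiagonal solver."""
    n = len(ys) - 1
    a = [0.0] * (n + 1)   # sub-diagonal
    b = [1.0] * (n + 1)   # diagonal (rows 0 and n enforce M = 0)
    c = [0.0] * (n + 1)   # super-diagonal
    d = [0.0] * (n + 1)   # right-hand side
    for i in range(1, n):
        a[i], b[i], c[i] = 1.0, 4.0, 1.0
        d[i] = 6.0 * (ys[i - 1] - 2.0 * ys[i] + ys[i + 1])
    for i in range(1, n + 1):          # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    M = [0.0] * (n + 1)                # back substitution
    M[n] = d[n] / b[n]
    for i in range(n - 1, -1, -1):
        M[i] = (d[i] - c[i] * M[i + 1]) / b[i]
    return M

def eval_spline(ys, M, x):
    """Evaluate the spline at fractional slice position x."""
    i = min(int(x), len(ys) - 2)
    t = x - i
    return (M[i] * (1 - t) ** 3 + M[i + 1] * t ** 3) / 6.0 \
        + (ys[i] - M[i] / 6.0) * (1 - t) + (ys[i + 1] - M[i + 1] / 6.0) * t

# One pixel's intensity across five consecutive IVUS frames (hypothetical)
frames = [10.0, 14.0, 13.0, 9.0, 11.0]
M = natural_cubic_spline(frames)
mid = eval_spline(frames, M, 1.5)   # intermediary slice between frames 1 and 2
```

    Repeating this for every pixel (or polar sample) of the cross-sections yields the intermediary slices; the pixel-based alternative would simply blend the two neighboring frames linearly.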

  20. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair.

    PubMed

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y K

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed.
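    The convolution-based diffusion mentioned above can be illustrated with a minimal explicit stencil. The kernel size, diffusion rate, and 2D grid here are assumptions for illustration; the actual framework applies such convolutions over 3D chemical fields on the GPU.

```python
import numpy as np

def diffuse(field, rate=0.2):
    """One explicit diffusion step over a 2D field, written as a 3x3
    convolution. With replicate ('edge') padding the stencil has
    zero-flux boundaries, so total mass is conserved."""
    k = rate / 4.0
    kernel = np.array([[0.0, k,         0.0],
                       [k,   1 - 4 * k, k],
                       [0.0, k,         0.0]])
    H, W = field.shape
    padded = np.pad(field, 1, mode='edge')
    out = np.zeros_like(field)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += kernel[dy + 1, dx + 1] * padded[1 + dy:1 + dy + H,
                                                   1 + dx:1 + dx + W]
    return out

# A point source of signaling chemical spreads outward one step at a time
field = np.zeros((5, 5))
field[2, 2] = 1.0
field = diffuse(field)
```

    Expressing the diffusion update as a convolution is what makes it cheap to batch and parallelize, since the same small kernel is applied uniformly at every grid point.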

  1. High-Performance Agent-Based Modeling Applied to Vocal Fold Inflammation and Repair

    PubMed Central

    Seekhao, Nuttiiya; Shung, Caroline; JaJa, Joseph; Mongeau, Luc; Li-Jessen, Nicole Y. K.

    2018-01-01

    Fast and accurate computational biology models offer the prospect of accelerating the development of personalized medicine. A tool capable of estimating treatment success can help prevent unnecessary and costly treatments and potential harmful side effects. A novel high-performance Agent-Based Model (ABM) was adopted to simulate and visualize multi-scale complex biological processes arising in vocal fold inflammation and repair. The computational scheme was designed to organize the 3D ABM sub-tasks to fully utilize the resources available on current heterogeneous platforms consisting of multi-core CPUs and many-core GPUs. Subtasks are further parallelized and convolution-based diffusion is used to enhance the performance of the ABM simulation. The scheme was implemented using a client-server protocol allowing the results of each iteration to be analyzed and visualized on the server (i.e., in-situ) while the simulation is running on the same server. The resulting simulation and visualization software enables users to interact with and steer the course of the simulation in real-time as needed. This high-resolution 3D ABM framework was used for a case study of surgical vocal fold injury and repair. The new framework is capable of completing the simulation, visualization and remote result delivery in under 7 s per iteration, where each iteration of the simulation represents 30 min in the real world. The case study model was simulated at the physiological scale of a human vocal fold. This simulation tracks 17 million biological cells as well as a total of 1.7 billion signaling chemical and structural protein data points. The visualization component processes and renders all simulated biological cells and 154 million signaling chemical data points. The proposed high-performance 3D ABM was verified through comparisons with empirical vocal fold data. Representative trends of biomarker predictions in surgically injured vocal folds were observed. PMID:29706894

  2. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Intuitive presentation of clinical forensic data using anonymous and person-specific 3D reference manikins.

    PubMed

    Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva

    2014-08-01

    The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  4. Comparison of Toric Foldable Iris-Fixated Phakic Intraocular Lens Implantation and Limbal Relaxing Incisions for Moderate-to-High Myopic Astigmatism

    PubMed Central

    Lee, Jeihoon; Lee, Hun; Kang, David Sung Yong; Choi, Jin Young; Kim, Eung Kweon

    2016-01-01

    Purpose To compare the effectiveness of toric foldable iris-fixated phakic intraocular lens (pIOL) implantation and non-toric foldable iris-fixated pIOL implantation with limbal relaxing incisions (LRIs) for correcting moderate-to-high astigmatism in myopic eyes. Materials and Methods The medical records of 146 patients (195 eyes) with myopic astigmatism who underwent toric foldable iris-fixated pIOL implantation (toric group; 94 eyes) or non-toric foldable iris-fixated pIOL implantation with concurrent LRIs (LRI group; 101 eyes) were retrospectively reviewed. For subgroup analysis, the two groups were subdivided according to preoperative astigmatic severity [moderate, 2.00 to <3.00 diopters (D); high, 3.00–4.00 D]. Visual and astigmatic outcomes were compared 6 months postoperatively. Results The uncorrected distance visual acuity was at least 20/25 in 100% and 98% of the toric and LRI group eyes, respectively. The toric group had lower mean residual cylindrical error (-0.67±0.39 D vs. -1.14±0.56 D; p<0.001) and greater mean cylindrical error change (2.17±0.56 D vs. 1.63±0.72 D; p<0.001) than the LRI group, regardless of the preoperative astigmatic severity. The mean correction index (1.10±0.16 vs. 0.72±0.24; p<0.001) and success index (0.24±0.14 vs. 0.42±0.21; p<0.001) also differed significantly between the groups. Conclusion Both surgical techniques considerably reduced astigmatism and had comparable visual outcomes. However, toric foldable iris-fixated pIOL implantation was more reliable for correcting moderate-to-high astigmatism in myopic eyes. PMID:27593877

  5. Comparison of Toric Foldable Iris-Fixated Phakic Intraocular Lens Implantation and Limbal Relaxing Incisions for Moderate-to-High Myopic Astigmatism.

    PubMed

    Lee, Jeihoon; Lee, Hun; Kang, David Sung Yong; Choi, Jin Young; Kim, Eung Kweon; Kim, Tae Im

    2016-11-01

    To compare the effectiveness of toric foldable iris-fixated phakic intraocular lens (pIOL) implantation and non-toric foldable iris-fixated pIOL implantation with limbal relaxing incisions (LRIs) for correcting moderate-to-high astigmatism in myopic eyes. The medical records of 146 patients (195 eyes) with myopic astigmatism who underwent toric foldable iris-fixated pIOL implantation (toric group; 94 eyes) or non-toric foldable iris-fixated pIOL implantation with concurrent LRIs (LRI group; 101 eyes) were retrospectively reviewed. For subgroup analysis, the two groups were subdivided according to preoperative astigmatic severity [moderate, 2.00 to <3.00 diopters (D); high, 3.00-4.00 D]. Visual and astigmatic outcomes were compared 6 months postoperatively. The uncorrected distance visual acuity was at least 20/25 in 100% and 98% of the toric and LRI group eyes, respectively. The toric group had lower mean residual cylindrical error (-0.67±0.39 D vs. -1.14±0.56 D; p<0.001) and greater mean cylindrical error change (2.17±0.56 D vs. 1.63±0.72 D; p<0.001) than the LRI group, regardless of the preoperative astigmatic severity. The mean correction index (1.10±0.16 vs. 0.72±0.24; p<0.001) and success index (0.24±0.14 vs. 0.42±0.21; p<0.001) also differed significantly between the groups. Both surgical techniques considerably reduced astigmatism and had comparable visual outcomes. However, toric foldable iris-fixated pIOL implantation was more reliable for correcting moderate-to-high astigmatism in myopic eyes.

  6. Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets.

    PubMed

    Mahr, Angela; Wentura, Dirk

    2014-02-01

    Findings from three experiments support the conclusion that auditory primes facilitate the processing of related targets. In Experiments 1 and 2, we employed a crossmodal Stroop color identification task with auditory color words (as primes) and visual color patches (as targets). Responses were faster for congruent priming, in comparison to neutral or incongruent priming. This effect also emerged for different levels of time compression of the auditory primes (to 30 % and 10 % of the original length; i.e., 120 and 40 ms) and turned out to be even more pronounced under high-perceptual-load conditions (Exps. 1 and 2). In Experiment 3, target-present or -absent decisions for brief target displays had to be made, thereby ruling out response-priming processes as a cause of the congruency effects. Nevertheless, target detection (d') was increased by congruent primes (30 % compression) in comparison to incongruent or neutral primes. Our results suggest semantic object-based auditory-visual interactions, which rapidly increase the denoted target object's salience. This would apply, in particular, to complex visual scenes.
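The d' in Experiment 3 is presumably the standard signal-detection sensitivity index, the z-transform of the hit rate minus the z-transform of the false-alarm rate. A sketch using only the Python standard library (the rates below are invented for illustration):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d': z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates: congruent primes raise hits relative to noise
print(round(d_prime(0.85, 0.15), 2))  # 2.07
print(round(d_prime(0.75, 0.25), 2))  # 1.35 (lower sensitivity)
```

A congruency effect of the kind reported would appear as a higher d' for congruently primed displays than for incongruent or neutral ones, with response bias factored out.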

  7. Spatial 3D infrastructure: display-independent software framework, high-speed rendering electronics, and several new displays

    NASA Astrophysics Data System (ADS)

    Chun, Won-Suk; Napoli, Joshua; Cossairt, Oliver S.; Dorval, Rick K.; Hall, Deirdre M.; Purtell, Thomas J., II; Schooler, James F.; Banker, Yigal; Favalora, Gregg E.

    2005-03-01

    We present a software and hardware foundation to enable the rapid adoption of 3-D displays. Different 3-D displays - such as multiplanar, multiview, and electroholographic displays - naturally require different rendering methods. The adoption of these displays in the marketplace will be accelerated by a common software framework. The authors designed the SpatialGL API, a new rendering framework that unifies these display methods under one interface. SpatialGL enables complementary visualization assets to coexist through a uniform infrastructure. Also, SpatialGL supports legacy interfaces such as the OpenGL API. The authors' first implementation of SpatialGL uses multiview and multislice rendering algorithms to exploit the performance of modern graphics processing units (GPUs) to enable real-time visualization of 3-D graphics from medical imaging, oil & gas exploration, and homeland security. At the time of writing, SpatialGL runs on COTS workstations (both Windows and Linux) and on Actuality's high-performance embedded computational engine that couples an NVIDIA GeForce 6800 Ultra GPU, an AMD Athlon 64 processor, and a proprietary, high-speed, programmable volumetric frame buffer that interfaces to a 1024 x 768 x 3 digital projector. Progress is illustrated using an off-the-shelf multiview display, Actuality's multiplanar Perspecta Spatial 3D System, and an experimental multiview display. The experimental display is a quasi-holographic view-sequential system that generates aerial imagery measuring 30 mm x 25 mm x 25 mm, providing 198 horizontal views.
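A display-independent framework of this kind amounts to a strategy pattern: one rendering interface, with each display class supplying its own method behind it. The sketch below is entirely hypothetical (all class and method names are invented; SpatialGL's real API is not public in this abstract) and only illustrates the architectural idea:

```python
from abc import ABC, abstractmethod

class SpatialRenderer(ABC):
    """Hypothetical display-independent rendering interface in the
    spirit of SpatialGL: one API, per-display rendering strategies."""

    @abstractmethod
    def render(self, scene):
        """Return the display-specific representation of a scene."""

class MultiplanarRenderer(SpatialRenderer):
    def render(self, scene):
        # Slice the scene into depth planes (as a volumetric
        # display such as Perspecta would require).
        return [f"slice {i} of {scene}" for i in range(3)]

class MultiviewRenderer(SpatialRenderer):
    def render(self, scene):
        # Render the scene from several horizontal viewpoints.
        return [f"view {i} of {scene}" for i in range(4)]

def display(renderer: SpatialRenderer, scene: str):
    # Application code talks only to the common interface.
    return renderer.render(scene)

print(len(display(MultiplanarRenderer(), "cube")))  # 3
print(len(display(MultiviewRenderer(), "cube")))    # 4
```

The point of the pattern is the last function: application and visualization code never branches on display type, which is what lets new display hardware be added without touching existing assets.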

  8. Decoding the cortical transformations for visually guided reaching in 3D space.

    PubMed

    Blohm, Gunnar; Keith, Gerald P; Crawford, J Douglas

    2009-06-01

    To explore the possible cortical mechanisms underlying the 3-dimensional (3D) visuomotor transformation for reaching, we trained a 4-layer feed-forward artificial neural network to compute a reach vector (output) from the visual positions of both the hand and target viewed from different eye and head orientations (inputs). The emergent properties of the intermediate layers reflected several known neurophysiological findings, for example, gain field-like modulations and position-dependent shifting of receptive fields (RFs). We performed a reference frame analysis for each individual network unit, simulating standard electrophysiological experiments, that is, RF mapping (unit input), motor field mapping, and microstimulation effects (unit outputs). At the level of individual units (in both intermediate layers), the 3 different electrophysiological approaches identified different reference frames, demonstrating that these techniques reveal different neuronal properties and suggesting that a comparison across these techniques is required to understand the neural code of physiological networks. This analysis showed fixed input-output relationships within each layer and, more importantly, within each unit. These local reference frame transformation modules provide the basic elements for the global transformation; their parallel contributions are combined in a gain field-like fashion at the population level to implement both the linear and nonlinear elements of the 3D visuomotor transformation.

  9. A conjugated microporous polymer based visual sensing platform for aminoglycoside antibiotics in water.

    PubMed

    Bhunia, Subhajit; Dey, Nilanjan; Pradhan, Anirban; Bhattacharya, Santanu

    2018-06-20

    A donor-acceptor based conjugated microporous polymer, PER@NiP-CMOP-1, has been synthesized which can achieve highly sensitive stereo-specific "Turn ON" biosensing of an aminoglycoside up to the ppb level. The coordination-driven inhibition of photo-induced electron transfer (d-PET) for d-electrons and the rotational freezing are the key factors for the recovery of the emission.

  10. User Centered, Application Independent Visualization of National Airspace Data

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Hinton, Susan E.

    2011-01-01

    This paper describes an application independent software tool, IV4D, built to visualize animated and still 3D National Airspace System (NAS) data specifically for aeronautics engineers who research aggregate, as well as single, flight efficiencies and behavior. IV4D was originally developed in a joint effort between the National Aeronautics and Space Administration (NASA) and the Air Force Research Laboratory (AFRL) to support the visualization of air traffic data from the Airspace Concept Evaluation System (ACES) simulation program. The three main challenges tackled by IV4D developers were: 1) determining how to distill multiple NASA data formats into a few minimal dataset types; 2) creating an environment, consisting of a user interface, heuristic algorithms, and retained metadata, that facilitates easy setup and fast visualization; and 3) maximizing the user's ability to utilize the extended range of visualization available with AFRL's existing 3D technologies. IV4D is currently being used by air traffic management researchers at NASA's Ames and Langley Research Centers to support data visualizations.

  11. Dramatic Contrasts in Arctic vs Antarctic Sea Ice Trends in 3-D Visualizations and Compilations of Monthly Record Highs and Lows

    NASA Technical Reports Server (NTRS)

    Parkinson, Claire L.; DiGirolamo, Nicolo E.

    2016-01-01

    New visualizations dramatically display the decreases in Arctic sea ice coverage over the years 1979-2015, apparent in each month of the year, with not a single record high in ice extents occurring in any month since 1986, a time period with 75 monthly record lows. Results are less dramatic in the Antarctic, but intriguingly in the opposite direction, with only 6 record lows since 1986 and 45 record highs.
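Counting monthly record highs and lows of the kind tallied above reduces to tracking running extrema over a per-calendar-month series of yearly extents. A minimal sketch with toy numbers (not real sea ice data):

```python
def count_records(extents_by_year):
    """Count record highs and lows in a yearly series for one
    calendar month: a record is set whenever a value exceeds
    (or falls below) every earlier value in the series."""
    highs = lows = 0
    running_max = running_min = extents_by_year[0]
    for extent in extents_by_year[1:]:
        if extent > running_max:
            highs += 1
            running_max = extent
        elif extent < running_min:
            lows += 1
            running_min = extent
    return highs, lows

# Toy declining series: sets only record lows, as in the Arctic case
print(count_records([15.2, 15.0, 14.8, 15.1, 14.5]))  # (0, 3)
```

Applied month by month since a chosen baseline year, this yields exactly the kind of tally reported: 75 monthly record lows and no record highs for the Arctic since 1986, versus 45 highs and 6 lows for the Antarctic.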

  12. 3D Visualization of Developmental Toxicity of 2,4,6-Trinitrotoluene in Zebrafish Embryogenesis Using Light-Sheet Microscopy

    PubMed Central

    Eum, Juneyong; Kwak, Jina; Kim, Hee Joung; Ki, Seoyoung; Lee, Kooyeon; Raslan, Ahmed A.; Park, Ok Kyu; Chowdhury, Md Ashraf Uddin; Her, Song; Kee, Yun; Kwon, Seung-Hae; Hwang, Byung Joon

    2016-01-01

    Environmental contamination by trinitrotoluene is of global concern due to its widespread use in military ordnance and commercial explosives. Despite known long-term persistence in groundwater and soil, the toxicological profile of trinitrotoluene and other explosive wastes has not been systematically measured using in vivo biological assays. Zebrafish embryos are ideal model vertebrates for high-throughput toxicity screening and live in vivo imaging due to their small size and transparency during embryogenesis. Here, we used Single Plane Illumination Microscopy (SPIM)/light sheet microscopy to assess the developmental toxicity of explosive-contaminated water in zebrafish embryos and report 2,4,6-trinitrotoluene-associated developmental abnormalities, including defects in heart formation and circulation, in 3D. Levels of apoptotic cell death were higher in the actively developing tissues of trinitrotoluene-treated embryos than controls. Live 3D imaging of heart tube development at cellular resolution by light-sheet microscopy revealed trinitrotoluene-associated cardiac toxicity, including hypoplastic heart chamber formation and cardiac looping defects, while quantitative real-time PCR (polymerase chain reaction) measured molecular changes in heart and blood development that corroborated the developmental defects at the molecular level. Identification of cellular toxicity in zebrafish using this state-of-the-art 3D imaging system could form the basis of a sensitive biosensor for environmental contaminants, and gains further value when combined with molecular analysis. PMID:27869673

  13. Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.

    PubMed

    Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R

    2018-07-01

    We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6 degrees of freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2. However, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. 
Although the real-time visualization group performed significantly better than the delayed visualization group on trial 1 (P = .01), the gain in score between the first and final trials did not depend on group (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time and delayed visualization groups on the last trial, after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial between the group that received real-time visualization and the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.

  14. Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.

    PubMed

    Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N

    2016-01-01

    Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching anatomy includes use of computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and magnified according to need. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view images in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and appreciate spatial relationships among the blood vessels, the skull and the skin. © 2015 American Association of Anatomists.

  15. DataHigh: Graphical user interface for visualizing and interacting with high-dimensional neural activity

    PubMed Central

    Cowley, Benjamin R.; Kaufman, Matthew T.; Churchland, Mark M.; Ryu, Stephen I.; Shenoy, Krishna V.; Yu, Byron M.

    2013-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity. PMID:23366954
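The GUI's core operation, sweeping through 2D projections of a higher-dimensional latent space, can be sketched with NumPy. This is a generic random orthonormal projection, not DataHigh's actual projection-selection scheme:

```python
import numpy as np

def random_2d_projection(data, seed=None):
    """Project n-dimensional points onto a random orthonormal 2D
    plane, as a viewer like DataHigh does when the user sweeps
    through candidate views of latent neural trajectories."""
    rng = np.random.default_rng(seed)
    n_dims = data.shape[1]
    # Orthonormalize two random direction vectors via QR so the
    # projection plane does not distort distances within it.
    q, _ = np.linalg.qr(rng.standard_normal((n_dims, 2)))
    return data @ q  # shape (n_points, 2)

# 100 points in an 8-dimensional latent space (synthetic data)
latents = np.random.default_rng(0).standard_normal((100, 8))
xy = random_2d_projection(latents, seed=1)
print(xy.shape)  # (100, 2)
```

Redrawing the plane with a new seed (or, as in the GUI, rotating it continuously) gives the continuum of 2D views that guards against the misleading impression any single projection can create.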

  16. DataHigh: graphical user interface for visualizing and interacting with high-dimensional neural activity.

    PubMed

    Cowley, Benjamin R; Kaufman, Matthew T; Churchland, Mark M; Ryu, Stephen I; Shenoy, Krishna V; Yu, Byron M

    2012-01-01

    The activity of tens to hundreds of neurons can be succinctly summarized by a smaller number of latent variables extracted using dimensionality reduction methods. These latent variables define a reduced-dimensional space in which we can study how population activity varies over time, across trials, and across experimental conditions. Ideally, we would like to visualize the population activity directly in the reduced-dimensional space, whose optimal dimensionality (as determined from the data) is typically greater than 3. However, direct plotting can only provide a 2D or 3D view. To address this limitation, we developed a Matlab graphical user interface (GUI) that allows the user to quickly navigate through a continuum of different 2D projections of the reduced-dimensional space. To demonstrate the utility and versatility of this GUI, we applied it to visualize population activity recorded in premotor and motor cortices during reaching tasks. Examples include single-trial population activity recorded using a multi-electrode array, as well as trial-averaged population activity recorded sequentially using single electrodes. Because any single 2D projection may provide a misleading impression of the data, being able to see a large number of 2D projections is critical for intuition- and hypothesis-building during exploratory data analysis. The GUI includes a suite of additional interactive tools, including playing out population activity timecourses as a movie and displaying summary statistics, such as covariance ellipses and average timecourses. The use of visualization tools like the GUI developed here, in tandem with dimensionality reduction methods, has the potential to further our understanding of neural population activity.

  17. Bag-of-visual-phrases and hierarchical deep models for traffic sign detection and recognition in mobile laser scanning data

    NASA Astrophysics Data System (ADS)

    Yu, Yongtao; Li, Jonathan; Wen, Chenglu; Guan, Haiyan; Luo, Huan; Wang, Cheng

    2016-03-01

    This paper presents a novel algorithm for detection and recognition of traffic signs in mobile laser scanning (MLS) data for intelligent transportation-related applications. The traffic sign detection task is accomplished based on 3-D point clouds by using bag-of-visual-phrases representations; whereas the recognition task is achieved based on 2-D images by using a Gaussian-Bernoulli deep Boltzmann machine-based hierarchical classifier. To exploit high-order feature encodings of feature regions, a deep Boltzmann machine-based feature encoder is constructed. For detecting traffic signs in 3-D point clouds, the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.956, 0.946, 0.907, and 0.951, respectively, on the four selected MLS datasets. For on-image traffic sign recognition, a recognition accuracy of 97.54% is achieved by using the proposed hierarchical classifier. Comparative studies with the existing traffic sign detection and recognition methods demonstrate that our algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.
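The reported averages are mutually consistent under the usual definitions of these detection metrics (an assumption, since the paper may define them differently): with precision p and recall r, F = 2pr/(p + r) and quality = TP/(TP + FP + FN), which can be rewritten in terms of p and r alone:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def quality(precision, recall):
    """Quality = TP / (TP + FP + FN), expressed via p and r."""
    return 1.0 / (1.0 / precision + 1.0 / recall - 1.0)

# Plugging in the paper's averages reproduces its other two figures:
print(round(f_score(0.946, 0.956), 3))   # 0.951
print(round(quality(0.946, 0.956), 3))   # 0.907
```

Quality is the stricter of the two, since it penalizes false positives and false negatives jointly rather than averaging them.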

  18. EEG-based usability assessment of 3D shutter glasses

    NASA Astrophysics Data System (ADS)

    Wenzel, Markus A.; Schultze-Kraft, Rafael; Meinecke, Frank C.; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Objective. Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the ‘neural flicker’ vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Approach. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Main Result. Effects of the shutter glasses were traced in the EEG up to around 67 Hz—about 20 Hz over the flicker perception threshold—and vanished at the subsequent frequency level of 77 Hz. Significance. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Potential impact. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  19. EEG-based usability assessment of 3D shutter glasses.

    PubMed

    Wenzel, Markus A; Schultze-Kraft, Rafael; Meinecke, Frank C; Cardinaux, Fabien; Kemp, Thomas; Müller, Klaus-Robert; Curio, Gabriel; Blankertz, Benjamin

    2016-02-01

    Neurotechnology can contribute to the usability assessment of products by providing objective measures of neural workload and can uncover usability impediments that are not consciously perceived by test persons. In this study, the neural processing effort imposed on the viewer of 3D television by shutter glasses was quantified as a function of shutter frequency. In particular, we sought to determine the critical shutter frequency at which the 'neural flicker' vanishes, such that visual fatigue due to this additional neural effort can be prevented by increasing the frequency of the system. Twenty-three participants viewed an image through 3D shutter glasses, while multichannel electroencephalogram (EEG) was recorded. In total ten shutter frequencies were employed, selected individually for each participant to cover the range below, at and above the threshold of flicker perception. The source of the neural flicker correlate was extracted using independent component analysis and the flicker impact on the visual cortex was quantified by decoding the state of the shutter from the EEG. Effects of the shutter glasses were traced in the EEG up to around 67 Hz (about 20 Hz over the flicker perception threshold) and vanished at the subsequent frequency level of 77 Hz. The impact of the shutter glasses on the visual cortex can be detected by neurotechnology even when a flicker is not reported by the participants. Increasing the shutter frequency from the usual 50 Hz or 60 Hz to 77 Hz reduces the risk of visual fatigue and thus improves shutter-glass-based 3D usability.

  20. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
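The degree of (linear) polarization discussed above is conventionally computed from Stokes parameters, which can be estimated from intensities measured through a linear polarizer at three angles; noise in those intensities propagates directly into the result, which is what motivates denoising. A standard-formula sketch (the paper's exact acquisition pipeline may differ):

```python
import math

def degree_of_linear_polarization(i0, i45, i90):
    """DoLP from intensities behind a linear polarizer at
    0, 45 and 90 degrees, via the linear Stokes parameters."""
    s0 = i0 + i90          # total intensity
    s1 = i0 - i90          # horizontal vs. vertical preference
    s2 = 2 * i45 - s0      # +45 vs. -45 degree preference
    return math.hypot(s1, s2) / s0

# Fully polarized horizontal light (Malus's law gives i45 = 0.5):
print(degree_of_linear_polarization(1.0, 0.5, 0.0))  # 1.0
# Unpolarized light, equal intensity at every angle:
print(degree_of_linear_polarization(1.0, 1.0, 1.0))  # 0.0
```

Because DoLP is a nonlinear combination of intensities, denoising the intensity channels first (e.g. with PBM3D) yields a more accurate DoLP than denoising the DoLP image itself.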

  1. Realistic tissue visualization using photoacoustic image

    NASA Astrophysics Data System (ADS)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology for understanding life, biomedical imaging has the unique advantage of providing highly intuitive information, and this advantage can be greatly amplified by choosing an appropriate visualization method. The choice is more complicated for volumetric data. Volume data has the advantage of containing 3D spatial information, but the data itself cannot be displayed directly: because images are always shown in 2D space, visualization is the key step that creates the real value of volume data. However, visualizing 3D data requires complicated algorithms and a high computational budget, so specialized algorithms and computing optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is close to its real color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue from the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining realistic, tissue-like texture from the photoacoustic data: rays reflected at the surface were visualized in white, and color returned from deep tissue was visualized in red, like skin tissue. We also implemented the CUDA algorithm in an OpenGL environment for real-time interactive imaging.
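A depth-dependent color transfer function of the kind described can be illustrated with a toy linear blend from white (surface reflection) toward a red, skin-like hue (deep tissue). The constants and the blend itself are invented for illustration and are not the paper's transfer function:

```python
def depth_color(depth_mm, max_depth_mm=10.0):
    """Toy depth-dependent transfer function: shallow voxels render
    white, deeper voxels shift toward red. Returns (r, g, b) in [0, 1]."""
    t = min(max(depth_mm / max_depth_mm, 0.0), 1.0)  # clamp to [0, 1]
    white, deep_red = (1.0, 1.0, 1.0), (0.8, 0.2, 0.2)
    return tuple(w * (1 - t) + d * t for w, d in zip(white, deep_red))

print(depth_color(0.0))   # (1.0, 1.0, 1.0): white at the surface
print(depth_color(10.0))  # (0.8, 0.2, 0.2): red at depth
```

In a ray caster, a function like this would be evaluated per sample along each ray, using the sample's distance below the detected skin surface as `depth_mm`.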

  2. Real-time classification of vehicles by type within infrared imagery

    NASA Astrophysics Data System (ADS)

    Kundegorski, Mikolaj E.; Akçay, Samet; Payen de La Garanderie, Grégoire; Breckon, Toby P.

    2016-10-01

    Real-time classification of vehicles into sub-category types poses a significant challenge within infra-red imagery due to the high levels of intra-class variation in thermal vehicle signatures caused by aspects of design, current operating duration and ambient thermal conditions. Despite these challenges, infra-red sensing offers significant generalized target object detection advantages in terms of all-weather operation and invariance to visual camouflage techniques. This work investigates the accuracy of a number of real-time object classification approaches for this task within the wider context of an existing initial object detection and tracking framework. Specifically we evaluate the use of traditional feature-driven bag of visual words and histogram of oriented gradient classification approaches against modern convolutional neural network architectures. Furthermore, we use classical photogrammetry, within the context of current target detection and classification techniques, as a means of approximating 3D target position within the scene based on this vehicle type classification. Based on photogrammetric estimation of target position, we then illustrate the use of regular Kalman filter based tracking operating on actual 3D vehicle trajectories. Results are presented using a conventional thermal-band infra-red (IR) sensor arrangement where targets are tracked over a range of evaluation scenarios.
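The "regular Kalman filter based tracking" on 3D trajectories can be sketched for a single coordinate with a textbook constant-velocity model. This is a generic filter, not the authors' implementation; the noise parameters are arbitrary:

```python
import numpy as np

def kalman_cv_step(x, P, z, dt=1.0, q=1e-2, r=1.0):
    """One predict/update step of a constant-velocity Kalman filter
    on one coordinate of a 3D vehicle trajectory.
    State x = [position, velocity]; z is a noisy position fix."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the measurement z
    y = z - (H @ x)[0]                      # innovation
    S = (H @ P @ H.T)[0, 0] + r             # innovation variance
    K = (P @ H.T)[:, 0] / S                 # Kalman gain
    x = x + K * y
    P = P - np.outer(K, H @ P)              # (I - K H) P
    return x, P

# Feed noisy positions moving at roughly unit velocity:
x, P = np.zeros(2), np.eye(2) * 10.0        # vague initial state
for z in [1.0, 2.1, 2.9, 4.2]:
    x, P = kalman_cv_step(x, P, z)
print(x[1] > 0.5)  # velocity estimate has converged toward ~1
```

In the paper's setting, one such filter per spatial axis (or a single 6-state filter) would smooth the photogrammetrically estimated 3D positions into a continuous vehicle track.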

  3. Activity-dependent regulation of NMDAR1 immunoreactivity in the developing visual cortex.

    PubMed

    Catalano, S M; Chang, C K; Shatz, C J

    1997-11-01

    NMDA receptors have been implicated in activity-dependent synaptic plasticity in the developing visual cortex. We examined the distribution of immunocytochemically detectable NMDAR1 in visual cortex of cats and ferrets from late embryonic ages to adulthood. Cortical neurons are initially highly immunostained. This level declines gradually over development, with the notable exception of cortical layers 2/3, where levels of NMDAR1 immunostaining remain high into adulthood. Within layer 4, the decline in NMDAR1 immunostaining to adult levels coincides with the completion of ocular dominance column formation and the end of the critical period for layer 4. To determine whether NMDAR1 immunoreactivity is regulated by retinal activity, animals were dark-reared or retinal activity was completely blocked in one eye with tetrodotoxin (TTX). Dark-rearing does not cause detectable changes in NMDAR1 immunoreactivity. However, 2 weeks of monocular TTX administration decreases NMDAR1 immunoreactivity in layer 4 of the columns of the blocked eye. Thus, high levels of NMDAR1 immunostaining within the visual cortex are temporally correlated with ocular dominance column formation and developmental plasticity; the persistence of staining in layers 2/3 also correlates with the physiological plasticity present in these layers in the adult. In addition, visual experience is not required for the developmental changes in the laminar pattern of NMDAR1 levels, but the presence of high levels of NMDAR1 in layer 4 during the critical period does require retinal activity. These observations are consistent with a central role for NMDA receptors in promoting and ultimately limiting synaptic rearrangements in the developing neocortex.

  4. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  5. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  6. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively, to acquire anatomical or pathological images, and to visualize them for further investigation.

  7. Data Visualization Using Immersive Virtual Reality Tools

    NASA Astrophysics Data System (ADS)

    Cioc, Alexandru; Djorgovski, S. G.; Donalek, C.; Lawler, E.; Sauer, F.; Longo, G.

    2013-01-01

The growing complexity of scientific data poses serious challenges for effective visualization. Data sets, e.g., catalogs of objects detected in sky surveys, can have a very high dimensionality, ~ 100 - 1000. Visualizing such hyper-dimensional parameter spaces is essentially impossible, but there are ways of visualizing up to ~ 10 dimensions in a pseudo-3D display. We have been experimenting with the emerging technologies of immersive virtual reality (VR) as a platform for scientific, interactive, collaborative data visualization. Our initial experiments used the virtual world of Second Life, and more recently VR worlds based on its open source code, OpenSimulator. There we can visualize up to ~ 100,000 data points in ~ 7 - 8 dimensions (3 spatial and others encoded as shapes, colors, sizes, etc.), in an immersive virtual space where scientists can interact with their data and with each other. We are now developing a more scalable visualization environment using the popular (practically an emerging standard) Unity 3D Game Engine, coded using C#, JavaScript, and the Unity Scripting Language. This visualization tool can be used through a standard web browser, or through a standalone browser of its own. Rather than merely plotting data points, the application creates interactive three-dimensional objects of various shapes, colors, and sizes, whose XYZ positions and visual attributes encode various dimensions of the parameter space and can be assigned interactively. Multiple users can navigate through this data space simultaneously, either with their own, independent vantage points, or with a shared view. At this stage ~ 100,000 data points can be easily visualized within seconds on a simple laptop. The displayed data points can contain linked information; e.g., upon clicking on a data point, a webpage with additional information can be rendered within the 3D world. A range of functionalities has already been deployed, and more are being added. We expect to make this visualization tool freely available to the academic community within a few months, on an experimental (beta testing) basis.
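The multi-channel encoding this record describes (three spatial axes plus shape, color, and size channels) can be sketched in a few lines; the column order, value ranges, and shape list below are hypothetical examples, not the tool's actual scheme.

```python
# Sketch of mapping one high-dimensional catalog row onto pseudo-3D display
# attributes (XYZ position plus color, size, and shape). Column order, value
# ranges, and the shape list are hypothetical, for illustration only.

def normalize(value, lo, hi):
    """Scale a raw value into [0, 1] for use as a display attribute."""
    return (value - lo) / (hi - lo)

def encode_point(row, ranges):
    """Map six data dimensions onto the visual channels of one glyph."""
    shapes = ["sphere", "cube", "cone", "cylinder"]
    return {
        "xyz": tuple(normalize(row[i], *ranges[i]) for i in range(3)),
        "color": normalize(row[3], *ranges[3]),       # e.g. a hue channel
        "size": 0.5 + normalize(row[4], *ranges[4]),  # avoid zero-size glyphs
        "shape": shapes[int(normalize(row[5], *ranges[5]) * (len(shapes) - 1))],
    }

row = [1.2, 3.4, 0.7, 10.0, 2.5, 0.9]
ranges = [(0, 2), (0, 5), (0, 1), (0, 20), (0, 5), (0, 1)]
glyph = encode_point(row, ranges)
```

A renderer (e.g. a Unity scene) would then instantiate one glyph per row; clicking a glyph could open the linked webpage the abstract mentions.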

  8. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  9. Toward the establishment of design guidelines for effective 3D perspective interfaces

    NASA Astrophysics Data System (ADS)

    Fitzhugh, Elisabeth; Dixon, Sharon; Aleva, Denise; Smith, Eric; Ghrayeb, Joseph; Douglas, Lisa

    2009-05-01

    The propagation of information operation technologies, with correspondingly vast amounts of complex network information to be conveyed, significantly impacts operator workload. Information management research is rife with efforts to develop schemes to aid operators to identify, review, organize, and retrieve the wealth of available data. Data may take on such distinct forms as intelligence libraries, logistics databases, operational environment models, or network topologies. Increased use of taxonomies and semantic technologies opens opportunities to employ network visualization as a display mechanism for diverse information aggregations. The broad applicability of network visualizations is still being tested, but in current usage, the complexity of densely populated abstract networks suggests the potential utility of 3D. Employment of 2.5D in network visualization, using classic perceptual cues, creates a 3D experience within a 2D medium. It is anticipated that use of 3D perspective (2.5D) will enhance user ability to visually inspect large, complex, multidimensional networks. Current research for 2.5D visualizations demonstrates that display attributes, including color, shape, size, lighting, atmospheric effects, and shadows, significantly impact operator experience. However, guidelines for utilization of attributes in display design are limited. This paper discusses pilot experimentation intended to identify potential problem areas arising from these cues and determine how best to optimize perceptual cue settings. Development of optimized design guidelines will ensure that future experiments, comparing network displays with other visualizations, are not confounded or impeded by suboptimal attribute characterization. Current experimentation is anticipated to support development of cost-effective, visually effective methods to implement 3D in military applications.

  10. Assessment of rural soundscapes with high-speed train noise.

    PubMed

    Lee, Pyoung Jik; Hong, Joo Young; Jeon, Jin Yong

    2014-06-01

    In the present study, rural soundscapes with high-speed train noise were assessed through laboratory experiments. A total of ten sites with varying landscape metrics were chosen for audio-visual recording. The acoustical characteristics of the high-speed train noise were analyzed using various noise level indices. Landscape metrics such as the percentage of natural features (NF) and Shannon's diversity index (SHDI) were adopted to evaluate the landscape features of the ten sites. Laboratory experiments were then performed with 20 well-trained listeners to investigate the perception of high-speed train noise in rural areas. The experiments consisted of three parts: 1) visual-only condition, 2) audio-only condition, and 3) combined audio-visual condition. The results showed that subjects' preference for visual images was significantly related to NF, the number of land types, and the A-weighted equivalent sound pressure level (LAeq). In addition, the visual images significantly influenced the noise annoyance, and LAeq and NF were the dominant factors affecting the annoyance from high-speed train noise in the combined audio-visual condition. In addition, Zwicker's loudness (N) was highly correlated with the annoyance from high-speed train noise in both the audio-only and audio-visual conditions. © 2013.
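The A-weighted equivalent sound pressure level (LAeq) used in this study is an energy average of instantaneous levels rather than an arithmetic mean. A minimal sketch of that averaging, with made-up level values, is:

```python
import math

def leq(levels_db):
    """Energy-average a sequence of sound levels (in dB) into an
    equivalent continuous level: Leq = 10 * log10(mean(10^(L/10)))."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db) / len(levels_db))

print(round(leq([60, 60, 60]), 1))  # 60.0: a constant level averages to itself
print(round(leq([50, 70]), 1))      # 67.0: loud intervals dominate the average
```

The second example shows why a passing high-speed train drives the equivalent level far above the arithmetic mean of the quiet and loud intervals.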

  11. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

The number of breast cancer patients who require breast biopsy has increased over the past years. Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, this cancer visualization has limitations, extending only to superimposing the 3D imaging data. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of 4 phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D model reconstruction technique. Further, in visualization processing, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  12. 3D Printing of Plant Golgi Stacks from Their Electron Tomographic Models.

    PubMed

    Mai, Keith Ka Ki; Kang, Madison J; Kang, Byung-Ho

    2017-01-01

    Three-dimensional (3D) printing is an effective tool for preparing tangible 3D models from computer visualizations to assist in scientific research and education. With the recent popularization of 3D printing processes, it is now possible for individual laboratories to convert their scientific data into a physical form suitable for presentation or teaching purposes. Electron tomography is an electron microscopy method by which 3D structures of subcellular organelles or macromolecular complexes are determined at nanometer-level resolutions. Electron tomography analyses have revealed the convoluted membrane architectures of Golgi stacks, chloroplasts, and mitochondria. But the intricacy of their 3D organizations is difficult to grasp from tomographic models illustrated on computer screens. Despite the rapid development of 3D printing technologies, production of organelle models based on experimental data with 3D printing has rarely been documented. In this chapter, we present a simple guide to creating 3D prints of electron tomographic models of plant Golgi stacks using the two most accessible 3D printing technologies.

  13. 'Visual’ parsing can be taught quickly without visual experience during critical periods

    PubMed Central

    Reich, Lior; Amedi, Amir

    2015-01-01

Cases of invasive sight restoration in congenitally blind adults demonstrated that acquiring visual abilities is extremely challenging, presumably because visual experience during critical periods is crucial for learning visual-unique concepts (e.g. size constancy). Visual rehabilitation can also be achieved using sensory substitution devices (SSDs), which convey visual information non-invasively through sounds. We tested whether one critical concept – visual parsing, which is highly impaired in sight-restored patients – can be learned using an SSD. To this end, congenitally blind adults participated in a unique, relatively short (~70 hours) SSD-'vision' training. Following this, participants successfully parsed 2D and 3D visual objects. Control individuals naïve to SSDs demonstrated that while some aspects of parsing with an SSD are intuitive, the blind participants' success could not be attributed to auditory processing alone. Furthermore, we had a unique opportunity to compare the SSD users' abilities to those reported for sight-restored patients who performed similar tasks visually, and who had months of eyesight. Intriguingly, the SSD users outperformed the patients on most criteria tested. These results suggest that with adequate training and technologies, key high-order visual features can be quickly acquired in adulthood, and that a lack of visual experience during critical periods can be partly compensated for. Practically, these findings highlight the potential of SSDs as standalone aids or combined with invasive restoration approaches. PMID:26482105

  14. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations, along with recommendations for further research are discussed.

  15. Measurable realistic image-based 3D mapping

    NASA Astrophysics Data System (ADS)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualization techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in the form of photos; topographic and terrain attributes, such as shapes and heights, are omitted.
This paper also discusses the potential of using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable realistic image-based (MRI) system can produce. The major contribution here is the implementation of measurable images on 3D maps to obtain various measurements from real scenes.
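The geometric model that lets geo-referenced stereo images yield measurements rests on the standard rectified-stereo relation Z = f * B / d. A minimal sketch, with illustrative camera values rather than the paper's setup, is:

```python
def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth of a scene point from a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative camera: 1000 px focal length, 0.5 m baseline, 25 px disparity.
print(stereo_depth(1000.0, 0.5, 25.0))  # 20.0 (metres)
```

Because depth error grows with the square of distance for a fixed disparity error, positioning accuracy of such a system degrades for far-away objects, which is one reason the paper evaluates accuracy explicitly.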

  16. Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.

    PubMed

    Kim, Soohwan; Kim, Jonghyuk

    2013-10-01

Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a novel method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. In particular, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from the high computational complexity of O(n³) + O(n²m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local cluster. In addition, we treat the Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields (the continuous occupancy maps) using marching cubes. In this way, we are able to build two types of map representation within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximate method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
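The divide-and-conquer idea behind local Gaussian processes, partitioning training points into spatial clusters and solving a small GP per cluster instead of one O(n³) global GP, can be sketched as below. This is an illustration of the general approach (quadrant clustering and GP regression on 0/1 labels as a stand-in for GP classification), not the paper's implementation.

```python
import numpy as np

def rbf(A, B, lengthscale=0.3):
    """Squared-exponential kernel between two 2-D point sets."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def local_gp_predict(X_tr, y_tr, X_te, noise=1e-2):
    """GP posterior mean over a cluster, clipped to [0, 1] as occupancy."""
    K = rbf(X_tr, X_tr) + noise * np.eye(len(X_tr))
    alpha = np.linalg.solve(K, y_tr - 0.5)        # centre labels at 0.5
    return np.clip(rbf(X_te, X_tr) @ alpha + 0.5, 0.0, 1.0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))                 # toy 2-D point cloud
y = (np.linalg.norm(X, axis=1) < 0.5).astype(float)   # occupied inside a disc

def quadrant(pts):
    """Coarse clustering: assign each point to one of four quadrants."""
    return (pts[:, 0] > 0).astype(int) * 2 + (pts[:, 1] > 0).astype(int)

q_train = quadrant(X)

def predict_occupancy(points):
    """Route each query point to its quadrant's local GP."""
    q_test = quadrant(points)
    out = np.empty(len(points))
    for q in range(4):
        mask = q_test == q
        if mask.any():
            out[mask] = local_gp_predict(X[q_train == q], y[q_train == q], points[mask])
    return out

probs = predict_occupancy(np.array([[0.1, 0.1], [0.9, 0.9]]))
```

Each local solve costs O(k³) for a cluster of k points, so four clusters of n/4 points already cut the cubic term by a factor of 16, which is the scaling argument the abstract makes.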

  17. Magnetic assembly of 3D cell clusters: visualizing the formation of an engineered tissue.

    PubMed

    Ghosh, S; Kumar, S R P; Puri, I K; Elankumaran, S

    2016-02-01

    Contactless magnetic assembly of cells into 3D clusters has been proposed as a novel means for 3D tissue culture that eliminates the need for artificial scaffolds. However, thus far its efficacy has only been studied by comparing expression levels of generic proteins. Here, it has been evaluated by visualizing the evolution of cell clusters assembled by magnetic forces, to examine their resemblance to in vivo tissues. Cells were labeled with magnetic nanoparticles, then assembled into 3D clusters using magnetic force. Scanning electron microscopy was used to image intercellular interactions and morphological features of the clusters. When cells were held together by magnetic forces for a single day, they formed intercellular contacts through extracellular fibers. These kept the clusters intact once the magnetic forces were removed, thus serving the primary function of scaffolds. The cells self-organized into constructs consistent with the corresponding tissues in vivo. Epithelial cells formed sheets while fibroblasts formed spheroids and exhibited position-dependent morphological heterogeneity. Cells on the periphery of a cluster were flattened while those within were spheroidal, a well-known characteristic of connective tissues in vivo. Cells assembled by magnetic forces presented visual features representative of their in vivo states but largely absent in monolayers. This established the efficacy of contactless assembly as a means to fabricate in vitro tissue models. © 2016 John Wiley & Sons Ltd.

  18. PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome

    NASA Astrophysics Data System (ADS)

    Ballmer, Maxim; Wiethoff, Tobias

    2016-04-01

In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that can display the motion of stars as moving beam spots to systems that are able to visualize multicolor, high-resolution, immersive full-dome videos or images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education, and any previous geo-shows have mostly been limited to cartoon-style animations. Thus, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focussed interface a true PLANETarium. In order to do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience was positioned in the Earth's core. Such visualizations are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., gravity, air temperature), or horizontal slices of seismic-tomography images and spherical computer models, requires no rendering at all. Rendering of 3D Cartesian datasets or models may further be achieved using standard techniques. Here, we show several example pilot animations. These animations rendered for the full dome are projected back to 2D for visualization on the flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually, and clearly with a higher level of detail. In addition to e.g.
climate change and natural hazards, themes for any future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but also recycled to raise the public's awareness and appreciation of the Geosciences.

  19. PLANETarium Pilot: visualizing PLANET Earth inside-out on the planetarium's full-dome

    NASA Astrophysics Data System (ADS)

    Ballmer, M. D.; Wiethoff, T.

    2014-12-01

In the past decade, projection systems in most planetariums, traditional sites of outreach and education, have advanced from interfaces that can display the motion of stars as moving beam spots to systems that are able to visualize multicolor, high-resolution, immersive full-dome videos or images. These extraordinary capabilities are ideally suited for visualization of global processes occurring on the surface and within the interior of the Earth, a spherical body just like the full dome. So far, however, our community has largely ignored this wonderful interface for outreach and education, and any previous geo-shows have mostly been limited to cartoon-style animations. Thus, we here propose a framework to convey recent scientific results on the origin and evolution of our PLANET to the >100 million per-year worldwide audience of planetariums, making the traditionally astronomy-focussed interface a true PLANETarium. In order to do this most efficiently, we intend to show "inside-out" visualizations of scientific datasets and models, as if the audience was positioned in the Earth's inner core. Such visualizations are expected to be renderable to the dome with little or no effort. For example, showing global geophysical datasets (e.g., gravity, air temperature), or horizontal slices of seismic-tomography images and spherical computer models, requires no rendering at all. Rendering of 3D Cartesian datasets or models may further be achieved using standard techniques. Here, we show several example pilot animations. These animations rendered for the full dome are projected back to 2D for visualization on a flatscreen. Present-day science visualizations are typically as intuitive as cartoon-style animations, yet more appealing visually, and clearly with a higher level of detail. In addition to e.g.
climate change and natural hazards, themes for any future geo-shows may include the coupled evolution of the Earth's interior and life, from the accretion of our planet to the evolution of mantle convection as well as the sustainment of a magnetic field and habitable conditions. We believe that high-quality tax-funded science visualizations should not exclusively be used for communication among scientists, but also recycled to raise the public's awareness and appreciation of the geosciences.

  20. Real time 3D visualization of intraoperative organ deformations using structured dictionary.

    PubMed

    Wang, Dan; Tewfik, Ahmed H

    2012-04-01

    Restricted visualization of the surgical field is one of the most critical challenges for minimally invasive surgery (MIS). Current intraoperative visualization systems are promising. However, they can hardly meet the requirements of high resolution and real time 3D visualization of the surgical scene to support the recognition of anatomic structures for safe MIS procedures. In this paper, we present a new approach for real time 3D visualization of organ deformations based on optical imaging patches with limited field-of-view and a single preoperative scan of magnetic resonance imaging (MRI) or computed tomography (CT). The idea for reconstruction is motivated by our empirical observation that the spherical harmonic coefficients corresponding to distorted surfaces of a given organ lie in lower dimensional subspaces in a structured dictionary that can be learned from a set of representative training surfaces. We provide both theoretical and practical designs for achieving these goals. Specifically, we discuss details about the selection of limited optical views and the registration of partial optical images with a single preoperative MRI/CT scan. The design proposed in this paper is evaluated with both finite element modeling data and ex vivo experiments. The ex vivo test is conducted on fresh porcine kidneys using 3D MRI scans with 1.2 mm resolution and a portable laser scanner with an accuracy of 0.13 mm. Results show that the proposed method achieves a sub-3 mm spatial resolution in terms of Hausdorff distance when using only one preoperative MRI scan and the optical patch from the single-sided view of the kidney. The reconstruction frame rate is between 10 frames/s and 39 frames/s depending on the complexity of the test model.
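The core observation in this record, that coefficient vectors of deformed surfaces lie near low-dimensional subspaces learned from training shapes, can be sketched with a plain SVD-based projection. The sizes and synthetic data below are arbitrary; this is not the paper's structured-dictionary construction.

```python
# Sketch: learn a low-dimensional subspace from training coefficient vectors
# and reconstruct a new vector by projecting onto it. All dimensions and
# data here are synthetic, for illustration only.
import numpy as np

rng = np.random.default_rng(1)
n_coeff, n_train, dim = 30, 50, 5

# Synthetic training set: 50 coefficient vectors of length 30 that lie in
# a hidden 5-D subspace, plus small noise.
hidden = rng.normal(size=(n_coeff, dim))
train = (hidden @ rng.normal(size=(dim, n_train))).T
train += 0.01 * rng.normal(size=train.shape)

mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:dim]                      # top principal directions

def project(coeffs):
    """Reconstruct a coefficient vector from its subspace projection."""
    c = coeffs - mean
    return mean + (c @ basis.T) @ basis

# A new "deformed surface" near the same subspace is recovered accurately.
new = hidden @ rng.normal(size=dim) + 0.01 * rng.normal(size=n_coeff)
rel_err = np.linalg.norm(new - project(new)) / np.linalg.norm(new)
```

The paper's contribution is to learn such subspaces per organ from representative training surfaces, so that a partial optical view suffices to pin down the few remaining subspace coordinates in real time.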

  1. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  2. Serum 25-hydroxyvitamin d levels and C-reactive protein in persons with human immunodeficiency virus infection.

    PubMed

    Poudel-Tandukar, Kalpana; Poudel, Krishna C; Jimba, Masamine; Kobayashi, Jun; Johnson, C Anderson; Palmer, Paula H

    2013-03-01

Human immunodeficiency virus (HIV) infection has frequently been associated with vitamin D deficiency as well as a chronic inflammatory response. We tested the hypothesis of an independent relationship between serum concentrations of 25-hydroxyvitamin D [25(OH)D] and high-sensitivity C-reactive protein (CRP) in a cohort of HIV-positive people. A cross-sectional survey was conducted among 316 HIV-positive people (181 men and 135 women) aged 16 to 60 years residing in the Kathmandu Valley, Nepal. Serum high-sensitivity CRP concentrations and serum 25(OH)D levels were measured by the latex agglutination nephelometry method and the competitive protein-binding assay, respectively. The relationship between serum CRP concentrations and 25(OH)D serum level was assessed using multiple logistic regression analysis with adjustment for potential cardiovascular and HIV-related factors. The proportions of participants with 25(OH)D serum levels <20 ng/ml, 20-30 ng/ml, and ≥30 ng/ml were 83.2%, 15.5%, and 1.3%, respectively. The mean 25(OH)D serum levels in men and women were 15.3 ng/ml and 14.4 ng/ml, respectively. Participants with a 25(OH)D serum level of <20 ng/ml had 3.2-fold higher odds of high CRP (>3 mg/liter) compared to those with a 25(OH)D serum level of ≥20 ng/ml (p=0.005). Men and women with a 25(OH)D serum level of <20 ng/ml had 3.2- and 2.7-fold higher odds of high CRP (>3 mg/liter), respectively, compared to those with a 25(OH)D serum level of ≥20 ng/ml. The relationship remained significant in men (p=0.02) but not in women (p=0.28). The risk of having a high level of inflammation (CRP >3 mg/liter) may be high among HIV-positive men and women with a 25(OH)D serum level of <20 ng/ml.
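The "3.2-fold higher odds" reported above is an odds ratio. A crude odds ratio from a 2x2 table can be computed as below; the counts are made up, and a crude OR ignores the covariate adjustment the study's logistic regression performs.

```python
# Illustration of an odds-ratio comparison from a hypothetical 2x2 table.
# The counts below are invented for illustration, not the study's data.
def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    return (a / b) / (c / d)

# e.g. low-vitamin-D group: 80 high-CRP vs 25 normal;
#      adequate-vitamin-D group: 10 high-CRP vs 10 normal.
print(odds_ratio(80, 25, 10, 10))  # 3.2
```

An OR of 1.0 would mean the two groups have identical odds of high CRP; values above 1 indicate higher odds in the exposed (low vitamin D) group.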

  3. High-Performance 3D Articulated Robot Display

    NASA Technical Reports Server (NTRS)

    Powell, Mark W.; Torres, Recaredo J.; Mittman, David S.; Kurien, James A.; Abramyan, Lucy

    2011-01-01

    In the domain of telerobotic operations, the primary challenge facing the operator is to understand the state of the robotic platform. One key aspect of understanding the state is to visualize the physical location and configuration of the platform. As there is a wide variety of mobile robots, the requirements for visualizing their configurations vary diversely across different platforms. There can also be diversity in the mechanical mobility, such as wheeled, tracked, or legged mobility over surfaces. Adaptable 3D articulated robot visualization software can accommodate a wide variety of robotic platforms and environments. The visualization has been used for surface, aerial, space, and water robotic vehicle visualization during field testing. It has been used to enable operations of wheeled and legged surface vehicles, and can be readily adapted to facilitate other mechanical mobility solutions. The 3D visualization can render an articulated 3D model of a robotic platform for any environment. Given the model, the software receives real-time telemetry from the avionics system onboard the vehicle and animates the robot visualization to reflect the telemetered physical state. This is used to track the position and attitude in real time to monitor the progress of the vehicle as it traverses its environment. It is also used to monitor the state of any or all articulated elements of the vehicle, such as arms, legs, or control surfaces. The visualization can also render other sorts of telemetered states visually, such as stress or strains that are measured by the avionics. Such data can be used to color or annotate the virtual vehicle to indicate nominal or off-nominal states during operation. The visualization is also able to render the simulated environment where the vehicle is operating. 
For surface and aerial vehicles, it can render the terrain under the vehicle as the avionics sends it location information (GPS, odometry, or star tracking), and locate the vehicle over or on the terrain correctly. For long traverses over terrain, the visualization can stream in terrain piecewise in order to maintain the current area of interest for the operator without incurring unreasonable resource constraints on the computing platform. The visualization software is designed to run on laptops that can operate in field-testing environments without Internet access, which is a frequently encountered situation when testing in remote locations that simulate planetary environments such as Mars and other planetary bodies.

  4. Collaborative Visualization and Analysis of Multi-dimensional, Time-dependent and Distributed Data in the Geosciences Using the Unidata Integrated Data Viewer

    NASA Astrophysics Data System (ADS)

    Meertens, C. M.; Murray, D.; McWhirter, J.

    2004-12-01

    Over the last five years, Unidata has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences, including the atmospheric, ocean, and, most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographic projections, and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive, allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. Unidata provides easy-to-follow instructions for download, installation, and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV, along with curricula for atmospheric science education intended for high school through graduate levels.

  5. Evaluation of the effect of filter apodization for volume PET imaging using the 3-D RP algorithm

    NASA Astrophysics Data System (ADS)

    Baghaei, H.; Wong, Wai-Hoi; Li, Hongdi; Uribe, J.; Wang, Yu; Aykac, M.; Liu, Yaqiang; Xing, Tao

    2003-02-01

    We investigated the influence of filter apodization and cutoff frequency on the image quality of volume positron emission tomography (PET) imaging using the three-dimensional reprojection (3-D RP) algorithm. An important parameter in 3-D RP and other filtered backprojection algorithms is the choice of the filter window function. In this study, the Hann, Hamming, and Butterworth low-pass window functions were investigated. For each window, a range of cutoff frequencies was considered. Projection data were acquired by scanning a uniform cylindrical phantom, a cylindrical phantom containing four small lesion phantoms with diameters of 3, 4, 5, and 6 mm, and the 3-D Hoffman brain phantom. All measurements were performed using the high-resolution PET camera (MDAPET) developed at the M.D. Anderson Cancer Center, University of Texas, Houston, TX. This prototype camera, a multiring scanner with no septa, has an intrinsic transaxial resolution of 2.8 mm. The evaluation was performed by computing the noise level in the reconstructed images of the uniform phantom and the contrast recovery of the 6-mm hot lesion in a warm background, and by visually inspecting the images, especially those of the Hoffman brain phantom. For this work, we mainly studied the central slices, which are less affected by the incompleteness of the 3-D data. Overall, the Butterworth window offered better contrast-noise performance than the Hann and Hamming windows. For our high-statistics data, a cutoff frequency of 0.6-0.8 of the Nyquist frequency for the Hann and Hamming apodization functions gave a reasonable compromise between contrast recovery and noise level, while for the Butterworth window a cutoff frequency of 0.4-0.6 of the Nyquist frequency was a reasonable choice. For low-statistics data, lower cutoff frequencies were more appropriate.
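
    The three window functions named above can be sketched as frequency-domain apodizers of the ramp filter used in filtered backprojection. This is a minimal illustration, not the study's implementation: the Butterworth order and the exact normalization are assumptions, since the abstract does not specify them.

```python
import numpy as np

def hann_window(f, fc):
    """Hann apodization: 0.5*(1 + cos(pi*f/fc)) inside the cutoff fc, 0 outside."""
    f = np.asarray(f, dtype=float)
    w = 0.5 * (1.0 + np.cos(np.pi * f / fc))
    return np.where(np.abs(f) <= fc, w, 0.0)

def hamming_window(f, fc):
    """Hamming apodization: 0.54 + 0.46*cos(pi*f/fc) inside the cutoff, 0 outside."""
    f = np.asarray(f, dtype=float)
    w = 0.54 + 0.46 * np.cos(np.pi * f / fc)
    return np.where(np.abs(f) <= fc, w, 0.0)

def butterworth_window(f, fc, order=4):
    """Butterworth low-pass: 1/sqrt(1 + (f/fc)^(2*order)); the order is an assumed value."""
    f = np.asarray(f, dtype=float)
    return 1.0 / np.sqrt(1.0 + (f / fc) ** (2 * order))

# Apodize the ramp filter |f| used in filtered backprojection.
nyquist = 0.5                       # cycles per sample
f = np.linspace(0.0, nyquist, 257)
fc = 0.6 * nyquist                  # e.g. a cutoff at 0.6 of Nyquist, as in the study
filtered_ramp = np.abs(f) * butterworth_window(f, fc)
```

    Raising the cutoff preserves more high-frequency detail (better contrast recovery) at the cost of noise, which is the trade-off the study quantifies.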

  6. Comparison of a new refractive multifocal intraocular lens with an inferior segmental near add and a diffractive multifocal intraocular lens.

    PubMed

    Alio, Jorge L; Plaza-Puche, Ana B; Javaloy, Jaime; Ayala, María José; Moreno, Luis J; Piñero, David P

    2012-03-01

    To compare the visual acuity outcomes and ocular optical performance of eyes implanted with a multifocal refractive intraocular lens (IOL) with an inferior segmental near add or a diffractive multifocal IOL. Prospective, comparative, nonrandomized, consecutive case series. Eighty-three consecutive eyes of 45 patients (age range, 36-82 years) with cataract were divided into 2 groups: group A, 45 eyes implanted with the Lentis Mplus LS-312 (Oculentis GmbH, Berlin, Germany); group B, 38 eyes implanted with the diffractive IOL Acri.Lisa 366D (Zeiss, Oberkochen, Germany). All patients underwent phacoemulsification followed by IOL implantation in the capsular bag. Corrected distance, intermediate, and distance-corrected near visual acuity, as well as contrast sensitivity, intraocular aberrations, and the defocus curve, were evaluated postoperatively during a 3-month follow-up. Uncorrected distance visual acuity (UDVA), corrected distance visual acuity (CDVA), uncorrected near visual acuity (UNVA), corrected distance near and intermediate visual acuity (CDNVA), contrast sensitivity, intraocular aberrations, and defocus curve. A significant improvement in UDVA, CDVA, and UNVA was observed in both groups after surgery (P ≤ 0.04). Significantly better values of UNVA (P<0.01) and CDNVA (P<0.04) were found in group B. In the defocus curve, significantly better visual acuities were present in eyes in group A for intermediate vision levels of defocus (P ≤ 0.04). Significantly higher amounts of postoperative intraocular primary coma and spherical aberrations were found in group A (P<0.01). In addition, significantly better values were observed in photopic contrast sensitivity for high spatial frequencies in group A (P ≤ 0.04). The Lentis Mplus LS-312 and Acri.Lisa 366D IOLs are able to successfully restore visual function after cataract surgery. The Lentis Mplus LS-312 provided better intermediate vision and contrast sensitivity outcomes than the Acri.Lisa 366D. However, the Acri.Lisa design provided better distance and near visual outcomes and intraocular optical performance parameters. Copyright © 2012 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.

  7. Connectivity Reveals Sources of Predictive Coding Signals in Early Visual Cortex During Processing of Visual Optic Flow.

    PubMed

    Schindler, Andreas; Bartels, Andreas

    2017-05-01

    Superimposed on the visual feed-forward pathway, feedback connections convey higher level information to cortical areas lower in the hierarchy. A prominent framework for these connections is the theory of predictive coding where high-level areas send stimulus interpretations to lower level areas that compare them with sensory input. Along these lines, a growing body of neuroimaging studies shows that predictable stimuli lead to reduced blood oxygen level-dependent (BOLD) responses compared with matched nonpredictable counterparts, especially in early visual cortex (EVC) including areas V1-V3. The sources of these modulatory feedback signals are largely unknown. Here, we re-examined the robust finding of relative BOLD suppression in EVC evident during processing of coherent compared with random motion. Using functional connectivity analysis, we show an optic flow-dependent increase of functional connectivity between BOLD suppressed EVC and a network of visual motion areas including MST, V3A, V6, the cingulate sulcus visual area (CSv), and precuneus (Pc). Connectivity decreased between EVC and 2 areas known to encode heading direction: entorhinal cortex (EC) and retrosplenial cortex (RSC). Our results provide first evidence that BOLD suppression in EVC for predictable stimuli is indeed mediated by specific high-level areas, in accord with the theory of predictive coding. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Testing of visual field with virtual reality goggles in manual and visual grasp modes.

    PubMed

    Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas

    2014-01-01

    Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.
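
    The point-by-point comparison described above amounts to analyzing the distribution of per-location sensitivity differences between the two modes (a Bland-Altman-style analysis). A minimal sketch with hypothetical data; the values and the 54-location 24-2 grid size are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical point-by-point sensitivities (dB) for the same eye measured in
# two modes; a 24-2 visual field has 54 test locations.
rng = np.random.default_rng(0)
manual = rng.normal(28.0, 3.0, size=54)
visual_grasp = manual + rng.normal(0.0, 5.0, size=54)   # no systematic offset assumed

diff = visual_grasp - manual
bias = diff.mean()            # systematic difference between modes
spread = diff.std(ddof=1)     # std of the difference distribution (~5 dB in the study)
limits = (bias - 1.96 * spread, bias + 1.96 * spread)   # 95% limits of agreement
```

    A near-zero bias with a spread of about 5 dB would mirror findings (1) and (2) reported above; a bias of 4-6 dB against one device would mirror finding (3).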

  9. Visual Acuity and Over-refraction in Myopic Children Fitted with Soft Multifocal Contact Lenses.

    PubMed

    Schulle, Krystal L; Berntsen, David A; Sinnott, Loraine T; Bickle, Katherine M; Gostovic, Anita T; Pierce, Gilbert E; Jones-Jordan, Lisa A; Mutti, Donald O; Walline, Jeffrey J

    2018-04-01

    Practitioners fitting contact lenses for myopia control frequently question whether a myopic child can achieve good vision with a high-add multifocal. We demonstrate that visual acuity with a commercially available, center-distance soft multifocal contact lens (MFCL) (Biofinity Multifocal "D"; +2.50 D add) is not different from that with spectacles. To determine the spherical over-refraction (SOR) necessary to obtain best-corrected visual acuity (BCVA) when fitting myopic children with a center-distance soft MFCL. Children (n = 294) aged 7 to 11 years with myopia (spherical component) of -0.75 to -5.00 diopters (D) (inclusive) and 1.00 D cylinder or less (corneal plane) were fitted bilaterally with +2.50 D add Biofinity "D" MFCLs. The initial MFCL power was the spherical equivalent of a standardized subjective refraction, rounded to the nearest 0.25 D step (corneal plane). An SOR was performed monocularly (each eye) to achieve BCVA. Binocular, high-contrast logMAR acuity was measured with manifest spectacle correction and MFCLs with over-refraction. Photopic pupil size was measured with a pupilometer. The mean (±SD) age was 10.3 ± 1.2 years, and the mean (±SD) SOR needed to achieve BCVA was OD: -0.61 ± 0.24 D/OS: -0.58 ± 0.27 D. There was no difference in binocular high-contrast visual acuity (logMAR) between spectacles (-0.01 ± 0.06) and best-corrected MFCLs (-0.01 ± 0.07) (P = .59). The mean (±SD) photopic pupil size (5.4 ± 0.7 mm) was not correlated with best MFCL correction or the over-refraction magnitude (both P ≥ .09). Children achieved BCVA with +2.50 D add MFCLs that was not different from that with spectacles. Children typically required an over-refraction of -0.50 to -0.75 D to achieve BCVA. With a careful over-refraction, these +2.50 D add MFCLs provide good distance acuity, making them viable candidates for myopia control.
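
    The fitting protocol above starts from the spherical equivalent of the subjective refraction, rounded to the nearest 0.25 D step. A minimal sketch of that arithmetic; the tie-breaking rule for half-steps is an assumption, since the abstract does not state one.

```python
import math

def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent = sphere + cylinder/2 (diopters)."""
    return sphere_d + cylinder_d / 2.0

def round_quarter_diopter(power_d):
    """Round a lens power to the nearest 0.25 D step (ties round toward +inf)."""
    return math.floor(power_d * 4.0 + 0.5) / 4.0

# Example: a -3.25 D sphere with -0.75 D cylinder (within the study's range)
se = spherical_equivalent(-3.25, -0.75)      # -3.625 D
initial_lens = round_quarter_diopter(se)     # initial trial MFCL power
```

    The monocular over-refraction (typically -0.50 to -0.75 D per the study) would then be added to this initial power to reach best-corrected acuity.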

  10. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    NASA Technical Reports Server (NTRS)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: Describe and update progress on NASA's efforts to develop 3D architectural design tools for CMCs in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature; developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures; developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures; validating these tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture; and applying the predictive and visualization tools to advanced 3D orthogonal SiC/SiC composites, combining them into a user-friendly software program.

  11. NeuroTessMesh: A Tool for the Generation and Visualization of Neuron Meshes and Adaptive On-the-Fly Refinement.

    PubMed

    Garcia-Cantero, Juan J; Brito, Juan P; Mata, Susana; Bayona, Sofia; Pastor, Luis

    2017-01-01

    Gaining a better understanding of the human brain continues to be one of the greatest challenges for science, largely because of the overwhelming complexity of the brain and the difficulty of analyzing the features and behavior of dense neural networks. Regarding analysis, 3D visualization has proven to be a useful tool for the evaluation of complex systems. However, the large number of neurons in non-trivial circuits, together with their intricate geometry, makes the visualization of a neuronal scenario an extremely challenging computational problem. Previous work in this area dealt with the generation of 3D polygonal meshes that approximated the cells' overall anatomy but did not attempt to deal with the extremely high storage and computational cost required to manage a complex scene. This paper presents NeuroTessMesh, a tool specifically designed to cope with many of the problems associated with the visualization of neural circuits that are comprised of large numbers of cells. In addition, this method facilitates the recovery and visualization of the 3D geometry of cells included in databases, such as NeuroMorpho, and provides the tools needed to approximate missing information such as the soma's morphology. This method takes as its only input the available compact, yet incomplete, morphological tracings of the cells as acquired by neuroscientists. It uses a multiresolution approach that combines an initial, coarse mesh generation with subsequent on-the-fly adaptive mesh refinement stages using tessellation shaders. For the coarse mesh generation, a novel approach, based on the Finite Element Method, allows approximation of the 3D shape of the soma from its incomplete description. Subsequently, the adaptive refinement process performed on the graphics card generates meshes that provide good visual quality geometries at a reasonable computational cost, both in terms of memory and rendering time. 
All the described techniques have been integrated into NeuroTessMesh, available to the scientific community, to generate, visualize, and save the adaptive resolution meshes.

  12. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  13. Three-dimensional holoscopic image coding scheme using high-efficiency video coding with kernel-based minimum mean-square-error estimation

    NASA Astrophysics Data System (ADS)

    Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai

    2016-07-01

    Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
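
    The KDE step above can be illustrated with a 1-D Gaussian kernel estimator. Note the hedge: the bandwidth here uses Silverman's rule of thumb as a stand-in, since the paper's own kernel-trick bandwidth estimation method is not specified in the abstract, and the residual data are hypothetical.

```python
import numpy as np

def silverman_bandwidth(samples):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian kernel.
    (A stand-in: the paper proposes its own kernel-trick bandwidth estimator.)"""
    n = len(samples)
    return 1.06 * np.std(samples, ddof=1) * n ** (-1.0 / 5.0)

def gaussian_kde(x, samples, h):
    """Kernel density estimate of the pdf at the points x, with bandwidth h."""
    x = np.asarray(x, dtype=float)[:, None]
    z = (x - np.asarray(samples, dtype=float)[None, :]) / h
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Hypothetical prediction residuals whose density the MMSE estimator would model.
rng = np.random.default_rng(1)
residuals = rng.normal(0.0, 2.0, size=500)
h = silverman_bandwidth(residuals)
grid = np.linspace(-8.0, 8.0, 161)
pdf = gaussian_kde(grid, residuals, h)
```

    In the coding scheme, a density estimated this way characterizes the signal statistics that drive the MMSE prediction of each coding block.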

  14. Automatic transfer function generation for volume rendering of high-resolution x-ray 3D digital mammography images

    NASA Astrophysics Data System (ADS)

    Alyassin, Abdal M.

    2002-05-01

    3D digital mammography (3DDM) is a new technology that provides high-resolution X-ray breast tomographic data. As with other tomographic medical imaging modalities, viewing a stack of tomographic images can be time consuming, especially if the images have a large matrix size, and it can be difficult to mentally reconstruct 3D breast structures from 2D slices. Therefore, there is a need to readily visualize the data in 3D. However, one of the issues that hinders the use of volume rendering (VR) is finding an automatic way to generate transfer functions that efficiently map the important diagnostic information in the data. We have developed a method that randomly samples the volume. Based on the mean and the standard deviation of these samples, the technique determines the lower and upper limits of a piecewise linear ramp transfer function. We volume rendered several 3DDM data sets using this technique and visually compared the outcome with the result from a conventional automatic technique. The transfer function generated by the proposed technique provided superior VR images. Furthermore, the reproducibility of the transfer function improved with the number of samples taken from the volume, at the expense of processing time.
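
    The sampling-based transfer-function construction described above can be sketched directly. The multiplier k that turns the sample mean and standard deviation into the ramp's lower and upper limits is an assumption, as the abstract does not give it, and the volume here is synthetic.

```python
import numpy as np

def auto_transfer_function(volume, n_samples=10000, k=2.0, seed=0):
    """Derive a piecewise-linear ramp opacity transfer function from random
    voxel samples. The limits mean +/- k*std are an assumed parameterization."""
    rng = np.random.default_rng(seed)
    flat = volume.ravel()
    samples = flat[rng.integers(0, flat.size, size=n_samples)]
    lo = samples.mean() - k * samples.std()
    hi = samples.mean() + k * samples.std()

    def opacity(values):
        # Ramp: 0 below lo, linear between lo and hi, 1 above hi.
        return np.clip((np.asarray(values, dtype=float) - lo) / (hi - lo), 0.0, 1.0)

    return lo, hi, opacity

# Hypothetical 3DDM volume: Gaussian-distributed intensities stand in for real data.
rng = np.random.default_rng(42)
volume = rng.normal(1000.0, 150.0, size=(32, 64, 64))
lo, hi, opacity = auto_transfer_function(volume)
```

    Increasing `n_samples` makes `lo` and `hi` more reproducible across runs, matching the abstract's observation that reproducibility correlates with the number of samples at the cost of processing time.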

  15. DSSR-enhanced visualization of nucleic acid structures in Jmol.

    PubMed

    Hanson, Robert M; Lu, Xiang-Jun

    2017-07-03

    Sophisticated and interactive visualizations are essential for making sense of the intricate 3D structures of macromolecules. For proteins, secondary structural components are routinely featured in molecular graphics visualizations. However, the field of RNA structural bioinformatics is still lagging behind; for example, current molecular graphics tools lack built-in support even for base pairs, double helices, or hairpin loops. DSSR (Dissecting the Spatial Structure of RNA) is an integrated and automated command-line tool for the analysis and annotation of RNA tertiary structures. It calculates a comprehensive and unique set of features for characterizing RNA, as well as DNA structures. Jmol is a widely used, open-source Java viewer for 3D structures, with a powerful scripting language. JSmol, its reincarnation based on native JavaScript, has a predominant position in the post Java-applet era for web-based visualization of molecular structures. The DSSR-Jmol integration presented here makes salient features of DSSR readily accessible, either via the Java-based Jmol application itself, or its HTML5-based equivalent, JSmol. The DSSR web service accepts 3D coordinate files (in mmCIF or PDB format) initiated from a Jmol or JSmol session and returns DSSR-derived structural features in JSON format. This seamless combination of DSSR and Jmol/JSmol brings the molecular graphics of 3D RNA structures to a similar level as that for proteins, and enables a much deeper analysis of structural characteristics. It fills a gap in RNA structural bioinformatics, and is freely accessible (via the Jmol application or the JSmol-based website http://jmol.x3dna.org). © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  16. Visual disability, visual function, and myopia among rural chinese secondary school children: the Xichang Pediatric Refractive Error Study (X-PRES)--report 1.

    PubMed

    Congdon, Nathan; Wang, Yunfei; Song, Yue; Choi, Kai; Zhang, Mingzhi; Zhou, Zhongxia; Xie, Zhenling; Li, Liping; Liu, Xueyu; Sharma, Abhishek; Wu, Bin; Lam, Dennis S C

    2008-07-01

    To evaluate visual acuity, visual function, and prevalence of refractive error among Chinese secondary-school children in a cross-sectional school-based study. Uncorrected, presenting, and best corrected visual acuity, cycloplegic autorefraction with refinement, and self-reported visual function were assessed in a random, cluster sample of rural secondary school students in Xichang, China. Among the 1892 subjects (97.3% of the consenting children, 84.7% of the total sample), mean age was 14.7 +/- 0.8 years, 51.2% were female, and 26.4% were wearing glasses. The proportion of children with uncorrected, presenting, and corrected visual disability (< or = 6/12 in the better eye) was 41.2%, 19.3%, and 0.5%, respectively. Myopia < -0.5, < -2.0, and < -6.0 D in both eyes was present in 62.3%, 31.1%, and 1.9% of the subjects, respectively. Among the children with visual disability when tested without correction, 98.7% was due to refractive error, while only 53.8% (414/770) of these children had appropriate correction. The girls had significantly (P < 0.001) more presenting visual disability and myopia < -2.0 D than did the boys. More myopic refractive error was associated with worse self-reported visual function (ANOVA trend test, P < 0.001). Visual disability in this population was common, highly correctable, and frequently uncorrected. The impact of refractive error on self-reported visual function was significant. Strategies and studies to understand and remove barriers to spectacle wear are needed.

  17. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  18. Intensity-based 2D/3D registration for lead localization in robot-guided deep brain stimulation

    NASA Astrophysics Data System (ADS)

    Hunsche, Stefan; Sauner, Dieter; El Majdoub, Faycal; Neudorfer, Clemens; Poggenborg, Jörg; Goßmann, Axel; Maarouf, Mohammad

    2017-03-01

    Intraoperative assessment of lead localization has become a standard procedure during deep brain stimulation surgery in many centers, allowing immediate verification of targeting accuracy and, if necessary, adjustment of the trajectory. The most suitable imaging modality for determining lead position, however, remains a matter of debate. Current approaches rely on computed tomography and magnetic resonance imaging. In the present study, we adopted the technique of intensity-based 2D/3D registration that is commonly employed in stereotactic radiotherapy and spinal surgery. For this purpose, intraoperatively acquired 2D X-ray images were fused with preoperative 3D computed tomography (CT) data to verify lead placement during stereotactic robot-assisted surgery. The accuracy of lead localization determined from 2D/3D registration was compared with conventional 3D/3D registration in a subsequent patient study. The mean Euclidean distance between lead coordinates estimated from intensity-based 2D/3D registration and from flat-panel detector CT 3D/3D registration was 0.7 ± 0.2 mm, with a maximum of 1.2 mm. To further investigate 2D/3D registration, a simulation study was conducted in which two observers visually assessed artificially generated 2D/3D registration errors; 95% of the simulated deviations that were assessed as sufficient had a registration error below 0.7 mm. In conclusion, intensity-based 2D/3D registration showed high accuracy and reliability during robot-guided stereotactic neurosurgery and holds great potential as a low-dose, cost-effective means of intraoperative lead localization.
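
    The accuracy metric reported above is the Euclidean distance between lead coordinates estimated by the two registration methods. A minimal sketch of that computation; the coordinates below are hypothetical, not the study's data.

```python
import numpy as np

def lead_tip_distances(coords_a, coords_b):
    """Per-lead Euclidean distances (mm) between tip coordinates estimated by
    two registration methods; one (x, y, z) row per lead."""
    a = np.asarray(coords_a, dtype=float)
    b = np.asarray(coords_b, dtype=float)
    return np.linalg.norm(a - b, axis=1)

# Hypothetical tip coordinates (x, y, z in mm) for three leads.
from_2d3d = np.array([[12.1, -8.4, 3.0], [11.8, -8.9, 2.6], [12.5, -8.1, 3.3]])
from_3d3d = np.array([[12.6, -8.2, 3.4], [11.5, -9.0, 2.2], [12.9, -8.5, 3.1]])

d = lead_tip_distances(from_2d3d, from_3d3d)
mean_d, max_d = d.mean(), d.max()   # analogous to the reported 0.7 mm mean / 1.2 mm max
```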

  19. Early detection of glaucoma by means of a novel 3D computer‐automated visual field test

    PubMed Central

    Nazemi, Paul P; Fink, Wolfgang; Sadun, Alfredo A; Francis, Brian; Minckler, Donald

    2007-01-01

    Purpose A recently devised 3D computer‐automated threshold Amsler grid test was used to identify early and distinctive defects in people with suspected glaucoma. Further, the location, shape and depth of these field defects were characterised. Finally, the visual fields were compared with those obtained by standard automated perimetry. Patients and methods Glaucoma suspects were defined as those having elevated intraocular pressure (>21 mm Hg) or cup‐to‐disc ratio of >0.5. 33 patients and 66 eyes with risk factors for glaucoma were examined. 15 patients and 23 eyes with no risk factors were tested as controls. The recently developed 3D computer‐automated threshold Amsler grid test was used. The test exhibits a grid on a computer screen at a preselected greyscale and angular resolution, and allows patients to trace those areas on the grid that are missing in their visual field using a touch screen. The 5‐minute test required that the patients repeatedly outline scotomas on a touch screen with varied displays of contrast while maintaining their gaze on a central fixation marker. A 3D depiction of the visual field defects was then obtained that was further characterised by the location, shape and depth of the scotomas. The exam was repeated three times per eye. The results were compared to Humphrey visual field tests (ie, achromatic standard or SITA standard 30‐2 or 24‐2). Results In this pilot study 79% of the eyes tested in the glaucoma‐suspect group repeatedly demonstrated visual field loss with the 3D perimetry. The 3D depictions of visual field loss associated with these risk factors were all characteristic of or compatible with glaucoma. 71% of the eyes demonstrated arcuate defects or a nasal step. Constricted visual fields were shown in 29% of the eyes. No visual field changes were detected in the control group. 
Conclusions The 3D computer‐automated threshold Amsler grid test may demonstrate visual field abnormalities characteristic of glaucoma in glaucoma suspects with normal achromatic Humphrey visual field testing. This test may be used as a screening tool for the early detection of glaucoma. PMID:17504855

  20. Automated virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.

    1997-05-01

    Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.

  1. Visual deprivation alters dendritic bundle architecture in layer 4 of rat visual cortex.

    PubMed

    Gabbott, P L; Stewart, M G

    2012-04-05

    The effect of visual deprivation followed by light exposure on the tangential organisation of dendritic bundles passing through layer 4 of the rat visual cortex was studied quantitatively in the light microscope. Four groups of animals were investigated: (I) rats reared in a normally illuminated environment--group 52 dL; (II) rats reared in the dark until 21 days postnatum (DPN) and subsequently light exposed for 31 days--group 21/31; (III) rats dark reared until 52 DPN and then light exposed for 3 days--group 3 dL; and (IV) rats totally dark reared until 52 DPN--group 52 dD. Each group contained five animals. Semithin 0.5-1-μm-thick resin-embedded sections were collected from tangential sampling levels through the middle of layer 4 in area 17 and stained with Toluidine Blue. These sections were used to quantitatively analyse the composition and distribution of dendritic clusters in the tangential plane. The key result of this study is a significant reduction in the mean number of medium- and small-sized dendritic profiles (diameter less than 2 μm) contributing to clusters in layer 4 of groups 3 dL and 52 dD compared with group 21/31. No differences were detected in the mean number of large-sized dendritic profiles composing a bundle in these experimental groups. Moreover, the mean number of clusters and their tangential distribution in layer 4 did not vary significantly between the four groups. Finally, the clustering parameters were not significantly different between group 21/31 and the normally reared group 52 dL. This study demonstrates, for the first time, that extended periods of dark rearing followed by light exposure can alter the morphological composition of dendritic bundles in the thalamorecipient layer 4 of rat visual cortex. Because these changes occur in the primary region of thalamocortical input, they may underlie specific alterations in the processing of visual information, both cortically and subcortically, during periods of dark rearing and light exposure. Copyright © 2012 IBRO. Published by Elsevier Ltd. All rights reserved.

  2. Real-time three-dimensional transesophageal echocardiography in valve disease: comparison with surgical findings and evaluation of prosthetic valves.

    PubMed

    Sugeng, Lissa; Shernan, Stanton K; Weinert, Lynn; Shook, Doug; Raman, Jai; Jeevanandam, Valluvan; DuPont, Frank; Fox, John; Mor-Avi, Victor; Lang, Roberto M

    2008-12-01

    Recently, a novel real-time 3-dimensional (3D) matrix-array transesophageal echocardiographic (3D-MTEE) probe was found to be highly effective in the evaluation of native mitral valves (MVs) and other intracardiac structures, including the interatrial septum and left atrial appendage. However, the ability to visualize prosthetic valves using this transducer has not been evaluated. Moreover, the diagnostic accuracy of this new technology has never been validated against surgical findings. This study was designed to (1) assess the quality of 3D-MTEE images of prosthetic valves and (2) determine the potential value of 3D-MTEE imaging in the preoperative assessment of valvular pathology by comparing images with surgical findings. Eighty-seven patients undergoing clinically indicated transesophageal echocardiography were studied. In 40 patients, 3D-MTEE images of prosthetic MVs, aortic valves (AVs), and tricuspid valves (TVs) were scored for the quality of visualization. For both MVs and AVs, mechanical and bioprosthetic valves, the rings and leaflets were scored individually. In 47 additional patients, intraoperative 3D-MTEE diagnoses of MV pathology obtained before initiating cardiopulmonary bypass were compared with surgical findings. For the visualization of prosthetic MVs and annuloplasty rings, quality was superior compared with AV and TV prostheses. In addition, 3D-MTEE imaging had 96% agreement with surgical findings. Three-dimensional matrix-array transesophageal echocardiographic imaging provides superb imaging and accurate presurgical evaluation of native MV pathology and prostheses. However, the current technology is less accurate for the clinical assessment of AVs and TVs. Fast acquisition and immediate online display will make this the modality of choice for MV surgical planning and postsurgical follow-up.

  3. Thyroid gland visualization with 3D/4D ultrasound: integrated hands-on imaging in anatomical dissection laboratory.

    PubMed

    Carter, John L; Patel, Ankura; Hocum, Gabriel; Benninger, Brion

    2017-05-01

    In teaching anatomy, clinical imaging has been utilized to supplement the traditional dissection laboratory, promoting education through visualization of the spatial relationships of anatomical structures. Viewing the thyroid gland using 3D/4D ultrasound can be valuable to physicians as well as students learning anatomy. The objective of this study was to investigate the perceptions of first-year medical students regarding the integration of 3D/4D ultrasound visualization of spatial anatomy during anatomical education. 108 first-year medical students were introduced to 3D/4D ultrasound imaging of the thyroid gland through a detailed 20-min tutorial taught in small-group format. Students then practiced 3D/4D ultrasound imaging on volunteers and donor cadavers before assessment through acquisition and identification of the thyroid gland on at least three instructor-verified images. A post-training survey was administered to assess student impressions. All students visualized the thyroid gland using 3D/4D ultrasound. Survey results showed that 88.0% of students strongly agreed or agreed that 3D/4D ultrasound is useful for revealing the thyroid gland and surrounding structures, and 87.0% rated the experience "Very Easy" or "Easy", demonstrating the benefits and ease of use of including 3D/4D ultrasound in anatomy courses. When asked whether 3D/4D ultrasound is useful in teaching the structure and surrounding anatomy of the thyroid gland, students overwhelmingly responded "Strongly Agree" or "Agree" (90.2%). This study revealed that 3D/4D ultrasound was successfully used and preferred over 2D ultrasound by medical students during anatomy dissection courses to accurately identify the thyroid gland. In addition, 3D/4D ultrasound may nurture and further reinforce the stereostructural spatial relationships of the thyroid gland taught during anatomy dissection.

  4. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
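
    As a rough illustration of the split the abstract describes, the sketch below processes hypothetical building records (names, attributes, and data are invented for illustration): semantic questions are answered by plain query/filter operations, while the geometry is handled in a map-reduce style, mapping each record to a (key, area) pair and reducing by key.

```python
from functools import reduce

# Hypothetical building records: semantic attributes plus a footprint
# polygon, mimicking the separation of semantic and 3D geometry data.
buildings = [
    {"name": "A", "district": "east", "height": 21.0,
     "footprint": [(0, 0), (10, 0), (10, 10), (0, 10)]},
    {"name": "B", "district": "east", "height": 35.5,
     "footprint": [(0, 0), (20, 0), (20, 5), (0, 5)]},
    {"name": "C", "district": "west", "height": 12.0,
     "footprint": [(0, 0), (8, 0), (8, 8), (0, 8)]},
]

def shoelace_area(polygon):
    """Area of a simple polygon via the shoelace formula."""
    n = len(polygon)
    s = sum(polygon[i][0] * polygon[(i + 1) % n][1]
            - polygon[(i + 1) % n][0] * polygon[i][1] for i in range(n))
    return abs(s) / 2.0

# Semantic analysis: a plain query/filter over stored attributes.
tall = [b["name"] for b in buildings if b["height"] > 20.0]

# Geometry analysis in map-reduce style: map each record to
# (district, footprint_area), then reduce (sum) by key.
mapped = [(b["district"], shoelace_area(b["footprint"])) for b in buildings]
area_by_district = reduce(
    lambda acc, kv: {**acc, kv[0]: acc.get(kv[0], 0.0) + kv[1]},
    mapped, {})

print(tall)               # buildings taller than 20 m
print(area_by_district)   # total footprint area per district
```

    In a real NoSQL deployment the map and reduce steps would run distributed over document shards; the single-process version above only shows the shape of the computation.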

  5. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  6. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    user study. This is done in the context of a rich visual analytics interface containing coordinated views with 2D and 3D visualizations and... the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a... virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture,

  7. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and its challenges in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING moving pathlets provide an intuition of velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow an advanced method for intelligent, time-dependent seeding is used building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D provides many new challenges. With the implementation of a seeding strategy for 3D one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the rendering through raytracing of the volume and regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. 
For this the silhouette based on the angle of neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach has many non-trivial problems of joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179 [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670 [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D visualization of groundwater Flow. In Proceedings of IAMG 2015 Freiberg, pp. 813-822
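
    The silhouette extraction mentioned above can be sketched simply: an edge belongs to the boundary silhouette when its two adjacent faces fold by more than some angle. The code below is a minimal illustration of that idea (the threshold, mesh, and function names are assumptions, not STRING's implementation), comparing face normals across each shared edge of a triangle mesh.

```python
import numpy as np

def face_normal(verts, face):
    """Unit normal of a triangle given by three vertex indices."""
    a, b, c = (np.asarray(verts[i], dtype=float) for i in face)
    n = np.cross(b - a, c - a)
    return n / np.linalg.norm(n)

def silhouette_edges(verts, faces, angle_deg=30.0):
    """Return edges whose two adjacent triangles fold by more than
    angle_deg (consistent counter-clockwise winding is assumed)."""
    edge_faces = {}
    for fi, face in enumerate(faces):
        for k in range(3):
            e = tuple(sorted((face[k], face[(k + 1) % 3])))
            edge_faces.setdefault(e, []).append(fi)
    cos_thresh = np.cos(np.radians(angle_deg))
    edges = []
    for e, fs in edge_faces.items():
        if len(fs) == 2:
            n1 = face_normal(verts, faces[fs[0]])
            n2 = face_normal(verts, faces[fs[1]])
            if np.dot(n1, n2) < cos_thresh:  # normals differ by a large angle
                edges.append(e)
    return edges

# A "tent" of two triangles sharing edge (0, 1) with a 45-degree fold.
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, -1, 1)]
faces = [(0, 1, 2), (1, 0, 3)]
print(silhouette_edges(verts, faces))  # the fold edge is reported
```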

  8. Bone suppression in CT angiography data by region-based multiresolution segmentation

    NASA Astrophysics Data System (ADS)

    Blaffert, Thomas; Wiemker, Rafael; Lin, Zhong Min

    2003-05-01

    Multi-slice CT (MSCT) scanners have the advantage of high and isotropic image resolution, which broadens the range of examinations for CT angiography (CTA). A very important method to present the large amount of high-resolution 3D data is visualization by maximum intensity projections (MIP). A problem with MIP projections in angiography is that bones often hide the vessels of interest, especially the skull and vertebral column. Software tools for a manual selection of bone regions and their suppression in the MIP are available, but processing is time-consuming and tedious. A highly computer-assisted or even fully automated suppression of bones would considerably speed up the examination and probably increase the number of examined cases. In this paper we investigate the suppression (or removal) of bone regions in 3D CT data sets for vascular examinations of the head with a visualization of the carotids and the circle of Willis.
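
    The core MIP operation, and why bright bone hides dimmer vessels, is easy to demonstrate on a toy volume. The sketch below uses a simple intensity threshold to suppress "bone", a deliberate simplification of the region-based multiresolution segmentation the paper actually investigates; the array values are invented for illustration.

```python
import numpy as np

# Toy "CT volume": a bright bone-like voxel and a dimmer
# contrast-filled vessel voxel lying on the same projection ray.
vol = np.zeros((4, 4, 4), dtype=np.uint8)
vol[1, 2, 2] = 200   # bone-like intensity
vol[3, 2, 2] = 90    # vessel intensity

# Maximum intensity projection along axis 0 (the viewing direction):
mip = vol.max(axis=0)
print(mip[2, 2])     # the bone voxel dominates and hides the vessel

# Naive bone suppression: zero out voxels above a bone threshold
# before projecting, so the vessel becomes visible in the MIP.
masked = np.where(vol >= 150, 0, vol)
mip_suppressed = masked.max(axis=0)
print(mip_suppressed[2, 2])   # the vessel now shows through
```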

  9. A Novel and Freely Available Interactive 3d Model of the Internal Carotid Artery.

    PubMed

    Valera-Melé, Marc; Puigdellívol-Sánchez, Anna; Mavar-Haramija, Marija; Juanes-Méndez, Juan A; San-Román, Luis; de Notaris, Matteo; Prats-Galino, Alberto

    2018-03-05

    We describe a new and freely available 3D interactive model of the intracranial internal carotid artery (ICA) and the skull base that also allows users to display and compare its main segment classifications. High-resolution 3D human angiography (isometric voxel size 0.36 mm) and Computed Tomography angiography images were exported to Virtual Reality Modeling Language (VRML) format for processing in a 3D software platform and embedding in a 3D Portable Document Format (PDF) document that can be freely downloaded at http://diposit.ub.edu/dspace/handle/2445/112442 and runs under Acrobat Reader on Mac and Windows computers and Windows 10 tablets. The 3D-PDF allows for visualisation and interaction through JavaScript-based functions (including zoom, rotation, selective visualization and transparency of structures, or a predefined sequence view of the main segment classifications if desired). The ICA and its main branches and loops, the Gasserian ganglion, the petrolingual ligament and the proximal and distal dural rings within the skull base environment (anterior and posterior clinoid processes, sella turcica, ethmoid and sphenoid bones, orbital fossae) may be visualized from different perspectives. This interactive 3D-PDF provides virtual views of the ICA and becomes an innovative tool to improve the understanding of the neuroanatomy of the ICA and surrounding structures.

  10. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
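
    The anaglyphic principle the authors rely on is compact enough to show directly: take two slightly displaced views and route the left eye's image into the red channel and the right eye's into green and blue, so red-cyan glasses separate them again. The sketch below (with invented toy images; it is not AViz code) composes such a red-cyan anaglyph.

```python
import numpy as np

def anaglyph(left, right):
    """Red-cyan anaglyph: red channel from the left-eye image,
    green and blue channels from the right-eye image."""
    out = np.empty(left.shape[:2] + (3,), dtype=left.dtype)
    out[..., 0] = left[..., 0]    # red   <- left eye
    out[..., 1] = right[..., 1]   # green <- right eye
    out[..., 2] = right[..., 2]   # blue  <- right eye
    return out

# Two slightly displaced views of the same feature (as RGB arrays):
h, w = 4, 6
left = np.zeros((h, w, 3), dtype=np.uint8)
right = np.zeros((h, w, 3), dtype=np.uint8)
left[:, 2, :] = 255    # feature at column 2 in the left view
right[:, 3, :] = 255   # same feature shifted to column 3 on the right

img = anaglyph(left, right)   # red ghost at col 2, cyan ghost at col 3
```

    Through red-cyan glasses the horizontal displacement between the two ghosts is fused into apparent depth, which is exactly what makes the technique viable on any desktop display.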

  11. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  12. 3D documentation of footwear impressions and tyre tracks in snow with high resolution optical surface scanning.

    PubMed

    Buck, Ursula; Albertini, Nicola; Naether, Silvio; Thali, Michael J

    2007-09-13

    The three-dimensional documentation of footwear and tyre impressions in snow offers an opportunity to capture fine detail for identification beyond what photographs alone provide. For this purpose, up to now, different casting methods have been used. Casting of footwear impressions in snow has always been a difficult assignment. This work demonstrates that the non-destructive method of 3D optical surface scanning is suitable for the three-dimensional documentation of impressions in snow. The new method delivers more detailed results of higher accuracy than the conventional casting techniques. The results of this easy-to-use and mobile 3D optical surface scanner were very satisfactory in different meteorological and snow conditions. The method is also suitable for impressions in soil, sand or other materials. In addition to the side-by-side comparison, the automatic comparison of the 3D models and the computation of deviations and accuracy of the data simplify the examination and deliver objective and reliable results. The results can be visualized efficiently. Data exchange between investigating authorities at a national or an international level can be achieved easily with electronic data carriers.
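
    The "computation of deviations" between two 3D models mentioned above typically reduces to nearest-neighbour distances between point clouds. A minimal brute-force sketch (invented toy data; production tools use KD-trees and mesh-based distances) might look like:

```python
import numpy as np

def deviations(scan, reference):
    """For each scanned point, the distance to its nearest
    reference point (brute-force nearest neighbour)."""
    scan = np.asarray(scan, dtype=float)
    ref = np.asarray(reference, dtype=float)
    # Pairwise distance matrix via broadcasting, then row-wise minimum.
    d = np.linalg.norm(scan[:, None, :] - ref[None, :, :], axis=2)
    return d.min(axis=1)

ref = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]            # reference model
scan = [[0, 0, 0.1], [1, 0, 0], [0, 1.2, 0]]        # scanned impression
dev = deviations(scan, ref)
print(dev.max())   # worst-case deviation between the two models
```

    Summary statistics of such deviations (maximum, mean, colour-coded maps) are what make the comparison objective rather than a purely visual side-by-side judgement.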

  13. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

    generated using the Zygote Human Anatomy 3-D model (3). Use of a reference anatomy independent of personal identification, such as Zygote, allows Visual...Zygote Human Anatomy 3D Model, 2010. http://www.zygote.com/ (accessed July 26, 2011). 4. Khronos Group Web site. Khronos to Create New Open Standard for...understanding of the information at hand. In order to fulfill the medical illustration track, I completed a concentration in science, focusing on human

  14. Prevalence of amblyopia and refractive errors in an unscreened population of children.

    PubMed

    Polling, Jan-Roelof; Loudon, Sjoukje E; Klaver, Caroline C W

    2012-11-01

    To describe the frequency of refractive errors and amblyopia in unscreened children aged 2 months to 12 years from a rural town in Poland. Five hundred ninety-one children were identified by medical records and examined in a standardized manner. Visual acuity was measured using LogMAR charts; refractive error was determined using retinoscopy or autorefraction after cycloplegia. Myopia was defined as spherical equivalent (SE) ≤ -0.50 D, emmetropia as SE between -0.5 D and +0.5 D, mild hyperopia as SE between +0.5 D and +2.0 D, and high hyperopia as SE ≥ +2.0 D. Amblyopia was classified as best-corrected visual acuity ≥0.3 (≤ 20/40) LogMAR, in combination with a 2 LogMAR line difference between the two eyes and the presence of an amblyogenic factor. Refractive errors ranged from 84.2% in children aged up to 2 years to 75.5% in those aged 10 to 12 years. Refractive error showed a myopic shift with age; myopia prevalence increased from 2.2% in those aged 6 to 7 years to 6.3% in those aged 10 to 12 years. Of the examined children, 77 (16.3%) had refractive errors with visual loss; of these, 60 (78%) did not use corrections. The prevalence of amblyopia was 3.1%, and refractive error was the attributed amblyogenic factor in 9 of 13 (69%) children. Refractive errors are common in Caucasian children and often remain undiagnosed. The prevalence of amblyopia was three times higher in this unscreened population compared with screened populations. Greater awareness of these common treatable visual conditions in children is warranted.
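
    The diagnostic thresholds quoted in the record translate directly into a small classification helper. This is an illustrative sketch only; the handling of values exactly at +0.5 D and +2.0 D is an assumption, since the study's definitions state a closed boundary only for myopia (SE ≤ -0.50 D).

```python
def classify_se(se_diopters):
    """Classify a spherical equivalent (SE) refraction, in diopters,
    using the study's categories. Boundary handling at exactly +0.5 D
    and +2.0 D is an assumption where the source is ambiguous."""
    if se_diopters <= -0.5:
        return "myopia"           # SE <= -0.50 D
    if se_diopters < 0.5:
        return "emmetropia"       # -0.5 D < SE < +0.5 D
    if se_diopters < 2.0:
        return "mild hyperopia"   # +0.5 D <= SE < +2.0 D
    return "high hyperopia"       # SE >= +2.0 D

print(classify_se(-9.2))   # strongly myopic SE
```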

  15. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  17. Sandia MEMS Visualization Tools v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.

    This is a revision to the Sandia MEMS Visualization Tools. It replaces all previous versions. New features in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While there exist some files on the CD that are used in conjunction with the software package AutoCAD, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.

  18. Phakic Intraocular Collamer Lens (Visian ICL) Implantation for Correction of Myopia in Spectacle-Aversive Special Needs Children.

    PubMed

    Tychsen, Lawrence; Faron, Nicholas; Hoekel, James

    2017-03-01

    A subset of children with high anisometropia or isoametropia and neurobehavioral disorders have chronic difficulties with spectacle or contact lens wear. We report the results of refractive surgery in a series of these children treated using bilateral or unilateral intraocular collamer lens (Visian ICL) implantation for moderate to high myopia. Prospective nonrandomized cohort study. Clinical course and outcome data were collated prospectively for 40 implanted eyes in 23 children (mean age 10.2 ± 5.3 years; range, 1.8-17 years). Myopia ranged from -3.0 to -14.5 diopters (D), mean -9.2 ± 3.5 D. Goal refraction was plano to +1 D. Correction was achieved by sulcus implantation of a Visian ICL (STAAR Surgical, Monrovia, California, USA) under general anesthesia. Mean follow-up was 15.1 months (range, 6-22 months). Thirty-five eyes (88%) were corrected to within ±1.0 D of goal refraction; the other 5 (12%) were corrected to within 1.5 D. Uncorrected distance visual acuity improved substantially in all eyes (from mean 20/1050 [logMAR 1.72] to mean 20/42 [logMAR 0.48]). Spherical regression at last follow-up was an average of +0.59 D. Visuomotor comorbidities (eg, amblyopia, nystagmus, foveopathy, optic neuropathy) accounted for residual postoperative subnormal visual acuity. Thirteen of the 23 children (57%) had a neurobehavioral disorder (eg, developmental delay/intellectual disability/mental retardation, Down syndrome, cerebral palsy, autism spectrum disorder). Eighty-five percent (11/13) of those children were reported to have enhanced visual awareness, attentiveness, or social interactions. Endothelial cell density was measurable in 6 cooperative children (10 eyes), showing an average 1% decline. Central corneal thickness, measured in all children, increased an average of 8 μm. Two children (8%) required unplanned return to the operating room on the first postoperative day to alleviate pupillary block caused by a nonpatent iridotomy. No other complications were encountered. Visian ICL implantation improves visual function in special needs children who have moderate to high myopia and difficulties wearing glasses or contact lenses. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles in collecting aerial imagery and in generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  20. A Gaze Independent Brain-Computer Interface Based on Visual Stimulation through Closed Eyelids

    NASA Astrophysics Data System (ADS)

    Hwang, Han-Jeong; Ferreria, Valeria Y.; Ulrich, Daniel; Kilic, Tayfun; Chatziliadis, Xenofon; Blankertz, Benjamin; Treder, Matthias

    2015-10-01

    A classical brain-computer interface (BCI) based on visual event-related potentials (ERPs) is of limited application value for paralyzed patients with severe oculomotor impairments. In this study, we introduce a novel gaze independent BCI paradigm that can be potentially used for such end-users because visual stimuli are administered on closed eyelids. The paradigm involved verbally presented questions with 3 possible answers. Online BCI experiments were conducted with twelve healthy subjects, where they selected one option by attending to one of three different visual stimuli. It was confirmed that typical cognitive ERPs can be evidently modulated by the attention of a target stimulus in eyes-closed and gaze independent condition, and further classified with high accuracy during online operation (74.58% ± 17.85 s.d.; chance level 33.33%), demonstrating the effectiveness of the proposed novel visual ERP paradigm. Also, stimulus-specific eye movements observed during stimulation were verified as reflex responses to light stimuli, and they did not contribute to classification. To the best of our knowledge, this study is the first to show the possibility of using a gaze independent visual ERP paradigm in an eyes-closed condition, thereby providing another communication option for severely locked-in patients suffering from complex ocular dysfunctions.

  1. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  2. Tetrahedral Hohlraum Visualization and Pointings

    NASA Astrophysics Data System (ADS)

    Klare, K. A.; Wallace, J. M.; Drake, D.

    1997-11-01

    In designing experiments for Omega, the tetrahedral hohlraum (a sphere with four holes) can make full use of all 60 beams. There are some complications: the beams must clear the laser entrance hole (LEH), must miss a central capsule, absolutely must not go out the other LEHs, and should distribute in the interior of the hohlraum to maximize the uniformity of irradiation on the capsule while keeping reasonable laser spot sizes. We created a 15-offset coordinate system with which an IDL program computes clearances, writes a file for QuickDraw 3D (QD3D) visualization, and writes input for the viewfactor code RAYNA IV. Visualizing and adjusting the parameters by eye gave more reliable results than computer optimization. QD3D images permitted quick live rotations to determine offsets. The clearances obtained ensured safe operation and good physics. The viewfactor code computes the initial irradiation of the hohlraum and capsule or of a uniform hohlraum source with the loss through the four LEHs and shows a high degree of uniformity with both, better for lasers because this deposits more energy near the LEHs to compensate for the holes.

  3. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  4. U.S. Geological Survey: A synopsis of Three-dimensional Modeling

    USGS Publications Warehouse

    Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.

    2011-01-01

    The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely allowing the ability to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase the need to generate, verify, analyze, interpret, confirm, store, and distribute scientific information and products using 3-D/4-D visualization, analysis, modeling tools, and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.

  5. LANGUAGE LABORATORY RESEARCH STUDIES IN NEW YORK CITY HIGH SCHOOLS--A DISCUSSION OF THE PROGRAM AND THE FINDINGS.

    ERIC Educational Resources Information Center

    LORGE, SARAH W.

    TO INVESTIGATE THE EFFECTS OF THE LANGUAGE LABORATORY ON FOREIGN LANGUAGE LEARNING, THE BUREAU OF AUDIO-VISUAL INSTRUCTION OF NEW YORK CITY CONDUCTED EXPERIMENTS IN 1ST-, 2D-, AND 3D-YEAR HIGH SCHOOL CLASSES. THE FIRST EXPERIMENT, WHICH COMPARED CONVENTIONALLY TAUGHT CLASSES WITH GROUPS HAVING SOME LABORATORY TEACHING, SHOWED THAT GROUPS WITH…

  6. 4-D Visualization of Seismic and Geodetic Data of the Big Island of Hawai'i

    NASA Astrophysics Data System (ADS)

    Burstein, J. A.; Smith-Konter, B. R.; Aryal, A.

    2017-12-01

    For decades Hawai'i has served as a natural laboratory for studying complex interactions between magmatic and seismic processes. Investigating characteristics of these processes, as well as the crustal response to major Hawaiian earthquakes, requires a synthesis of seismic and geodetic data and models. Here, we present a 4-D visualization of the Big Island of Hawai'i that investigates geospatial and temporal relationships of seismicity, seismic velocity structure, and GPS crustal motions to known volcanic and seismically active features. Using the QPS Fledermaus visualization package, we compile 90 m resolution topographic data from NASA's Shuttle Radar Topography Mission (SRTM) and 50 m resolution bathymetric data from the Hawaiian Mapping Research Group (HMRG) with a high-precision earthquake catalog of more than 130,000 events from 1992-2009 [Matoza et al., 2013] and a 3-D seismic velocity model of Hawai'i [Lin et al., 2014] based on seismic data from the Hawaiian Volcano Observatory (HVO). Long-term crustal motion vectors are integrated into the visualization from HVO GPS time-series data. These interactive data sets reveal well-defined seismic structure near the summit areas of Mauna Loa and Kilauea volcanoes, where high Vp and high Vp/Vs anomalies at 5-12 km depth, as well as clusters of low magnitude (M < 3.5) seismicity, are observed. These areas of high Vp and high Vp/Vs are interpreted as mafic dike complexes and the surrounding seismic clusters are associated with shallow magma processes. GPS data are also used to help identify seismic clusters associated with the steady crustal detachment of the south flank of Kilauea's East Rift Zone. We also investigate the fault geometry of the 2006 M6.7 Kiholo Bay earthquake event by analyzing elastic dislocation deformation modeling results [Okada, 1985] and HVO GPS and seismic data of this event. 
We demonstrate the 3-D fault mechanisms of the Kiholo Bay main shock as a combination of strike-slip and dip-slip components (net slip 0.55 m) delineating a 30 km east-west striking, southward-dipping fault plane, occurring at 39 km depth. This visualization serves as a resource for advancing scientific analyses of Hawaiian seismic processes, as well as an interactive educational tool for demonstrating the geospatial and geophysical structure of the Big Island of Hawai'i.

  7. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  8. Structural-functional relationships between eye orbital imaging biomarkers and clinical visual assessments

    NASA Astrophysics Data System (ADS)

    Yao, Xiuya; Chaganti, Shikha; Nabar, Kunal P.; Nelson, Katrina; Plassard, Andrew; Harrigan, Rob L.; Mawn, Louise A.; Landman, Bennett A.

    2017-02-01

    Eye diseases and visual impairment affect millions of Americans and induce billions of dollars in annual economic burdens. Expounding upon existing knowledge of eye diseases could lead to improved treatment and disease prevention. This research investigated the relationship between structural metrics of the eye orbit and visual function measurements in a cohort of 470 patients from a retrospective study of ophthalmology records for patients (with thyroid eye disease, orbital inflammation, optic nerve edema, glaucoma, intrinsic optic nerve disease), clinical imaging, and visual function assessments. Orbital magnetic resonance imaging (MRI) and computed tomography (CT) images were retrieved and labeled in 3D using multi-atlas label fusion. Based on the 3D structures, both traditional radiology measures (e.g., Barrett index, volumetric crowding index, optic nerve length) and novel volumetric metrics were computed. Using stepwise regression, the associations between structural metrics and visual field scores (visual acuity, functional acuity, visual field, functional field, and functional vision) were assessed. Across all models, the explained variance was reasonable (R2 0.1-0.2) but highly significant (p < 0.001). Instead of analyzing a specific pathology, this study aimed to analyze data across a variety of pathologies. This approach yielded a general model for the connection between orbital structural imaging biomarkers and visual function.

  9. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion capability from ascii-formatted LiDAR point cloud data to LiDAR image data whose pixel value corresponds to the altitude measured by LiDAR, visualization of 2D/3D images in various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.
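    The conversion step described above, from ascii (x, y, z) points to an image whose pixel value is the LiDAR-measured altitude, can be sketched in a few lines. This is an illustrative NumPy version, not the authors' IDL implementation; the function name, cell size, and the keep-the-highest-return rule are assumptions.

```python
import numpy as np

def points_to_altitude_image(points, cell_size=1.0):
    """Rasterize (x, y, z) LiDAR points into a 2D grid whose pixel
    value is the highest altitude (z) falling in each grid cell."""
    pts = np.asarray(points, dtype=float)
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    # Map coordinates to integer grid cells relative to the data extent
    col = ((x - x.min()) / cell_size).astype(int)
    row = ((y - y.min()) / cell_size).astype(int)
    img = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, alt in zip(row, col, z):
        if np.isnan(img[r, c]) or alt > img[r, c]:
            img[r, c] = alt  # keep the highest return per cell
    return img

# Example: four points rasterized with 1 m cells
demo = [(0.2, 0.3, 5.0), (0.8, 0.4, 7.0), (1.5, 0.2, 2.0), (0.1, 1.6, 9.0)]
img = points_to_altitude_image(demo)
```

    Cells with no returns stay NaN, which a real toolbox would typically fill by interpolation before building-footprint extraction.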

  10. Working memory encoding delays top-down attention to visual cortex.

    PubMed

    Scalf, Paige E; Dux, Paul E; Marois, René

    2011-09-01

    The encoding of information from one event into working memory can delay high-level, central decision-making processes for subsequent events [e.g., Jolicoeur, P., & Dell'Acqua, R. The demonstration of short-term consolidation. Cognitive Psychology, 36, 138-202, 1998, doi:10.1006/cogp.1998.0684]. Working memory, however, is also believed to interfere with the deployment of top-down attention [de Fockert, J. W., Rees, G., Frith, C. D., & Lavie, N. The role of working memory in visual selective attention. Science, 291, 1803-1806, 2001, doi:10.1126/science.1056496]. It is, therefore, possible that, in addition to delaying central processes, the engagement of working memory encoding (WME) also postpones perceptual processing as well. Here, we tested this hypothesis with time-resolved fMRI by assessing whether WME serially postpones the action of top-down attention on low-level sensory signals. In three experiments, participants viewed a skeletal rapid serial visual presentation sequence that contained two target items (T1 and T2) separated by either a short (550 msec) or long (1450 msec) SOA. During single-target runs, participants attended and responded only to T1, whereas in dual-target runs, participants attended and responded to both targets. To determine whether T1 processing delayed top-down attentional enhancement of T2, we examined T2 BOLD response in visual cortex by subtracting the single-task waveforms from the dual-task waveforms for each SOA. When the WME demands of T1 were high (Experiments 1 and 3), T2 BOLD response was delayed at the short SOA relative to the long SOA. This was not the case when T1 encoding demands were low (Experiment 2). We conclude that encoding of a stimulus into working memory delays the deployment of attention to subsequent target representations in visual cortex.

  11. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    NASA Astrophysics Data System (ADS)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopic technology has developed very rapidly and has been employed in many different fields, such as entertainment. Due to the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a novel survey of stereoscopic entertainment aspects is presented, discussing the significant development of 3D cinema, the major development of 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, suffer visual fatigue, complain about the unavailability of 3D content, and/or report some sickness. Therefore, we discuss stereoscopic visual discomfort and to what extent viewers experience eye fatigue while watching 3D content or playing 3D games. The solutions suggested in the literature for this problem are discussed.

  12. Hands-on guide for 3D image creation for geological purposes

    NASA Astrophysics Data System (ADS)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors.
The advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb are presented that define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist to present his/her field or hand specimen photographs in a much more fashionable 3D way for future publications or conference posters.
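    The red-cyan overlay principle described above can be sketched as a plain channel recombination: red from the left image, green and blue (cyan) from the right. This is a minimal NumPy illustration assuming 8-bit RGB stereo pairs; the function name is made up here, and dedicated tools like Stereophotomaker apply more careful color mixing to reduce ghosting and retinal rivalry.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine a stereo pair into a red-cyan anaglyph:
    red channel from the left image, green and blue from the right."""
    left = np.asarray(left_rgb, dtype=np.uint8)
    right = np.asarray(right_rgb, dtype=np.uint8)
    out = right.copy()          # right eye supplies cyan (G + B)
    out[..., 0] = left[..., 0]  # left eye supplies red
    return out

# Tiny 1x1-pixel demo pair
left = np.array([[[200, 10, 10]]], dtype=np.uint8)
right = np.array([[[10, 150, 180]]], dtype=np.uint8)
ana = make_anaglyph(left, right)
```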

  13. High-resolution three-dimensional visualization of the rat spinal cord microvasculature by synchrotron radiation micro-CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jianzhong; Cao, Yong; Wu, Tianding

    2014-10-15

    Purpose: Understanding the three-dimensional (3D) morphology of the spinal cord microvasculature has been limited by the lack of an effective high-resolution imaging technique. In this study, synchrotron radiation microcomputed tomography (SRµCT), a novel imaging technique based on absorption imaging, was evaluated with regard to the detection of the 3D morphology of the rat spinal cord microvasculature. Methods: Ten Sprague-Dawley rats were used in this ex vivo study. After contrast agent perfusion, their spinal cords were isolated and scanned using conventional x-rays, conventional micro-CT (CµCT), and SRµCT. Results: Based on contrast agent perfusion, the microvasculature of the rat spinal cord was clearly visualized for the first time ex vivo in 3D by means of SRµCT scanning. Compared to conventional imaging techniques, SRµCT achieved higher resolution 3D vascular imaging, with the smallest vessel that could be distinguished approximately 7.4 μm in diameter. Additionally, a 3D pseudocolored image of the spinal cord microvasculature was generated in a single session of SRµCT imaging, which was conducive to detailed observation of the vessel morphology. Conclusions: The results of this study indicated that SRµCT scanning could provide higher resolution images of the vascular network of the spinal cord. This modality also has the potential to serve as a powerful imaging tool for the investigation of morphology changes in the 3D angioarchitecture of the neurovasculature in preclinical research.

  14. Space Radiation Monitoring Center at SINP MSU

    NASA Astrophysics Data System (ADS)

    Kalegaev, Vladimir; Barinova, Wera; Barinov, Oleg; Bobrovnikov, Sergey; Dolenko, Sergey; Mukhametdinova, Ludmila; Myagkova, Irina; Nguen, Minh; Panasyuk, Mikhail; Shiroky, Vladimir; Shugay, Julia

    2015-04-01

    Data on energetic particle fluxes from Russian satellites have been collected at the Space Monitoring Data Center at Moscow State University in near real-time mode. The web portal http://smdc.sinp.msu.ru/ provides operational information on the radiation state of near-Earth space. Operational data come from the ELECTRO-L1 and Meteor-M2 space missions. High-resolution data on energetic electron fluxes from MSU's VERNOV satellite, with RELEC instrumentation on board, are also available. Specific tools allow the visual representation of the satellite orbit in 3D space simultaneously with particle flux variations. Concurrent operational data coming from other spacecraft (ACE, GOES, SDO) and from the Earth's surface (geomagnetic indices) are used to represent the geomagnetic and radiation state of the near-Earth environment. The internet portal http://swx.sinp.msu.ru provides access to the actual data characterizing the level of solar activity and the geomagnetic and radiation conditions in the heliosphere and the Earth's magnetosphere in real-time mode. Operational forecasting services automatically generate alerts on particle flux enhancements above threshold values, both for SEP and relativistic electrons, using data from LEO and GEO orbits. Models of the space environment working in autonomous mode are used to generalize the information obtained from different missions for the whole magnetosphere. On-line applications created on the basis of these models provide short-term forecasting of SEP particle and relativistic electron fluxes at GEO and LEO, and online forecasting of the Dst and Kp indices up to 1.5 hours ahead. Velocities of high-speed solar wind streams at the Earth's orbit are estimated with a lead time of 3-4 days. The visualization system provides representation of experimental and modeling data in 2D and 3D.

  15. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to a traditional local visualization solution, our solution doesn't require users to install extra software or download the whole volume dataset from the PACS server. With this web-based solution, users can access the 3D medical image visualization service wherever the internet is available.

  16. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

    In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of Java programming, three examples of automatic visualization from a database containing 3-D geometry are given. The first example is used to create basic geometries. The second example is used to create cylinders with a defined start point and end point. The third example is used to process data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.
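    The second example, cylinders defined by a start and an end point, maps naturally onto VRML's Cylinder node, which is y-aligned by default and therefore needs a translation to the segment midpoint and a rotation of the y-axis onto the segment direction. A minimal sketch of that node-generation step follows (the authors used Java/JSP; this Python version and its function name are illustrative only).

```python
import numpy as np

def vrml_cylinder(p0, p1, radius=0.5):
    """Emit a VRML97 Transform node for a cylinder spanning p0 -> p1.
    VRML cylinders are y-aligned, so rotate y onto the segment axis."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    h = float(np.linalg.norm(d))     # cylinder height = segment length
    mid = (p0 + p1) / 2.0            # place cylinder at segment midpoint
    u = d / h
    y = np.array([0.0, 1.0, 0.0])
    axis = np.cross(y, u)            # rotation axis: y x direction
    s = float(np.linalg.norm(axis))
    angle = float(np.arctan2(s, float(np.dot(y, u))))
    axis = axis / s if s > 1e-9 else np.array([1.0, 0.0, 0.0])
    return (f"Transform {{ translation {mid[0]:g} {mid[1]:g} {mid[2]:g} "
            f"rotation {axis[0]:g} {axis[1]:g} {axis[2]:g} {angle:g} "
            f"children Shape {{ geometry Cylinder {{ radius {radius:g} "
            f"height {h:g} }} }} }}")

node = vrml_cylinder([0, 0, 0], [0, 0, 2])
```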

  17. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  18. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  19. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany), in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany), developed a commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily convey their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of the velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets move along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented, based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, the same method removes pathlets during the visualization to avoid visual clutter. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly onto the pathlets or displayed behind the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focuses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for an extension to a full 3D tool.
Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts enabling also direct exploration and analysis of large 3D flow fields the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.
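    The Lagrangian view described above amounts to integrating pathlet positions along the velocity field, dx/dt = v(x, t). A minimal sketch using explicit Euler time stepping is shown below; STRING's actual FPM-based seeding, removal, and integration are certainly more sophisticated, and all names here are illustrative.

```python
import numpy as np

def advect_pathlet(seed, velocity, dt=0.1, steps=50):
    """Trace one pathlet by explicit Euler integration of dx/dt = v(x, t),
    i.e. the Lagrangian view used for pathline-based flow visualization."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    for k in range(steps):
        # Advance the pathlet along the local, possibly time-dependent, velocity
        pos = pos + dt * np.asarray(velocity(pos, k * dt))
        path.append(pos.copy())
    return np.array(path)

# Steady rigid-body rotation about the origin: v = (-y, x)
rotation = lambda p, t: (-p[1], p[0])
path = advect_pathlet([1.0, 0.0], rotation, dt=0.01, steps=100)
```

    Explicit Euler slowly spirals outward on this rotational field; production codes use higher-order schemes (e.g. Runge-Kutta) for accurate pathlines.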

  20. Appearance of bony lesions on 3-D CT reconstructions: a case study in variable renderings

    NASA Astrophysics Data System (ADS)

    Mankovich, Nicholas J.; White, Stuart C.

    1992-05-01

    This paper discusses conventional 3-D reconstruction for bone visualization and presents a case study to demonstrate the dangers of performing 3-D reconstructions without careful selection of the bone threshold. The visualization of midface bone lesions directly from axial CT images is difficult because of the complex anatomic relationships. Three-dimensional reconstructions were made from the CT data to provide graphic images showing lesions in relation to adjacent facial bones. Most commercially available 3-D image reconstruction software requires that the radiologist or technologist identify a threshold image intensity value that can be used to distinguish bone from other tissues. Much has been made of the many disadvantages of this technique, but it continues as the predominant method in producing 3-D pictures for clinical use. This paper is intended to provide a clear demonstration for the physician of the caveats that should accompany 3-D reconstructions. We present a case of recurrent odontogenic keratocyst in the anterior maxilla where the 3-D reconstructions, made with different bone thresholds (windows), are compared to the resected specimen. A DMI 3200 computer was used to convert the scan data from a GE 9800 CT into a 3-D shaded surface image. Threshold values were assigned to (1) generate the most clinically pleasing image, (2) produce maximum theoretical fidelity (using the midpoint image intensity between average cortical bone and average soft tissue), and (3) cover stepped threshold intensities between these two methods. We compared the computer-rendered lesions with the resected specimen and noted measurement errors of up to 44 percent introduced by inappropriate bone threshold levels. We suggest clinically applicable standardization techniques in the 3-D reconstruction as well as cautionary language that should accompany the 3-D images.
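    The "maximum theoretical fidelity" rule (method 2 above) picks the threshold as the midpoint between the average cortical-bone and average soft-tissue intensities. A minimal sketch of that computation follows, with a toy 1-D array and illustrative masks standing in for real CT data and region samples.

```python
import numpy as np

def midpoint_bone_threshold(volume, bone_mask, soft_mask):
    """Compute the bone/soft-tissue threshold as the midpoint between the
    mean cortical-bone and mean soft-tissue intensities, then segment."""
    t = 0.5 * (volume[bone_mask].mean() + volume[soft_mask].mean())
    return t, volume >= t  # voxels at or above t are classified as bone

# Toy 1-D "volume": two soft-tissue samples (~40) and two bone samples (~400)
vol = np.array([35.0, 45.0, 390.0, 410.0])
soft = np.array([True, True, False, False])
bone = ~soft
t, seg = midpoint_bone_threshold(vol, bone, soft)
```

    Shifting t away from this midpoint grows or shrinks the apparent bone surface, which is exactly the source of the measurement errors the case study documents.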
