Sample records for underestimated 3D visualization

  1. Visual body size norms and the under‐detection of overweight and obesity

    PubMed Central

    Robinson, E.

    2017-01-01

    Summary. Objectives: The weight status of men with overweight and obesity tends to be visually underestimated, but visual recognition of female overweight and obesity has not been formally examined. The aims of the present studies were to test whether people can accurately recognize both male and female overweight and obesity and to examine a visual norm-based explanation for why weight status is underestimated. Methods: The present studies examine whether both male and female overweight and obesity are visually underestimated (Study 1), whether body size norms predict when underestimation of weight status occurs (Study 2) and whether visual exposure to heavier body weights adjusts visual body size norms and results in underestimation of weight status (Study 3). Results: The weight status of men and women with overweight and obesity was consistently visually underestimated (Study 1). Body size norms predicted underestimation of weight status (Study 2) and in part explained why visual exposure to heavier body weights caused underestimation of overweight (Study 3). Conclusions: The under-detection of overweight and obesity may have been caused in part by exposure to larger body sizes, resulting in an upwards shift in the range of body sizes that are perceived as visually ‘normal’. PMID:29479462

  2. Validation of an inertial measurement unit for the measurement of jump count and height.

    PubMed

    MacDonald, Kerry; Bahr, Roald; Baltich, Jennifer; Whittaker, Jackie L; Meeuwisse, Willem H

    2017-05-01

    To validate the use of an inertial measurement unit (IMU) for the collection of total jump count and to assess the validity of an IMU for the measurement of jump height against 3-D motion analysis. Cross-sectional validation study. 3-D motion-capture laboratory and field-based settings. Thirteen elite adolescent volleyball players. Participants performed structured drills, played a 4-set volleyball match and performed twelve counter-movement jumps. Jump counts from structured drills and match play were validated against visual counts from recorded video. Jump height during the counter-movement jumps was validated against concurrent 3-D motion-capture data. The IMU device captured more total jumps (1032) than visual inspection (977) during match play. During structured practice, device jump count sensitivity was strong (96.8%) while specificity was perfect (100%). The IMU underestimated jump height compared to 3-D motion capture, with mean differences for maximal and submaximal jumps of 2.5 cm (95% CI: 1.3 to 3.8) and 4.1 cm (95% CI: 3.1 to 5.1), respectively. The IMU offers a valid measuring tool for jump count. Although the IMU underestimates maximal and submaximal jump height, our findings demonstrate its practical utility for field-based measurement of jump load. Copyright © 2016 Elsevier Ltd. All rights reserved.
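    The jump-count validity figures above are standard diagnostic-accuracy arithmetic. The sketch below shows how sensitivity and specificity are computed from detection tallies; the counts are hypothetical, chosen only so the percentages match those reported in the abstract.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of visually confirmed jumps the device also detected."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of non-jump movements the device correctly ignored."""
    return true_neg / (true_neg + false_pos)

# Hypothetical structured-drill tallies (not the paper's raw data):
tp, fn = 968, 32   # real jumps detected vs missed
tn, fp = 150, 0    # non-jumps correctly ignored vs falsely counted

print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # -> 96.8%
print(f"specificity = {specificity(tn, fp):.1%}")  # -> 100.0%
```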

  3. 3D Pathology Volumetric Technique: A Method for Calculating Breast Tumour Volume from Whole-Mount Serial Section Images

    PubMed Central

    Clarke, G. M.; Murray, M.; Holloway, C. M. B.; Liu, K.; Zubovits, J. T.; Yaffe, M. J.

    2012-01-01

    Tumour size, most commonly measured by maximum linear extent, remains a strong predictor of survival in breast cancer. Tumour volume, proportional to the number of tumour cells, may be a more accurate surrogate for size. We describe a novel “3D pathology volumetric technique” for lumpectomies and compare it with 2D measurements. Volume renderings and total tumour volume are computed from digitized whole-mount serial sections using custom software tools. Results are presented for two lumpectomy specimens selected for tumour features which may challenge accurate measurement of tumour burden with conventional, sampling-based pathology: (1) an infiltrative pattern admixed with normal breast elements; (2) a localized invasive mass separated from the in situ component by benign tissue. Spatial relationships between key features (tumour foci, close or involved margins) are clearly visualized in volume renderings. Invasive tumour burden can be underestimated using conventional pathology, compared to the volumetric technique (infiltrative pattern: 30% underestimation; localized mass: 3% underestimation for invasive tumour, 44% for the in situ component). Tumour volume approximated from 2D measurements (i.e., maximum linear extent), assuming elliptical geometry, was seen to overestimate volume compared to the 3D volumetric calculation (by a factor of 7 for the infiltrative pattern; 1.5 for the localized invasive mass). PMID:23320179
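    The two size estimates being compared are a Cavalieri-type sum over serial sections and an ellipsoid approximation from linear extents. A minimal sketch, with hypothetical section areas and diameters; the degree of overestimation depends entirely on tumour shape, so no specific factor is reproduced here.

```python
import math

def volume_from_sections(areas_mm2, thickness_mm):
    # 3D volumetric estimate: sum tumour cross-sectional areas over
    # whole-mount serial sections (a Cavalieri-type sum).
    return sum(areas_mm2) * thickness_mm

def volume_from_linear_extent(d1_mm, d2_mm, d3_mm):
    # 2D-style approximation: assume an ellipsoid whose three diameters
    # are the measured maximum linear extents: V = (pi/6) * d1 * d2 * d3.
    return math.pi / 6.0 * d1_mm * d2_mm * d3_mm

# Hypothetical infiltrative tumour: long extents, sparse tumour per slice.
section_areas = [120.0, 180.0, 210.0, 190.0, 110.0]  # tumour area per section, mm^2
v3d = volume_from_sections(section_areas, thickness_mm=4.0)
v2d = volume_from_linear_extent(30.0, 25.0, 20.0)
print(v3d, v2d, v2d / v3d)  # the ellipsoid estimate exceeds the section sum
```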

  4. Visual and tactile length matching in spatial neglect.

    PubMed

    Bisiach, Edoardo; McIntosh, Robert D; Dijkerman, H Chris; McClements, Kevin I; Colombo, Mariarosa; Milner, A David

    2004-01-01

    Previous studies have shown that many patients with spatial neglect underestimate the horizontal extent of leftwardly located shapes (presented on screen or on paper) relative to rightwardly located shapes. This has been used to help explain their leftward biases in line bisection. In the present study we tested patients with right hemisphere damage, either with or without neglect, on a comparable length matching task, but using 3-dimensional objects. The task was executed first visually, without tactile contact, and second through touch, without vision. In both sense modalities, we found that patients with neglect, but not those without, tended to underestimate leftward located objects relative to rightward located objects, differing significantly in this regard from healthy subjects. However, these lateral biases were not as frequent or as pronounced as in previous studies using 2-D visual shapes. Despite the similar asymmetries in the two sense modalities, we found only a small correlation between them, and clear double dissociations were observed among our patients. We conclude that leftward length underestimation cannot be attributed to any single cause. First, it cannot be entirely due to impairments in the visual pathways, such as hemianopia and/or processing biases, since the disorder is also seen in the tactile modality. At the same time, however, length underestimation phenomena cannot be fully explained as a disruption of a supramodal central size processor, since they can occur in either vision or touch alone. Our data fit best with a multiple-factor model in which some patients show leftward length underestimation for modality-specific reasons, while others do so due to a higher-level disruption of size judgements.

  5. In Vitro Validation of Real-Time Three-Dimensional Color Doppler Echocardiography for Direct Measurement of Proximal Isovelocity Surface Area in Mitral Regurgitation

    PubMed Central

    Little, Stephen H.; Igo, Stephen R.; Pirat, Bahar; McCulloch, Marti; Hartley, Craig J.; Nosé, Yukihiko; Zoghbi, William A.

    2012-01-01

    The 2-dimensional (2D) color Doppler (2D-CD) proximal isovelocity surface area (PISA) method assumes a hemispheric flow convergence zone to estimate transvalvular flow. Recently developed 3-dimensional (3D)-CD can directly visualize PISA shape and surface area without geometric assumptions. To validate a novel method to directly measure PISA using real-time 3D-CD echocardiography, a circulatory loop with an ultrasound imaging chamber was created to model mitral regurgitation (MR). Thirty-two different regurgitant flow conditions were tested using symmetric and asymmetric flow orifices. Three-dimensional PISA was reconstructed from a hand-held real-time 3D-CD data set. Regurgitant volume was derived using both 2D-CD and 3D-CD PISA methods, and each was compared against a flowmeter standard. The circulatory loop achieved regurgitant volume within the clinical range of MR (11 to 84 ml). Three-dimensional PISA geometry reflected the 2D geometry of the regurgitant orifice. Correlation between the 2D-PISA method regurgitant volume and actual regurgitant volume was significant (r2 = 0.47, p <0.001). Mean 2D-PISA regurgitant volume underestimate was 19.1 ± 25 ml (2 SDs). For the 3D-PISA method, correlation with actual regurgitant volume was significant (r2 = 0.92, p <0.001), with a mean regurgitant volume underestimate of 2.7 ± 10 ml (2 SDs). The 3D-PISA method showed less regurgitant volume underestimation for all orifice shapes and regurgitant volumes tested. In conclusion, in an in vitro model of MR, 3D-CD was used to directly measure PISA without geometric assumption. Compared with conventional 2D-PISA, regurgitant volume was more accurate when derived from 3D-PISA across symmetric and asymmetric orifices within a broad range of hemodynamic flow conditions. PMID:17493476
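    The geometric assumption at issue can be made concrete. The hemispheric 2D PISA formula gives an instantaneous flow rate Q = 2·pi·r²·Va, while the 3D method uses the directly measured shell area (Q = A·Va). A sketch with hypothetical values, where the true isovelocity shell is larger than the hemispheric estimate, as can happen with an asymmetric orifice:

```python
import math

def flow_rate_2d_pisa(radius_cm, aliasing_velocity_cm_s):
    # Conventional 2D-CD PISA: assume a hemispheric isovelocity shell,
    # so surface area = 2*pi*r^2 and Q = 2*pi*r^2 * Va (cm^3/s).
    return 2.0 * math.pi * radius_cm**2 * aliasing_velocity_cm_s

def flow_rate_3d_pisa(surface_area_cm2, aliasing_velocity_cm_s):
    # 3D-CD PISA: use the directly measured isovelocity surface area
    # with no geometric assumption: Q = A * Va.
    return surface_area_cm2 * aliasing_velocity_cm_s

# Hypothetical asymmetric orifice: the measured shell area is 30% larger
# than the hemisphere inferred from the radius alone.
r, va = 0.8, 40.0                        # cm, cm/s
a_measured = 1.3 * 2.0 * math.pi * r**2  # cm^2 (assumed value)
print(flow_rate_2d_pisa(r, va))          # hemispheric assumption
print(flow_rate_3d_pisa(a_measured, va)) # direct measurement, higher here
```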

  6. Feasibility of 4D flow MR imaging of the brain with either Cartesian y-z radial sampling or k-t SENSE: comparison with 4D Flow MR imaging using SENSE.

    PubMed

    Sekine, Tetsuro; Amano, Yasuo; Takagi, Ryo; Matsumura, Yoshio; Murai, Yasuo; Kumita, Shinichiro

    2014-01-01

    A drawback of time-resolved 3-dimensional phase contrast magnetic resonance (4D Flow MR) imaging is its lengthy scan time for clinical application in the brain. We assessed the feasibility for flow measurement and visualization of 4D Flow MR imaging using Cartesian y-z radial sampling and that using k-t sensitivity encoding (k-t SENSE) by comparison with the standard scan using SENSE. Sixteen volunteers underwent 3 types of 4D Flow MR imaging of the brain using a 3.0-tesla scanner. As the standard scan, 4D Flow MR imaging with SENSE was performed first, followed by 2 types of acceleration scan: with Cartesian y-z radial sampling and with k-t SENSE. We measured peak systolic velocity (PSV) and blood flow volume (BFV) in 9 arteries and the percentage of particles arriving from the emitter plane at the target plane in 3 arteries, visually graded image quality in 9 arteries, and compared these quantitative and visual data between the standard scan and each acceleration scan. 4D Flow MR imaging examinations were completed in all but one volunteer, who did not undergo the last examination because of headache. Each acceleration scan reduced scan time by 50% compared with the standard scan. The k-t SENSE imaging underestimated PSV and BFV (P < 0.05). There were significant correlations for PSV and BFV between the standard scan and each acceleration scan (P < 0.01). The percentage of particles reaching the target plane did not differ between the standard scan and each acceleration scan. For visual assessment, y-z radial sampling deteriorated the image quality of the 3 arteries. Cartesian y-z radial sampling is feasible for measuring flow, and k-t SENSE offers sufficient flow visualization; both allow acquisition of 4D Flow MR imaging with a shorter scan time.

  7. Validity of the Remote Food Photography Method against Doubly Labeled Water among Minority Preschoolers

    PubMed Central

    Nicklas, Theresa; Saab, Rabab; Islam, Noemi G.; Wong, William; Butte, Nancy; Schulin, Rebecca; Liu, Yan; Apolzan, John W.; Myers, Candice A.; Martin, Corby K.

    2017-01-01

    Objective: To determine the validity of energy intake (EI) estimations made using the Remote Food Photography Method (RFPM) compared to the doubly labeled water (DLW) method in minority preschool children in a free-living environment. Methods: Seven days of food intake and spot urine samples excluding first void collections for DLW analysis were obtained on 39 3-to-5-year-old Hispanic and African American children. Using an iPhone, caregivers captured before and after pictures of the child’s intake; pictures were wirelessly transmitted to trained raters who estimated portion size using existing visual estimation procedures, and energy and macronutrients were calculated. Paired t tests, mean differences and Bland-Altman limits of agreement were calculated. Results: The mean EI using the RFPM was 1,191 ± 256 kcal/d and 1,412 ± 220 kcal/d by the DLW method, resulting in a mean underestimate of 222 kcal/d (−15.6%; p<0.0001) that was consistent regardless of intake. The RFPM underestimated EI by 28.5% in 34 children and overestimated EI by 15.6% in 5 children. Conclusions: The RFPM underestimated total EI when compared to the DLW method among preschoolers. Further refinement of the RFPM is needed for assessing the EI of young children. PMID:28758370
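    The agreement statistics named above, a paired mean difference (bias) and Bland-Altman 95% limits of agreement, reduce to a few lines. A sketch with hypothetical paired intakes, not the study's data:

```python
from statistics import mean, stdev

def bland_altman(method_a, method_b):
    # Paired differences between two methods; bias is their mean and the
    # 95% limits of agreement are bias +/- 1.96 * SD of the differences.
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical kcal/d values for five children (RFPM vs DLW):
rfpm = [1100.0, 1250.0, 980.0, 1300.0, 1190.0]
dlw  = [1350.0, 1420.0, 1210.0, 1500.0, 1400.0]
bias, (lo, hi) = bland_altman(rfpm, dlw)
print(f"bias = {bias:.0f} kcal/d, 95% LoA = ({lo:.0f}, {hi:.0f})")
```

A negative bias here corresponds to the photography method underestimating intake relative to DLW.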

  8. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  9. Numerosity underestimation with item similarity in dynamic visual display.

    PubMed

    Au, Ricky K C; Watanabe, Katsumi

    2013-01-01

    The estimation of numerosity of a large number of objects in a static visual display is possible even at short durations. Such coarse approximations of numerosity are distinct from subitizing, in which the number of objects can be reported with high precision when a small number of objects are presented simultaneously. The present study examined numerosity estimation of visual objects in dynamic displays and the effect of object similarity on numerosity estimation. In the basic paradigm (Experiment 1), two streams of dots were presented and observers were asked to indicate which of the two streams contained more dots. Streams consisting of dots that were identical in color were judged as containing fewer dots than streams where the dots were different colors. This underestimation effect for identical visual items disappeared when the presentation rate was slower (Experiment 1) or the visual display was static (Experiment 2). In Experiments 3 and 4, in addition to the numerosity judgment task, observers performed an attention-demanding task at fixation. Task difficulty influenced observers' precision in the numerosity judgment task, but the underestimation effect remained evident irrespective of task difficulty. These results suggest that identical or similar visual objects presented in succession might induce substitution among themselves, leading to an illusion that there are fewer items overall, and that exploiting attentional resources does not eliminate the underestimation effect.

  10. Validity of the Remote Food Photography Method Against Doubly Labeled Water Among Minority Preschoolers.

    PubMed

    Nicklas, Theresa; Saab, Rabab; Islam, Noemi G; Wong, William; Butte, Nancy; Schulin, Rebecca; Liu, Yan; Apolzan, John W; Myers, Candice A; Martin, Corby K

    2017-09-01

    The aim of this study was to determine the validity of energy intake (EI) estimations made using the remote food photography method (RFPM) compared to the doubly labeled water (DLW) method in minority preschool children in a free-living environment. Seven days of food intake and spot urine samples excluding first void collections for DLW analysis were obtained on thirty-nine 3- to 5-year-old Hispanic and African American children. Using an iPhone, caregivers captured before and after pictures of each child's intake, pictures were wirelessly transmitted to trained raters who estimated portion size using existing visual estimation procedures, and energy and macronutrients were calculated. Paired t tests, mean differences, and Bland-Altman limits of agreement were performed. The mean EI was 1,191 ± 256 kcal/d using the RFPM and 1,412 ± 220 kcal/d using the DLW method, resulting in a mean underestimate of 222 kcal/d (-15.6%; P < 0.0001) that was consistent regardless of intake. The RFPM underestimated EI by -28.5% in 34 children and overestimated EI by 15.6% in 5 children. The RFPM underestimated total EI when compared to the DLW method among preschoolers. Further refinement of the RFPM is needed for assessing the EI of young children. © 2017 The Obesity Society.

  11. Quantification of Left Ventricular Linear, Areal and Volumetric Dimensions: A Phantom and in Vivo Comparison of 2-D and Real-Time 3-D Echocardiography with Cardiovascular Magnetic Resonance.

    PubMed

    Polte, Christian L; Lagerstrand, Kerstin M; Gao, Sinsia A; Lamm, Carl R; Bech-Hanssen, Odd

    2015-07-01

    Two-dimensional echocardiography and real-time 3-D echocardiography have been reported to underestimate human left ventricular volumes significantly compared with cardiovascular magnetic resonance. We investigated the ability of 2-D echocardiography, real-time 3-D echocardiography and cardiovascular magnetic resonance to delineate dimensions of increasing complexity (diameter-area-volume) in a multimodality phantom model and in vivo, with the aim of elucidating the main cause of underestimation. All modalities were able to delineate phantom dimensions with high precision. In vivo, 2-D and real-time 3-D echocardiography underestimated short-axis end-diastolic linear and areal and all left ventricular volumetric dimensions significantly compared with cardiovascular magnetic resonance, but not short-axis end-systolic linear and areal dimensions. Underestimation increased successively from linear to volumetric left ventricular dimensions. When analyzed according to the same principles, 2-D and real-time 3-D echocardiography provided similar left ventricular volumes. In conclusion, echocardiographic underestimation of left ventricular dimensions is due mainly to inherent technical differences in the ability to differentiate trabeculated from compact myocardium. Identical endocardial border definition criteria are needed to minimize differences between the modalities and to ensure better comparability in clinical practice. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  12. Visual field progression in glaucoma: total versus pattern deviation analyses.

    PubMed

    Artes, Paul H; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-12-01

    To compare visual field progression with total and pattern deviation analyses in a prospective longitudinal study of patients with glaucoma and healthy control subjects. A group of 101 patients with glaucoma (168 eyes) with early to moderately advanced visual field loss at baseline (average mean deviation [MD], -3.9 dB) and no clinical evidence of media opacity were selected from a prospective longitudinal study on visual field progression in glaucoma. Patients were examined with static automated perimetry at 6-month intervals for a median follow-up of 9 years. At each test location, change was established with event and trend analyses of total and pattern deviation. The event analyses compared each follow-up test to a baseline obtained by averaging the first two tests, and visual field progression was defined as deterioration beyond the 5th percentile of test-retest variability at three test locations, observed on three consecutive tests. The trend analyses were based on point-wise linear regression, and visual field progression was defined as statistically significant deterioration (P < 5%) worse than -1 dB/year at three locations, confirmed by independently omitting the last and the penultimate observation. The incidence and the time-to-progression were compared between total and pattern deviation analyses. To estimate the specificity of the progression analyses, identical criteria were applied to visual fields obtained in 102 healthy control subjects, and the rate of visual field improvement was established in the patients with glaucoma and the healthy control subjects. With both event and trend methods, pattern deviation analyses classified approximately 15% fewer eyes as having progressed than did the total deviation analyses. In eyes classified as progressing by both the total and pattern deviation methods, total deviation analyses tended to detect progression earlier than the pattern deviation analyses. A comparison of the changes observed in MD and the visual fields' general height (estimated by the 85th percentile of the total deviation values) confirmed that change in the glaucomatous eyes almost always comprised a diffuse component. Pattern deviation analyses may therefore underestimate the true amount of glaucomatous visual field progression, particularly when there is no clinical evidence of increasing media opacity. Clinicians should have access to both total and pattern deviation analyses to make informed decisions on visual field progression in glaucoma.
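    The trend criterion described above, point-wise linear regression with deterioration worse than -1 dB/year at P < 5%, can be sketched for a single test location. The sensitivity series below is hypothetical, and the full study criterion additionally requires three such locations with confirmation by omitting the last and penultimate observations.

```python
import math

def ols_slope_t(x, y):
    # Ordinary least-squares slope and its t statistic (slope / SE).
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    sxx = sum((xi - xm) ** 2 for xi in x)
    sxy = sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = ym - slope * xm
    sse = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(sse / (n - 2) / sxx)
    return slope, slope / se

# Hypothetical total deviation values (dB) at one location, tested twice a year:
years = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
td_db = [-2.0, -2.8, -3.9, -4.5, -5.9, -6.6, -7.8]

slope, t = ols_slope_t(years, td_db)
# Two-sided P < 0.05 with n - 2 = 5 degrees of freedom means |t| > 2.571.
progressing = slope < -1.0 and abs(t) > 2.571
print(slope, t, progressing)
```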

  13. Underestimating numerosity of items in visual search tasks.

    PubMed

    Cassenti, Daniel N; Kelley, Troy D; Ghirardelli, Thomas G

    2010-10-01

    Previous research on numerosity judgments addressed attended items, while the present research addresses underestimation for unattended items in visual search tasks. One potential cause of underestimation for unattended items is that estimates of quantity may depend on viewing a large portion of the display within foveal vision. Another theory follows from the occupancy model: estimating quantity of items in greater proximity to one another increases the likelihood of an underestimation error. Three experimental manipulations addressed aspects of underestimation for unattended items: the size of the distracters, the distance of the target from fixation, and whether items were clustered together. Results suggested that the underestimation effect for unattended items was best explained within a Gestalt grouping framework.

  14. 3-D Lagrangian-based investigations of the time-dependent cloud cavitating flows around a Clark-Y hydrofoil with special emphasis on shedding process analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Huai-yu; Long, Xin-ping; Ji, Bin; Liu, Qi; Bai, Xiao-rui

    2018-02-01

    In the present paper, the unsteady cavitating flow around a 3-D Clark-Y hydrofoil is numerically investigated with the filter-based density correction model (FBDCM) for turbulence and the Zwart-Gerber-Belamri (ZGB) cavitation model. A reasonable agreement is obtained between the numerical and experimental results. To study the complex flow structures more directly, a 3-D Lagrangian technique is developed, which can provide particle tracks and the 3-D Lagrangian coherent structures (LCSs). Combined with traditional methods based on the Eulerian viewpoint, this technique is used to analyze the attached cavity evolution and the re-entrant jet behavior in detail. At stage I, the collapse of the previous shedding cavity and the growth of a new attached cavity, the significant influence of the collapse on both the suction and pressure sides is captured quite well by the 3-D LCSs but is underestimated by traditional methods such as the iso-surface of the Q-criterion. As a special kind of LCS, arching LCSs are observed in the wake, induced by counter-rotating vortices. At stage II, with the development of the re-entrant jet, the influence of the cavitation on the pressure side is still not negligible, and with this 3-D Lagrangian technique the tracks of the re-entrant jet are visualized clearly, moving from the trailing edge to the leading edge. Finally, at stage III, the re-entrant jet collides with the mainstream and induces the shedding. The cavitation evolution and the re-entrant jet movement over the whole cycle are well visualized with the 3-D Lagrangian technique. Moreover, the comparison between the LCSs obtained with the 2-D and 3-D Lagrangian techniques indicates the advantages of the latter. The 3-D Lagrangian technique is thus a promising tool for the investigation of complex cavitating flows.

  15. Real-time three-dimensional color Doppler echocardiography for characterizing the spatial velocity distribution and quantifying the peak flow rate in the left ventricular outflow tract

    NASA Technical Reports Server (NTRS)

    Tsujino, H.; Jones, M.; Shiota, T.; Qin, J. X.; Greenberg, N. L.; Cardon, L. A.; Morehead, A. J.; Zetts, A. D.; Travaglini, A.; Bauer, F.

    2001-01-01

    Quantification of flow with pulsed-wave Doppler assumes a "flat" velocity profile in the left ventricular outflow tract (LVOT), an assumption that observation refutes. Recent development of real-time, three-dimensional (3-D) color Doppler allows one to obtain an entire cross-sectional velocity distribution of the LVOT, which is not possible using conventional 2-D echo. In an animal experiment, cross-sectional color Doppler images of the LVOT at peak systole were acquired and digitally transferred to a computer to visualize and quantify spatial velocity distributions and peak flow rates. Markedly skewed profiles, with higher velocities toward the septum, were consistently observed. Reference peak flow rates by electromagnetic flowmeter correlated well with 3-D peak flow rates (r = 0.94), but with an anticipated underestimation. Real-time 3-D color Doppler echocardiography was capable of determining cross-sectional velocity distributions and peak flow rates, demonstrating the utility of this new method for better understanding and quantifying blood flow phenomena.

  16. Saline tracer visualized with three-dimensional electrical resistivity tomography: Field-scale spatial moment analysis

    USGS Publications Warehouse

    Singha, Kamini; Gorelick, Steven M.

    2005-01-01

    Cross-well electrical resistivity tomography (ERT) was used to monitor the migration of a saline tracer in a two-well pumping-injection experiment conducted at the Massachusetts Military Reservation in Cape Cod, Massachusetts. After injecting 2200 mg/L of sodium chloride for 9 hours, ERT data sets were collected from four wells every 6 hours for 20 days. More than 180,000 resistance measurements were collected during the tracer test. Each ERT data set was inverted to produce a sequence of 3-D snapshot maps that track the plume. In addition to the ERT experiment, a pumping test and an infiltration test were conducted to estimate horizontal and vertical hydraulic conductivity values. Using modified moment analysis of the electrical conductivity tomograms, the mass, center of mass, and spatial variance of the imaged tracer plume were estimated. Although the tomograms provide valuable insights into field-scale tracer migration behavior and aquifer heterogeneity, standard tomographic inversion and application of Archie's law to convert electrical conductivities to solute concentration result in underestimation of tracer mass. This underestimation is attributed to (1) reduced measurement sensitivity to electrical conductivity values with distance from the electrodes and (2) spatial smoothing (regularization) from tomographic inversion. The center of mass estimated from the ERT inversions coincided with that given by migration of the tracer plume using 3-D advective-dispersion simulation. The 3-D plumes seen using ERT exhibit greater apparent dispersion than the simulated plumes and greater temporal spreading than observed in field data of concentration breakthrough at the pumping well.
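    The moment analysis named above reduces to weighted sums over the inverted conductivity grid, after converting bulk conductivity to fluid conductivity (and hence concentration) via Archie's law, sigma_bulk = sigma_fluid * phi^m. The porosity, cementation exponent m, and the two-cell toy plume below are all assumptions for illustration.

```python
def fluid_conductivity(sigma_bulk, porosity, m=1.5):
    # Invert Archie's law (no surface conduction assumed);
    # fluid conductivity scales roughly linearly with NaCl concentration.
    return sigma_bulk / porosity**m

def moments(cells):
    # cells: list of (x, y, z, concentration, cell_volume).
    # Zeroth moment = total mass; first moments / mass = center of mass.
    mass = sum(c * v for (_, _, _, c, v) in cells)
    cx = sum(x * c * v for (x, _, _, c, v) in cells) / mass
    cy = sum(y * c * v for (_, y, _, c, v) in cells) / mass
    cz = sum(z * c * v for (_, _, z, c, v) in cells) / mass
    return mass, (cx, cy, cz)

# Two-cell toy plume with equal solute in each cell:
cells = [(0.0, 0.0, 10.0, 500.0, 1.0),
         (2.0, 0.0, 12.0, 500.0, 1.0)]
mass, com = moments(cells)
print(mass, com)  # -> 1000.0 (1.0, 0.0, 11.0)
```

Regularized inversions smear conductivity over many cells, which is why the summed mass from tomograms can come out low even when the center of mass is recovered well.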

  17. Feasibility of using an inversion-recovery ultrashort echo time (UTE) sequence for quantification of glenoid bone loss.

    PubMed

    Ma, Ya-Jun; West, Justin; Nazaran, Amin; Cheng, Xin; Hoenecke, Heinz; Du, Jiang; Chang, Eric Y

    2018-02-02

    To utilize the 3D inversion recovery prepared ultrashort echo time with cones readout (IR-UTE-Cones) MRI technique for direct imaging of lamellar bone, with comparison to the gold standard of computed tomography (CT). CT and MRI were performed on 11 shoulder specimens and three patients. Five specimens had imaging performed before and after glenoid fracture (osteotomy). 2D and 3D volume-rendered CT images were reconstructed, and conventional T1-weighted and 3D IR-UTE-Cones MRI techniques were performed. Glenoid widths and defects were independently measured by two readers using the circle method. Measurements were compared with those made from 3D CT datasets. Paired-sample Student's t tests and intraclass correlation coefficients (ICCs) were calculated. In addition, 2D CT and 3D IR-UTE-Cones MRI datasets were linearly registered, digitally overlaid, and compared in consensus by the two readers. Compared with the reference standard (3D CT), glenoid bone diameter measurements made on 2D CT and 3D IR-UTE-Cones were not significantly different for either reader, whereas T1-weighted images underestimated the diameter (mean difference of 0.18 cm, p = 0.003, and 0.16 cm, p = 0.022, for readers 1 and 2, respectively). However, the mean margin of error for measuring glenoid bone loss was small for all modalities (range, 1.46-3.92%). All measured ICCs were near perfect. Digitally registered 2D CT and 3D IR-UTE-Cones MRI datasets yielded essentially perfect congruity between the two modalities. The 3D IR-UTE-Cones MRI technique selectively visualizes lamellar bone, produces contrast similar to 2D CT imaging, and compares favorably to measurements made using 2D and 3D CT.

  18. The efficacy of a novel mobile phone application for Goldmann ptosis visual field interpretation.

    PubMed

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and the oculoplastic surgeons' estimates were calculated compared with computer software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among the oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
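    The accuracy metric used above is a mean percent error against the software-computed defect. A sketch with hypothetical defect fractions of the chart area (not the study's measurements):

```python
def percent_error(estimated, actual):
    # Unsigned percent error of an estimate relative to the true value.
    return abs(estimated - actual) / actual * 100.0

# Hypothetical superior-defect fractions for three charts:
app_estimates     = [0.42, 0.31, 0.55]   # mobile application readings
surgeon_estimates = [0.30, 0.25, 0.44]   # visual estimates (underestimates)
actual_defects    = [0.41, 0.32, 0.54]   # software-computed ground truth

app_mpe = sum(percent_error(e, a)
              for e, a in zip(app_estimates, actual_defects)) / len(actual_defects)
srg_mpe = sum(percent_error(e, a)
              for e, a in zip(surgeon_estimates, actual_defects)) / len(actual_defects)
print(f"app mean % error = {app_mpe:.2f}, surgeons = {srg_mpe:.2f}")
```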

  19. Contrast-enhanced time-resolved MRA for follow-up of intracranial aneurysms treated with the pipeline embolization device.

    PubMed

    Boddu, S R; Tong, F C; Dehkharghani, S; Dion, J E; Saindane, A M

    2014-01-01

    Endovascular reconstruction and flow diversion by using the Pipeline Embolization Device is an effective treatment for complex cerebral aneurysms. Accurate noninvasive alternatives to DSA for follow-up after Pipeline Embolization Device treatment are desirable. This study evaluated the accuracy of contrast-enhanced time-resolved MRA for this purpose, hypothesizing that contrast-enhanced time-resolved MRA will be comparable with DSA and superior to 3D-TOF MRA. During a 24-month period, 37 Pipeline Embolization Device-treated intracranial aneurysms in 26 patients underwent initial follow-up by using 3D-TOF MRA, contrast-enhanced time-resolved MRA, and DSA. MRA was performed on a 1.5T unit by using 3D-TOF and time-resolved imaging of contrast kinetics. All patients underwent DSA a median of 0 days (range, 0-68) after MRA. Studies were evaluated for aneurysm occlusion, quality of visualization of the reconstructed artery, and measurable luminal diameter of the Pipeline Embolization Device, with DSA used as the reference standard. The sensitivity, specificity, and positive and negative predictive values of contrast-enhanced time-resolved MRA relative to DSA for posttreatment aneurysm occlusion were 96%, 85%, 92%, and 92%. Contrast-enhanced time-resolved MRA demonstrated superior quality of visualization (P = .0001) and a higher measurable luminal diameter (P = .0001) of the reconstructed artery compared with 3D-TOF MRA but no significant difference compared with DSA. Contrast-enhanced time-resolved MRA underestimated the luminal diameter of the reconstructed artery by 0.965 ± 0.497 mm (27% ± 13%) relative to DSA. Contrast-enhanced time-resolved MRA is a reliable noninvasive method for monitoring intracranial aneurysms following flow diversion and vessel reconstruction by using the Pipeline Embolization Device. © 2014 by American Journal of Neuroradiology.
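    The diagnostic statistics reported above follow the standard confusion-matrix definitions, computed against the reference standard (DSA). A minimal sketch, using hypothetical counts rather than the study's raw data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard definitions relative to a reference standard:
    # tp/fn = occluded aneurysms correctly/incorrectly called,
    # tn/fp = non-occluded aneurysms correctly/incorrectly called.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Hypothetical confusion-matrix counts (not the study's data):
m = diagnostic_metrics(tp=24, fp=2, tn=11, fn=1)
```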

  20. [Examinations with the Cardiff Acuity Test].

    PubMed

    Gräf, M; Becker, R; Neff, A; Kaufmann, H

    1996-08-01

    Recently, a new preferential looking (PL) test has been presented for measuring visual acuity in infants and young children (Cardiff Acuity Test, CAT). The PL target is a schematic vanishing picture composed of isoluminant lines with different spatial orientations. Fifty-three healthy children (4-34 months, group 1), 28 children (4-35 months) at risk for amblyopia due to strabismus (group 2), and 19 healthy subjects plus 157 patients (group 3) were tested with the CAT. In group 2 the CAT was compared with the fixation preference test. In group 3 the CAT was compared with a recognition test (Landolt C test). In group 1 the interocular difference of the CAT data was a maximum of 1 dB (70% at 0 dB, 30% at 1 dB; 1 dB corresponds to 1/3 of a so-called octave). Thus, an interocular difference of > 1 dB was considered suggestive of monocular or asymmetrical visual impairment. The maximum value of 6/6 was frequently achieved (RE 44%, LE 36%; > 18 months: RE 57%, LE 46%). In group 2 only 20% of the monolateral strabismic children showed an interocular difference > 1 dB in the CAT. In group 3 we found significant correlations between CAT and Landolt acuity. A ratio of about 1.7/1 between CAT and Landolt acuity remained constant in cataract eyes as compared to healthy eyes. In amblyopic eyes due to strabismus this ratio was 3.7/1. Thus, amblyopia was underestimated with the CAT. Without limiting the examination distance, interocular differences > 1 dB in the CAT occurred in 52% of the strabismic amblyopic patients (potential sensitivity). At a distance of 1 m this rate decreased to 22% (real sensitivity). In conclusion, the CAT clearly lacks sensitivity for strabismic amblyopia. The data suggest that the real sensitivity could be improved by using higher spatial frequencies. The use of familiar shapes instead of gratings as PL targets favorably affects cooperation in 12- to 36-month-old children.

  1. Matching optical flow to motor speed in virtual reality while running on a treadmill

    PubMed Central

    Caramenti, Martina; Lafortuna, Claudio L.; Mugellini, Elena; Abou Khaled, Omar; Bresciani, Jean-Pierre; Dubois, Amandine

    2018-01-01

    We investigated how visual and kinaesthetic/efferent information is integrated for speed perception in running. Twelve moderately trained to trained subjects ran on a treadmill at three different speeds (8, 10, 12 km/h) in front of a moving virtual scene. They were asked to match the visual speed of the scene to their running speed, i.e., the treadmill's speed. For each trial, participants indicated whether the scene was moving slower or faster than they were running. Visual speed was adjusted according to their response using a staircase until the Point of Subjective Equality (PSE) was reached, i.e., until visual and running speed were perceived as equivalent. For all three running speeds, participants systematically underestimated the visual speed relative to their actual running speed. Indeed, the speed of the visual scene had to exceed the actual running speed in order to be perceived as equivalent to the treadmill speed. The underestimation of visual speed was speed-dependent, and the percentage of underestimation relative to running speed ranged from 15% at 8 km/h to 31% at 12 km/h. We suggest that this should be taken into consideration to improve the design of attractive treadmill-mediated virtual environments, enhancing engagement in physical activity for healthier lifestyles and disease prevention and care. PMID:29641564
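    The adaptive staircase used to find the PSE can be sketched with a simple 1-up/1-down rule, which converges on the 50% point of the psychometric function. All names here are illustrative, and a deterministic "observer" stands in for a real participant:

```python
def staircase_pse(judged_faster, start_speed, step=0.5, reversals_needed=8):
    """1-up/1-down adaptive staircase (illustrative sketch).

    judged_faster(visual_speed) -> True if the participant reports the
    scene moving faster than they are running. Lower the visual speed
    after a "faster" response, raise it after a "slower" response; the
    mean of the reversal points estimates the PSE.
    """
    speed = start_speed
    direction = None
    reversal_speeds = []
    while len(reversal_speeds) < reversals_needed:
        new_direction = -1 if judged_faster(speed) else +1
        if direction is not None and new_direction != direction:
            reversal_speeds.append(speed)  # response flipped: a reversal
        direction = new_direction
        speed += direction * step
    return sum(reversal_speeds) / len(reversal_speeds)

# Deterministic "observer" whose true PSE is 10.0 km/h:
pse = staircase_pse(lambda s: s > 10.0, start_speed=8.0)
```

    A real experiment would add response noise and interleave several staircases; this deterministic observer simply shows that the procedure converges onto the interval bracketing the PSE.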

  3. Systems and Methods for Data Visualization Using Three-Dimensional Displays

    NASA Technical Reports Server (NTRS)

    Davidoff, Scott (Inventor); Djorgovski, Stanislav G. (Inventor); Estrada, Vicente (Inventor); Donalek, Ciro (Inventor)

    2017-01-01

    Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
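    The mapping and visibility steps enumerated above can be sketched in miniature. All names here are illustrative, not the patented system's actual API:

```python
def build_objects(points, mappings):
    """points: one dict per multidimensional data point.
    mappings: {visual_attribute: data_dimension},
              e.g. {"x": "mass", "size": "age"}."""
    objects = []
    for point in points:
        # Determine each 3D object's attributes from the selected mappings.
        obj = {attr: point.get(dim) for attr, dim in mappings.items()}
        # Update a visibility flag: hide objects whose mapped dimensions
        # are missing, mirroring the visibility-dimension step above.
        obj["visible"] = all(point.get(dim) is not None
                             for dim in mappings.values())
        objects.append(obj)
    return objects

objs = build_objects(
    [{"mass": 1.0, "age": 5.0}, {"mass": 2.0, "age": None}],
    {"x": "mass", "size": "age"},
)
```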

  4. Filling gaps in cultural heritage documentation by 3D photography

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.

    2015-08-01

    This contribution promotes 3D photography as an important tool for obtaining objective object information. With World Heritage documentation and Heritage protection mainly in mind, it is a further intention of this paper to stimulate interest in applications of 3D photography among professionals as well as amateurs. In addition, this is an activity report of the international CIPA task group 3. The main part of this paper starts with "Digging the treasure of existing international 3D photography", which applies not only to tangible but also to intangible Cultural Heritage. 3D photography clearly supports the recording, visualization, preservation and restoration of architectural and archaeological objects. Therefore the use of 3D photography in Cultural Heritage should increase on an international level. The presented samples in 3D represent a voluminous, partly "forgotten treasure" of international archives for 3D photography. The next chapter is on "Promoting new 3D photography in Cultural Heritage". Though 3D photographs are a well-established basic photographic and photogrammetric tool, even suited to provide "near real" documentation, they remain a matter of research and improvement. Besides dedicated 3D cameras, even single-lens cameras are well suited for photographic 3D documentation purposes in Cultural Heritage. Currently at the Faculty of Civil Engineering of the University of Applied Sciences Magdeburg-Stendal, low-altitude aerial photography is exposed from a maximum height of 13 m using a hand-held carbon telescope rod. The use of this "huge selfie stick" is also recommended internationally for exposing high-resolution 3D photography of monuments under expedition conditions. In addition to the carbon rod, a captive balloon and a hexacopter UAV platform have recently been put into use, mainly to take better synoptic (extremely low altitude, ground truth) aerial photography.
Additional experiments with respect to "easy geometry" and to multistage concepts of 3D photographs in Cultural Heritage have just started. Furthermore, a revised list of the 3D visualization principles, claiming completeness, has been compiled. Among other points, the outlook includes the following: (1) It is highly recommended to list every historical and current stereo view with relevance to Cultural Heritage in a global Monument Information System (MIS), as in Google Earth. (2) 3D photographs seem very well suited to complete, or at least partly replace, manual archaeological sketches; here the still underestimated 3D effect will be demonstrated, which even allows, e.g., the spatial perception of extremely small scratches. (3) A consistent engagement with 3D technology even seems to indicate that we are currently experiencing the beginning of a new age of "real 3D PC screens", which could at least complement or even partly replace conventional 2D screens; here the spatial visualization is achieved without glasses in an all-around vitreous body. In this respect, today's widespread lasered crystals showing monuments can be identified as "early bird" 3D products which, due to low resolution and contrast and the lack of color, currently recall the status of the invention of photography by Niépce (1827), but seem to promise a great future in 3D Cultural Heritage documentation as well. (4) Last but not least, 3D printers seem more and more to be conquering the IT market, with evident international competition.

  5. [Spatial characteristics analysis of Huizhou-Styled Village based on ideal ecosystem model and 3D landscape indices: A case in Chengkan, China].

    PubMed

    Yao, Meng Yuan; Yan, Shi Jiang; Wu, Yan Lan

    2016-12-01

    Huizhou-Styled Village is a typical representative of the traditional Chinese ancient villages. It preserves plentiful information on the regional culture and its ecological connotations. The Huizhou style is the apotheosis of harmony between the ancient Chinese people and nature. The research on and protection of Huizhou-Styled Village plays a very important role in the fields of ecology, geography, architecture and esthetics. This paper took Chengkan Village of Anhui Province as an example, and proposed a new model of an ideal ecosystem grounded in theories of Feng-shui and the psychological field. A new method of characterizing 3D landscape indices was introduced to explore the spatial patterns of Huizhou-Styled Village and the functionality of the composited landscape components in a quantitative way. The results indicated that Chengkan Village showed a spatially composited pattern of "mountain-forest-village-river-forest". It formed an ideal settlement ring structure with human architecture in the center and natural landscape around it, in both the horizontal and vertical dimensions. The traditional method, based on projected distance, biased the landscape indices, for example by underestimating the area and distance of landscape patches. The 3D landscape index of average patch area was 6.7% higher than the 2D landscape index. For forest lands, the increase in area proportion under the 3D index was 1.0% higher than under the 2D index. The area proportions of the other landscapes decreased, especially the artificial landscapes such as construction and cropland. The area and perimeter metrics were underestimated, whereas the shape and diversity metrics were overestimated; as a result, the dominance of natural patches was underestimated and that of artificial patches overestimated in the analysis of landscape pattern. 
The 3D landscape indices showed that the natural elements and their combination in the Chengkan Village ecosystem fulfill a better ecological function, and that the key elements and the composited landscape ecosystem preserve higher stability, connectivity and aggregation. This quantitative confirmation shows that the Huizhou-Styled Village, represented by Chengkan Village, is an ideal ecosystem.
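    The underestimation of patch area by 2D (planimetric) indices can be illustrated with the standard slope correction for surface area. This is a generic formula, not necessarily the exact 3D index used in the study:

```python
import math

def surface_area_3d(projected_area_2d, mean_slope_deg):
    # On terrain of uniform slope, the true surface area exceeds the
    # map-projected area by a factor of 1 / cos(slope), so planimetric
    # (2D) indices underestimate patch area on sloped ground.
    return projected_area_2d / math.cos(math.radians(mean_slope_deg))
```

    A 30-degree slope, for example, inflates the true surface area by about 15% over its planimetric projection, while flat ground (0 degrees) leaves it unchanged.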

  6. Accuracy and efficiency of computer-aided anatomical analysis using 3D visualization software based on semi-automated and automated segmentations.

    PubMed

    An, Gao; Hong, Li; Zhou, Xiao-Bing; Yang, Qiong; Li, Mei-Qing; Tang, Xiang-Yang

    2017-03-01

    We investigated and compared the functionality of two 3D visualization software packages, provided by a CT vendor and a third-party vendor, respectively. Using surgical anatomical measurement as a baseline, we evaluated the accuracy of 3D visualization and verified their utility in computer-aided anatomical analysis. The study cohort consisted of 50 adult cadavers fixed with the classical formaldehyde method. The computer-aided anatomical analysis was based on CT images (in DICOM format) acquired by helical scan with contrast enhancement, using a CT vendor-provided 3D visualization workstation (Syngo) and a third-party 3D visualization software package (Mimics) installed on a PC. Automated and semi-automated segmentations were utilized in the 3D visualization workstation and software, respectively. The functionality and efficiency of the automated and semi-automated segmentation methods were compared. Using surgical anatomical measurement as a baseline, the accuracy of 3D visualization based on automated and semi-automated segmentations was quantitatively compared. In semi-automated segmentation, the Mimics 3D visualization software outperformed the Syngo 3D visualization workstation. No significant difference was observed in anatomical data measurement between the Syngo 3D visualization workstation and the Mimics 3D visualization software (P>0.05). Both the Syngo 3D visualization workstation provided by a CT vendor and the Mimics 3D visualization software by a third-party vendor possessed the needed functionality, efficiency and accuracy for computer-aided anatomical analysis. Copyright © 2016 Elsevier GmbH. All rights reserved.

  7. Perception of Stand-on-ability: Do Geographical Slants Feel Steeper Than They Look?

    PubMed

    Hajnal, Alen; Wagman, Jeffrey B; Doyon, Jonathan K; Clark, Joseph D

    2016-07-01

    Past research has shown that haptically perceived surface slant by foot is matched with visually perceived slant by a factor of 0.81. Slopes perceived visually appear shallower than when stood on without looking. We sought to identify the sources of this discrepancy by asking participants to judge whether they would be able to stand on an inclined ramp. In the first experiment, visual perception was compared to pedal perception, in which participants took half a step with one foot onto an occluded ramp. Visual perception closely matched the actual maximal slope angle that one could stand on, whereas pedal perception underestimated it. Participants may have been less stable in the pedal condition while taking half a step onto the ramp. We controlled for this by having participants hold onto a sturdy tripod in the pedal condition (Experiment 2). This did not eliminate the difference between visual and haptic perception, but repeating the task while sitting on a chair did (Experiment 3). Beyond balance requirements, pedal perception may also be constrained by the limited range of motion at the ankle and knee joints while standing. Indeed, when we restricted range of motion with an ankle brace, pedal perception underestimated the affordance (Experiment 4). Implications for ecological theory were offered by discussing the notion of functional equivalence and the role of exploration in perception. © The Author(s) 2016.

  8. Foggy perception slows us down.

    PubMed

    Pretto, Paolo; Bresciani, Jean-Pierre; Rainer, Gregor; Bülthoff, Heinrich H

    2012-10-30

    Visual speed is believed to be underestimated at low contrast, which has been proposed as an explanation of excessive driving speed in fog. Combining psychophysics measurements and driving simulation, we confirm that speed is underestimated when contrast is reduced uniformly for all objects of the visual scene, independently of their distance from the viewer. However, we show that when contrast is reduced more for distant objects, as is the case in real fog, visual speed is actually overestimated, prompting drivers to decelerate. Using an artificial anti-fog (that is, fog characterized by better visibility for distant than for close objects), we demonstrate for the first time that perceived speed depends on the spatial distribution of contrast over the visual scene rather than the global level of contrast per se. Our results cast new light on how reduced visibility conditions affect perceived speed, providing important insight into the human visual system. DOI: http://dx.doi.org/10.7554/eLife.00031.001.

  9. Experimental evidence for improved neuroimaging interpretation using three-dimensional graphic models.

    PubMed

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more precisely than classical cross-sectional images based on a two-dimensional (2D) approach. Eighty participants were assigned to each experimental condition: 2D cross-sectional visualization vs. 3D volumetric visualization. Both groups were matched for age, gender, visual-spatial ability, and previous knowledge of neuroanatomy. Accuracy in identifying brain structures, execution time, and level of confidence in the response were taken as outcome measures. Moreover, interactive effects between the experimental conditions (2D vs. 3D) and factors such as level of competence (novice vs. expert), image modality (morphological and functional), and difficulty of the structures were analyzed. The percentage of correct answers (hit rate) and level of confidence in responses were significantly higher in the 3D visualization condition than in the 2D. In addition, the response time was significantly lower for the 3D visualization condition in comparison with the 2D. The interaction between the experimental condition (2D vs. 3D) and difficulty was significant, and the 3D condition facilitated the location of difficult images more than the 2D condition. 3D volumetric visualization helps to identify brain structures, such as the hippocampus and amygdala, more accurately and rapidly than conventional 2D visualization. This paper discusses the implications of these results with regard to the learning process involved in neuroimaging interpretation. Copyright © 2012 American Association of Anatomists.

  10. A Bayesian model to correct underestimated 3-D wind speeds from sonic anemometers increases turbulent components of the surface energy balance

    Treesearch

    John M. Frank; William J. Massman; Brent E. Ewers

    2016-01-01

    Sonic anemometers are the principal instruments in micrometeorological studies of turbulence and ecosystem fluxes. Common designs underestimate vertical wind measurements because they lack a correction for transducer shadowing, with no consensus on a suitable correction. We reanalyze a subset of data collected during field experiments in 2011 and 2013 featuring two or...

  11. Spatial Disorientation in Gondola Centrifuges Predicted by the Form of Motion as a Whole in 3-D

    PubMed Central

    Holly, Jan E.; Harmon, Katharine J.

    2009-01-01

    INTRODUCTION During a coordinated turn, subjects can misperceive tilts. Subjects accelerating in tilting-gondola centrifuges without external visual reference underestimate the roll angle, and underestimate more when backward-facing than when forward-facing. In addition, during centrifuge deceleration, the perception of pitch can include tumble while paradoxically maintaining a fixed perceived pitch angle. The goal of the present research was to test two competing hypotheses: (1) that components of motion are perceived relatively independently and then combined to form a three-dimensional perception, and (2) that perception is governed by familiarity of motions as a whole in three dimensions, with components depending more strongly on the overall shape of the motion. METHODS Published experimental data were used from existing tilting-gondola centrifuge studies. The two hypotheses were implemented formally in computer models, and centrifuge acceleration and deceleration were simulated. RESULTS The second, whole-motion oriented, hypothesis better predicted subjects' perceptions, including the forward-backward asymmetry and the paradoxical tumble upon deceleration. Important was the predominant stimulus at the beginning of the motion as well as the familiarity of centripetal acceleration. CONCLUSION Three-dimensional perception is better predicted by taking into account familiarity with the form of three-dimensional motion. PMID:19198199

  12. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    NASA Astrophysics Data System (ADS)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control over an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities aiding the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics to enable collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  13. New software for 3D fracture network analysis and visualization

    NASA Astrophysics Data System (ADS)

    Song, J.; Noh, Y.; Choi, Y.; Um, J.; Hwang, S.

    2013-12-01

    This study presents new software to perform analysis and visualization of fracture network systems in 3D. The developed software modules for analysis and visualization, such as BOUNDARY, DISK3D, FNTWK3D, CSECT and BDM, have been developed using Microsoft Visual Basic.NET and the Visualization Toolkit (VTK) open-source library. Two case studies showed that the modules handle, respectively, construction of the analysis domain, visualization of fracture geometry in 3D, calculation of equivalent pipes, production of cross-section maps and management of borehole data. The developed software for analysis and visualization of 3D fractured rock masses can be used to tackle geomechanical problems related to the strength, deformability and hydraulic behavior of fractured rock masses.

  14. Systematic distortions of perceptual stability investigated using immersive virtual reality

    PubMed Central

    Tcheang, Lili; Gilson, Stuart J.; Glennerster, Andrew

    2010-01-01

    Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers under-estimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an under-estimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. PMID:15845248

  16. The Impact of Interactivity on Comprehending 2D and 3D Visualizations of Movement Data.

    PubMed

    Amini, Fereshteh; Rufiange, Sebastien; Hossain, Zahid; Ventura, Quentin; Irani, Pourang; McGuffin, Michael J

    2015-01-01

    GPS, RFID, and other technologies have made it increasingly common to track the positions of people and objects over time as they move through two-dimensional spaces. Visualizing such spatio-temporal movement data is challenging because each person or object involves three variables (two spatial variables as a function of the time variable), and simply plotting the data on a 2D geographic map can result in overplotting and occlusion that hides details. This also makes it difficult to understand correlations between space and time. Software such as GeoTime can display such data with a three-dimensional visualization, where the 3rd dimension is used for time. This allows for the disambiguation of spatially overlapping trajectories, and in theory, should make the data clearer. However, previous experimental comparisons of 2D and 3D visualizations have so far found little advantage in 3D visualizations, possibly due to the increased complexity of navigating and understanding a 3D view. We present a new controlled experimental comparison of 2D and 3D visualizations, involving commonly performed tasks that have not been tested before, and find advantages in 3D visualizations for more complex tasks. In particular, we tease out the effects of various basic interactions and find that the 2D view relies significantly on "scrubbing" the timeline, whereas the 3D view relies mainly on 3D camera navigation. Our work helps to improve understanding of 2D and 3D visualizations of spatio-temporal data, particularly with respect to interactivity.

  17. 3D Scientific Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  18. Evaluation of stereoscopic display with visual function and interview

    NASA Astrophysics Data System (ADS)

    Okuyama, Fumio

    1999-05-01

    The influence of a binocular stereoscopic (3D) television display on the human eye was compared with that of a 2D display, using visual function tests and interviews. A 40-inch double lenticular display was used for the 2D/3D comparison experiments. Subjects observed the display for 30 minutes at a distance of 1.0 m, viewing a combination of 2D and 3D material. The participants were twelve young adults. The main visual functions measured were visual acuity, refraction, phoria, near vision point, accommodation, etc. The interview consisted of 17 questions. Testing was performed just before watching, just after watching, and forty-five minutes after watching. Changes in visual function are characterized as a prolongation of the near vision point, a decrease in accommodation and an increase in phoria. The 3D viewing interview results show much more visual fatigue in comparison with the 2D results. The conclusions are: 1) changes in visual function are larger and visual fatigue is more intense when viewing 3D images; 2) the evaluation method combining visual function tests and interviews proved very satisfactory for analyzing the influence of a stereoscopic display on the human eye.

  20. GenomeD3Plot: a library for rich, interactive visualizations of genomic data in web applications.

    PubMed

    Laird, Matthew R; Langille, Morgan G I; Brinkman, Fiona S L

    2015-10-15

    A simple static image of genomes and associated metadata is very limiting, as researchers expect rich, interactive tools similar to the web applications found in the post-Web 2.0 world. GenomeD3Plot is a lightweight visualization library written in JavaScript using the D3 library. GenomeD3Plot provides a rich API to allow the rapid visualization of complex genomic data using a convenient, standards-based JSON configuration file. When integrated into existing web services, GenomeD3Plot allows researchers to interact with data, dynamically alter the view, or even resize or reposition the visualization in their browser window. In addition, GenomeD3Plot has built-in functionality to export any resulting genome visualization in PNG or SVG format for easy inclusion in manuscripts or presentations. GenomeD3Plot is being utilized in the recently released IslandViewer 3 (www.pathogenomics.sfu.ca/islandviewer/) to visualize predicted genomic islands alongside other genome annotation data. However, its features make it more widely applicable for dynamic visualization of genomic data in general. GenomeD3Plot is licensed under the GNU GPL v3 at https://github.com/brinkmanlab/GenomeD3Plot/. brinkman@sfu.ca. © The Author 2015. Published by Oxford University Press.
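
    The abstract mentions a standards-based JSON configuration file driving the plot. As an illustration only, generating such a configuration programmatically might look like the sketch below; the actual schema is defined by the GenomeD3Plot documentation, and every key name used here is hypothetical.

```python
import json

# Hypothetical track layout for a GenomeD3Plot-style plot; the real JSON
# schema is defined by the library itself -- all key names here are
# illustrative only.
config = {
    "genome_length": 4641652,          # e.g. the E. coli K-12 chromosome
    "tracks": [
        {"name": "annotations", "type": "gene", "items": [
            {"start": 190, "end": 255, "strand": 1, "label": "thrL"},
        ]},
        {"name": "islands", "type": "glyph", "items": [
            {"start": 1_000_000, "end": 1_050_000, "label": "predicted island"},
        ]},
    ],
}

def write_config(path, cfg):
    """Serialize the plot configuration to a JSON file."""
    with open(path, "w") as fh:
        json.dump(cfg, fh, indent=2)

write_config("genome_plot.json", config)
```

    A web page would then hand the generated file to the visualization library; the point here is only that the layout is plain, editable JSON rather than code.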

  1. On the Usability and Usefulness of 3d (geo)visualizations - a Focus on Virtual Reality Environments

    NASA Astrophysics Data System (ADS)

    Çöltekin, A.; Lokka, I.; Zahner, M.

    2016-06-01

    Whether and when we should show data in 3D is an ongoing debate in communities conducting visualization research. A strong opposition exists in the information visualization (Infovis) community, and seemingly unnecessary or unwarranted use of 3D, e.g., in plots, bar or pie charts, is heavily criticized. The scientific visualization (Scivis) community, on the other hand, is more supportive of the use of 3D, as it allows `seeing' invisible phenomena, or designing and printing objects that are used in, e.g., surgeries and educational settings. Geographic visualization (Geovis) stands between the Infovis and Scivis communities. In geographic information science, most visuo-spatial analyses have been conducted satisfactorily in 2D or 2.5D, including analyses related to terrain and much of the urban phenomena. On the other hand, there has always been a strong interest in 3D, with motivations similar to those in the Scivis community. Among the many types of 3D visualization, a popular one that is exploited both for visual analysis and visualization is the highly realistic (geo)virtual environment. Such environments may be engaging and memorable for viewers because they offer highly immersive experiences. However, it is not yet well established whether we should opt to show data in 3D; and if yes, a) what type of 3D we should use, b) for what task types, and c) for whom. In this paper, we identify some of the central arguments for and against the use of 3D visualizations around these three considerations in a concise interdisciplinary literature review.

  2. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
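
    The core idea of mapping a single 2D operation back into 3D space can be illustrated with a much-simplified sketch: given a click on a 2D projection of a volume, pick the brightest voxel along the viewing ray. This is only a toy stand-in for the actual Virtual Finger algorithms, which are considerably more sophisticated; the function name and the intensity-maximum heuristic are assumptions for illustration.

```python
import numpy as np

def point_from_2d_click(volume, x, y):
    """Map a 2D click at (x, y) on an XY projection back to a 3D point
    by taking the brightest voxel along the viewing (z) axis -- one
    simple way to realize a single-click 3D pinpoint.  Volume is indexed
    as volume[z, y, x]."""
    z = int(np.argmax(volume[:, y, x]))   # brightest voxel along the ray
    return (x, y, z)

# Toy 8x8x8 volume with a single bright voxel at (x=3, y=4, z=6).
vol = np.zeros((8, 8, 8))
vol[6, 4, 3] = 1.0
print(point_from_2d_click(vol, 3, 4))  # -> (3, 4, 6)
```

    In real data the ray profile is noisy, so a production method would smooth the profile and resolve ambiguities; the intensity maximum is enough to convey the principle.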

  3. 3D Exploration of Meteorological Data: Facing the challenges of operational forecasters

    NASA Astrophysics Data System (ADS)

    Koutek, Michal; Debie, Frans; van der Neut, Ian

    2016-04-01

    In recent years the Royal Netherlands Meteorological Institute (KNMI) has been working on innovation in the field of meteorological data visualization. We deal with Numerical Weather Prediction (NWP) model data and observational data, e.g., satellite images, precipitation radar, and ground and air-borne measurements. These multidimensional, multivariate data are geo-referenced and can be combined in 3D space to provide more intuitive views of atmospheric phenomena. We developed the Weather3DeXplorer (W3DX), a framework for processing, interactive exploration, and visualization of these data using Virtual Reality (VR) technology. We have had great success with research studies on extreme weather situations. In this paper we elaborate on what we have learned from applying interactive 3D visualization in the operational weather room. We explain how important it is to control the degrees of freedom given to the users (forecasters and scientists) during interaction: 3D camera and 3D slicing-plane navigation appear to be rather difficult for users when not implemented properly. We present a novel approach to operational 3D visualization user interfaces (UIs) that largely eliminates the obstacles, and the time it usually takes, to set up the visualization parameters and an appropriate camera view for a given atmospheric phenomenon. We found our inspiration in the way our operational forecasters work in the weather room, and decided to form a bridge between 2D visualization images and interactive 3D exploration. Our method combines web-based 2D UIs and a pre-rendered 3D visualization catalog for the latest NWP model runs with immediate entry into an interactive 3D session for a selected visualization setting. Finally, we present the first user experiences with this approach.

  4. FROMS3D: New Software for 3-D Visualization of Fracture Network System in Fractured Rock Masses

    NASA Astrophysics Data System (ADS)

    Noh, Y. H.; Um, J. G.; Choi, Y.

    2014-12-01

    New software (FROMS3D) is presented to visualize fracture network systems in 3-D. The software consists of several modules for the management of borehole and field fracture data, fracture network modelling, visualization of fracture geometry in 3-D, and the calculation and visualization of intersections and equivalent pipes between fractures. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. The results suggest that the developed software is effective in visualizing 3-D fracture network systems, and can provide useful information for tackling engineering geological problems related to the strength, deformability and hydraulic behavior of fractured rock masses.
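
    A central geometric step in such fracture-network software is finding where two disc-shaped fractures meet. The sketch below is not FROMS3D's implementation, only a minimal illustration: it computes the common line of the two fracture planes and applies a rough overlap test (a complete test would also clip the two chords against each other along that line).

```python
import numpy as np

def plane_intersection_line(n1, p1, n2, p2):
    """Line of intersection of two non-parallel planes, each given by a
    normal n and a point p.  Returns (point_on_line, unit_direction)."""
    d = np.cross(n1, n2)
    # Solve the two plane equations plus d . x = 0 for a point on the line.
    A = np.array([n1, n2, d])
    b = np.array([np.dot(n1, p1), np.dot(n2, p2), 0.0])
    return np.linalg.solve(A, b), d / np.linalg.norm(d)

def discs_intersect(c1, n1, r1, c2, n2, r2):
    """Rough test of whether two disc-shaped fractures intersect: each
    disc centre must lie within its radius of the planes' common line."""
    p, d = plane_intersection_line(n1, c1, n2, c2)
    def dist_to_line(c):
        v = c - p
        return np.linalg.norm(v - np.dot(v, d) * d)
    return dist_to_line(c1) <= r1 and dist_to_line(c2) <= r2

# Two unit-radius discs, one in the z=0 plane and one in the x=0 plane,
# both centred at the origin: they share a segment of the y-axis.
print(discs_intersect(np.zeros(3), np.array([0.0, 0, 1]), 1.0,
                      np.zeros(3), np.array([1.0, 0, 0]), 1.0))  # -> True
```

    The resulting intersection segments are what a fracture-flow model would then abstract into "equivalent pipes" between fractures.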

  5. On the comparison of visual discomfort generated by S3D and 2D content based on eye-tracking features

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2014-03-01

    The transition of TV systems from 2D to 3D mode is the next expected step in the telecommunication world. Some work has already been done to achieve this technically, but the interaction of the third dimension with humans is not yet well understood. Previous work has found that any increased load on the visual system, such as prolonged TV watching, computer work, or video gaming, can create visual fatigue. But watching S3D can cause visual fatigue of a different nature, since all S3D technologies create the illusion of the third dimension based on characteristics of binocular vision. In this work we propose to evaluate and compare the visual fatigue from watching 2D and S3D content, showing the difference in the accumulation of visual fatigue and its assessment for the two types of content. To perform this comparison, eye-tracking experiments using six commercially available movies were conducted. Healthy naive participants took part in the test and provided subjective evaluations of how they felt. It was found that watching stereo 3D content induces a stronger feeling of visual fatigue than conventional 2D, and that the nature of the video has an important effect on its increase. Visual characteristics obtained by eye-tracking were investigated with regard to their relation to visual fatigue.

  6. Singaporean Mothers’ Perception of Their Three-year-old Child’s Weight Status: A Cross-Sectional Study

    PubMed Central

    Cheung, Yin Bun; Chan, Jerry Kok Yen; Tint, Mya Thway; Godfrey, Keith M.; Gluckman, Peter D.; Kwek, Kenneth; Saw, Seang Mei; Chong, Yap-Seng; Lee, Yung Seng; Yap, Fabian; Lek, Ngee

    2016-01-01

    Objective Inaccurate parental perception of their child’s weight status is commonly reported in Western countries. It is unclear whether similar misperception exists in Asian populations. This study aimed to evaluate the ability of Singaporean mothers to accurately describe their three-year-old child’s weight status verbally and visually. Methods At three years post-delivery, weight and height of the children were measured. Body mass index (BMI) was calculated and converted into actual weight status using International Obesity Task Force criteria. The mothers were blinded to their child’s measurements and asked to verbally and visually describe what they perceived was their child’s actual weight status. Agreement between actual and described weight status was assessed using Cohen’s Kappa statistic (κ). Results Of 1237 recruited participants, 66.4% (n = 821) with complete data on mothers’ verbal and visual perceptions and children’s anthropometric measurements were analysed. Nearly thirty percent of the mothers were unable to describe their child’s weight status accurately. In verbal description, 17.9% under-estimated and 11.8% over-estimated their child’s weight status. In visual description, 10.4% under-estimated and 19.6% over-estimated their child’s weight status. Many mothers of underweight children over-estimated (verbal 51.6%; visual 88.8%), and many mothers of overweight and obese children under-estimated (verbal 82.6%; visual 73.9%), their child’s weight status. In contrast, significantly fewer mothers of normal-weight children were inaccurate (verbal 16.8%; visual 8.8%). Birth order (p<0.001), maternal (p = 0.004) and child’s weight status (p<0.001) were associated with consistently inaccurate verbal and visual descriptions. Conclusions Singaporean mothers, especially those of underweight and overweight children, may not be able to perceive their young child’s weight status accurately. 
To facilitate prevention of childhood obesity, educating parents and caregivers about their child’s weight status is needed. PMID:26820665
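
    Agreement between actual and described weight status in the study above was quantified with Cohen's kappa, which corrects the observed agreement rate for the agreement expected by chance from the two raters' marginal distributions. A minimal sketch of the statistic, using toy data rather than the study's:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for paired categorical ratings:
    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e the chance agreement implied by the marginal frequencies."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    cats = set(counts_a) | set(counts_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in cats) / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy example: measured vs. maternally described weight status.
actual    = ["normal", "normal", "overweight", "underweight", "normal"]
described = ["normal", "normal", "normal",     "underweight", "normal"]
print(round(cohens_kappa(actual, described), 3))  # -> 0.583
```

    Here 4 of 5 pairs agree (p_o = 0.8) while chance agreement is p_e = 0.52, giving kappa = 0.28/0.48 ≈ 0.583, i.e. moderate agreement beyond chance.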

  7. Planning, implementation and optimization of future space missions using an immersive visualization environment (IVE) machine

    NASA Astrophysics Data System (ADS)

    Nathan Harris, E.; Morgenthaler, George W.

    2004-07-01

    Beginning in 1995, a team of 3-D engineering visualization experts assembled at the Lockheed Martin Space Systems Company and began to develop innovative virtual prototyping simulation tools for ground processing and real-time visualization of the design and planning of aerospace missions. At the University of Colorado, a team of 3-D visualization experts also began developing the science of 3-D and immersive visualization at the newly founded British Petroleum (BP) Center for Visualization, which began operations in October 2001. BP had acquired ARCO in 2000 and awarded the flexible 3-D IVE developed by ARCO (beginning in 1990) to the University of Colorado (CU), the winner of a competition among six universities. CU then hired Dr. G. Dorn, the leader of the ARCO team, as Center Director, along with the other experts, to apply 3-D immersive visualization to aerospace and to other university research fields, while continuing research on surface interpretation of seismic data and 3-D volumes. This paper recounts further progress and outlines plans for aerospace applications at Lockheed Martin and CU.

  8. Immersive Visualization of the Solid Earth

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers unique benefits for the visual analysis of complex three-dimensional data such as tomographic images of the mantle and higher-dimensional data such as computational geodynamics models of mantle convection or even planetary dynamos. Unlike "traditional" visualization, which has to project 3D scalar data or vectors onto a 2D screen for display, VR can display 3D data in a pseudo-holographic (head-tracked stereoscopic) form, and therefore does not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection and interfere with interpretation. As a result, researchers can apply their spatial reasoning skills to 3D data in the same way they can to real objects or environments, as well as to complex objects like vector fields. 3D Visualizer is an application to visualize 3D volumetric data, such as results from mantle convection simulations or seismic tomography reconstructions, using VR display technology with a strong focus on interactive exploration. Unlike other visualization software, 3D Visualizer does not present static visualizations, such as a set of cross-sections at pre-selected positions and orientations, but instead lets users ask questions of their data, for example by dragging a cross-section through the data's domain with their hands and seeing data mapped onto that cross-section in real time, or by touching a point inside the data domain and immediately seeing an isosurface connecting all points having the same data value as the touched point. Combined with tools allowing 3D measurements of positions, distances, and angles, and with annotation tools that allow free-hand sketching directly in 3D data space, the outcome of using 3D Visualizer is not primarily a set of pictures, but derived data to be used for subsequent analysis.
3D Visualizer works best in virtual reality, either in high-end facility-scale environments such as CAVEs, or using commodity low-cost virtual reality headsets such as HTC's Vive. The recent emergence of high-quality commodity VR means that researchers can buy a complete VR system off the shelf, install it and the 3D Visualizer software themselves, and start using it for data analysis immediately.
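
    The interactive cross-section described above boils down to resampling a scalar volume on a user-positioned plane. A minimal nearest-neighbour sketch of that resampling step is shown below; real systems such as 3D Visualizer use more refined interpolation and GPU acceleration, and the function name and plane parameterization here are illustrative assumptions.

```python
import numpy as np

def sample_plane(volume, origin, u, v, shape=(64, 64)):
    """Resample a scalar volume on an arbitrary plane given by an origin
    point and two in-plane direction vectors u and v (in voxel units,
    volume indexed as volume[z, y, x]).  Nearest-neighbour sampling with
    clamping at the volume boundary keeps the sketch short."""
    rows, cols = shape
    i, j = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    pts = (origin[:, None, None]
           + u[:, None, None] * i + v[:, None, None] * j)   # (3, rows, cols)
    idx = np.rint(pts).astype(int)
    upper = np.array(volume.shape)[:, None, None] - 1
    idx = np.clip(idx, 0, upper)
    return volume[idx[0], idx[1], idx[2]]

# Toy volume whose value equals its z index, sliced by the z = 10 plane.
vol = np.tile(np.arange(32.0)[:, None, None], (1, 32, 32))
slice_ = sample_plane(vol, origin=np.array([10.0, 0, 0]),
                      u=np.array([0.0, 1, 0]), v=np.array([0.0, 0, 1]),
                      shape=(32, 32))
print(slice_.mean())  # every sample lies on z = 10, so the mean is 10.0
```

    Dragging the plane in VR amounts to updating `origin`, `u`, and `v` each frame and re-running exactly this resampling.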

  9. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

    Indoor-space 3D visual reconstruction has many applications and, once done accurately, enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be mounted in a fire disaster situation by using 3D visual information of a destroyed building. We have therefore developed an accurate indoor-space 3D visual reconstruction system, operable in any given environment without GPS, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, for guiding blind or partially sighted persons, and so forth.

  10. Five-dimensional ultrasound system for soft tissue visualization.

    PubMed

    Deshmukh, Nishikant P; Caban, Jesus J; Taylor, Russell H; Hager, Gregory D; Boctor, Emad M

    2015-12-01

    A five-dimensional ultrasound (US) system is proposed as a real-time pipeline involving fusion of 3D B-mode data with the 3D ultrasound elastography (USE) data as well as visualization of these fused data and a real-time update capability over time for each consecutive scan. 3D B-mode data assist in visualizing the anatomy of the target organ, and 3D elastography data adds strain information. We investigate the feasibility of such a system and show that an end-to-end real-time system, from acquisition to visualization, can be developed. We present a system that consists of (a) a real-time 3D elastography algorithm based on a normalized cross-correlation (NCC) computation on a GPU; (b) real-time 3D B-mode acquisition and network transfer; (c) scan conversion of 3D elastography and B-mode volumes (if acquired by 4D wobbler probe); and (d) visualization software that fuses, visualizes, and updates 3D B-mode and 3D elastography data in real time. We achieved a speed improvement of 4.45-fold for the threaded version of the NCC-based 3D USE versus the non-threaded version. The maximum speed was 79 volumes/s for 3D scan conversion. In a phantom, we validated the dimensions of a 2.2-cm-diameter sphere scan-converted to B-mode volume. Also, we validated the 5D US system visualization transfer function and detected 1- and 2-cm spherical objects (phantom lesion). Finally, we applied the system to a phantom consisting of three lesions to delineate the lesions from the surrounding background regions of the phantom. A 5D US system is achievable with real-time performance. We can distinguish between hard and soft areas in a phantom using the transfer functions.
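
    The elastography stage of such a pipeline rests on normalized cross-correlation (NCC) between pre- and post-compression echo windows. The 1-D sketch below is illustrative only (the paper's implementation is a GPU-based 3D version); it shows the similarity measure itself and an exhaustive search for the displacement that maximizes it.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows: 1.0 for
    identical (up to gain and offset) signals, near 0 for unrelated ones.
    This is the similarity measure at the heart of NCC-based displacement
    tracking in ultrasound elastography."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_shift(pre, post, window, max_shift):
    """Estimate the displacement of the first `window` samples of `pre`
    within `post` by exhaustively maximizing NCC over candidate shifts."""
    scores = [ncc(pre[:window], post[s:s + window])
              for s in range(max_shift + 1)]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
pre = rng.standard_normal(256)
post = np.concatenate([np.zeros(5), pre])   # echo delayed by 5 samples
print(best_shift(pre, post, window=128, max_shift=20))  # -> 5
```

    Repeating this per window along depth yields a displacement field whose gradient gives the strain image; the GPU version in the paper parallelizes exactly this correlation search in 3D.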

  11. Direct measurement of proximal isovelocity surface area by real-time three-dimensional color Doppler for quantitation of aortic regurgitant volume: an in vitro validation.

    PubMed

    Pirat, Bahar; Little, Stephen H; Igo, Stephen R; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J; Zoghbi, William A

    2009-03-01

    The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time-velocity integral of AR / peak AR velocity. Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with its hemispheric PISA assumption.
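
    The abstract's formula, regurgitant volume = PISA × aliasing velocity × time-velocity integral of AR / peak AR velocity, with the 2D hemispheric assumption PISA = 2πr², can be sketched numerically. The input values below are illustrative only, not taken from the study.

```python
import math

def pisa_2d(radius_cm):
    """Conventional 2-D PISA under the hemispheric assumption: 2*pi*r^2."""
    return 2 * math.pi * radius_cm ** 2

def regurgitant_volume(pisa_cm2, aliasing_vel_cm_s, vti_cm, peak_vel_cm_s):
    """Regurgitant volume (mL/beat) per the abstract's formula:
    PISA x aliasing velocity x time-velocity integral of AR / peak AR
    velocity, with lengths in cm, velocities in cm/s, areas in cm^2."""
    return pisa_cm2 * aliasing_vel_cm_s * vti_cm / peak_vel_cm_s

# Illustrative numbers: PISA radius 0.5 cm, aliasing velocity 40 cm/s,
# AR time-velocity integral 150 cm, peak AR velocity 450 cm/s.
rvol = regurgitant_volume(pisa_2d(0.5), 40.0, 150.0, 450.0)
print(round(rvol, 1))  # -> 20.9 (mL/beat)
```

    Replacing `pisa_2d` with a directly measured 3D surface area is exactly the study's modification: the formula stays the same, only the PISA term changes.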

  12. Investigation of visual fatigue/discomfort generated by S3D video using eye-tracking data

    NASA Astrophysics Data System (ADS)

    Iatsun, Iana; Larabi, Mohamed-Chaker; Fernandez-Maloigne, Christine

    2013-03-01

    Stereoscopic 3D is undoubtedly one of the most attractive forms of content. It has been deployed intensively during the last decade through movies and games. Among the advantages of 3D are the strong involvement of viewers and the increased feeling of presence. However, the health effects that 3D can generate are still not precisely known. For example, visual fatigue and visual discomfort are among the symptoms that an observer may feel. In this paper, we propose an investigation of visual fatigue generated by 3D video watching, with the help of eye-tracking. On one side, a questionnaire with the most frequent symptoms linked with 3D is used in order to measure their variation over time. On the other side, visual characteristics such as pupil diameter, eye movements (fixations and saccades) and eye blinking have been explored thanks to data provided by the eye-tracker. The statistical analysis showed an important link between blinking duration and number of saccades and visual fatigue, while pupil diameter and fixations are not precise enough and are highly dependent on content. Finally, time and content play an important role in the growth of visual fatigue due to 3D watching.

  13. The effects of stereo disparity on the behavioural and electrophysiological correlates of perception of audio-visual motion in depth.

    PubMed

    Harrison, Neil R; Witheridge, Sian; Makin, Alexis; Wuerger, Sophie M; Pegna, Alan J; Meyer, Georg F

    2015-11-01

    Motion is represented by low-level signals, such as size-expansion in vision or loudness changes in the auditory modality. The visual and auditory signals from the same object or event may be integrated and facilitate detection. We explored behavioural and electrophysiological correlates of congruent and incongruent audio-visual depth motion in conditions where auditory level changes, visual expansion, and visual disparity cues were manipulated. In Experiment 1 participants discriminated auditory motion direction whilst viewing looming or receding, 2D or 3D, visual stimuli. Responses were faster and more accurate for congruent than for incongruent audio-visual cues, and the congruency effect (i.e., difference between incongruent and congruent conditions) was larger for visual 3D cues compared to 2D cues. In Experiment 2, event-related potentials (ERPs) were collected during presentation of the 2D and 3D, looming and receding, audio-visual stimuli, while participants detected an infrequent deviant sound. Our main finding was that audio-visual congruity was affected by retinal disparity at an early processing stage (135-160ms) over occipito-parietal scalp. Topographic analyses suggested that similar brain networks were activated for the 2D and 3D congruity effects, but that cortical responses were stronger in the 3D condition. Differences between congruent and incongruent conditions were observed between 140-200ms, 220-280ms, and 350-500ms after stimulus onset. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Virtual reality on the web: the potentials of different methodologies and visualization techniques for scientific research and medical education.

    PubMed

    Kling-Petersen, T; Pascher, R; Rydmark, M

    1999-01-01

    Academic and medical imaging are increasingly using computer-based 3D reconstruction and/or visualization. Three-dimensional interactive models play a major role in areas such as preclinical medical education, clinical visualization and medical research. While 3D is comparatively easy to do on a high-end workstation, distribution and use of interactive 3D graphics necessitate the use of personal computers and the web. Several new techniques have been demonstrated that provide interactive 3D via a web browser, thereby allowing a limited version of VR to be experienced by the majority of students, medical practitioners and researchers. These techniques include QuickTimeVR2 (QTVR), VRML2, QuickDraw3D, OpenGL and Java3D. In order to test the usability of the different techniques, Mednet has initiated a number of projects designed to evaluate the potential of 3D techniques for scientific reporting, clinical visualization and medical education. These include datasets created by manual tracing followed by triangulation, smoothing and 3D visualization, by MRI, or by high-resolution laser scanning. Preliminary results indicate that both VRML and QTVR fulfill most of the requirements of web-based interactive 3D visualization, whereas QuickDraw3D is too limited. At present, Java3D has not yet reached a level where in-depth testing is possible. The use of high-resolution laser scanning is an important addition to 3D digitization.

  15. Discrepancy between self-assessed hearing status and measured audiometric evaluation

    PubMed Central

    Kim, So Young; Kim, Hyung-Jong; Kim, Min-Su; Park, Bumjung; Kim, Jin-Hwan

    2017-01-01

    Objective The purpose of this study was to examine the difference between self-reported hearing status and hearing impairment assessed using conventional audiometry, and to examine the associated factors when concordance between self-reported hearing and audiometric measures was lacking. Methods In total, 19,642 individuals ≥20 years of age who participated in the Korea National Health and Nutrition Examination Surveys conducted from 2009 through 2012 were enrolled. Pure-tone hearing thresholds (PTA) were measured and classified into three levels: <25 dB (normal hearing); ≥25 dB and <40 dB (mild hearing impairment); and ≥40 dB (moderate-to-severe hearing impairment). Self-reported hearing loss was categorized into 3 corresponding categories. The participants were categorized into three groups: the concordance (self-reported hearing loss matched audiometric PTA), overestimation (higher self-reported hearing loss compared to audiometric PTA), and underestimation (lower self-reported hearing loss compared to audiometric PTA) groups. The associations of age, sex, education level, stress level, anxiety/depression, tympanic membrane (TM) status, hearing aid use, and tinnitus with the discrepancy between self-reported hearing loss and audiometric pure-tone threshold results were analyzed using multinomial logistic regression analysis with complex sampling. Results Overall, 80.1%, 7.1%, and 12.8% of the participants were assigned to the concordance, overestimation, and underestimation groups, respectively. Older age (adjusted odds ratios [AORs] = 1.28 [95% confidence interval = 1.19–1.37] and 2.80 [2.62–2.99] for the overestimation and the underestimation groups, respectively), abnormal TM (2.17 [1.46–3.23] and 1.59 [1.17–2.15]), and tinnitus (2.44 [2.10–2.83] and 1.61 [1.38–1.87]) were positively correlated with both the overestimation and underestimation groups. 
    Compared with specialized workers, service workers, manual workers, and the unemployed were more likely to be in the overestimation group (1.48 [1.11–1.98], 1.39 [1.04–1.86], and 1.50 [1.18–1.90], respectively), and service workers were more likely to be in the underestimation group (AOR = 1.42 [1.01–1.99]). Higher education level (0.77 [0.59–1.01] and 0.43 [0.33–0.57]) and hearing aid use (0.36 [0.17–0.77] and 0.23 [0.13–0.43]) were negatively associated with being in the underestimation group. Compared with males, females were less likely to be assigned to the underestimation group (0.43 [0.37–0.50]). Stress (1.98 [1.32–2.98]) and anxiety/depression (1.30 [1.06–1.59]) were associated with the overestimation group. Conclusion Older age, lower education level, occupation, abnormal TM, non-use of hearing aids, and tinnitus were related to both the overestimation and underestimation groups. Male gender was related to underestimation, and stress and anxiety/depression were correlated with the overestimation group. An understanding of these factors associated with self-reported hearing loss will be instrumental in identifying and managing hearing-impaired individuals. PMID:28792529
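
    The study's three-level audiometric classification and the resulting concordance/overestimation/underestimation grouping follow directly from the thresholds given above. The sketch below is a minimal illustration of that bucketing, not the authors' analysis code.

```python
def pta_level(threshold_db):
    """Map a pure-tone average threshold (dB) onto the study's three
    levels: <25 dB normal, 25 to <40 dB mild impairment, >=40 dB
    moderate-to-severe impairment."""
    if threshold_db < 25:
        return 0   # normal hearing
    if threshold_db < 40:
        return 1   # mild hearing impairment
    return 2       # moderate-to-severe hearing impairment

def discrepancy_group(self_reported_level, measured_db):
    """Classify a participant as concordant, overestimating (reports
    worse hearing than measured) or underestimating (reports better
    hearing than measured), with self_reported_level on the same 0-2
    scale as pta_level."""
    measured = pta_level(measured_db)
    if self_reported_level == measured:
        return "concordance"
    return ("overestimation" if self_reported_level > measured
            else "underestimation")

# A participant reporting normal hearing (level 0) despite a measured
# 45 dB threshold falls in the underestimation group.
print(discrepancy_group(0, 45))  # -> underestimation
```

    The multinomial regression in the study then models membership in these three groups as the outcome variable.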

  17. [Three-dimensional morphological modeling and visualization of wheat root system].

    PubMed

    Tan, Feng; Tang, Liang; Hu, Jun-Cheng; Jiang, Hai-Yan; Cao, Wei-Xing; Zhu, Yan

    2011-01-01

    Crop three-dimensional (3D) morphological modeling and visualization is an important part of digital plant studies. This paper aimed to develop a 3D morphological model of the wheat root system based on parameters of wheat root morphological features, and to realize visualization of wheat root growth. Following the framework of visualization technology for wheat root growth, a 3D visualization model of the wheat root axis, comprising a root axis growth model, a branch geometric model, and a root axis curve model, was first developed. Then, by integrating root topology, the position of each root axis was determined, and the whole wheat root system was reconstructed in 3D using the morphological feature parameters from the root morphological model. Finally, on the OpenGL platform, by integrating texture mapping, lighting rendering, and collision detection, the 3D visualization of wheat root growth was realized. The model produced vivid 3D output of the wheat root system and could visualize the root systems of different wheat cultivars under different water regimes and nitrogen application rates. This study lays a technical foundation for further development of an integral visualization system for the wheat plant.
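As a rough illustration of what a root-axis growth model of this kind computes, the sketch below generates a single root-axis polyline whose heading is biased toward gravity with random lateral wobble. All function and parameter names, and the growth rule itself, are assumptions for illustration, not the paper's actual model:

```python
import math
import random

def root_axis(n_segments=50, seg_len=0.5, gravitropism=0.15, wobble=0.2, seed=1):
    """Toy root-axis growth model: each new segment follows the previous
    heading, biased downward (gravitropism) with random lateral wobble.
    Returns a list of (x, y, z) points renderable as an OpenGL polyline."""
    rng = random.Random(seed)
    pts = [(0.0, 0.0, 0.0)]
    hx, hy, hz = 0.0, 0.0, -1.0   # unit heading, starting straight down (-z)
    for _ in range(n_segments):
        hx += rng.uniform(-wobble, wobble)
        hy += rng.uniform(-wobble, wobble)
        hz -= gravitropism          # bias the heading toward the gravity vector
        norm = math.sqrt(hx*hx + hy*hy + hz*hz)
        hx, hy, hz = hx/norm, hy/norm, hz/norm
        x, y, z = pts[-1]
        pts.append((x + seg_len*hx, y + seg_len*hy, z + seg_len*hz))
    return pts
```

Branch axes can be spawned from points along a parent polyline in the same way, which is how a topology-plus-geometry reconstruction of a whole root system typically proceeds.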

  18. Presentation of laboratory test results in patient portals: influence of interface design on risk interpretation and visual search behaviour.

    PubMed

    Fraccaro, Paolo; Vigo, Markel; Balatsoukas, Panagiotis; van der Veer, Sabine N; Hassan, Lamiece; Williams, Richard; Wood, Grahame; Sinha, Smeeta; Buchan, Iain; Peek, Niels

    2018-02-12

    Patient portals are considered valuable instruments for self-management of long-term conditions; however, there are concerns over how patients might interpret and act on the clinical information they access. We hypothesized that visual cues improve patients' ability to correctly interpret laboratory test results presented through patient portals. We also assessed, by applying eye-tracking methods, the relationship between risk interpretation and visual search behaviour. We conducted a controlled study with 20 kidney transplant patients. Participants viewed three different graphical presentations in each of low, medium, and high risk clinical scenarios composed of results for 28 laboratory tests. After viewing each clinical scenario, patients were asked how they would have acted in real life if the results were their own, as a proxy for their risk interpretation. They could choose between: 1) calling their doctor immediately (high interpreted risk); 2) trying to arrange an appointment within the next 4 weeks (medium interpreted risk); 3) waiting for the next appointment in 3 months (low interpreted risk). For each presentation, we assessed the accuracy of patients' risk interpretation, and employed eye tracking to assess and compare visual search behaviour. Misinterpretation of risk was common, with 65% of participants underestimating the need for action across all presentations at least once. Participants found it particularly difficult to interpret medium risk clinical scenarios. Participants who consistently understood when action was needed showed a higher visual search efficiency, suggesting a better strategy to cope with information overload that helped them to focus on the laboratory tests most relevant to their condition. This study confirms patients' difficulties in interpreting laboratory test results, with many patients underestimating the need for action, even when abnormal values were highlighted or grouped together.
Our findings raise patient safety concerns and may limit the potential of patient portals to actively involve patients in their own healthcare.

  19. MPI CyberMotion Simulator: implementation of a novel motion simulator to investigate multisensory path integration in three dimensions.

    PubMed

    Barnett-Cowan, Michael; Meilinger, Tobias; Vidal, Manuel; Teufel, Harald; Bülthoff, Heinrich H

    2012-05-10

    Path integration is a process in which self-motion is integrated over time to obtain an estimate of one's current position relative to a starting point (1). Humans can perform path integration based exclusively on visual (2-3), auditory (4), or inertial cues (5). However, with multiple cues present, inertial cues - particularly kinaesthetic - seem to dominate (6-7). In the absence of vision, humans tend to overestimate short distances (<5 m) and turning angles (<30°), but underestimate longer ones (5). Movement through physical space therefore does not seem to be accurately represented by the brain. Extensive work has been done on evaluating path integration in the horizontal plane, but little is known about vertical movement (see (3) for virtual movement from vision alone). One reason for this is that traditional motion simulators have a small range of motion restricted mainly to the horizontal plane. Here we take advantage of a motion simulator (8-9) with a large range of motion to assess whether path integration is similar between the horizontal and vertical planes. The relative contributions of inertial and visual cues for path navigation were also assessed. Sixteen observers sat upright in a seat mounted to the flange of a modified KUKA anthropomorphic robot arm. Sensory information was manipulated by providing visual (optic flow, limited-lifetime star field), vestibular-kinaesthetic (passive self-motion with eyes closed), or combined visual and vestibular-kinaesthetic motion cues. Movement trajectories in the horizontal, sagittal and frontal planes consisted of two segment lengths (1st: 0.4 m, 2nd: 1 m; ±0.24 m/s² peak acceleration). The angle between the two segments was either 45° or 90°. Observers pointed back to their origin by moving an arrow that was superimposed on an avatar presented on the screen. Observers were more likely to underestimate angle size for movement in the horizontal plane compared to the vertical planes.
    In the frontal plane, observers were more likely to overestimate angle size, while there was no such bias in the sagittal plane. Finally, observers responded more slowly when answering based on vestibular-kinaesthetic information alone; human path integration based on vestibular-kinaesthetic information alone thus takes longer than when visual information is present. That pointing is consistent with underestimating the angle moved through in the horizontal plane and overestimating it in the vertical planes suggests that the neural representation of self-motion through space is non-symmetrical, which may relate to the fact that humans experience movement mostly within the horizontal plane.
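The pointing task above reduces to plane geometry: after two straight segments separated by a turn, the correct response is the angle between the final heading and the direction back to the origin. The sketch below computes that angle for the segment lengths used in the study; the function name and the sign convention (positive = counter-clockwise) are illustrative assumptions:

```python
import math

def homing_direction(seg1, seg2, turn_deg):
    """Walk seg1 along +x, turn by turn_deg, walk seg2; return the signed
    angle (degrees) between the final heading and the direction back to
    the origin, wrapped to (-180, 180]."""
    x = seg1 + seg2 * math.cos(math.radians(turn_deg))
    y = seg2 * math.sin(math.radians(turn_deg))
    back = math.degrees(math.atan2(-y, -x))   # bearing from final position to origin
    diff = back - turn_deg                    # relative to final heading
    return (diff + 180) % 360 - 180
```

For the study's 0.4 m and 1 m segments with a 90° turn, the correct homing response is a turn of about 158°; observers' deviations from this geometric answer are what the under- and overestimation results above describe.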

  20. Integration of real-time 3D capture, reconstruction, and light-field display

    NASA Astrophysics Data System (ADS)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  1. Visual completion from 2D cross-sections: Implications for visual theory and STEM education and practice.

    PubMed

    Gagnier, Kristin Michod; Shipley, Thomas F

    2016-01-01

    Accurately inferring three-dimensional (3D) structure from only a cross-section through that structure is not possible. However, many observers seem to be unaware of this fact. We present evidence for a 3D amodal completion process that may explain this phenomenon and provide new insights into how the perceptual system processes 3D structures. Across four experiments, observers viewed cross-sections of common objects and reported whether regions visible on the surface extended into the object. If they reported that the region extended, they were asked to indicate the orientation of extension or that the 3D shape was unknowable from the cross-section. Across Experiments 1, 2, and 3, participants frequently inferred 3D forms from surface views, showing a specific prior to report that regions in the cross-section extend straight back into the object, with little variance in orientation. In Experiment 3, we examined whether 3D visual inferences made from cross-sections are similar to other cases of amodal completion by examining how the inferences were influenced by observers' knowledge of the objects. Finally, in Experiment 4, we demonstrate that these systematic visual inferences are unlikely to result from demand characteristics or response biases. We argue that these 3D visual inferences have been largely unrecognized by the perception community, and have implications for models of 3D visual completion and science education.

  2. FPV: fast protein visualization using Java 3D.

    PubMed

    Can, Tolga; Wang, Yujun; Wang, Yuan-Fang; Su, Jianwen

    2003-05-22

    Many tools have been developed to visualize protein structures. Tools based on Java 3D™ are compatible across different systems, and they can be run remotely through web browsers. However, using Java 3D for visualization has some performance issues. The primary concerns about molecular visualization tools based on Java 3D are that they are slow in terms of interaction speed and unable to load large molecules. This behavior is especially apparent when the number of atoms to be displayed is huge, or when several proteins are to be displayed simultaneously for comparison. In this paper we present techniques for organizing a Java 3D scene graph to tackle these problems. We have developed a protein visualization system based on Java 3D and these techniques. We demonstrate the effectiveness of the proposed method by comparing the visualization component of our system with two other Java 3D based molecular visualization tools. In particular, for van der Waals display mode, with the efficient organization of the scene graph, we could achieve up to eight times improvement in rendering speed and could load molecules three times as large as the previous systems could. FPV is freely available with source code at the following URL: http://www.cs.ucsb.edu/~tcan/fpv/

  3. Usefulness of 3-dimensional stereotactic surface projection FDG PET images for the diagnosis of dementia

    PubMed Central

    Kim, Jahae; Cho, Sang-Geon; Song, Minchul; Kang, Sae-Ryung; Kwon, Seong Young; Choi, Kang-Ho; Choi, Seong-Min; Kim, Byeong-Chae; Song, Ho-Chun

    2016-01-01

    To compare the diagnostic performance and confidence of standard visual reading alone and combined with 3-dimensional stereotactic surface projection (3D-SSP) results in discriminating between Alzheimer disease (AD)/mild cognitive impairment (MCI), dementia with Lewy bodies (DLB), and frontotemporal dementia (FTD). [18F]fluorodeoxyglucose (FDG) PET brain images were obtained from 120 patients (64 AD/MCI, 38 DLB, and 18 FTD) whose diagnoses were clinically confirmed over 2 years of follow-up. Three nuclear medicine physicians performed the diagnosis and rated diagnostic confidence twice; once by standard visual methods, and once with the addition of 3D-SSP. Diagnostic performance and confidence were compared between the 2 methods. 3D-SSP showed higher sensitivity, specificity, accuracy, and positive and negative predictive values for discriminating the different types of dementia compared with the visual method alone, except for AD/MCI specificity and FTD sensitivity. Correction of misdiagnosis after adding 3D-SSP images was greatest for AD/MCI (56%), followed by DLB (13%) and FTD (11%). Diagnostic confidence also increased in DLB (visual: 3.2; 3D-SSP: 4.1; P < 0.001), followed by AD/MCI (visual: 3.1; 3D-SSP: 3.8; P = 0.002) and FTD (visual: 3.5; 3D-SSP: 4.2; P = 0.022). Overall, 154/360 (43%) cases had a corrected misdiagnosis or improved diagnostic confidence for the correct diagnosis. The addition of 3D-SSP images to visual analysis helped to discriminate different types of dementia in FDG PET scans, by correcting misdiagnoses and enhancing diagnostic confidence in the correct diagnosis. Improvement of diagnostic accuracy and confidence by 3D-SSP images might help to determine the cause of dementia and the appropriate treatment. PMID:27930593
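The performance measures compared above follow the standard confusion-matrix definitions, computed per dementia type against the clinically confirmed diagnosis. A minimal sketch (the counts and function name are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-class diagnostic performance from confusion-matrix counts:
    tp/fp/fn/tn = true/false positives and negatives for one class."""
    return {
        "sensitivity": tp / (tp + fn),            # recall of true cases
        "specificity": tn / (tn + fp),            # recall of non-cases
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),                    # positive predictive value
        "npv": tn / (tn + fn),                    # negative predictive value
    }
```

Comparing these five numbers between the visual-only and visual-plus-3D-SSP readings, class by class, is exactly the comparison the abstract summarizes.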

  4. The role of three-dimensional visualization in robotics-assisted cardiac surgery

    NASA Astrophysics Data System (ADS)

    Currie, Maria; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W. A.; Patel, Rajni; Peters, Terry; Kiaii, Bob

    2012-02-01

    Objectives: The purpose of this study was to determine the effect of three-dimensional (3D) versus two-dimensional (2D) visualization on the amount of force applied to mitral valve tissue during robotics-assisted mitral valve annuloplasty, and the time to perform the procedure, in an ex vivo animal model. In addition, we examined whether these effects are consistent between novices and experts in robotics-assisted cardiac surgery. Methods: A cardiac surgery test-bed was constructed to measure forces applied by the da Vinci surgical system (Intuitive Surgical, Sunnyvale, CA) during mitral valve annuloplasty. Both experts and novices completed robotics-assisted mitral valve annuloplasty with 2D and 3D visualization. Results: The mean time for both experts and novices to suture the mitral valve annulus and to tie sutures using 3D visualization was significantly less than that required using 2D visualization (p < 0.01). However, there was no significant difference in the maximum force applied by novices to the mitral valve during suturing (p = 0.3) and suture tying (p = 0.6) using either 2D or 3D visualization. Conclusion: This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery. Keywords: robotics-assisted surgery, visualization, cardiac surgery

  5. Visualization techniques to aid in the analysis of multi-spectral astrophysical data sets

    NASA Technical Reports Server (NTRS)

    Domik, Gitta; Alam, Salim; Pinkney, Paul

    1992-01-01

    This report describes our project activities for the period Sep. 1991 - Oct. 1992. Our activities included stabilizing the software system STAR, porting STAR to IDL/widgets (improved user interface), targeting new visualization techniques for multi-dimensional data visualization (emphasizing 3D visualization), and exploring leading-edge 3D interface devices. During the past project year we emphasized high-end visualization techniques: exploring new tools offered by state-of-the-art visualization software (such as AVS and IDL/widgets), experimenting with tools still under research at the Department of Computer Science (e.g., the use of glyphs for multidimensional data visualization), and surveying current 3D input/output devices as they could be used to explore 3D astrophysical data. As always, all project activity is driven by the need to interpret astrophysical data more effectively.

  6. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D

    PubMed Central

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron

    2017-01-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. PMID:28814063

  7. iCAVE: an open source tool for visualizing biomolecular networks in 3D, stereoscopic 3D and immersive 3D.

    PubMed

    Liluashvili, Vaja; Kalayci, Selim; Fluder, Eugene; Wilson, Manda; Gabow, Aaron; Gümüs, Zeynep H

    2017-08-01

    Visualizations of biomolecular networks assist in systems-level data exploration in many cellular processes. Data generated from high-throughput experiments increasingly inform these networks, yet current tools do not adequately scale with concomitant increase in their size and complexity. We present an open source software platform, interactome-CAVE (iCAVE), for visualizing large and complex biomolecular interaction networks in 3D. Users can explore networks (i) in 3D using a desktop, (ii) in stereoscopic 3D using 3D-vision glasses and a desktop, or (iii) in immersive 3D within a CAVE environment. iCAVE introduces 3D extensions of known 2D network layout, clustering, and edge-bundling algorithms, as well as new 3D network layout algorithms. Furthermore, users can simultaneously query several built-in databases within iCAVE for network generation or visualize their own networks (e.g., disease, drug, protein, metabolite). iCAVE has a modular structure that allows rapid development by addition of algorithms, datasets, or features without affecting other parts of the code. Overall, iCAVE is the first freely available open source tool that enables 3D (optionally stereoscopic or immersive) visualizations of complex, dense, or multi-layered biomolecular networks. While primarily designed for researchers utilizing biomolecular networks, iCAVE can assist researchers in any field. © The Authors 2017. Published by Oxford University Press.

  8. Attention and Visual Motor Integration in Young Children with Uncorrected Hyperopia.

    PubMed

    Kulp, Marjean Taylor; Ciner, Elise; Maguire, Maureen; Pistilli, Maxwell; Candy, T Rowan; Ying, Gui-Shuang; Quinn, Graham; Cyert, Lynn; Moore, Bruce

    2017-10-01

    Among 4- and 5-year-old children, deficits in measures of attention, visual-motor integration (VMI) and visual perception (VP) are associated with moderate, uncorrected hyperopia (3 to 6 diopters [D]) accompanied by reduced near visual function (near visual acuity worse than 20/40 or stereoacuity worse than 240 seconds of arc). To compare attention, visual motor, and visual perceptual skills in uncorrected hyperopes and emmetropes attending preschool or kindergarten and evaluate their associations with visual function. Participants were 4 and 5 years of age with either hyperopia (≥3 to ≤6 D, astigmatism ≤1.5 D, anisometropia ≤1 D) or emmetropia (hyperopia ≤1 D; astigmatism, anisometropia, and myopia each <1 D), without amblyopia or strabismus. Examiners masked to refractive status administered tests of attention (sustained, receptive, and expressive), VMI, and VP. Binocular visual acuity, stereoacuity, and accommodative accuracy were also assessed at near. Analyses were adjusted for age, sex, race/ethnicity, and parent's/caregiver's education. Two hundred forty-four hyperopes (mean, +3.8 ± [SD] 0.8 D) and 248 emmetropes (+0.5 ± 0.5 D) completed testing. Mean sustained attention score was worse in hyperopes compared with emmetropes (mean difference, -4.1; P < .001 for 3 to 6 D). Mean Receptive Attention score was worse in 4 to 6 D hyperopes compared with emmetropes (by -2.6, P = .01). Hyperopes with reduced near visual acuity (20/40 or worse) had worse scores than emmetropes (-6.4, P < .001 for sustained attention; -3.0, P = .004 for Receptive Attention; -0.7, P = .006 for VMI; -1.3, P = .008 for VP). Hyperopes with stereoacuity of 240 seconds of arc or worse scored significantly worse than emmetropes (-6.7, P < .001 for sustained attention; -3.4, P = .03 for Expressive Attention; -2.2, P = .03 for Receptive Attention; -0.7, P = .01 for VMI; -1.7, P < .001 for VP). Overall, hyperopes with better near visual function generally performed similarly to emmetropes. 
Moderately hyperopic children were found to have deficits in measures of attention. Hyperopic children with reduced near visual function also had lower scores on VMI and VP than emmetropic children.

  9. Three-dimensional versus two-dimensional ultrasound for assessing levonorgestrel intrauterine device location: A pilot study.

    PubMed

    Andrade, Carla Maria Araujo; Araujo Júnior, Edward; Torloni, Maria Regina; Moron, Antonio Fernandes; Guazzelli, Cristina Aparecida Falbo

    2016-02-01

    To compare the rates of success of two-dimensional (2D) and three-dimensional (3D) sonographic (US) examinations in locating and adequately visualizing levonorgestrel intrauterine devices (IUDs) and to explore factors associated with the unsuccessful viewing on 2D US. Transvaginal 2D and 3D US examinations were performed on all patients 1 month after insertion of levonorgestrel IUDs. The devices were considered adequately visualized on 2D US if both the vertical (shadow, upper and lower extremities) and the horizontal (two echogenic lines) shafts were identified. 3D volumes were also captured to assess the location of levonorgestrel IUDs on 3D US. Thirty women were included. The rates of adequate device visualization were 40% on 2D US (95% confidence interval [CI], 24.6; 57.7) and 100% on 3D US (95% CI, 88.6; 100.0). The device was not adequately visualized in all six women who had a retroflexed uterus, but it was adequately visualized in 12 of the 24 women (50%) who had a nonretroflexed uterus (95% CI, -68.6; -6.8). We found that 3D US is better than 2D US for locating and adequately visualizing levonorgestrel IUDs. Other well-designed studies with adequate power should be conducted to confirm this finding. © 2015 Wiley Periodicals, Inc.
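The proportions above are reported with score-type confidence intervals. As a sketch, the Wilson score interval below reproduces the reported 2D US figure (12 of 30 adequately visualized, 95% CI 24.6 to 57.7); the function name is an assumption, and the abstract does not state which interval method was actually used, though the Wilson interval matches the reported bounds:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    Preferable to the Wald interval for small samples such as n = 30."""
    p = successes / n
    denom = 1 + z*z/n
    centre = (p + z*z/(2*n)) / denom
    half = z * math.sqrt(p*(1-p)/n + z*z/(4*n*n)) / denom
    return centre - half, centre + half
```

The same formula applied to 30/30 on 3D US gives roughly 88.6 to 100.0, matching the other reported interval.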

  10. Underestimation of Variance of Predicted Health Utilities Derived from Multiattribute Utility Instruments.

    PubMed

    Chan, Kelvin K W; Xie, Feng; Willan, Andrew R; Pullenayegum, Eleanor M

    2017-04-01

    Parameter uncertainty in value sets of multiattribute utility-based instruments (MAUIs) has received little attention previously. This false precision leads to underestimation of the uncertainty of the results of cost-effectiveness analyses. The aim of this study is to examine the use of multiple imputation as a method to account for this uncertainty of MAUI scoring algorithms. We fitted a Bayesian model with random effects for respondents and health states to the data from the original US EQ-5D-3L valuation study, thereby estimating the uncertainty in the EQ-5D-3L scoring algorithm. We applied these results to EQ-5D-3L data from the Commonwealth Fund (CWF) Survey for Sick Adults (n = 3958), comparing the standard error of the estimated mean utility in the CWF population using the predictive distribution from the Bayesian mixed-effect model (i.e., incorporating parameter uncertainty in the value set) with the standard error of the estimated mean utilities based on multiple imputation and the standard error using the conventional approach of using MAUIs (i.e., ignoring uncertainty in the value set). The mean utility in the CWF population based on the predictive distribution of the Bayesian model was 0.827 with a standard error (SE) of 0.011. When utilities were derived using the conventional approach, the estimated mean utility was 0.827 with an SE of 0.003, which is only 25% of the SE based on the full predictive distribution of the mixed-effect model. Using multiple imputation with 20 imputed sets, the mean utility was 0.828 with an SE of 0.011, which is similar to the SE based on the full predictive distribution. Ignoring uncertainty of the predicted health utilities derived from MAUIs could lead to substantial underestimation of the variance of mean utilities. Multiple imputation corrects for this underestimation so that the results of cost-effectiveness analyses using MAUIs can report the correct degree of uncertainty.
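Multiple imputation propagates value-set uncertainty by pooling the per-imputation estimates with Rubin's rules: the total variance is the within-imputation variance plus (1 + 1/m) times the between-imputation variance. A minimal sketch of the pooling step (variable names are illustrative):

```python
def rubin_pool(estimates, variances):
    """Rubin's rules: pool m point estimates and their within-imputation
    variances into one estimate and its total variance."""
    m = len(estimates)
    qbar = sum(estimates) / m                                  # pooled estimate
    within = sum(variances) / m                                # avg within-imputation variance
    between = sum((q - qbar)**2 for q in estimates) / (m - 1)  # between-imputation variance
    total = within + (1 + 1/m) * between
    return qbar, total
```

It is the between-imputation term that restores the variance the conventional single-value-set approach ignores, which is why the multiply-imputed SE (0.011) approaches the full predictive-distribution SE rather than the understated 0.003.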

  11. Evaluation of the 3d Urban Modelling Capabilities in Geographical Information Systems

    NASA Astrophysics Data System (ADS)

    Dogru, A. O.; Seker, D. Z.

    2010-12-01

    Geographical Information System (GIS) technology, which provides successful solutions to basic spatial problems, is now widely used for three-dimensional (3D) modeling of physical reality through its developing visualization tools. Modeling large and complicated phenomena is a challenging problem for the computer graphics currently in use; however, it is possible to visualize such phenomena in 3D using computer systems. 3D models are used in computer games, military training, urban planning, tourism, and so on. The use of 3D models for planning and management of urban areas is a particularly popular issue for city administrations. In this context, 3D city models are produced and used for various purposes, although the requirements vary depending on the type and scope of the application. While high-level visualization, with widely used photorealistic techniques, is required for touristic and recreational purposes, an abstract visualization of physical reality is generally sufficient for communicating thematic information. The visual variables that are the principal components of cartographic visualization, such as color, shape, pattern, orientation, size, position, and saturation, are used for communicating the thematic information; 3D city models of this kind are called abstract models. Standardization of the technologies used for 3D modeling is now available through CityGML. CityGML implements several novel concepts to support interoperability, consistency and functionality. For example, it supports different Levels of Detail (LoD), which may arise from independent data collection processes and are used for efficient visualization and efficient data analysis. In one CityGML data set, the same object may be represented in different LoDs simultaneously, enabling the analysis and visualization of the same object at different degrees of resolution. Furthermore, two CityGML data sets containing the same object in different LoDs may be combined and integrated. In this study, GIS tools used for 3D modeling were examined; in particular, the availability of GIS tools for obtaining the different LoDs of the CityGML standard was evaluated. Additionally, a 3D GIS application covering a small part of the city of Istanbul was implemented to communicate thematic information, rather than photorealistic visualization, using a 3D model. An abstract model was created using the modeling tools of a commercial GIS software package, and the results of the implementation are also presented in the study.

  12. Hybrid 2-D and 3-D Immersive and Interactive User Interface for Scientific Data Visualization

    DTIC Science & Technology

    2017-08-01

    visualization, 3-D interactive visualization, scientific visualization, virtual reality, real-time ray tracing … scientists to employ in the real world. Other than user-friendly software and hardware setup, scientists also need to be able to perform their usual … and scientific visualization communities mostly have different research priorities. For the VR community, the ability to support real-time user …

  13. `We put on the glasses and Moon comes closer!' Urban Second Graders Exploring the Earth, the Sun and Moon Through 3D Technologies in a Science and Literacy Unit

    NASA Astrophysics Data System (ADS)

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day and night, Moon phases and seasons. These modules were used in a science and literacy unit for 35 second graders at an urban elementary school in the Midwestern USA. Data included pre- and post-interviews, audio-taped lessons and classroom observations. Post-interviews demonstrated that children's knowledge of the shapes and movements of the Earth and Moon, the alternation of day and night, the occurrence of the seasons, and the Moon's changing appearance increased. Second graders reported that they enjoyed expanding their knowledge through hands-on experiences; the realism of 3D visualization enabled them to observe space objects moving in virtual space. The teachers noted that 3D visualization stimulated children's interest in space and that using it in combination with other teaching methods (literacy experiences, videos and photos, simulations, discussions, and presentations) supported student learning. The teachers and students still experienced challenges using 3D visualization, due to technical problems with 3D vision and time constraints. We conclude that 3D visualization offers hands-on experiences for challenging science concepts and may support young children's ability to view phenomena that would typically require direct, long-term observation in outer space. The results imply a reconsideration of the assumed capabilities of young children to understand astronomical phenomena.

  14. Measuring visual discomfort associated with 3D displays

    NASA Astrophysics Data System (ADS)

    Lambooij, M.; Fortuin, M.; Ijsselsteijn, W. A.; Heynderickx, I.

    2009-02-01

    Some people report visual discomfort when watching 3D displays. For both the objective measurement of visual fatigue and the subjective measurement of visual discomfort, we would like to arrive at general indicators that are easy to apply in perception experiments. Previous research yielded contradictory results concerning such indicators. We hypothesize two potential causes for this: 1) not all clinical tests are equally appropriate to evaluate the effect of stereoscopic viewing on visual fatigue, and 2) there is a natural variation in susceptibility to visual fatigue amongst people with normal vision. To verify these hypotheses, we designed an experiment, consisting of two parts. Firstly, an optometric screening was used to differentiate participants in susceptibility to visual fatigue. Secondly, in a 2×2 within-subjects design (2D vs 3D and two-view vs nine-view display), a questionnaire and eight optometric tests (i.e. binocular acuity, fixation disparity with and without fusion lock, heterophoria, convergent and divergent fusion, vergence facility and accommodation response) were administered before and immediately after a reading task. Results revealed that participants found to be more susceptible to visual fatigue during screening showed a clinically meaningful increase in fusion amplitude after having viewed 3D stimuli. Two questionnaire items (i.e., pain and irritation) were significantly affected by the participants' susceptibility, while two other items (i.e., double vision and sharpness) were scored differently between 2D and 3D for all participants. Our results suggest that a combination of fusion range measurements and self-report is appropriate for evaluating visual fatigue related to 3D displays.

  15. Saliency Detection of Stereoscopic 3D Images with Application to Visual Discomfort Prediction

    NASA Astrophysics Data System (ADS)

    Li, Hong; Luo, Ting; Xu, Haiyong

    2017-06-01

    Visual saliency detection is potentially useful for a wide range of applications in image processing and computer vision fields. This paper proposes a novel bottom-up saliency detection approach for stereoscopic 3D (S3D) images based on regional covariance matrix. As for S3D saliency detection, besides the traditional 2D low-level visual features, additional 3D depth features should also be considered. However, only limited efforts have been made to investigate how different features (e.g. 2D and 3D features) contribute to the overall saliency of S3D images. The main contribution of this paper is that we introduce a nonlinear feature integration descriptor, i.e., regional covariance matrix, to fuse both 2D and 3D features for S3D saliency detection. The regional covariance matrix is shown to be effective for nonlinear feature integration by modelling the inter-correlation of different feature dimensions. Experimental results demonstrate that the proposed approach outperforms several existing relevant models including 2D extended and pure 3D saliency models. In addition, we also experimentally verified that the proposed S3D saliency map can significantly improve the prediction accuracy of experienced visual discomfort when viewing S3D images.
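
    A minimal sketch of the regional covariance descriptor named above: each pixel in a region contributes a d-dimensional feature vector, and the region is summarized by the covariance of those vectors, whose off-diagonal entries capture the inter-correlation between feature dimensions. The four-feature layout (luminance, x, y, depth) is an illustrative assumption, not the authors' exact feature set.

```python
import numpy as np

def region_covariance(features):
    """Covariance descriptor of an image region.

    features: (N, d) array with one d-dimensional feature vector
    (e.g. luminance, x, y, depth) per pixel. Returns the (d, d)
    covariance matrix, whose off-diagonal entries model the
    inter-correlation of the feature dimensions.
    """
    mu = features.mean(axis=0)
    centered = features - mu
    return centered.T @ centered / (len(features) - 1)

# toy region: 100 pixels, 4 features each (luminance, x, y, depth)
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 4))
C = region_covariance(feats)
print(C.shape)  # (4, 4)
```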

  16. Experimental Evidence for Improved Neuroimaging Interpretation Using Three-Dimensional Graphic Models

    ERIC Educational Resources Information Center

    Ruisoto, Pablo; Juanes, Juan Antonio; Contador, Israel; Mayoral, Paula; Prats-Galino, Alberto

    2012-01-01

    Three-dimensional (3D) or volumetric visualization is a useful resource for learning about the anatomy of the human brain. However, the effectiveness of 3D spatial visualization has not yet been assessed systematically. This report analyzes whether 3D volumetric visualization helps learners to identify and locate subcortical structures more…

  17. Optimizing visual comfort for stereoscopic 3D display based on color-plus-depth signals.

    PubMed

    Shao, Feng; Jiang, Qiuping; Fu, Randi; Yu, Mei; Jiang, Gangyi

    2016-05-30

    Visual comfort is a long-standing problem in stereoscopic 3D (S3D) display. In this paper, targeting the production of S3D content from color-plus-depth signals, we propose a general framework for depth mapping that optimizes visual comfort for S3D display. The main motivation of this work is to remap the depth range of color-plus-depth signals to a new depth range suitable for comfortable S3D display. Towards this end, we first remap the depth range globally based on the adjusted zero-disparity plane, and then present a two-stage global and local depth optimization solution to solve the visual comfort problem. The remapped depth map is used to generate the S3D output. We demonstrate the power of our approach on perceptually uncomfortable and comfortable stereoscopic images.
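
    A minimal sketch of the global stage of such a remapping, assuming a simple linear rescale into a comfort range followed by a shift of the zero-disparity plane. The function names and the linear form are illustrative; the abstract does not give the authors' exact mapping.

```python
import numpy as np

def remap_depth(depth, comfort_min, comfort_max, zero_plane=None):
    """Globally remap a depth map into a comfort range.

    The depth values are linearly rescaled into
    [comfort_min, comfort_max]; optionally, `zero_plane` is
    subtracted to shift the zero-disparity plane.
    """
    d_min, d_max = depth.min(), depth.max()
    scaled = (depth - d_min) / (d_max - d_min)      # normalize to [0, 1]
    remapped = comfort_min + scaled * (comfort_max - comfort_min)
    if zero_plane is not None:
        remapped -= zero_plane                      # adjust zero-disparity plane
    return remapped

depth = np.array([0.0, 10.0, 50.0, 100.0])
print(remap_depth(depth, -5.0, 5.0))  # [-5. -4.  0.  5.]
```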

  18. Preoperative classification assessment reliability and influence on the length of intertrochanteric fracture operations.

    PubMed

    Shen, Jing; Hu, FangKe; Zhang, LiHai; Tang, PeiFu; Bi, ZhengGang

    2013-04-01

    The accuracy of intertrochanteric fracture classification is important; indeed, the patient outcomes are dependent on their classification. The aim of this study was to use the AO classification system to evaluate the variation in classification between X-ray and computed tomography (CT)/3D CT images. Then, differences in the length of surgery were evaluated based on two examinations. Intertrochanteric fractures were reviewed and surgeons were interviewed. The rates of correct discrimination and misclassification (overestimates and underestimates) probabilities were determined. The impact of misclassification on length of surgery was also evaluated. In total, 370 patents and four surgeons were included in the study. All patients had X-ray images and 210 patients had CT/3D CT images. Of them, 214 and 156 patients were treated by intramedullary and extramedullary fixation systems, respectively. The mean length of surgery was 62.1 ± 17.7 min. The overall rate of correct discrimination was 83.8 % and in the classification of A1, A2 and A3 were 80.0, 85.7 and 82.4 %, respectively. The rate of misclassification showed no significant difference between stable and unstable fractures (21.3 vs 13.1 %, P = 0.173). The overall rates of overestimates and underestimates were significantly different (5 vs 11.25 %, P = 0.041). Subtracting the rate of overestimates from underestimates had a positive correlation with prolonged surgery and showed a significant difference with intramedullary fixation (P < 0.001). Classification based on the AO system was good in terms of consistency. CT/3D CT examination was more reliable and more helpful for preoperative assessment, especially for performance of an intramedullary fixation.

  19. Intracranial MRA: single volume vs. multiple thin slab 3D time-of-flight acquisition.

    PubMed

    Davis, W L; Warnock, S H; Harnsberger, H R; Parker, D L; Chen, C X

    1993-01-01

    Single volume three-dimensional (3D) time-of-flight (TOF) MR angiography is the most commonly used noninvasive method for evaluating the intracranial vasculature. The sensitivity of this technique to signal loss from flow saturation limits its utility. A recently developed multislab 3D TOF technique, MOTSA, is less affected by flow saturation and would therefore be expected to yield improved vessel visualization. To study this hypothesis, intracranial MR angiograms were obtained on 10 volunteers using three techniques: MOTSA, single volume 3D TOF using a standard 4.9 ms TE (3D TOFA), and single volume 3D TOF using a 6.8 ms TE (3D TOFB). All three sets of axial source images and maximum intensity projection (MIP) images were reviewed. Each exam was evaluated for the number of intracranial vessels visualized. A total of 502 vessel segments were studied with each technique. With use of the MIP images, 86% of selected vessels were visualized with MOTSA, 64% with 3D TOFA (TE = 4.9 ms), and 67% with TOFB (TE = 6.8 ms). Similarly, with the axial source images, 91% of selected vessels were visualized with MOTSA, 77% with 3D TOFA (TE = 4.9 ms), and 82% with 3D TOFB (TE = 6.8 ms). There is improved visualization of selected intracranial vessels in normal volunteers with MOTSA as compared with single volume 3D TOF. These improvements are believed to be primarily a result of decreased sensitivity to flow saturation seen with the MOTSA technique. No difference in overall vessel visualization was noted for the two single volume 3D TOF techniques.

  20. The 3D widgets for exploratory scientific visualization

    NASA Technical Reports Server (NTRS)

    Herndon, Kenneth P.; Meyer, Tom

    1995-01-01

    Computational fluid dynamics (CFD) techniques are used to simulate flows of fluids like air or water around such objects as airplanes and automobiles. These techniques usually generate very large amounts of numerical data which are difficult to understand without using graphical scientific visualization techniques. There are a number of commercial scientific visualization applications available today which allow scientists to control visualization tools via textual and/or 2D user interfaces. However, these user interfaces are often difficult to use. We believe that 3D direct-manipulation techniques for interactively controlling visualization tools will provide opportunities for powerful and useful interfaces with which scientists can more effectively explore their datasets. A few systems have been developed which use these techniques. In this paper, we will present a variety of 3D interaction techniques for manipulating parameters of visualization tools used to explore CFD datasets, and discuss in detail various techniques for positioning tools in a 3D scene.

  1. Multisampling suprathreshold perimetry: a comparison with conventional suprathreshold and full-threshold strategies by computer simulation.

    PubMed

    Artes, Paul H; Henson, David B; Harper, Robert; McLeod, David

    2003-06-01

    To compare a multisampling suprathreshold strategy with conventional suprathreshold and full-threshold strategies in detecting localized visual field defects and in quantifying the area of loss. Probability theory was applied to examine various suprathreshold pass criteria (i.e., the number of stimuli that have to be seen for a test location to be classified as normal). A suprathreshold strategy that requires three seen or three missed stimuli per test location (multisampling suprathreshold) was selected for further investigation. Simulation was used to determine how the multisampling suprathreshold, conventional suprathreshold, and full-threshold strategies detect localized field loss. To determine the systematic error and variability in estimates of loss area, artificial fields were generated with clustered defects (0-25 field locations with 8- and 16-dB loss) and, for each condition, the number of test locations classified as defective (suprathreshold strategies) and with pattern deviation probability less than 5% (full-threshold strategy), was derived from 1000 simulated test results. The full-threshold and multisampling suprathreshold strategies had similar sensitivity to field loss. Both detected defects earlier than the conventional suprathreshold strategy. The pattern deviation probability analyses of full-threshold results underestimated the area of field loss. The conventional suprathreshold perimetry also underestimated the defect area. With multisampling suprathreshold perimetry, the estimates of defect area were less variable and exhibited lower systematic error. Multisampling suprathreshold paradigms may be a powerful alternative to other strategies of visual field testing. Clinical trials are needed to verify these findings.
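
    The "three seen or three missed" pass criterion lends itself to a short simulation, in the spirit of the computer-simulation methodology the study uses. This is a sketch only; the response model and probabilities below are not taken from the paper.

```python
import random

def classify_location(p_seen, rng, n_seen=3, n_missed=3):
    """Simulate one test location under the multisampling rule:
    present stimuli until either n_seen are seen (location passes
    as normal) or n_missed are missed (location flagged defective).
    p_seen is the probability that a single suprathreshold stimulus
    is seen at this location."""
    seen = missed = 0
    while seen < n_seen and missed < n_missed:
        if rng.random() < p_seen:
            seen += 1
        else:
            missed += 1
    return seen == n_seen  # True -> classified normal

# A healthy location (p = 0.95) should nearly always pass, while a
# deep defect (p = 0.05) should nearly always be flagged.
rng = random.Random(1)
trials = 10_000
healthy = sum(classify_location(0.95, rng) for _ in range(trials)) / trials
defect = sum(classify_location(0.05, rng) for _ in range(trials)) / trials
print(healthy, defect)
```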

  2. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    NASA Astrophysics Data System (ADS)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  3. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    PubMed Central

    Christensen, Anders S.; Elstner, Marcus; Cui, Qiang

    2015-01-01

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The Root Mean Square Deviation (RMSD) interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap type of semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets. PMID:26328834
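
    The RMSD figures quoted above follow the usual root-mean-square-deviation definition over a benchmark set; a small sketch with made-up interaction energies (not the paper's dataset):

```python
import math

def rmsd(predicted, reference):
    """Root-mean-square deviation between model and reference
    interaction energies (kcal/mol)."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference))
                     / len(reference))

# illustrative numbers only: an uncorrected model that systematically
# underestimates the (negative) interaction energies
ref = [-10.2, -5.1, -15.8, -2.3]
dftb3 = [-4.9, -2.0, -9.1, -1.0]
print(round(rmsd(dftb3, ref), 2))  # 4.59
```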

  4. Improving intermolecular interactions in DFTB3 using extended polarization from chemical-potential equalization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Christensen, Anders S., E-mail: andersx@chem.wisc.edu; Cui, Qiang, E-mail: cui@chem.wisc.edu; Elstner, Marcus

    Semi-empirical quantum mechanical methods traditionally expand the electron density in a minimal, valence-only electron basis set. The minimal-basis approximation causes molecular polarization to be underestimated, and hence intermolecular interaction energies are also underestimated, especially for intermolecular interactions involving charged species. In this work, the third-order self-consistent charge density functional tight-binding method (DFTB3) is augmented with an auxiliary response density using the chemical-potential equalization (CPE) method and an empirical dispersion correction (D3). The parameters in the CPE and D3 models are fitted to high-level CCSD(T) reference interaction energies for a broad range of chemical species, as well as dipole moments calculated at the DFT level; the impact of including polarizabilities of molecules in the parameterization is also considered. Parameters for the elements H, C, N, O, and S are presented. The Root Mean Square Deviation (RMSD) interaction energy is improved from 6.07 kcal/mol to 1.49 kcal/mol for interactions with one charged species, whereas the RMSD is improved from 5.60 kcal/mol to 1.73 kcal/mol for a set of 9 salt bridges, compared to uncorrected DFTB3. For large water clusters and complexes that are dominated by dispersion interactions, the already satisfactory performance of the DFTB3-D3 model is retained; polarizabilities of neutral molecules are also notably improved. Overall, the CPE extension of DFTB3-D3 provides a more balanced description of different types of non-covalent interactions than Neglect of Diatomic Differential Overlap type of semi-empirical methods (e.g., PM6-D3H4) and PBE-D3 with modest basis sets.

  5. Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine

    NASA Astrophysics Data System (ADS)

    Harris, E.

    Planning, Implementation and Optimization of Future Space Missions using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, U. of Colorado at Boulder. History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center, using virtual simulation for clearance checks, collision detection, ergonomics and reach-ability analyses to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions.
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.

  6. First responder tracking and visualization for command and control toolkit

    NASA Astrophysics Data System (ADS)

    Woodley, Robert; Petrov, Plamen; Meisinger, Roger

    2010-04-01

    In order for First Responder Command and Control personnel to visualize incidents at urban building locations, DHS sponsored a small business research program to develop a tool to visualize 3D building interiors and movement of First Responders on site. 21st Century Systems, Inc. (21CSI), has developed a toolkit called Hierarchical Grid Referenced Normalized Display (HiGRND). HiGRND utilizes three components to provide a full spectrum of visualization tools to the First Responder. First, HiGRND visualizes the structure in 3D. Utilities in the 3D environment allow the user to switch between views (2D floor plans, 3D spatial, evacuation routes, etc.) and manually edit fast changing environments. HiGRND accepts CAD drawings and 3D digital objects and renders these in the 3D space. Second, HiGRND has a First Responder tracker that uses the transponder signals from First Responders to locate them in the virtual space. We use the movements of the First Responder to map the interior of structures. Finally, HiGRND can turn 2D blueprints into 3D objects. The 3D extruder extracts walls, symbols, and text from scanned blueprints to create the 3D mesh of the building. HiGRND increases the situational awareness of First Responders and allows them to make better, faster decisions in critical urban situations.

  7. Central blood pressure in children and adolescents: non-invasive development and testing of novel transfer functions.

    PubMed

    Cai, T Y; Qasem, A; Ayer, J G; Butlin, M; O'Meagher, S; Melki, C; Marks, G B; Avolio, A; Celermajer, D S; Skilton, M R

    2017-12-01

    Central blood pressure can be estimated from peripheral pulses in adults using generalised transfer functions (TF). We sought to create and test age-specific non-invasively developed TFs in children, with comparison to a pre-existing adult TF. We studied healthy children from two sites at two time points, 8 and 14 years of age, split by site into development and validation groups. Radial and carotid pressure waveforms were obtained by applanation tonometry. Central systolic pressure was derived from carotid waveforms calibrated to brachial mean and diastolic pressures. Age-specific TFs created in the development groups (n=50) were tested in the validation groups aged 8 (n=137) and 14 years (n=85). At 8 years of age, the age-specific TF estimated 82, 99 and 100% of central systolic pressure values within 5, 10 and 15 mm Hg of their measured values, respectively. This TF overestimated central systolic pressure by 2.2 (s.d. 3.7) mm Hg, compared to being underestimated by 5.6 (s.d. 3.9) mm Hg with the adult TF. At 14 years of age, the age-specific TF estimated 60, 87 and 95% of values within 5, 10 and 15 mm Hg of their measured values, respectively. This TF underestimated central systolic pressure by 0.5 (s.d. 6.7) mm Hg, while the adult TF underestimated it by 6.8 (s.d. 6.0) mm Hg. In conclusion, age-specific TFs more accurately predict central systolic pressure measured at the carotid artery in children than an existing adult TF.
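
    The accuracy summary used in the abstract (bias with s.d., plus the percentage of estimates falling within 5, 10 and 15 mm Hg of the measured values) can be sketched as below. The pressure values are made up for illustration; they are not the study's data.

```python
import numpy as np

def agreement_summary(estimated, measured, thresholds=(5, 10, 15)):
    """Summarize transfer-function accuracy: mean difference (bias)
    and its s.d., plus the percentage of estimates within each
    threshold (mm Hg) of the measured values."""
    diff = np.asarray(estimated) - np.asarray(measured)
    within = {t: float(np.mean(np.abs(diff) <= t)) * 100 for t in thresholds}
    return diff.mean(), diff.std(ddof=1), within

# invented central systolic pressures (mm Hg)
est = np.array([102.0, 98.5, 110.2, 95.0, 104.8])
meas = np.array([100.0, 99.0, 108.0, 101.5, 103.9])
bias, sd, within = agreement_summary(est, meas)
print(round(bias, 2), {t: round(p) for t, p in within.items()})
```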

  8. Three-dimensional measurement of yarn hairiness via multiperspective images

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Xu, Bugao; Gao, Weidong

    2018-02-01

    Yarn hairiness is one of the essential parameters for assessing yarn quality. Most of the currently used yarn measurement systems are based on two-dimensional (2-D) photoelectric measurements, which are likely to underestimate levels of yarn hairiness because hairy fibers on a yarn surface are often projected or occluded in these 2-D systems. A three-dimensional (3-D) test method for hairiness measurement using a multiperspective imaging system is presented. The system was developed to reconstruct a 3-D yarn model for tracing the actual length of hairy fibers on a yarn surface. Five views of a yarn from different perspectives were created by two angled mirrors and simultaneously captured in one panoramic picture by a camera. A 3-D model was built by extracting the yarn silhouettes in the five views and transferring the silhouettes into a common coordinate system. From the 3-D model, curved hair fibers were traced spatially so that projection and occlusion occurring in the current systems could be avoided. In the experiment, the proposed method was compared with two commercial instruments, i.e., the Uster Tester and Zweigle Tester. It is demonstrated that the length distribution of hairy fibers measured from the 3-D model showed an exponential growth when the fiber length is sorted from shortest to longest. The hairiness measurements, such as H-value, measured by the multiperspective method were highly consistent with those of Uster Tester (r=0.992) but had larger values than those obtained from Uster Tester and Zweigle Tester, proving that the proposed method corrected underestimated hairiness measurements in the commercial systems.
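
    Agreement between two hairiness measures over the same yarns is reported as a Pearson correlation (r = 0.992 against the Uster Tester). A minimal sketch of that comparison with invented H-values:

```python
import numpy as np

# hypothetical H-values for five yarns measured by both methods
# (illustrative numbers only, not the paper's data)
multiperspective = np.array([6.1, 5.4, 7.0, 4.8, 6.5])
uster = np.array([5.8, 5.1, 6.6, 4.6, 6.2])

# Pearson correlation coefficient between the two measurement series
r = np.corrcoef(multiperspective, uster)[0, 1]
print(round(r, 3))
```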

  9. Effects of blurring and vertical misalignment on visual fatigue of stereoscopic displays

    NASA Astrophysics Data System (ADS)

    Baek, Sangwook; Lee, Chulhee

    2015-03-01

    In this paper, we investigate two error issues in stereo images, which may produce visual fatigue. When two cameras are used to produce 3D video sequences, vertical misalignment can be a problem. Although this problem may not occur in professionally produced 3D programs, it is still a major issue in many low-cost 3D programs. Recently, efforts have been made to produce 3D video programs using smart phones or tablets, which may present the vertical alignment problem. Also, in 2D-3D conversion techniques, the simulated frame may have blur effects, which can also introduce visual fatigue in 3D programs. In this paper, to investigate the relationship between these two errors (vertical misalignment and blurring in one image), we performed a subjective test using simulated 3D video sequences that include stereo video sequences with various vertical misalignments and blurring in a stereo image. We present some analyses along with objective models to predict the degree of visual fatigue from vertical misalignment and blurring.
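
    The two error types injected into the test sequences can be simulated directly on one view of a stereo pair; a toy sketch (the offset, kernel size and images are illustrative, not the study's stimuli):

```python
import numpy as np

def vertical_misalign(image, offset_px):
    """Shift one view of a stereo pair vertically by offset_px rows,
    simulating camera misalignment."""
    return np.roll(image, offset_px, axis=0)

def box_blur(image, k=3):
    """Crude separable k-tap box blur on one view, standing in for
    the blur that 2D-3D conversion can introduce."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

right = np.eye(8)  # toy 'right eye' frame
degraded = box_blur(vertical_misalign(right, 2))
print(degraded.shape)  # (8, 8)
```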

  10. Usability of stereoscopic view in teleoperation

    NASA Astrophysics Data System (ADS)

    Boonsuk, Wutthigrai

    2015-03-01

    Recently, there has been tremendous growth in the area of 3D stereoscopic visualization. The 3D stereoscopic visualization technology has been used in a growing number of consumer products such as 3D televisions and 3D glasses for gaming systems. This technology builds on the fact that the human brain develops depth perception by retrieving information from the two eyes: it combines the left and right images on the retinas and extracts depth information. Therefore, viewing two video images taken from viewpoints a slight distance apart, as shown in Figure 1, can create an illusion of depth [8]. Proponents of this technology argue that the stereo view of 3D visualization increases user immersion and performance, as more information is gained through 3D vision than through a 2D view. However, it is still uncertain whether the additional information gained from 3D stereoscopic visualization can actually improve user performance in real-world situations such as teleoperation.

  11. Visualization of volumetric seismic data

    NASA Astrophysics Data System (ADS)

    Spickermann, Dela; Böttinger, Michael; Ashfaq Ahmed, Khawar; Gajewski, Dirk

    2015-04-01

    Mostly driven by demands of high-quality subsurface imaging, highly specialized tools and methods have been developed to support the processing, visualization and interpretation of seismic data. 3D seismic data acquisition and 4D time-lapse seismic monitoring are well-established techniques in academia and industry, producing large amounts of data to be processed, visualized and interpreted. In this context, interactive 3D visualization methods have proved valuable for the analysis of 3D seismic data cubes, especially for sedimentary environments with continuous horizons. In crystalline and hard rock environments, where hydraulic stimulation techniques may be applied to produce geothermal energy, interpretation of the seismic data is a more challenging problem. Instead of continuous reflection horizons, the imaging targets are often steeply dipping faults, causing many diffractions. Without further preprocessing, these geological structures are often hidden behind the noise in the data. In this PICO presentation we present a workflow consisting of data processing steps, which enhance the signal-to-noise ratio, followed by a visualization step based on the use of the commercially available general-purpose 3D visualization system Avizo. Specifically, we have used Avizo Earth, an extension to Avizo which supports the import of seismic data in SEG-Y format and offers easy access to state-of-the-art 3D visualization methods at interactive frame rates, even for large seismic data cubes. In seismic interpretation using visualization, interactivity is a key requirement for understanding complex 3D structures. In order to enable easy communication of the insights gained during the interactive visualization process, animations of the visualized data were created which support the spatial understanding of the data.
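
    As one hedged example of what a signal-to-noise enhancement step might look like (the abstract does not detail the actual processing chain), a simple FFT band-pass applied to a synthetic seismic trace:

```python
import numpy as np

def bandpass(trace, dt, f_lo, f_hi):
    """Zero out frequencies outside [f_lo, f_hi] Hz via the FFT —
    a minimal stand-in for a signal-to-noise enhancement step,
    not the workflow used in the presentation."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=dt)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(spec, n=len(trace))

# toy trace: a 30 Hz signal buried under 200 Hz noise, 1 ms sampling
t = np.arange(0, 1.0, 0.001)
trace = np.sin(2 * np.pi * 30 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
clean = bandpass(trace, 0.001, 10.0, 60.0)
# the 200 Hz component is removed; clean ≈ sin(2π·30·t)
```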

  12. An objective estimate of energy intake during weight gain using the intake-balance method

    PubMed Central

    Gilmore, L Anne; Ravussin, Eric; Bray, George A; Han, Hongmei; Redman, Leanne M

    2014-01-01

    Background: Estimates of energy intake (EI) in humans have limited validity. Objective: The objective was to test the accuracy and precision of the intake-balance method to estimate EI during weight gain induced by overfeeding. Design: In 2 studies of controlled overfeeding (1 inpatient study and 1 outpatient study), baseline energy requirements were determined by a doubly labeled water study and caloric titration to weight maintenance. Overfeeding was prescribed as 140% of baseline energy requirements for 56 d. Changes in weight, fat mass (FM), and fat-free mass (FFM) were used to estimate change in energy stores (ΔES). Overfeeding EI was estimated as the sum of baseline energy requirements, thermic effect of food, and ΔES. The estimated overfeeding EI was then compared with the actual EI consumed in the metabolic chamber during the last week of overfeeding. Results: In inpatient individuals, calculated EI during overfeeding determined from ΔES in FM and FFM was (mean ± SD) 3461 ± 848 kcal/d, which was not significantly (−29 ± 273 kcal/d or 0.8%; limits of agreement: −564, 505 kcal/d; P = 0.78) different from the actual EI provided (3490 ± 729 kcal/d). Estimated EI determined from ΔES in weight closely estimated actual intake (−7 ± 193 kcal/d or 0.2%; limits of agreement: −386, 370 kcal/d; P = 0.9). In free-living individuals, estimated EI during overfeeding determined from ΔES in FM and FFM was 4123 ± 500 kcal/d and underestimated actual EI (4286 ± 488 kcal/d; −162 ± 301 kcal or 3.8%; limits of agreement: −751, 427 kcal/d; P = 0.003). Estimated EI determined from ΔES in weight also underestimated actual intake (−159 ± 270 kcal/d or 3.7%; limits of agreement: −688, 370 kcal/d; P = 0.001). Conclusion: The intake-balance method can be used to estimate EI during a period of weight gain as a result of 40% overfeeding in individuals who are inpatients or free-living with only a slight underestimate of actual EI by 0.2–3.8%. 
This trial was registered at clinicaltrials.gov as NCT00565149 and NCT01672632. PMID:25057153
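
    The intake-balance arithmetic described above (EI = baseline energy requirement + thermic effect of food + rate of change in energy stores) can be sketched as follows. The tissue energy densities (9500 kcal/kg fat mass, 1020 kcal/kg fat-free mass) and the 10% TEF fraction are common literature values assumed here, not figures taken from the paper:

```python
def estimated_energy_intake(baseline_kcal, delta_fm_kg, delta_ffm_kg,
                            days, tef_fraction=0.10,
                            fm_kcal_per_kg=9500.0, ffm_kcal_per_kg=1020.0):
    """Intake-balance estimate of daily energy intake during weight gain.

    delta ES (kcal/d) is derived from the changes in fat mass and
    fat-free mass over the overfeeding period; the thermic effect of
    food is modelled as a fixed fraction of gross intake.
    """
    delta_es = (delta_fm_kg * fm_kcal_per_kg +
                delta_ffm_kg * ffm_kcal_per_kg) / days
    net = baseline_kcal + delta_es
    return net / (1.0 - tef_fraction)  # gross intake including TEF

# e.g. +4 kg fat and +2 kg fat-free mass over 56 d on a 2500 kcal/d baseline
print(round(estimated_energy_intake(2500, 4.0, 2.0, 56)))  # 3572
```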

  13. Ion Counting from Explicit-Solvent Simulations and 3D-RISM

    PubMed Central

    Giambaşu, George M.; Luchko, Tyler; Herschlag, Daniel; York, Darrin M.; Case, David A.

    2014-01-01

    The ionic atmosphere around nucleic acids remains only partially understood at atomic-level detail. Ion counting (IC) experiments provide a quantitative measure of the ionic atmosphere around nucleic acids and, as such, are a natural route for testing quantitative theoretical approaches. In this article, we replicate IC experiments involving duplex DNA in NaCl(aq) using molecular dynamics (MD) simulation, the three-dimensional reference interaction site model (3D-RISM), and nonlinear Poisson-Boltzmann (NLPB) calculations and test against recent buffer-equilibration atomic emission spectroscopy measurements. Further, we outline the statistical mechanical basis for interpreting IC experiments and clarify the use of specific concentration scales. Near physiological concentrations, MD simulation and 3D-RISM estimates are close to experimental results, but at higher concentrations (>0.7 M), both methods underestimate the number of condensed cations and overestimate the number of excluded anions. The effect of DNA charge on ion and water atmosphere extends 20–25 Å from its surface, yielding layered density profiles. Overall, ion distributions from 3D-RISMs are relatively close to those from corresponding MD simulations, but with less Na+ binding in grooves and tighter binding to phosphates. NLPB calculations, on the other hand, systematically underestimate the number of condensed cations at almost all concentrations and yield nearly structureless ion distributions that are qualitatively distinct from those generated by both MD simulation and 3D-RISM. These results suggest that MD simulation and 3D-RISM may be further developed to provide quantitative insight into the characterization of the ion atmosphere around nucleic acids and their effect on structure and stability. PMID:24559991
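
    A geometric sketch of counting condensed ions from simulation snapshots: ions within a cutoff distance of any DNA atom are tallied. The 20-25 Å extent quoted in the abstract motivates the default cutoff, but real ion-counting analyses integrate excess densities rather than applying hard cutoffs, so this is illustrative only.

```python
import numpy as np

def count_condensed_ions(ion_xyz, dna_xyz, cutoff=20.0):
    """Count ions within `cutoff` angstroms of any DNA atom.

    ion_xyz: (n_ions, 3) coordinates; dna_xyz: (n_atoms, 3).
    """
    # pairwise distance matrix of shape (n_ions, n_dna_atoms)
    d = np.linalg.norm(ion_xyz[:, None, :] - dna_xyz[None, :, :], axis=-1)
    return int(np.sum(d.min(axis=1) <= cutoff))

dna = np.zeros((1, 3))                        # toy: one DNA 'atom' at the origin
ions = np.array([[5.0, 0, 0], [30.0, 0, 0]])  # one inside, one outside the cutoff
print(count_condensed_ions(ions, dna))  # 1
```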

  14. Characteristics of visual fatigue under the effect of 3D animation.

    PubMed

    Chang, Yu-Shuo; Hsueh, Ya-Hsin; Tung, Kwong-Chung; Jhou, Fong-Yi; Lin, David Pei-Cheng

    2015-01-01

    Visual fatigue is commonly encountered in modern life. The clinical characteristics of visual fatigue caused by 2-D and 3-D animations may differ, but have not been characterized in detail. This study aimed to distinguish the differential effects of 2-D and 3-D animations on visual fatigue. A total of 23 volunteers underwent accommodation and vergence assessments, followed by a 40-min video game program designed to aggravate their asthenopic symptoms. The volunteers were then assessed again for accommodation and vergence parameters, directed to watch a 5-min 3-D video program, and assessed once more. The results indicate that 3-D animations produced visual fatigue characteristics similar in some respects to those caused by 2-D animations. Furthermore, 3-D animations may fatigue both the ciliary and extra-ocular muscles more strongly, and these differential effects were most evident under high near-vision demand. The current results suggest that a dedicated set of fatigue indexes could inform the design of 3-D displays and equipment.

  15. 3D Modelling and Visualization Based on the Unity Game Engine - Advantages and Challenges

    NASA Astrophysics Data System (ADS)

    Buyuksalih, I.; Bayburt, S.; Buyuksalih, G.; Baskaraca, A. P.; Karim, H.; Rahman, A. A.

    2017-11-01

    3D city modelling is increasingly popular and is becoming a valuable tool for managing big cities. Urban and energy planning, landscape and noise/sewage modelling, underground mapping, and navigation are among the fields that depend on 3D modelling for effective operation. Several research projects have been carried out to provide the most reliable 3D data format for sharing, functionality, visualization, and analysis. For instance, the BIMTAS company recently completed a project to estimate the potential solar energy on 3D buildings for the whole of Istanbul and is now focusing on 3D underground utility mapping for a pilot case study. The research and implementation standard in the 3D city model domain (3D data sharing and visualization schema) is CityGML schema version 2.0. However, there are some limitations and issues in the implementation phase for large datasets. Most of the limitations were due to the visualization, database integration, and analysis platform (the Unity3D game engine), as highlighted in this paper.

  16. Variability of 4D flow parameters when subjected to changes in MRI acquisition parameters using a realistic thoracic aortic phantom.

    PubMed

    Montalba, Cristian; Urbina, Jesus; Sotelo, Julio; Andia, Marcelo E; Tejos, Cristian; Irarrazaval, Pablo; Hurtado, Daniel E; Valverde, Israel; Uribe, Sergio

    2018-04-01

    To assess the variability of peak flow, mean velocity, stroke volume, and wall shear stress measurements derived from 3D cine phase contrast (4D flow) sequences under different conditions of spatial and temporal resolution. We performed controlled experiments using a thoracic aortic phantom connected to a pulsatile flow pump, which simulated nine physiological conditions. For each condition, 4D flow data were acquired with different spatial and temporal resolutions. The 2D cine phase contrast data and the 4D flow data with the highest available spatio-temporal resolution were used as references for comparison. When comparing 4D flow acquisitions (spatial and temporal resolution of 2.0 × 2.0 × 2.0 mm³ and 40 ms, respectively) with 2D phase-contrast flow acquisitions, the underestimation of peak flow, mean velocity, and stroke volume was 10.5, 10, and 5%, respectively. However, the calculated wall shear stress showed an underestimation larger than 70% for the former acquisition with respect to 4D flow at a spatial and temporal resolution of 1.0 × 1.0 × 1.0 mm³ and 20 ms, respectively. Peak flow, mean velocity, and stroke volume from 4D flow data are more sensitive to changes of temporal than spatial resolution, as opposed to wall shear stress, which is more sensitive to changes in spatial resolution. Magn Reson Med 79:1882-1892, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
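The spatial-resolution sensitivity of wall shear stress has a simple origin: WSS is the viscosity times the wall-normal velocity gradient, and a coarse voxel grid flattens that gradient. A minimal sketch with an idealized parabolic (Poiseuille) profile; all numbers are illustrative, not taken from the phantom experiments:

```python
MU = 4.0e-3        # blood dynamic viscosity, Pa*s (illustrative)
U_MAX = 1.0        # centerline velocity, m/s (illustrative)
R = 0.012          # vessel radius, m (roughly aortic scale, illustrative)

def poiseuille(rr):
    """Idealized parabolic velocity profile, zero at the wall (rr = R)."""
    return U_MAX * (1.0 - (rr / R) ** 2)

def wss_finite_difference(h):
    """One-sided WSS estimate from voxels of size h: the gradient is
    taken between the wall (velocity 0) and the first voxel center."""
    return MU * poiseuille(R - h) / h

wss_true = 2.0 * MU * U_MAX / R          # analytic wall shear stress
wss_1mm = wss_finite_difference(1.0e-3)  # finer spatial resolution
wss_2mm = wss_finite_difference(2.0e-3)  # coarser spatial resolution
```

Both finite-difference estimates fall below the analytic value, and the 2 mm sampling underestimates more than the 1 mm sampling, mirroring the direction of the bias reported above.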

  17. Direct Measurement of Proximal Isovelocity Surface Area by Real-Time Three-Dimensional Color Doppler for Quantitation of Aortic Regurgitant Volume: An In Vitro Validation

    PubMed Central

    Pirat, Bahar; Little, Stephen H.; Igo, Stephen R.; McCulloch, Marti; Nosé, Yukihiko; Hartley, Craig J.; Zoghbi, William A.

    2012-01-01

    Objective The proximal isovelocity surface area (PISA) method is useful in the quantitation of aortic regurgitation (AR). We hypothesized that actual measurement of PISA provided with real-time 3-dimensional (3D) color Doppler yields more accurate regurgitant volumes than those estimated by 2-dimensional (2D) color Doppler PISA. Methods We developed a pulsatile flow model for AR with an imaging chamber in which interchangeable regurgitant orifices with defined shapes and areas were incorporated. An ultrasonic flow meter was used to calculate the reference regurgitant volumes. A total of 29 different flow conditions for 5 orifices with different shapes were tested at a rate of 72 beats/min. 2D PISA was calculated as 2πr², and 3D PISA was measured from 8 equidistant radial planes of the 3D PISA. Regurgitant volume was derived as PISA × aliasing velocity × time-velocity integral of AR / peak AR velocity. Results Regurgitant volumes by flow meter ranged between 12.6 and 30.6 mL/beat (mean 21.4 ± 5.5 mL/beat). Regurgitant volumes estimated by 2D PISA correlated well with volumes measured by flow meter (r = 0.69); however, a significant underestimation was observed (y = 0.5x + 0.6). Correlation with flow meter volumes was stronger for 3D PISA-derived regurgitant volumes (r = 0.83); significantly less underestimation of regurgitant volumes was seen, with a regression line close to identity (y = 0.9x + 3.9). Conclusion Direct measurement of PISA is feasible, without geometric assumptions, using real-time 3D color Doppler. Calculation of aortic regurgitant volumes with 3D color Doppler using this methodology is more accurate than the conventional 2D method with its hemispheric PISA assumption. PMID:19168322
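The volume formula quoted in the Methods translates directly into code. A minimal sketch with illustrative inputs (an assumed 0.5 cm alias radius and typical aliasing/jet velocities, not values from the study):

```python
import math

def regurgitant_volume(pisa_cm2, aliasing_velocity_cm_s, vti_cm, peak_velocity_cm_s):
    """Regurgitant volume (mL/beat) from the PISA relation above:
    PISA x aliasing velocity x VTI of the AR jet / peak AR velocity."""
    flow_rate = pisa_cm2 * aliasing_velocity_cm_s    # cm^3/s = mL/s
    return flow_rate * vti_cm / peak_velocity_cm_s   # mL per beat

def hemispheric_pisa(radius_cm):
    """Conventional 2D assumption: the isovelocity shell is a hemisphere."""
    return 2.0 * math.pi * radius_cm ** 2

# Illustrative inputs: 0.5 cm alias radius, 40 cm/s aliasing velocity,
# AR jet VTI of 150 cm, peak AR velocity of 450 cm/s.
pisa_2d = hemispheric_pisa(0.5)
rvol = regurgitant_volume(pisa_2d, 40.0, 150.0, 450.0)   # roughly 21 mL
```

The 3D variant replaces `hemispheric_pisa` with the surface area measured directly from the radial planes, which is what removes the hemispheric geometric assumption.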

  18. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    NASA Astrophysics Data System (ADS)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistical functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package built primarily for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D, and 3D visualization functions such as scatter plots and line graphs for 1D data; boxfill, meshfill, isofill, and isoline for 2D scalar data; vector glyphs and streamlines for 2D vector data; and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, our plotting routines include projections, Skew-T plots, and Taylor diagrams. While VCS provided a user-friendly API, its previous implementation relied on a slow vector-graphics (Cairo) backend suitable only for smaller datasets and non-interactive graphics. The LLNL and Kitware teams have added a new backend to VCS that uses the Visualization Toolkit (VTK). VTK is one of the most popular open-source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and its pipeline-processing architecture yield a high-performance VCS library, and its multitude of supported data formats and visualization algorithms make it easy to adopt new visualization methods and data formats in VCS.
In this presentation, we describe recent contributions to VCS, including new visualization plots, continuous integration testing using Conda and CircleCI, and tutorials and examples using Jupyter notebooks, as well as upgrades planned for the near future that will improve its ease of use and reliability and extend its capabilities.

  19. Tactical 3D Model Generation using Structure-From-Motion on Video from Unmanned Systems

    DTIC Science & Technology

    2015-04-01

    available SfM application known as VisualSFM.6,7 VisualSFM is an end-user, "off-the-shelf" implementation of SfM that is easy to configure and used for...most 3D model generation applications from imagery. While the usual interface with VisualSFM is through their graphical user interface (GUI), we will be...of our system.5 There are two types of 3D model generation available within VisualSFM: sparse and dense reconstruction. Sparse reconstruction begins

  20. Visual determinants of reduced performance on the Stroop color-word test in normal aging individuals.

    PubMed

    van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J

    2001-10-01

    It is unknown to what extent performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card 1 (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study if some of the age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance that need to be accounted for when the Stroop test is used in both research and clinical settings. Stroop performance measured in older individuals with unknown visual status should be interpreted with caution.

  1. Vertical visual features have a strong influence on cuttlefish camouflage.

    PubMed

    Ulmer, K M; Buresch, K C; Kossodo, M M; Mäthger, L M; Siemann, L A; Hanlon, R T

    2013-04-01

    Cuttlefish and other cephalopods use visual cues from their surroundings to adaptively change their body pattern for camouflage. Numerous previous experiments have demonstrated the influence of two-dimensional (2D) substrates (e.g., sand and gravel habitats) on camouflage, yet many marine habitats have varied three-dimensional (3D) structures among which cuttlefish camouflage from predators, including benthic predators that view cuttlefish horizontally against such 3D backgrounds. We conducted laboratory experiments, using Sepia officinalis, to test the relative influence of horizontal versus vertical visual cues on cuttlefish camouflage: 2D patterns on benthic substrates were tested versus 2D wall patterns and 3D objects with patterns. Specifically, we investigated the influence of (i) quantity and (ii) placement of high-contrast elements on a 3D object or a 2D wall, as well as (iii) the diameter and (iv) number of 3D objects with high-contrast elements on cuttlefish body pattern expression. Additionally, we tested the influence of high-contrast visual stimuli covering the entire 2D benthic substrate versus the entire 2D wall. In all experiments, visual cues presented in the vertical plane evoked the strongest body pattern response in cuttlefish. These experiments support field observations that, in some marine habitats, cuttlefish will respond to vertically oriented background features even when the preponderance of visual information in their field of view seems to be from the 2D surrounding substrate. Such choices highlight the selective decision-making that occurs in cephalopods with their adaptive camouflage capability.

  2. Fully automatic three-dimensional visualization of intravascular optical coherence tomography images: methods and feasibility in vivo

    PubMed Central

    Ughi, Giovanni J; Adriaenssens, Tom; Desmet, Walter; D’hooge, Jan

    2012-01-01

    Intravascular optical coherence tomography (IV-OCT) is an imaging modality that can be used for the assessment of intracoronary stents. Recent publications have pointed out that 3D visualizations have potential advantages over conventional 2D representations. However, 3D imaging still requires a time-consuming manual procedure not suitable for on-line application during coronary interventions. We propose an algorithm for rapid and fully automatic 3D visualization of IV-OCT pullbacks. IV-OCT images are first processed for the segmentation of the different structures, which also allows for automatic pullback calibration. Then, according to the segmentation results, different structures are depicted in different colors to visualize the vessel wall, the stent, and the guide-wire in detail. Final 3D rendering results are obtained through the use of a commercial 3D DICOM viewer. Manual analysis was used as ground truth for the validation of the segmentation algorithms. A correlation value of 0.99 and good limits of agreement (Bland-Altman statistics) were found over 250 images randomly extracted from 25 in vivo pullbacks. Moreover, 3D renderings were compared to angiography, to pictures of deployed stents made available by the manufacturers, and to conventional 2D imaging, corroborating the visualization results. Computation time for the visualization of an entire data set was ~74 s. The proposed method allows for the on-line use of 3D IV-OCT during percutaneous coronary interventions, potentially allowing treatment optimization. PMID:23243578

  3. Global Modeling of Tropospheric Chemistry with Assimilated Meteorology: Model Description and Evaluation

    NASA Technical Reports Server (NTRS)

    Bey, Isabelle; Jacob, Daniel J.; Yantosca, Robert M.; Logan, Jennifer A.; Field, Brendan D.; Fiore, Arlene M.; Li, Qin-Bin; Liu, Hong-Yu; Mickley, Loretta J.; Schultz, Martin G.

    2001-01-01

    We present a first description and evaluation of GEOS-CHEM, a global three-dimensional (3-D) model of tropospheric chemistry driven by assimilated meteorological observations from the Goddard Earth Observing System (GEOS) of the NASA Data Assimilation Office (DAO). The model is applied to a 1-year simulation of tropospheric ozone-NOx-hydrocarbon chemistry for 1994, and is evaluated with observations both for 1994 and for other years. It usually reproduces to within 10 ppb the concentrations of ozone observed from the worldwide ozonesonde data network. It simulates correctly the seasonal phases and amplitudes of ozone concentrations for different regions and altitudes, but tends to underestimate the seasonal amplitude at northern midlatitudes. Concentrations of NO and peroxyacetylnitrate (PAN) observed in aircraft campaigns are generally reproduced to within a factor of 2 and often much better. Concentrations of HNO3 in the remote troposphere are overestimated typically by a factor of 2-3, a common problem in global models that may reflect a combination of insufficient precipitation scavenging and gas-aerosol partitioning not resolved by the model. The model yields an atmospheric lifetime of methylchloroform (proxy for global OH) of 5.1 years, as compared to a best estimate from observations of 5.5 plus or minus 0.8 years, and simulates H2O2 concentrations observed from aircraft with significant regional disagreements but no global bias. The OH concentrations are approximately 20% higher than in our previous global 3-D model, which included a UV-absorbing aerosol. Concentrations of CO tend to be underestimated by the model, often by 10-30 ppb, which could reflect a combination of excessive OH (a 20% decrease in model OH could be accommodated by the methylchloroform constraint) and an underestimate of CO sources (particularly biogenic). The model underestimates observed acetone concentrations over the South Pacific in fall by a factor of 3; a missing source from the ocean may be implicated.
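The methylchloroform constraint mentioned above works because reaction with OH is the dominant sink for CH3CCl3, so the global lifetime is roughly the reciprocal of k·[OH]. A back-of-the-envelope sketch follows; the rate constant and OH level are illustrative round numbers, not the model's temperature- and mass-weighted values:

```python
# Methylchloroform (CH3CCl3) is removed mainly by reaction with OH, so
# its atmospheric lifetime constrains the global mean OH concentration:
#     lifetime = 1 / (k * [OH])
K_OH = 6.0e-15            # rate constant, cm^3 molecule^-1 s^-1 (approximate)
OH = 1.0e6                # global mean OH, molecule cm^-3 (approximate)
SECONDS_PER_YEAR = 3.156e7

lifetime_years = 1.0 / (K_OH * OH) / SECONDS_PER_YEAR   # roughly 5 years
```

Raising OH shortens the lifetime, which is why the model's 5.1-year lifetime corresponds to somewhat higher OH than the 5.5-year observational best estimate.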

  4. Accuracy and reproducibility of aortic annular measurements obtained from echocardiographic 3D manual and semi-automated software analyses in patients referred for transcatheter aortic valve implantation: implication for prosthesis size selection.

    PubMed

    Stella, Stefano; Italia, Leonardo; Geremia, Giulia; Rosa, Isabella; Ancona, Francesco; Marini, Claudia; Capogrosso, Cristina; Giglio, Manuela; Montorfano, Matteo; Latib, Azeem; Margonato, Alberto; Colombo, Antonio; Agricola, Eustachio

    2018-02-06

    A 3D transoesophageal echocardiography (3D-TOE) reconstruction tool has recently been introduced. The system automatically configures a geometric model of the aortic root and performs quantitative analysis of these structures. We compared the measurements of the aortic annulus (AA) obtained by semi-automated 3D-TOE quantitative software and manual analysis vs. multislice computed tomography (MSCT) ones. One hundred and seventy-five patients (mean age 81.3 ± 6.3 years, 77 men) who underwent both MSCT and 3D-TOE for annulus assessment before transcatheter aortic valve implantation were analysed. Hypothetical prosthetic valve sizing was evaluated using the 3D manual and semi-automated measurements, with the manufacturer-recommended CT-based sizing algorithm as the gold standard. Good correlation between both 3D-TOE methods and MSCT measurements was found, but the semi-automated analysis demonstrated slightly better correlations for AA major diameter (r = 0.89), perimeter (r = 0.89), and area (r = 0.85) (all P < 0.0001) than the manual one. Both 3D methods underestimated the MSCT measurements, but semi-automated measurements showed narrower limits of agreement and less bias than manual measurements for most AA parameters. On average, 3D-TOE semi-automated major diameter, area, and perimeter underestimated the respective MSCT measurements by 7.4%, 3.5%, and 4.4%, respectively, whereas minor diameter was overestimated by 0.3%. Moderate agreement for valve sizing was found for both 3D-TOE techniques: kappa 0.5 for both semi-automated and manual analyses. Interobserver and intraobserver agreements for the AA measurements were excellent for both techniques (intraclass correlation coefficients for all parameters >0.80). The 3D-TOE semi-automated analysis of AA is feasible and reliable and can be used in clinical practice as an alternative to MSCT for AA assessment. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author(s) 2018.

  5. Interactive 3D visualization for theoretical virtual observatories

    NASA Astrophysics Data System (ADS)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  6. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. 
The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer, which should provide numerous benefits over traditional 2D display and interaction modalities. In our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.

  7. Verifying reddening and extinction for Gaia DR1 TGAS giants

    NASA Astrophysics Data System (ADS)

    Gontcharov, George A.; Mosenkov, Aleksandr V.

    2018-03-01

    Gaia DR1 Tycho-Gaia Astrometric Solution parallaxes, Tycho-2 photometry, and reddening/extinction estimates from nine data sources for 38 074 giants within 415 pc from the Sun are used to compare their position in the Hertzsprung-Russell diagram with theoretical estimates, which are based on the PARSEC and MIST isochrones and the TRILEGAL model of the Galaxy with its parameters being widely varied. We conclude that (1) some systematic errors of the reddening/extinction estimates are the main uncertainty in this study; (2) any emission-based 2D reddening map cannot give reliable estimates of reddening within 415 pc due to a complex distribution of dust; (3) if a TRILEGAL's set of the parameters of the Galaxy is reliable and if the solar metallicity is Z < 0.021, then the reddening at high Galactic latitudes behind the dust layer is underestimated by all 2D reddening maps based on the dust emission observations of IRAS, COBE, and Planck and by their 3D followers (we also discuss some explanations of this underestimation); (4) the reddening/extinction estimates from recent 3D reddening map by Gontcharov, including the median reddening E(B - V) = 0.06 mag at |b| > 50°, give the best fit of the empirical and theoretical data with each other.

  8. 3D Visualization for Planetary Missions

    NASA Astrophysics Data System (ADS)

    DeWolfe, A. W.; Larsen, K.; Brain, D.

    2018-04-01

    We have developed visualization tools for viewing planetary orbiters and science data in 3D for both Earth and Mars, using the Cesium Javascript library, allowing viewers to visualize the position and orientation of spacecraft and science data.

  9. VPython: Writing Real-time 3D Physics Programs

    NASA Astrophysics Data System (ADS)

    Chabay, Ruth

    2001-06-01

    VPython (http://cil.andrew.cmu.edu/projects/visual) combines the Python programming language with an innovative 3D graphics module called Visual, developed by David Scherer. Designed to make 3D physics simulations accessible to novice programmers, VPython allows the programmer to write a purely computational program without any graphics code, and produces an interactive realtime 3D graphical display. In a program 3D objects are created and their positions modified by computational algorithms. Running in a separate thread, the Visual module monitors the positions of these objects and renders them many times per second. Using the mouse, one can zoom and rotate to navigate through the scene. After one hour of instruction, students in an introductory physics course at Carnegie Mellon University, including those who have never programmed before, write programs in VPython to model the behavior of physical systems and to visualize fields in 3D. The Numeric array processing module allows the construction of more sophisticated simulations and models as well. VPython is free and open source. The Visual module is based on OpenGL, and runs on Windows, Linux, and Macintosh.
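The programming style described above, a purely computational loop with rendering handled by the Visual module, looks like the sketch below. The update loop is plain Python and runs anywhere; the commented-out lines show where the VPython `sphere`, `vector`, and `rate` calls would go (their exact signatures vary between VPython versions, so treat them as illustrative):

```python
# A bouncing-ball model in the style students write in VPython: the
# program is pure computation; in VPython, a sphere object whose `pos`
# is updated each iteration is redrawn automatically by the Visual module.
# from vpython import sphere, vector, rate   # rendering imports (illustrative)

dt = 0.01          # time step, s
g = -9.8           # gravitational acceleration, m/s^2
pos_y = 2.0        # height, m
vel_y = 0.0        # vertical velocity, m/s

# ball = sphere(pos=vector(0, pos_y, 0), radius=0.1)   # VPython object
for _ in range(1000):
    # rate(100)                      # VPython: cap the loop at 100 iterations/s
    vel_y += g * dt                  # semi-implicit Euler: update velocity
    pos_y += vel_y * dt              # then update position
    if pos_y < 0.1:                  # elastic bounce off the floor
        vel_y = abs(vel_y)
    # ball.pos.y = pos_y             # Visual module re-renders the scene
```

With the rendering lines uncommented under a VPython installation, the same loop produces an interactive, real-time 3D animation that can be zoomed and rotated with the mouse, exactly as the abstract describes.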

  10. Delta: a new web-based 3D genome visualization and analysis platform.

    PubMed

    Tang, Bixia; Li, Feifei; Li, Jing; Zhao, Wenming; Zhang, Zhihua

    2018-04-15

    Delta is an integrative visualization and analysis platform to facilitate visually annotating and exploring the 3D physical architecture of genomes. Delta takes a Hi-C or ChIA-PET contact matrix as input and predicts the topologically associating domains and chromatin loops in the genome. It then generates a physical 3D model which represents the plausible consensus 3D structure of the genome. Delta features a highly interactive visualization tool which enhances the integration of genome topology/physical structure with extensive genome annotation by juxtaposing the 3D model with diverse genomic assay outputs. Finally, by visually comparing the 3D model of the β-globin gene locus and its annotation, we speculated a plausible transitory interaction pattern in the locus. Experimental evidence supporting this speculation was found through a literature survey. This served as an example of intuitive hypothesis testing with the help of Delta. Delta is freely accessible from http://delta.big.ac.cn, and the source code is available at https://github.com/zhangzhwlab/delta. zhangzhihua@big.ac.cn. Supplementary data are available at Bioinformatics online.
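The abstract does not spell out Delta's TAD-calling algorithm; one common baseline for locating domain boundaries in a contact matrix is the insulation score, sketched here on a toy matrix (the window size and contact values are illustrative, not Delta's actual method or parameters):

```python
import numpy as np

def insulation_score(m, w):
    """Mean contact frequency in the w-by-w square spanning each bin i
    (upstream rows x downstream columns); local minima mark candidate
    TAD boundaries, where few contacts cross the bin."""
    n = m.shape[0]
    return {i: m[i - w:i, i:i + w].mean() for i in range(w, n - w)}

# Toy Hi-C matrix: two domains of 10 bins each, with strong intra-domain
# contacts (1.0) and weak inter-domain contacts (0.1).
n = 20
contacts = np.full((n, n), 0.1)
contacts[:10, :10] = 1.0
contacts[10:, 10:] = 1.0

scores = insulation_score(contacts, w=3)
boundary = min(scores, key=scores.get)   # bin with the least cross-contact
```

On this synthetic matrix the minimum falls at bin 10, the seam between the two blocks; on real Hi-C data the same idea is applied after normalization, and all local minima below a threshold are reported.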

  11. 3D movies for teaching seafloor bathymetry, plate tectonics, and ocean circulation in large undergraduate classes

    NASA Astrophysics Data System (ADS)

    Peterson, C. D.; Lisiecki, L. E.; Gebbie, G.; Hamann, B.; Kellogg, L. H.; Kreylos, O.; Kronenberger, M.; Spero, H. J.; Streletz, G. J.; Weber, C.

    2015-12-01

    Geologic problems and datasets are often 3D or 4D in nature, yet projected onto a 2D surface such as a piece of paper or a projection screen. Reducing the dimensionality of data forces the reader to "fill in" that collapsed dimension in their minds, creating a cognitive challenge for the reader, especially new learners. Scientists and students can visualize and manipulate 3D datasets using the virtual reality software developed for the immersive, real-time interactive 3D environment at the KeckCAVES at UC Davis. The 3DVisualizer software (Billen et al., 2008) can also operate on a desktop machine to produce interactive 3D maps of earthquake epicenter locations and 3D bathymetric maps of the seafloor. With 3D projections of seafloor bathymetry and ocean circulation proxy datasets in a virtual reality environment, we can create visualizations of carbon isotope (δ13C) records for academic research and to aid in demonstrating thermohaline circulation in the classroom. Additionally, 3D visualization of seafloor bathymetry allows students to see features of seafloor most people cannot observe first-hand. To enhance lessons on mid-ocean ridges and ocean basin genesis, we have created movies of seafloor bathymetry for a large-enrollment undergraduate-level class, Introduction to Oceanography. In the past four quarters, students have enjoyed watching 3D movies, and in the fall quarter (2015), we will assess how well 3D movies enhance learning. The class will be split into two groups, one who learns about the Mid-Atlantic Ridge from diagrams and lecture, and the other who learns with a supplemental 3D visualization. Both groups will be asked "what does the seafloor look like?" before and after the Mid-Atlantic Ridge lesson. Then the whole class will watch the 3D movie and respond to an additional question, "did the 3D visualization enhance your understanding of the Mid-Atlantic Ridge?" with the opportunity to further elaborate on the effectiveness of the visualization.

  12. Stereoscopic display of 3D models for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2006-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinkerhoff (PB) is a transportation consulting firm that has used digital visualization tools since their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients, and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  13. Volumetric three-dimensional intravascular ultrasound visualization using shape-based nonlinear interpolation

    PubMed Central

    2013-01-01

    Background Intravascular ultrasound (IVUS) is a standard imaging modality for identification of plaque formation in the coronary and peripheral arteries. Volumetric three-dimensional (3D) IVUS visualization provides a powerful tool to overcome the limited comprehensive information of 2D IVUS in terms of complex spatial distribution of arterial morphology and acoustic backscatter information. Conventional 3D IVUS techniques provide sub-optimal visualization of arterial morphology or lack acoustic information concerning arterial structure due in part to low quality of image data and the use of pixel-based IVUS image reconstruction algorithms. In the present study, we describe a novel volumetric 3D IVUS reconstruction algorithm to utilize IVUS signal data and a shape-based nonlinear interpolation. Methods We developed an algorithm to convert a series of IVUS signal data into a fully volumetric 3D visualization. Intermediary slices between original 2D IVUS slices were generated utilizing the natural cubic spline interpolation to consider the nonlinearity of both vascular structure geometry and acoustic backscatter in the arterial wall. We evaluated differences in image quality between the conventional pixel-based interpolation and the shape-based nonlinear interpolation methods using both virtual vascular phantom data and in vivo IVUS data of a porcine femoral artery. Volumetric 3D IVUS images of the arterial segment reconstructed using the two interpolation methods were compared. Results In vitro validation and in vivo comparative studies with the conventional pixel-based interpolation method demonstrated more robustness of the shape-based nonlinear interpolation algorithm in determining intermediary 2D IVUS slices. Our shape-based nonlinear interpolation demonstrated improved volumetric 3D visualization of the in vivo arterial structure and more realistic acoustic backscatter distribution compared to the conventional pixel-based interpolation method. 
Conclusions This novel 3D IVUS visualization strategy has the potential to improve ultrasound imaging of vascular structure information, particularly atheroma determination. Improved volumetric 3D visualization with accurate acoustic backscatter information can help with ultrasound molecular imaging of atheroma component distribution. PMID:23651569
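    The reconstruction above hinges on natural cubic spline interpolation to generate intermediary slices between acquired 2D IVUS frames. As a rough, self-contained sketch of that 1-D building block (the function name and the idea of interpolating each boundary sample independently along the pullback axis are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def natural_cubic_spline(x, y, xq):
    """Evaluate the natural cubic spline through (x, y) at query points xq.

    'Natural' boundary conditions: the second derivative is zero at both
    ends, giving a smooth nonlinear curve between the knots.
    """
    x, y, xq = np.asarray(x, float), np.asarray(y, float), np.asarray(xq, float)
    n = len(x)
    h = np.diff(x)
    # Tridiagonal system for the second derivatives m at the knots.
    A = np.zeros((n, n))
    b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0          # natural boundary conditions
    for i in range(1, n - 1):
        A[i, i - 1] = h[i - 1]
        A[i, i] = 2.0 * (h[i - 1] + h[i])
        A[i, i + 1] = h[i]
        b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    m = np.linalg.solve(A, b)
    # Evaluate the piecewise cubic on the interval containing each query.
    i = np.clip(np.searchsorted(x, xq) - 1, 0, n - 2)
    t0, t1 = x[i + 1] - xq, xq - x[i]
    return (m[i] * t0 ** 3 + m[i + 1] * t1 ** 3) / (6 * h[i]) \
        + (y[i] / h[i] - m[i] * h[i] / 6) * t0 \
        + (y[i + 1] / h[i] - m[i + 1] * h[i] / 6) * t1

# Hypothetical lumen radius (mm) at four acquired slice positions; an
# intermediary slice halfway between the second and third is interpolated.
z = [0.0, 1.0, 2.0, 3.0]
r = [2.0, 2.4, 3.1, 2.9]
r_mid = natural_cubic_spline(z, r, [1.5])
```

    In the volumetric setting this evaluation would be repeated for every boundary point (or signal sample) along the pullback direction, rather than pixel-by-pixel blending of adjacent frames.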

  14. Do we overestimate left ventricular ejection fraction by two-dimensional echocardiography in patients with left bundle branch block?

    PubMed

    Cabuk, Ali K; Cabuk, Gizem; Sayin, Ahmet; Karamanlioglu, Murat; Kilicaslan, Barış; Ekmekci, Cenk; Solmaz, Hatice; Aslanturk, Omer F; Ozdogan, Oner

    2018-02-01

    Left bundle branch block (LBBB) causes a dyssynchronized contraction of left ventricle. This is a kind of regional wall-motion abnormality and measuring left ventricular ejection fraction (LVEF) by two-dimensional (2D) echocardiography could be less reliable in this particular condition. Our aim was to evaluate the role of dyssynchrony index (SDI), measured by three-dimensional (3D) echocardiography, in assessment of LVEF and left ventricular volumes accurately in patients with LBBB. In this case-control study, we included 52 of 64 enrolled participants (twelve participants with poor image quality were excluded) with LBBB and normal LVEF or nonischemic cardiomyopathy. Left ventricular ejection fraction (LVEF) and left ventricular volumes were assessed by 2D (modified Simpson's rule) and 3D (four beats full volume analysis) echocardiography and the impact of SDI on results were evaluated. In patients with SDI ≥6%, LVEF measurements were significantly different (46.00% [29.50-52.50] vs 37.60% [24.70-45.15], P < .001) between 2D and 3D echocardiography, respectively. In patients with SDI < 6%, there were no significant differences between two modalities in terms of LVEF measurements (54.50% [49.00-59.00] vs 54.25% [40.00-58.25], P = .193). LV diastolic volumes were not significantly different while systolic volumes were underestimated by 2D echocardiography, and this finding was more pronounced when SDI ≥ 6%. In patients with LBBB and high SDI (≥6%), LVEF values were overestimated and systolic volumes were underestimated by 2D echocardiography compared to 3D echocardiography. © 2017 Wiley Periodicals, Inc.
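    The direction of the biases reported here follows from the ejection-fraction arithmetic: LVEF = (EDV − ESV)/EDV, so an underestimated end-systolic volume mechanically inflates LVEF. A toy illustration (the volumes are invented for the arithmetic, not taken from the study):

```python
def lvef_percent(edv_ml, esv_ml):
    """Left ventricular ejection fraction (%) from end-diastolic and
    end-systolic volumes in ml."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical dyssynchronous ventricle: true volumes 160/80 ml.
true_lvef = lvef_percent(160.0, 80.0)      # 50.0
# If 2D echo underestimates ESV by 15 ml, LVEF is overestimated.
biased_lvef = lvef_percent(160.0, 65.0)    # 59.375
```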

  15. Quantification of myocardial fibrosis by digital image analysis and interactive stereology

    PubMed Central

    2014-01-01

    Background Cardiac fibrosis disrupts the normal myocardial structure and has a direct impact on heart function and survival. Despite already available digital methods, the pathologist’s visual score is still widely considered as ground truth and used as a primary method in histomorphometric evaluations. The aim of this study was to compare the accuracy of digital image analysis tools and the pathologist’s visual scoring for evaluating fibrosis in human myocardial biopsies, based on reference data obtained by point counting performed on the same images. Methods Endomyocardial biopsy material from 38 patients diagnosed with inflammatory dilated cardiomyopathy was used. The extent of total cardiac fibrosis was assessed by image analysis on Masson’s trichrome-stained tissue specimens using automated Colocalization and Genie software, by Stereology grid count and manually by Pathologist’s visual score. Results A total of 116 slides were analyzed. The mean results obtained by the Colocalization software (13.72 ± 12.24%) were closest to the reference value of stereology (RVS), while the Genie software and Pathologist score gave a slight underestimation. RVS values correlated strongly with values obtained using the Colocalization and Genie (r > 0.9, p < 0.001) software as well as the pathologist visual score. Differences in fibrosis quantification by Colocalization and RVS were statistically insignificant. However, significant bias was found in the results obtained by using Genie versus RVS and pathologist score versus RVS with mean difference values of: -1.61% and 2.24%. Bland-Altman plots showed a bidirectional bias dependent on the magnitude of the measurement: Colocalization software overestimated the area fraction of fibrosis in the lower end, and underestimated in the higher end of the RVS values. Meanwhile, Genie software as well as the pathologist score showed more uniform results throughout the values, with a slight underestimation in the mid-range for both. 
Conclusion Both applied digital image analysis methods revealed almost perfect correlation with the criterion standard obtained by stereology grid count and, in terms of accuracy, outperformed the pathologist’s visual score. Genie algorithm proved to be the method of choice with the only drawback of a slight underestimation bias, which is considered acceptable for both clinical and research evaluations. Virtual slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/9857909611227193 PMID:24912374

  16. Quantification of myocardial fibrosis by digital image analysis and interactive stereology.

    PubMed

    Daunoravicius, Dainius; Besusparis, Justinas; Zurauskas, Edvardas; Laurinaviciene, Aida; Bironaite, Daiva; Pankuweit, Sabine; Plancoulaine, Benoit; Herlin, Paulette; Bogomolovas, Julius; Grabauskiene, Virginija; Laurinavicius, Arvydas

    2014-06-09

    Cardiac fibrosis disrupts the normal myocardial structure and has a direct impact on heart function and survival. Despite already available digital methods, the pathologist's visual score is still widely considered as ground truth and used as a primary method in histomorphometric evaluations. The aim of this study was to compare the accuracy of digital image analysis tools and the pathologist's visual scoring for evaluating fibrosis in human myocardial biopsies, based on reference data obtained by point counting performed on the same images. Endomyocardial biopsy material from 38 patients diagnosed with inflammatory dilated cardiomyopathy was used. The extent of total cardiac fibrosis was assessed by image analysis on Masson's trichrome-stained tissue specimens using automated Colocalization and Genie software, by Stereology grid count and manually by Pathologist's visual score. A total of 116 slides were analyzed. The mean results obtained by the Colocalization software (13.72 ± 12.24%) were closest to the reference value of stereology (RVS), while the Genie software and Pathologist score gave a slight underestimation. RVS values correlated strongly with values obtained using the Colocalization and Genie (r>0.9, p<0.001) software as well as the pathologist visual score. Differences in fibrosis quantification by Colocalization and RVS were statistically insignificant. However, significant bias was found in the results obtained by using Genie versus RVS and pathologist score versus RVS with mean difference values of: -1.61% and 2.24%. Bland-Altman plots showed a bidirectional bias dependent on the magnitude of the measurement: Colocalization software overestimated the area fraction of fibrosis in the lower end, and underestimated in the higher end of the RVS values. Meanwhile, Genie software as well as the pathologist score showed more uniform results throughout the values, with a slight underestimation in the mid-range for both. 
Both applied digital image analysis methods revealed almost perfect correlation with the criterion standard obtained by stereology grid count and, in terms of accuracy, outperformed the pathologist's visual score. Genie algorithm proved to be the method of choice with the only drawback of a slight underestimation bias, which is considered acceptable for both clinical and research evaluations. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/9857909611227193.
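    The Bland-Altman analysis used in both versions of this record reduces to the mean difference (bias) between paired measurements and its 95% limits of agreement. A minimal sketch of that computation (the function name is ours):

```python
import numpy as np

def bland_altman(method, reference):
    """Bias (mean difference) and 95% limits of agreement for paired data."""
    d = np.asarray(method, float) - np.asarray(reference, float)
    bias = d.mean()
    sd = d.std(ddof=1)                      # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical fibrosis fractions (%): an automated score vs. stereology.
bias, (lo, hi) = bland_altman([12.0, 18.5, 25.0], [13.6, 18.0, 23.4])
```

    Plotting the differences against the pairwise means is what reveals the magnitude-dependent bias described above (overestimation at the low end, underestimation at the high end).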

  17. Reliability of visual and instrumental color matching.

    PubMed

    Igiel, Christopher; Lehmann, Karl Martin; Ghinea, Razvan; Weyhrauch, Michael; Hangx, Ysbrand; Scheller, Herbert; Paravina, Rade D

    2017-09-01

    The aim of this investigation was to evaluate intra-rater and inter-rater reliability of visual and instrumental shade matching. Forty individuals with normal color perception participated in this study. The right maxillary central incisor of a teaching model was prepared and restored with 10 feldspathic all-ceramic crowns of different shades. A shade matching session consisted of the observer (rater) visually selecting the best match by using VITA classical A1-D4 (VC) and VITA Toothguide 3D Master (3D) shade guides and the VITA Easyshade Advance intraoral spectrophotometer (ES) to obtain both VC and 3D matches. Three shade matching sessions were held with 4 to 6 weeks between sessions. Intra-rater reliability was assessed based on the percentage of agreement for the three sessions for the same observer, whereas the inter-rater reliability was calculated as mean percentage of agreement between different observers. The Fleiss' Kappa statistical analysis was used to evaluate visual inter-rater reliability. The mean intra-rater reliability for the visual shade selection was 64(11) for VC and 48(10) for 3D. The corresponding ES values were 96(4) for both VC and 3D. The percentages of observers who matched the same shade with VC and 3D were 55(10) and 43(12), respectively, while corresponding ES values were 88(8) for VC and 92(4) for 3D. The results for visual shade matching exhibited a high to moderate level of inconsistency for both intra-rater and inter-rater comparisons. The VITA Easyshade Advance intraoral spectrophotometer exhibited significantly better reliability compared with visual shade selection. This study evaluates the ability of observers to consistently match the same shade visually and with a dental spectrophotometer in different sessions. 
    The intra-rater and inter-rater reliability (agreement of repeated shade matching) of visual and instrumental tooth color matching strongly suggests the use of color matching instruments as a supplementary tool in everyday dental practice to enhance the esthetic outcome. © 2017 Wiley Periodicals, Inc.
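    The intra- and inter-rater percentages above amount to pairwise agreement: the share of crowns matched to the identical shade tab across pairs of sessions or observers. A simplified sketch with hypothetical shade labels (Fleiss' kappa, used in the study for inter-rater analysis, additionally corrects this raw agreement for chance):

```python
from itertools import combinations

def mean_pairwise_agreement(ratings):
    """Mean percentage of items given an identical shade across all pairs of
    sessions (intra-rater) or observers (inter-rater).

    `ratings` is a list of equal-length lists of shade labels.
    """
    per_pair = [100.0 * sum(a == b for a, b in zip(r1, r2)) / len(r1)
                for r1, r2 in combinations(ratings, 2)]
    return sum(per_pair) / len(per_pair)

# Three shade-matching sessions by one observer on three crowns.
sessions = [["A1", "B2", "C3"],
            ["A1", "B2", "D4"],
            ["A1", "B2", "C3"]]
intra_rater = mean_pairwise_agreement(sessions)
```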

  18. Differential patterns of 2D location versus depth decoding along the visual hierarchy.

    PubMed

    Finlayson, Nonie J; Zhang, Xiaoli; Golomb, Julie D

    2017-02-15

    Visual information is initially represented as 2D images on the retina, but our brains are able to transform this input to perceive our rich 3D environment. While many studies have explored 2D spatial representations or depth perception in isolation, it remains unknown if or how these processes interact in human visual cortex. Here we used functional MRI and multi-voxel pattern analysis to investigate the relationship between 2D location and position-in-depth information. We stimulated different 3D locations in a blocked design: each location was defined by horizontal, vertical, and depth position. Participants remained fixated at the center of the screen while passively viewing the peripheral stimuli with red/green anaglyph glasses. Our results revealed a widespread, systematic transition throughout visual cortex. As expected, 2D location information (horizontal and vertical) could be strongly decoded in early visual areas, with reduced decoding higher along the visual hierarchy, consistent with known changes in receptive field sizes. Critically, we found that the decoding of position-in-depth information tracked inversely with the 2D location pattern, with the magnitude of depth decoding gradually increasing from intermediate to higher visual and category regions. Representations of 2D location information became increasingly location-tolerant in later areas, where depth information was also tolerant to changes in 2D location. We propose that spatial representations gradually transition from 2D-dominant to balanced 3D (2D and depth) along the visual hierarchy. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. Intuitive presentation of clinical forensic data using anonymous and person-specific 3D reference manikins.

    PubMed

    Urschler, Martin; Höller, Johannes; Bornik, Alexander; Paul, Tobias; Giretzlehner, Michael; Bischof, Horst; Yen, Kathrin; Scheurer, Eva

    2014-08-01

    The increasing use of CT/MR devices in forensic analysis motivates the need to present forensic findings from different sources in an intuitive reference visualization, with the aim of combining 3D volumetric images along with digital photographs of external findings into a 3D computer graphics model. This model allows a comprehensive presentation of forensic findings in court and enables comparative evaluation studies correlating data sources. The goal of this work was to investigate different methods to generate anonymous and patient-specific 3D models which may be used as reference visualizations. The issue of registering 3D volumetric as well as 2D photographic data to such 3D models is addressed to provide an intuitive context for injury documentation from arbitrary modalities. We present an image processing and visualization work-flow, discuss the major parts of this work-flow, compare the different investigated reference models, and show a number of case studies that underline the suitability of the proposed work-flow for presenting forensically relevant information in 3D visualizations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  20. Self-estimation of physical ability in stepping over an obstacle is not mediated by visual height perception: a comparison between young and older adults.

    PubMed

    Sakurai, Ryota; Fujiwara, Yoshinori; Ishihara, Masami; Yasunaga, Masashi; Ogawa, Susumu; Suzuki, Hiroyuki; Imanaka, Kuniyasu

    2017-07-01

    Older adults tend to overestimate their step-over ability. However, it is unclear as to whether this is caused by inaccurate self-estimation of physical ability or inaccurate perception of height. We, therefore, measured both visual height perception ability and self-estimation of step-over ability among young and older adults. Forty-seven older and 16 young adults performed a height perception test (HPT) and a step-over test (SOT). Participants visually judged the height of vertical bars from distances of 7 and 1 m away in the HPT, then self-estimated and, subsequently, actually performed a step-over action in the SOT. The results showed no significant difference between young and older adults in visual height perception. In the SOT, young adults tended to underestimate their step-over ability, whereas older adults either overestimated their abilities or underestimated them to a lesser extent than did the young adults. Moreover, visual height perception was not correlated with the self-estimation of step-over ability in both young and older adults. These results suggest that the self-overestimation of step-over ability which appeared in some healthy older adults may not be caused by the nature of visual height perception, but by other factor(s), such as the likely age-related nature of self-estimation of physical ability, per se.

  1. User Centered, Application Independent Visualization of National Airspace Data

    NASA Technical Reports Server (NTRS)

    Murphy, James R.; Hinton, Susan E.

    2011-01-01

    This paper describes an application-independent software tool, IV4D, built to visualize animated and still 3D National Airspace System (NAS) data specifically for aeronautics engineers who research aggregate, as well as single, flight efficiencies and behavior. IV4D was originally developed in a joint effort between the National Aeronautics and Space Administration (NASA) and the Air Force Research Laboratory (AFRL) to support the visualization of air traffic data from the Airspace Concept Evaluation System (ACES) simulation program. The three main challenges tackled by IV4D developers were: 1) determining how to distill multiple NASA data formats into a few minimal dataset types; 2) creating an environment, consisting of a user interface, heuristic algorithms, and retained metadata, that facilitates easy setup and fast visualization; and 3) maximizing the user's ability to utilize the extended range of visualization available with AFRL's existing 3D technologies. IV4D is currently being used by air traffic management researchers at NASA's Ames and Langley Research Centers to support data visualizations.

  2. Visualization Improves Supraclavicular Access to the Subclavian Vein in a Mixed Reality Simulator.

    PubMed

    Sappenfield, Joshua Warren; Smith, William Brit; Cooper, Lou Ann; Lizdas, David; Gonsalves, Drew B; Gravenstein, Nikolaus; Lampotang, Samsun; Robinson, Albert R

    2018-07-01

    We investigated whether visual augmentation (3D, real-time, color visualization) of a procedural simulator improved performance during training in the supraclavicular approach to the subclavian vein, not as widely known or used as its infraclavicular counterpart. To train anesthesiology residents to access a central vein, a mixed reality simulator with emulated ultrasound imaging was created using an anatomically authentic, 3D-printed, physical mannequin based on a computed tomographic scan of an actual human. The simulator has a corresponding 3D virtual model of the neck and upper chest anatomy. Hand-held instruments such as a needle, an ultrasound probe, and a virtual camera controller are directly manipulated by the trainee and tracked and recorded with submillimeter resolution via miniature, 6 degrees of freedom magnetic sensors. After Institutional Review Board approval, 69 anesthesiology residents and faculty were enrolled and received scripted instructions on how to perform subclavian venous access using the supraclavicular approach based on anatomic landmarks. The volunteers were randomized into 2 cohorts. The first used real-time 3D visualization concurrently with trial 1, but not during trial 2. The second did not use real-time 3D visualization concurrently with trial 1 or 2. However, after trial 2, they observed a 3D visualization playback of trial 2 before performing trial 3 without visualization. An automated scoring system based on time, success, and errors/complications generated objective performance scores. Nonparametric statistical methods were used to compare the scores between subsequent trials, differences between groups (real-time visualization versus no visualization versus delayed visualization), and improvement in scores between trials within groups. 
Although the real-time visualization group demonstrated significantly better performance than the delayed visualization group on trial 1 (P = .01), there was no difference in gain scores, between performance on the first trial and performance on the final trial, that were dependent on group (P = .13). In the delayed visualization group, the difference in performance between trial 1 and trial 2 was not significant (P = .09); reviewing performance on trial 2 before trial 3 resulted in improved performance when compared to trial 1 (P < .0001). There was no significant difference in median scores (P = .13) between the real-time visualization and delayed visualization groups for the last trial after both groups had received visualization. Participants reported a significant improvement in confidence in performing supraclavicular access to the subclavian vein. Standard deviations of scores, a measure of performance variability, decreased in the delayed visualization group after viewing the visualization. Real-time visual augmentation (3D visualization) in the mixed reality simulator improved performance during supraclavicular access to the subclavian vein. No difference was seen in the final trial of the group that received real-time visualization compared to the group that had delayed visualization playback of their prior attempt. Training with the mixed reality simulator improved participant confidence in performing an unfamiliar technique.

  3. Denoising imaging polarimetry by adapted BM3D method.

    PubMed

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
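    The degree of polarization that PBM3D helps stabilize is conventionally computed from Stokes parameters estimated from intensities behind four linear-polarizer orientations. A standard sketch of that reduction (the variable names are illustrative):

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135):
    """DoLP from intensity images taken behind 0/45/90/135-degree polarizers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # diagonal components
    return np.sqrt(s1 ** 2 + s2 ** 2) / s0
```

    Because DoLP is a ratio of noisy differences to a noisy sum, denoising the intensity channels first (as PBM3D does) directly improves the accuracy of the computed degree of polarization.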

  4. Quantification of functional mitral regurgitation by real-time 3D echocardiography: comparison with 3D velocity-encoded cardiac magnetic resonance.

    PubMed

    Marsan, Nina Ajmone; Westenberg, Jos J M; Ypenburg, Claudia; Delgado, Victoria; van Bommel, Rutger J; Roes, Stijntje D; Nucifora, Gaetano; van der Geest, Rob J; de Roos, Albert; Reiber, Johan C; Schalij, Martin J; Bax, Jeroen J

    2009-11-01

    The aim of this study was to evaluate feasibility and accuracy of real-time 3-dimensional (3D) echocardiography for quantification of mitral regurgitation (MR), in a head-to-head comparison with velocity-encoded cardiac magnetic resonance (VE-CMR). Accurate grading of MR severity is crucial for appropriate patient management but remains challenging. VE-CMR with 3D three-directional acquisition has been recently proposed as the reference method. A total of 64 patients with functional MR were included. A VE-CMR acquisition was applied to quantify mitral regurgitant volume (Rvol). Color Doppler 3D echocardiography was applied for direct measurement, in "en face" view, of mitral effective regurgitant orifice area (EROA); Rvol was subsequently calculated as EROA multiplied by the velocity-time integral of the regurgitant jet on the continuous-wave Doppler. To assess the relative potential error of the conventional approach, color Doppler 2-dimensional (2D) echocardiography was performed: vena contracta width was measured in the 4-chamber view and EROA calculated as circular (EROA-4CH); EROA was also calculated as elliptical (EROA-elliptical), measuring vena contracta also in the 2-chamber view. From these 2D measurements of EROA, the Rvols were also calculated. The EROA measured by 3D echocardiography was significantly higher than EROA-4CH (p < 0.001) and EROA-elliptical (p < 0.001), with a significant bias between these measurements (0.10 cm(2) and 0.06 cm(2), respectively). Rvol measured by 3D echocardiography showed excellent correlation with Rvol measured by CMR (r = 0.94), without a significant difference between these techniques (mean difference = -0.08 ml/beat). Conversely, 2D echocardiographic approach from the 4-chamber view significantly underestimated Rvol (p = 0.006) as compared with CMR (mean difference = 2.9 ml/beat). The 2D elliptical approach demonstrated a better agreement with CMR (mean difference = -1.6 ml/beat, p = 0.04). 
Quantification of EROA and Rvol of functional MR with 3D echocardiography is feasible and accurate as compared with VE-CMR; the currently recommended 2D echocardiographic approach significantly underestimates both EROA and Rvol.
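    The 2D approaches compared above reduce to simple geometry: the 4-chamber method models the orifice as a circle from one vena contracta width, the elliptical method uses two orthogonal widths, and in both cases Rvol = EROA × VTI of the regurgitant jet. A sketch of that arithmetic (the values below are invented for illustration):

```python
import math

def eroa_circular(vc_4ch_cm):
    """EROA (cm^2) modeling the orifice as a circle from the 4-chamber
    vena contracta width."""
    return math.pi * (vc_4ch_cm / 2.0) ** 2

def eroa_elliptical(vc_4ch_cm, vc_2ch_cm):
    """EROA (cm^2) modeling the orifice as an ellipse from two orthogonal
    vena contracta widths."""
    return math.pi * (vc_4ch_cm / 2.0) * (vc_2ch_cm / 2.0)

def regurgitant_volume_ml(eroa_cm2, vti_cm):
    """Rvol per beat (ml) = EROA x velocity-time integral of the jet (cm)."""
    return eroa_cm2 * vti_cm

# Hypothetical functional-MR orifice elongated along the coaptation line:
# narrow in the 4-chamber view, wide in the 2-chamber view.
rvol_circ = regurgitant_volume_ml(eroa_circular(0.6), 120.0)
rvol_ell = regurgitant_volume_ml(eroa_elliptical(0.6, 1.0), 120.0)
```

    With such an elongated orifice, the circular model built from the 4-chamber width alone yields a smaller EROA (and hence Rvol) than the elliptical model, matching the direction of the 2D underestimation reported above.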

  5. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data.

    PubMed

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-09-18

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  6. A low-latency, big database system and browser for storage, querying and visualization of 3D genomic data

    PubMed Central

    Butyaev, Alexander; Mavlyutov, Ruslan; Blanchette, Mathieu; Cudré-Mauroux, Philippe; Waldispühl, Jérôme

    2015-01-01

    Recent releases of genome three-dimensional (3D) structures have the potential to transform our understanding of genomes. Nonetheless, the storage technology and visualization tools need to evolve to offer to the scientific community fast and convenient access to these data. We introduce simultaneously a database system to store and query 3D genomic data (3DBG), and a 3D genome browser to visualize and explore 3D genome structures (3DGB). We benchmark 3DBG against state-of-the-art systems and demonstrate that it is faster than previous solutions, and importantly gracefully scales with the size of data. We also illustrate the usefulness of our 3D genome Web browser to explore human genome structures. The 3D genome browser is available at http://3dgb.cs.mcgill.ca/. PMID:25990738

  7. 3D Photographs in Cultural Heritage

    NASA Astrophysics Data System (ADS)

    Schuhr, W.; Lee, J. D.; Kiel, St.

    2013-07-01

    This paper on providing "oo-information" (objective object-information) on cultural monuments and sites, based on 3D photographs, is also a contribution of CIPA task group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate interest in 3D photography among scientists as well as amateurs, 3D masterpieces are presented. It is shown that, due to their high documentary value ("near reality"), 3D photographs support, e.g., the recording, visualization, interpretation, preservation and restoration of architectural and archaeological objects. This includes samples of excavation documentation and 3D coordinate calculation, and 3D photographs applied for virtual museum purposes, as educational tools and for spatial structure enhancement, which holds in particular for inscriptions and rock art. This paper is also an invitation to participate in a systematic survey of existing international archives of 3D photographs; in this respect, first results toward defining an optimum digitization rate for analog stereo views are reported. In addition to access to international archives of 3D photography, the available 3D photography data should appear in a global GIS (cloud) system, such as, e.g., Google Earth. This contribution also deals with exposing new 3D photographs to document monuments of importance for Cultural Heritage, including the use of 3D and single-lens cameras on a 10 m telescope staff for extremely low earth-based airborne 3D photography as well as for "underwater staff photography", and reports on the use of captive balloon and drone platforms for 3D photography in Cultural Heritage.
    We would like to emphasize that the still underestimated 3D effect on real objects even allows, e.g., the spatial perception of extremely small scratches as well as of nuances in color differences. Though 3D photographs are a well-established basic photographic and photogrammetric tool, they remain a matter of research and practical improvement: - For example, multistage concepts for 3D heritage photographs, e.g. combining before-and-after images and images taken with different focus, daytime etc., combining 3D imagery of different sensors, comparing 3D imagery with drawings, and even standards for exposing and processing 3D heritage photographs, are only some topics of recent research. - To advise on state-of-the-art 3D visualization methodology for Cultural Heritage purposes, an updated synoptic overview, without claiming completeness, will also be given. - 3D photographs should increasingly replace old-fashioned, subjectively interpreted manual drawings (in 2D only) of heritage monuments. - Currently we are witnessing early developments showing Cultural Heritage objects in 3D crystal as well as in 3D prints.

  8. Intelligent Visualization of Geo-Information on the Future Web

    NASA Astrophysics Data System (ADS)

    Slusallek, P.; Jochem, R.; Sons, K.; Hoffmann, H.

    2012-04-01

    Visualization is a key component of the "Observation Web" and will become even more important in the future as geo data becomes more widely accessible. The common statement that "Data that cannot be seen, does not exist" is especially true for non-experts, like most citizens. The Web provides the most interesting platform for making data easily and widely available. However, today's Web is not well suited for the interactive visualization and exploration that is often needed for geo data. Support for 3D data was added only recently and at an extremely low level (WebGL), but even the 2D visualization capabilities of HTML (e.g. images, canvas, SVG) are rather limited, especially regarding interactivity. We have developed XML3D as an extension to HTML-5. It allows for compactly describing 2D and 3D data directly as elements of an HTML-5 document. All graphics elements are part of the Document Object Model (DOM) and can be manipulated via the same set of DOM events and methods that millions of Web developers use on a daily basis. Thus, XML3D makes highly interactive 2D and 3D visualization easily usable, not only for geo data. XML3D is supported by any WebGL-capable browser but we also provide native implementations in Firefox and Chromium. As an example, we show how OpenStreetMap data can be mapped directly to XML3D and visualized interactively in any Web page. We show how this data can be easily augmented with additional data from the Web via a few lines of Javascript. We also show how embedded semantic data (via RDFa) allows for linking the visualization back to the data's origin, thus providing an immersive interface for interacting with and modifying the original data. XML3D is used as key input for standardization within the W3C Community Group on "Declarative 3D for the Web" chaired by the DFKI and has recently been selected as one of the Generic Enablers for the EU Future Internet initiative.

  9. Desktop Cloud Visualization: the new technology to remote access 3D interactive applications in the Cloud.

    PubMed

    Torterolo, Livia; Ruffino, Francesco

    2012-01-01

    In the proposed demonstration we will present DCV (Desktop Cloud Visualization): a unique technology that allows users to remotely access 2D and 3D interactive applications over a standard network. This allows geographically dispersed doctors to work collaboratively and to acquire anatomical or pathological images and visualize them for further investigations.

  10. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    PubMed Central

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-01-01

    Background Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine. PMID:17937818

  11. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data.

    PubMed

    Moore, Eider B; Poliakov, Andrew V; Lincoln, Peter; Brinkley, James F

    2007-10-15

    Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  12. Development of a Top-View Numeric Coding Teaching-Learning Trajectory within an Elementary Grades 3-D Visualization Design Research Project

    ERIC Educational Resources Information Center

    Sack, Jacqueline J.

    2013-01-01

    This article explicates the development of top-view numeric coding of 3-D cube structures within a design research project focused on 3-D visualization skills for elementary grades children. It describes children's conceptual development of 3-D cube structures using concrete models, conventional 2-D pictures and abstract top-view numeric…

  13. Toward the establishment of design guidelines for effective 3D perspective interfaces

    NASA Astrophysics Data System (ADS)

    Fitzhugh, Elisabeth; Dixon, Sharon; Aleva, Denise; Smith, Eric; Ghrayeb, Joseph; Douglas, Lisa

    2009-05-01

    The propagation of information operation technologies, with correspondingly vast amounts of complex network information to be conveyed, significantly impacts operator workload. Information management research is rife with efforts to develop schemes to aid operators to identify, review, organize, and retrieve the wealth of available data. Data may take on such distinct forms as intelligence libraries, logistics databases, operational environment models, or network topologies. Increased use of taxonomies and semantic technologies opens opportunities to employ network visualization as a display mechanism for diverse information aggregations. The broad applicability of network visualizations is still being tested, but in current usage, the complexity of densely populated abstract networks suggests the potential utility of 3D. Employment of 2.5D in network visualization, using classic perceptual cues, creates a 3D experience within a 2D medium. It is anticipated that use of 3D perspective (2.5D) will enhance user ability to visually inspect large, complex, multidimensional networks. Current research for 2.5D visualizations demonstrates that display attributes, including color, shape, size, lighting, atmospheric effects, and shadows, significantly impact operator experience. However, guidelines for utilization of attributes in display design are limited. This paper discusses pilot experimentation intended to identify potential problem areas arising from these cues and determine how best to optimize perceptual cue settings. Development of optimized design guidelines will ensure that future experiments, comparing network displays with other visualizations, are not confounded or impeded by suboptimal attribute characterization. Current experimentation is anticipated to support development of cost-effective, visually effective methods to implement 3D in military applications.

  14. A framework for breast cancer visualization using augmented reality x-ray vision technique in mobile technology

    NASA Astrophysics Data System (ADS)

    Rahman, Hameedur; Arshad, Haslina; Mahmud, Rozi; Mahayuddin, Zainal Rasyid

    2017-10-01

    The number of breast cancer patients who require breast biopsy has increased over the past years, and Augmented Reality guided core biopsy of the breast has become the method of choice for researchers. However, such cancer visualization has so far been limited to superimposing the 3D imaging data. In this paper, we introduce an Augmented Reality visualization framework that enables breast cancer biopsy image guidance by using an X-ray vision technique on a mobile display. This framework consists of four phases: it first acquires images from CT/MRI and processes the medical images into 3D slices; second, it refines these 3D grayscale slices into a 3D breast tumor model using a 3D reconstruction technique. In the visualization stage, this virtual 3D breast tumor model is enhanced using the X-ray vision technique to see through the skin of the phantom, and the final composition is displayed on a handheld device to optimize the accuracy of the visualization in six degrees of freedom. The framework is perceived as an improved visualization experience because the Augmented Reality X-ray vision allows direct understanding of the breast tumor beyond the visible surface and direct guidance towards accurate biopsy targets.

  15. Human factors guidelines for applications of 3D perspectives: a literature review

    NASA Astrophysics Data System (ADS)

    Dixon, Sharon; Fitzhugh, Elisabeth; Aleva, Denise

    2009-05-01

    Once considered too processing-intense for general utility, application of the third dimension to convey complex information is facilitated by the recent proliferation of technological advancements in computer processing, 3D displays, and 3D perspective (2.5D) renderings within a 2D medium. The profusion of complex and rapidly-changing dynamic information being conveyed in operational environments has elevated interest in possible military applications of 3D technologies. 3D can be a powerful mechanism for clearer information portrayal, facilitating rapid and accurate identification of key elements essential to mission performance and operator safety. However, implementation of 3D within legacy systems can be costly, making integration prohibitive. Therefore, identifying which tasks may benefit from 3D or 2.5D versus simple 2D visualizations is critical. Unfortunately, there is no "bible" of human factors guidelines for usability optimization of 2D, 2.5D, or 3D visualizations, nor for determining which display best serves a particular application. Establishing such guidelines would provide an invaluable tool for designers and operators. Defining issues common to each will enhance design effectiveness. This paper presents the results of an extensive review of open source literature addressing 3D information displays, with particular emphasis on comparison of true 3D with 2D and 2.5D representations and their utility for military tasks. Seventy-five papers are summarized, highlighting militarily relevant applications of 3D visualizations and 2.5D perspective renderings. Based on these findings, human factors guidelines for when and how to use these visualizations are discussed, along with recommendations for further research.

  16. The role of extra-foveal processing in 3D imaging

    NASA Astrophysics Data System (ADS)

    Eckstein, Miguel P.; Lago, Miguel A.; Abbey, Craig K.

    2017-03-01

    The field of medical image quality has relied on the assumption that metrics of image quality for simple visual detection tasks are a reliable proxy for the more clinically realistic visual search tasks. Rank order of signal detectability across conditions often generalizes from detection to search tasks. Here, we argue that search in 3D images represents a paradigm shift in medical imaging: radiologists typically cannot exhaustively scrutinize all regions of interest with the high-acuity fovea, requiring detection of signals with extra-foveal areas (visual periphery) of the human retina. We hypothesize that extra-foveal processing can alter the detectability of certain types of signals in medical images, with important implications for search in 3D medical images. We compare visual search of two different types of signals in 2D vs. 3D images. We show that a small microcalcification-like signal is more highly detectable than a larger mass-like signal in 2D search, but its detectability largely decreases (relative to the larger signal) in the 3D search task. Utilizing measurements of observer detectability as a function of retinal eccentricity and observer eye fixations, we can predict the pattern of results in the 2D and 3D search studies. Our findings: 1) suggest that observer performance findings with 2D search might not always generalize to 3D search; 2) motivate the development of a new family of model observers that take into account the inhomogeneous visual processing across the retina (foveated model observers).

  17. DataViewer3D: An Open-Source, Cross-Platform Multi-Modal Neuroimaging Data Visualization Tool

    PubMed Central

    Gouws, André; Woods, Will; Millman, Rebecca; Morland, Antony; Green, Gary

    2008-01-01

    Integration and display of results from multiple neuroimaging modalities [e.g. magnetic resonance imaging (MRI), magnetoencephalography, EEG] relies on display of a diverse range of data within a common, defined coordinate frame. DataViewer3D (DV3D) is a multi-modal imaging data visualization tool offering a cross-platform, open-source solution to simultaneous data overlay visualization requirements of imaging studies. While DV3D is primarily a visualization tool, the package allows an analysis approach where results from one imaging modality can guide comparative analysis of another modality in a single coordinate space. DV3D is built on Python, a dynamic object-oriented programming language with support for integration of modular toolkits, and development of cross-platform software for neuroimaging. DV3D harnesses the power of the Visualization Toolkit (VTK) for two-dimensional (2D) and 3D rendering, calling VTK's low level C++ functions from Python. Users interact with data via an intuitive interface that uses Python to bind wxWidgets, which in turn calls the user's operating system dialogs and graphical user interface tools. DV3D currently supports NIfTI-1, ANALYZE™ and DICOM formats for MRI data display (including statistical data overlay). Formats for other data types are supported. The modularity of DV3D and ease of use of Python allows rapid integration of additional format support and user development. DV3D has been tested on Mac OSX, RedHat Linux and Microsoft Windows XP. DV3D is offered for free download with an extensive set of tutorial resources and example data. PMID:19352444
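    DV3D's overlay of modalities hinges on expressing every dataset in one defined coordinate frame. The following stdlib-only Python sketch shows the underlying step: applying a NIfTI-style 4x4 voxel-to-world affine to voxel indices. The affine values here are invented for illustration; in practice they come from the image headers, and DV3D itself delegates rendering to VTK.

```python
# Sketch: mapping voxel indices into a shared world coordinate frame,
# the core step behind overlaying modalities in a single space.
# The 4x4 affine is a made-up NIfTI-style voxel-to-world matrix.

def apply_affine(affine, voxel):
    """Multiply a 4x4 affine by a homogeneous voxel coordinate (x, y, z, 1)."""
    x, y, z = voxel
    vec = (x, y, z, 1.0)
    return tuple(sum(affine[r][c] * vec[c] for c in range(4)) for r in range(3))

affine = [
    [2.0, 0.0, 0.0, -90.0],   # 2 mm isotropic voxels, origin shifted to -90 mm
    [0.0, 2.0, 0.0, -126.0],
    [0.0, 0.0, 2.0, -72.0],
    [0.0, 0.0, 0.0, 1.0],
]

world = apply_affine(affine, (45, 63, 36))
print(world)  # (0.0, 0.0, 0.0): this voxel sits at the world origin
```

Once every modality's voxels are pushed through its own affine, results from, say, MRI and MEG can be drawn in the same scene and compared point for point.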

  18. DspaceOgre 3D Graphics Visualization Tool

    NASA Technical Reports Server (NTRS)

    Jain, Abhinandan; Myin, Steven; Pomerantz, Marc I.

    2011-01-01

    This general-purpose 3D graphics visualization C++ tool is designed for visualization of simulation and analysis data for articulated mechanisms. Examples of such systems are vehicles, robotic arms, biomechanics models, and biomolecular structures. DspaceOgre builds upon the open-source Ogre3D graphics visualization library. It provides additional classes to support the management of complex scenes involving multiple viewpoints and different scene groups, and can be used as a remote graphics server. This software provides improved support for adding programs at the graphics processing unit (GPU) level for improved performance. It also improves upon the messaging interface it exposes for use as a visualization server.

  19. The Digital Space Shuttle, 3D Graphics, and Knowledge Management

    NASA Technical Reports Server (NTRS)

    Gomez, Julian E.; Keller, Paul J.

    2003-01-01

    The Digital Shuttle is a knowledge management project that seeks to define symbiotic relationships between 3D graphics and formal knowledge representations (ontologies). 3D graphics provides geometric and visual content, in 2D and 3D CAD forms, and the capability to display systems knowledge. Because the data is so heterogeneous, and the interrelated data structures are complex, 3D graphics combined with ontologies provides mechanisms for navigating the data and visualizing relationships.

  20. A linear model fails to predict orientation selectivity of cells in the cat visual cortex.

    PubMed Central

    Volgushev, M; Vidyasagar, T R; Pei, X

    1996-01-01

    1. Postsynaptic potentials (PSPs) evoked by visual stimulation in simple cells in the cat visual cortex were recorded using in vivo whole-cell technique. Responses to small spots of light presented at different positions over the receptive field and responses to elongated bars of different orientations centred on the receptive field were recorded. 2. To test whether a linear model can account for orientation selectivity of cortical neurones, responses to elongated bars were compared with responses predicted by a linear model from the receptive field map obtained from flashing spots. 3. The linear model faithfully predicted the preferred orientation, but not the degree of orientation selectivity or the sharpness of orientation tuning. The ratio of optimal to non-optimal responses was always underestimated by the model. 4. Thus non-linear mechanisms, which can include suppression of non-optimal responses and/or amplification of optimal responses, are involved in the generation of orientation selectivity in the primary visual cortex. PMID:8930828
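    The linear model tested above predicts the response to an elongated bar as the superposition of the responses to small spots at each position the bar covers. A toy Python sketch of that prediction follows; the receptive-field map and bar positions are invented for illustration, not the study's data.

```python
# Toy sketch of the linear prediction: the response to an elongated bar
# is modelled as the sum of responses to spots at each covered position.

rf = [  # invented PSP amplitude (mV) evoked by a spot at each grid position
    [0.1, 0.8, 0.1],
    [0.2, 1.0, 0.2],
    [0.1, 0.9, 0.1],
]

def predicted_bar_response(rf_map, cells):
    """Linear superposition: sum the spot responses under the bar."""
    return sum(rf_map[r][c] for r, c in cells)

vertical = [(0, 1), (1, 1), (2, 1)]    # bar along the RF's elongated axis
horizontal = [(1, 0), (1, 1), (1, 2)]  # orthogonal bar

v = predicted_bar_response(rf, vertical)    # 2.7 mV
h = predicted_bar_response(rf, horizontal)  # 1.4 mV
print(v / h)  # the model's optimal/non-optimal ratio, ~1.93
```

On such a map the linear prediction already favors the optimally oriented bar, but the study's point is that measured optimal/non-optimal ratios exceed what superposition alone predicts, implicating nonlinear suppression and/or amplification.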

  1. Comparing surgically induced astigmatism calculated by means of simulated keratometry versus total corneal refractive power.

    PubMed

    Garzón, Nuria; Rodríguez-Vallejo, Manuel; Carmona, David; Calvo-Sanz, Jorge A; Poyales, Francisco; Palomino, Carlos; Zato-Gómez de Liaño, Miguel Á; Fernández, Joaquín

    2018-03-01

    To evaluate surgically induced astigmatism as computed by means of either simulated keratometry (KSIM) or total corneal refractive power (TCRP) after temporal incisions. Prospective observational study including 36 right eyes undergoing cataract surgery. Astigmatism was measured preoperatively and during the 3-month follow-up period using Pentacam. Surgically induced astigmatism was computed considering anterior corneal surface astigmatism at 3 mm with KSIM and considering both corneal surfaces with TCRP from 1 to 8 mm (TCRP3 for 3 mm). The eyes under study were divided into two balanced groups: LOW with KSIM astigmatism <0.90 D and HIGH with KSIM astigmatism ≥0.90 D. Resulting surgically induced astigmatism values were compared across groups and measuring techniques by means of flattening, steepening, and torque analysis. Mean surgically induced astigmatism was higher in the HIGH group (0.31 D @ 102°) than in the LOW group (0.04 D @ 16°). The temporal incision resulted in a steepening in the HIGH group of 0.15 D @ 90°, as estimated with KSIM, versus 0.28 D @ 90° with TCRP3, but no significant differences were found for the steepening in the LOW group or for the torque in either group. Differences between KSIM- and TCRP3-based surgically induced astigmatism values were negligible in the LOW group. Surgically induced astigmatism was considerably higher in the high-astigmatism group and its value was underestimated with the KSIM approach. Eyes having low astigmatism should not be included for computing the surgically induced astigmatism because steepening would be underestimated.

  2. An Update on Design Tools for Optimization of CMC 3D Fiber Architectures

    NASA Technical Reports Server (NTRS)

    Lang, J.; DiCarlo, J.

    2012-01-01

    Objective: Describe and update progress for NASA's efforts to develop 3D architectural design tools for CMC in general and for SiC/SiC composites in particular. Describe past and current sequential work efforts aimed at: Understanding key fiber and tow physical characteristics in conventional 2D and 3D woven architectures as revealed by microstructures in the literature. Developing an Excel program for down-selecting and predicting key geometric properties and resulting key fiber-controlled properties for various conventional 3D architectures. Developing a software tool for accurately visualizing all the key geometric details of conventional 3D architectures. Validating tools by visualizing and predicting the internal geometry and key mechanical properties of a NASA SiC/SiC panel with a 3D orthogonal architecture. Applying the predictive and visualization tools toward advanced 3D orthogonal SiC/SiC composites, and combining them into a user-friendly software program.

  3. Threshold and variability properties of matrix frequency-doubling technology and standard automated perimetry in glaucoma.

    PubMed

    Artes, Paul H; Hutchison, Donna M; Nicolela, Marcelo T; LeBlanc, Raymond P; Chauhan, Balwantray C

    2005-07-01

    To compare test results from second-generation Frequency-Doubling Technology perimetry (FDT2, Humphrey Matrix; Carl-Zeiss Meditec, Dublin, CA) and standard automated perimetry (SAP) in patients with glaucoma. Specifically, to examine the relationship between visual field sensitivity and test-retest variability and to compare total and pattern deviation probability maps between both techniques. Fifteen patients with glaucoma who had early to moderately advanced visual field loss with SAP (mean MD, -4.0 dB; range, +0.2 to -16.1) were enrolled in the study. Patients attended three sessions. During each session, one eye was examined twice with FDT2 (24-2 threshold test) and twice with SAP (Swedish Interactive Threshold Algorithm [SITA] Standard 24-2 test), in random order. We compared threshold values between FDT2 and SAP at test locations with similar visual field coordinates. Test-retest variability, established in terms of test-retest intervals and standard deviations (SDs), was investigated as a function of visual field sensitivity (estimated by baseline threshold and mean threshold, respectively). The magnitude of visual field defects apparent in total and pattern deviation probability maps was compared between both techniques by ordinal scoring. The global visual field indices mean deviation (MD) and pattern standard deviation (PSD) of FDT2 and SAP correlated highly (r > 0.8; P < 0.001). At test locations with high sensitivity (>25 dB with SAP), threshold estimates from FDT2 and SAP exhibited a close, linear relationship, with a slope of approximately 2.0. However, at test locations with lower sensitivity, the relationship was much weaker and ceased to be linear. In comparison with FDT2, SAP showed a slightly larger proportion of test locations with absolute defects (3.0% vs. 2.2% with SAP and FDT2, respectively, P < 0.001). 
Whereas SAP showed a significant increase in test-retest variability at test locations with lower sensitivity (P < 0.001), there was no relationship between variability and sensitivity with FDT2 (P = 0.46). In comparison with SAP, FDT2 exhibited narrower test-retest intervals at test locations with lower sensitivity (SAP thresholds <25 dB). A comparison of the total and pattern deviation maps between both techniques showed that the total deviation analyses of FDT2 may slightly underestimate the visual field loss apparent with SAP. However, the pattern-deviation maps of both instruments agreed well with each other. The test-retest variability of FDT2 is uniform over the measurement range of the instrument. These properties may provide advantages for the monitoring of patients with glaucoma that should be investigated in longitudinal studies.

  4. Visualizing Terrestrial and Aquatic Systems in 3-D

    EPA Science Inventory

    The environmental modeling community has a long-standing need for affordable, easy-to-use tools that support 3-D visualization of complex spatial and temporal model output. The Visualization of Terrestrial and Aquatic Systems project (VISTAS) aims to help scientists produce effe...

  5. Illustrative visualization of 3D city models

    NASA Astrophysics Data System (ADS)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  6. Early detection of glaucoma by means of a novel 3D computer‐automated visual field test

    PubMed Central

    Nazemi, Paul P; Fink, Wolfgang; Sadun, Alfredo A; Francis, Brian; Minckler, Donald

    2007-01-01

    Purpose A recently devised 3D computer‐automated threshold Amsler grid test was used to identify early and distinctive defects in people with suspected glaucoma. Further, the location, shape and depth of these field defects were characterised. Finally, the visual fields were compared with those obtained by standard automated perimetry. Patients and methods Glaucoma suspects were defined as those having elevated intraocular pressure (>21 mm Hg) or cup‐to‐disc ratio of >0.5. 33 patients and 66 eyes with risk factors for glaucoma were examined. 15 patients and 23 eyes with no risk factors were tested as controls. The recently developed 3D computer‐automated threshold Amsler grid test was used. The test exhibits a grid on a computer screen at a preselected greyscale and angular resolution, and allows patients to trace those areas on the grid that are missing in their visual field using a touch screen. The 5‐minute test required that the patients repeatedly outline scotomas on a touch screen with varied displays of contrast while maintaining their gaze on a central fixation marker. A 3D depiction of the visual field defects was then obtained that was further characterised by the location, shape and depth of the scotomas. The exam was repeated three times per eye. The results were compared to Humphrey visual field tests (ie, achromatic standard or SITA standard 30‐2 or 24‐2). Results In this pilot study 79% of the eyes tested in the glaucoma‐suspect group repeatedly demonstrated visual field loss with the 3D perimetry. The 3D depictions of visual field loss associated with these risk factors were all characteristic of or compatible with glaucoma. 71% of the eyes demonstrated arcuate defects or a nasal step. Constricted visual fields were shown in 29% of the eyes. No visual field changes were detected in the control group. 
Conclusions The 3D computer‐automated threshold Amsler grid test may demonstrate visual field abnormalities characteristic of glaucoma in glaucoma suspects with normal achromatic Humphrey visual field testing. This test may be used as a screening tool for the early detection of glaucoma. PMID:17504855

  7. Automated virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.

    1997-05-01

    Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to error of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.

  8. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    NASA Astrophysics Data System (ADS)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the levels of buildings to cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  9. Thyroid gland visualization with 3D/4D ultrasound: integrated hands-on imaging in anatomical dissection laboratory.

    PubMed

    Carter, John L; Patel, Ankura; Hocum, Gabriel; Benninger, Brion

    2017-05-01

    In teaching anatomy, clinical imaging has been utilized to supplement the traditional dissection laboratory, promoting education through visualization of the spatial relationships of anatomical structures. Viewing the thyroid gland using 3D/4D ultrasound can be valuable to physicians as well as students learning anatomy. The objective of this study was to investigate the perceptions of first-year medical students regarding the integration of 3D/4D ultrasound visualization of spatial anatomy during anatomical education. 108 first-year medical students were introduced to 3D/4D ultrasound imaging of the thyroid gland through a detailed 20-min tutorial taught in small group format. Students then practiced 3D/4D ultrasound imaging on volunteers and donor cadavers before assessment through acquisition and identification of the thyroid gland on at least three instructor-verified images. A post-training survey was administered to assess student impressions. All students visualized the thyroid gland using 3D/4D ultrasound. The survey revealed that 88.0% of students strongly agreed or agreed that 3D/4D ultrasound is useful in revealing the thyroid gland and surrounding structures, and 87.0% rated the experience "Very Easy" or "Easy", demonstrating the benefits and ease of use of including 3D/4D ultrasound in anatomy courses. When asked whether 3D/4D ultrasound is useful in teaching the structure and surrounding anatomy of the thyroid gland, students overwhelmingly responded "Strongly Agree" or "Agree" (90.2%). This study revealed that 3D/4D ultrasound was successfully used and preferred over 2D ultrasound by medical students during anatomy dissection courses to accurately identify the thyroid gland. In addition, 3D/4D ultrasound may nurture and further reinforce stereostructural spatial relationships of the thyroid gland taught during anatomy dissection.

  10. NoSQL Based 3D City Model Management System

    NASA Astrophysics Data System (ADS)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on NoSQL database is proposed in this paper. The framework supports import and export of 3D city model according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data since it is more complex, while the semantic analysis is mainly based on database query operation. For visualization, a multiple 3D city representation structure CityTree is implemented within the framework to support dynamic LODs based on user viewpoint. Also, the proposed framework is easily extensible and supports geoindexes to speed up the querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.
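    The map/reduce split described above, heavy 3D geometry work in the map step and aggregation in the reduce step, can be sketched in a few lines of Python. The record fields and the per-building volume approximation are illustrative, not the paper's actual schema.

```python
# Sketch of a map/reduce pass over 3D city model records: the map step
# does per-building geometry work, the reduce step aggregates by district.
from functools import reduce
from collections import defaultdict

city_models = [  # invented records standing in for NoSQL documents
    {"district": "A", "footprint_m2": 120.0, "height_m": 10.0},
    {"district": "A", "footprint_m2": 80.0,  "height_m": 25.0},
    {"district": "B", "footprint_m2": 200.0, "height_m": 5.0},
]

def map_volume(rec):
    # geometry step: approximate building volume as footprint x height
    return rec["district"], rec["footprint_m2"] * rec["height_m"]

def reduce_by_district(acc, pair):
    district, volume = pair
    acc[district] += volume
    return acc

volumes = reduce(reduce_by_district, map(map_volume, city_models), defaultdict(float))
print(dict(volumes))  # {'A': 3200.0, 'B': 1000.0}
```

Semantic attributes such as name or price would instead be answered by a plain database query, which is exactly the separation the framework proposes.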

  11. A new multimodal interactive way of subjective scoring of 3D video quality of experience

    NASA Astrophysics Data System (ADS)

    Kim, Taewan; Lee, Kwanghyun; Lee, Sanghoon; Bovik, Alan C.

    2014-03-01

    People who watch today's 3D visual programs, such as 3D cinema, 3D TV and 3D games, experience wide and dynamically varying ranges of 3D visual immersion and 3D quality of experience (QoE). It is necessary to be able to deploy reliable methodologies that measure each viewer's subjective experience. We propose a new methodology that we call Multimodal Interactive Continuous Scoring of Quality (MICSQ). MICSQ is composed of a device interaction process between the 3D display and a separate device (PC, tablet, etc.) used as an assessment tool, and a human interaction process between the subject(s) and the device. The scoring process is multimodal, using aural and tactile cues to help engage and focus the subject(s) on their tasks. Moreover, the wireless device interaction process makes it possible for multiple subjects to assess 3D QoE simultaneously in a large space such as a movie theater, and at different visual angles and distances.

  12. HyFinBall: A Two-Handed, Hybrid 2D/3D Desktop VR Interface for Visualization

    DTIC Science & Technology

    2013-01-01

    user study. This is done in the context of a rich, visual analytics interface containing coordinated views with 2D and 3D visualizations and...the user interface (hardware and software), the design space, as well as preliminary results of a formal user study. This is done in the context of a... virtual reality, user interface, two-handed interface, hybrid user interface, multi-touch, gesture,

  13. STRING 3: An Advanced Groundwater Flow Visualization Tool

    NASA Astrophysics Data System (ADS)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and the challenges this brings, in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D brings many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard we were able to develop an efficient approach for combining the raytraced rendering of the volume with regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain. Hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. 
For this, the silhouette based on the angle between neighboring faces is extracted. Similar algorithms help to find the 2D boundary of cuts through the 3D model. As interactivity plays a big role for an exploration tool, the speed of the drawing routines is also important. To achieve this, different pathlet rendering solutions have been developed and benchmarked. These provide a trade-off between the usage of geometry and fragment shaders. We show that point sprite shaders have superior performance and visual quality over geometry-based approaches. Admittedly, the point sprite-based approach poses many non-trivial problems in joining the different parts of the pathlet geometry. This research is funded by the Federal Ministry for Economic Affairs and Energy (Germany). [1] T. Seidel, C. König, M. Schäfer, I. Ostermann, T. Biedert, D. Hietel (2014). Intuitive visualization of transient groundwater flow. Computers & Geosciences, Vol. 67, pp. 173-179. [2] I. Michel, S. Schröder, T. Seidel, C. König (2015). Intuitive Visualization of Transient Flow: Towards a Full 3D Tool. Geophysical Research Abstracts, Vol. 17, EGU2015-1670. [3] S. Schröder, I. Michel, T. Seidel, C.M. König (2015). STRING 3: Full 3D Visualization of Groundwater Flow. In Proceedings of IAMG 2015, Freiberg, pp. 813-822.
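The Lagrangian pathlet idea above, seed points carried along the flow while keeping a short trailing history, can be sketched as follows. The rotational velocity field, explicit Euler integrator, and step sizes are illustrative assumptions, not STRING's FPM-based implementation.

```python
# Minimal sketch of Lagrangian "pathlets": seed points advected through a
# velocity field, each keeping only a short trailing history for display.
def velocity(x, y):
    """Illustrative steady field: rigid rotation about the origin."""
    return -y, x

def advect(seeds, steps=10, dt=0.05, trail=5):
    pathlets = []
    for x, y in seeds:
        history = [(x, y)]
        for _ in range(steps):
            vx, vy = velocity(x, y)
            x, y = x + dt * vx, y + dt * vy  # explicit Euler step
            history.append((x, y))
        pathlets.append(history[-trail:])    # keep only the recent trail
    return pathlets

trails = advect([(1.0, 0.0), (0.0, 2.0)])
```

A production tool would use a higher-order integrator and time-dependent seeding; this only shows the data structure a moving pathlet amounts to.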

  14. 3d visualization of atomistic simulations on every desktop

    NASA Astrophysics Data System (ADS)

    Peled, Dan; Silverman, Amihai; Adler, Joan

    2013-08-01

    Once upon a time, after making simulations, one had to go to a visualization center with fancy SGI machines to run a GL visualization and make a movie. More recently, OpenGL and its mesa clone have let us create 3D on simple desktops (or laptops), whether or not a Z-buffer card is present. Today, 3D a la Avatar is a commodity technique, presented in cinemas and sold for home TV. However, only a few special research centers have systems large enough for entire classes to view 3D, or special immersive facilities like visualization CAVEs or walls, and not everyone finds 3D immersion easy to view. For maximum physics with minimum effort a 3D system must come to each researcher and student. So how do we create 3D visualization cheaply on every desktop for atomistic simulations? After several months of attempts to select commodity equipment for a whole room system, we selected an approach that goes back a long time, even predating GL. The old concept of anaglyphic stereo relies on two images, slightly displaced, and viewed through colored glasses, or two squares of cellophane from a regular screen/projector or poster. We have added this capability to our AViz atomistic visualization code in its new, 6.1 version, which is RedHat, CentOS and Ubuntu compatible. Examples using data from our own research and that of other groups will be given.
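The anaglyph technique the abstract relies on, two slightly displaced views merged so that colored glasses separate them again, reduces to a per-pixel channel merge. A minimal sketch with toy image data, assuming a simple left-to-red, right-to-cyan assignment (AViz's actual rendering path is not reproduced here):

```python
# Hedged sketch of red-cyan anaglyph composition: left view feeds the red
# channel, right view feeds green and blue. Nested lists stand in for images.
def make_anaglyph(left, right):
    """left/right: equal-sized 2D grayscale images (0-255 nested lists).
    Returns an RGB image as nested lists of (r, g, b) tuples."""
    return [[(l, r, r) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

# A 2x3 toy pair: the bright pixel in the right view is shifted one pixel
# left, mimicking the horizontal displacement between the two eye views.
left  = [[0, 255, 0], [0, 255, 0]]
right = [[255, 0, 0], [255, 0, 0]]
rgb = make_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye sees only its own displaced copy, which is what produces the depth impression.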

  15. Aurally aided visual search performance in a dynamic environment

    NASA Astrophysics Data System (ADS)

    McIntire, John P.; Havig, Paul R.; Watamaniuk, Scott N. J.; Gilkey, Robert H.

    2008-04-01

    Previous research has repeatedly shown that people can find a visual target significantly faster if spatial (3D) auditory displays direct attention to the corresponding spatial location. However, previous research has only examined searches for static (non-moving) targets in static visual environments. Since motion has been shown to affect visual acuity, auditory acuity, and visual search performance, it is important to characterize aurally-aided search performance in environments that contain dynamic (moving) stimuli. In the present study, visual search performance in both static and dynamic environments is investigated with and without 3D auditory cues. Eight participants searched for a single visual target hidden among 15 distracting stimuli. In the baseline audio condition, no auditory cues were provided. In the 3D audio condition, a virtual 3D sound cue originated from the same spatial location as the target. In the static search condition, the target and distractors did not move. In the dynamic search condition, all stimuli moved on various trajectories at 10 deg/s. The results showed a clear benefit of 3D audio that was present in both static and dynamic environments, suggesting that spatial auditory displays continue to be an attractive option for a variety of aircraft, motor vehicle, and command & control applications.

  16. Spatial development of transport structures in apple (Malus × domestica Borkh.) fruit

    PubMed Central

    Herremans, Els; Verboven, Pieter; Hertog, Maarten L. A. T. M.; Cantre, Dennis; van Dael, Mattias; De Schryver, Thomas; Van Hoorebeke, Luc; Nicolaï, Bart M.

    2015-01-01

    The void network and vascular system are important pathways for the transport of gases, water and solutes in apple fruit (Malus × domestica Borkh). Here we used X-ray micro-tomography at various spatial resolutions to investigate the growth of these transport structures in 3D during fruit development of "Jonagold" apple. The size of the void space and the porosity in the cortex tissue increased considerably. In the core tissue, the porosity was consistently lower, and seemed to decrease toward the end of the maturation period. The voids in the core were narrower and more fragmented than the voids in the cortex. Both the void network in the core and that in the cortex changed significantly in terms of void morphology. An automated segmentation protocol underestimated the total vasculature length by 9–12% in comparison to manually processed images. Vascular networks increased in length from a total of 5 m at 9 weeks after full bloom to more than 20 m, corresponding to 5 cm of vascular tissue per cubic centimeter of apple tissue. A high degree of branching in both the void network and the vascular system and a complex three-dimensional pattern were observed across the whole fruit. The 3D visualizations of the transport structures may be useful for numerical modeling of organ growth and transport processes in fruit. PMID:26388883

  17. Investigation Of Integrating Three-Dimensional (3-D) Geometry Into The Visual Anatomical Injury Descriptor (Visual AID) Using WebGL

    DTIC Science & Technology

    2011-08-01

    generated using the Zygote Human Anatomy 3-D model (3). Use of a reference anatomy independent of personal identification, such as Zygote, allows Visual...Zygote Human Anatomy 3D Model, 2010. http://www.zygote.com/ (accessed July 26, 2011). 4. Khronos Group Web site. Khronos to Create New Open Standard for...understanding of the information at hand. In order to fulfill the medical illustration track, I completed a concentration in science, focusing on human

  18. Using 3D visualization and seismic attributes to improve structural and stratigraphic resolution of reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerr, J.; Jones, G.L.

    1996-01-01

    Recent advances in hardware and software have given the interpreter and engineer new ways to view 3D seismic data and well bore information. Recent papers have also highlighted the use of various statistics and seismic attributes. By combining new 3D rendering technologies with recent trends in seismic analysis, the interpreter can improve the structural and stratigraphic resolution of hydrocarbon reservoirs. This paper gives several examples of using 3D visualization to better define both the structural and stratigraphic aspects of several different structural types from around the world. Statistics, 3D visualization techniques and rapid animation are used to show complex faulting and detailed channel systems. These systems would be difficult to map using either 2D or 3D data with conventional interpretation techniques.

  20. Sandia MEMS Visualization Tools v. 3.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yarberry, Victor; Jorgensen, Craig R.; Young, Andrew I.

    This is a revision to the Sandia MEMS Visualization Tools. It replaces all previous versions. New features in this version: support for AutoCAD 2014 and 2015. This CD contains an integrated set of electronic files that: a) provides a 2D Process Visualizer that generates cross-section images of devices constructed using the SUMMiT V fabrication process; b) provides a 3D Visualizer that generates 3D images of devices constructed using the SUMMiT V fabrication process; c) provides a MEMS 3D Model generator that creates 3D solid models of devices constructed using the SUMMiT V fabrication process. While there exist some files on the CD that are used in conjunction with the AutoCAD software package, these files are not intended for use independent of the CD. Note that the customer must purchase his/her own copy of AutoCAD to use with these files.

  1. Visualizing UAS-collected imagery using augmented reality

    NASA Astrophysics Data System (ADS)

    Conover, Damon M.; Beidleman, Brittany; McAlinden, Ryan; Borel-Donohue, Christoph C.

    2017-05-01

    One of the areas where augmented reality will have an impact is in the visualization of 3-D data. 3-D data has traditionally been viewed on a 2-D screen, which has limited its utility. Augmented reality head-mounted displays, such as the Microsoft HoloLens, make it possible to view 3-D data overlaid on the real world. This allows a user to view and interact with the data in ways similar to how they would interact with a physical 3-D object, such as moving, rotating, or walking around it. A type of 3-D data that is particularly useful for military applications is geo-specific 3-D terrain data, and the visualization of this data is critical for training, mission planning, intelligence, and improved situational awareness. Advances in Unmanned Aerial Systems (UAS), photogrammetry software, and rendering hardware have drastically reduced the technological and financial obstacles to collecting aerial imagery and generating 3-D terrain maps from that imagery. Because of this, there is an increased need to develop new tools for the exploitation of 3-D data. We will demonstrate how the HoloLens can be used as a tool for visualizing 3-D terrain data. We will describe: 1) how UAS-collected imagery is used to create 3-D terrain maps, 2) how those maps are deployed to the HoloLens, 3) how a user can view and manipulate the maps, and 4) how multiple users can view the same virtual 3-D object at the same time.

  2. Stereoscopic applications for design visualization

    NASA Astrophysics Data System (ADS)

    Gilson, Kevin J.

    2007-02-01

    Advances in display technology and 3D design visualization applications have made real-time stereoscopic visualization of architectural and engineering projects a reality. Parsons Brinckerhoff (PB) is a transportation consulting firm that has used digital visualization tools from their inception and has helped pioneer the application of those tools to large-scale infrastructure projects. PB is one of the first Architecture/Engineering/Construction (AEC) firms to implement a CAVE, an immersive presentation environment that includes stereoscopic rear-projection capability. The firm also employs a portable stereoscopic front-projection system, and shutter-glass systems for smaller groups. PB is using commercial real-time 3D applications in combination with traditional 3D modeling programs to visualize and present large AEC projects to planners, clients and decision makers in stereo. These presentations create more immersive and spatially realistic presentations of the proposed designs. This paper will present the basic display tools and applications, and the 3D modeling techniques PB is using to produce interactive stereoscopic content. The paper will discuss several architectural and engineering design visualizations we have produced.

  3. Molecular Dynamics Visualization (MDV): Stereoscopic 3D Display of Biomolecular Structure and Interactions Using the Unity Game Engine.

    PubMed

    Wiebrands, Michael; Malajczuk, Chris J; Woods, Andrew J; Rohl, Andrew L; Mancera, Ricardo L

    2018-06-21

    Molecular graphics systems are visualization tools which, upon integration into a 3D immersive environment, provide a unique virtual reality experience for research and teaching of biomolecular structure, function and interactions. We have developed a molecular structure and dynamics application, the Molecular Dynamics Visualization tool, that uses the Unity game engine combined with large scale, multi-user, stereoscopic visualization systems to deliver an immersive display experience, particularly with a large cylindrical projection display. The application is structured to separate the biomolecular modeling and visualization systems. The biomolecular model loading and analysis system was developed as a stand-alone C# library and provides the foundation for the custom visualization system built in Unity. All visual models displayed within the tool are generated using Unity-based procedural mesh building routines. A 3D user interface was built to allow seamless dynamic interaction with the model while being viewed in 3D space. Biomolecular structure analysis and display capabilities are exemplified with a range of complex systems involving cell membranes, protein folding and lipid droplets.

  4. U.S. Geological Survey: A synopsis of Three-dimensional Modeling

    USGS Publications Warehouse

    Jacobsen, Linda J.; Glynn, Pierre D.; Phelps, Geoff A.; Orndorff, Randall C.; Bawden, Gerald W.; Grauch, V.J.S.

    2011-01-01

    The U.S. Geological Survey (USGS) is a multidisciplinary agency that provides assessments of natural resources (geological, hydrological, biological), the disturbances that affect those resources, and the disturbances that affect the built environment, natural landscapes, and human society. Until now, USGS map products have been generated and distributed primarily as 2-D maps, occasionally providing cross sections or overlays, but rarely providing the ability to characterize and understand 3-D systems, how they change over time (4-D), and how they interact. And yet, technological advances in monitoring natural resources and the environment, the ever-increasing diversity of information needed for holistic assessments, and the intrinsic 3-D/4-D nature of the information obtained increase the need to generate, verify, analyze, interpret, confirm, store, and distribute scientific information and products using 3-D/4-D visualization, analysis, and modeling tools and information frameworks. Today, USGS scientists use 3-D/4-D tools to (1) visualize and interpret geological information, (2) verify the data, and (3) verify their interpretations and models. 3-D/4-D visualization can be a powerful quality control tool in the analysis of large, multidimensional data sets. USGS scientists use 3-D/4-D technology for 3-D surface (i.e., 2.5-D) visualization as well as for 3-D volumetric analyses. Examples of geological mapping in 3-D include characterization of the subsurface for resource assessments, such as aquifer characterization in the central United States, and for input into process models, such as seismic hazards in the western United States.

  5. An image processing and analysis tool for identifying and analysing complex plant root systems in 3D soil using non-destructive analysis: Root1.

    PubMed

    Flavel, Richard J; Guppy, Chris N; Rabbi, Sheikh M R; Young, Iain M

    2017-01-01

    The objective of this study was to develop a flexible and free image processing and analysis solution, based on the public-domain ImageJ platform, for the segmentation and analysis of complex biological plant root systems in soil from X-ray tomography 3D images. Contrasting root architectures from wheat, barley and chickpea root systems were grown in soil and scanned using a high-resolution micro-tomography system. A macro (Root1) was developed that reliably identified complex root systems with good to high accuracy (10% overestimation for chickpea, 1% underestimation for wheat, 8% underestimation for barley) and provided analysis of root length and angle. In-built flexibility allowed the user to (a) amend any aspect of the macro to account for specific user preferences, and (b) take account of computational limitations of the platform. The platform is free, flexible and accurate in analysing root system metrics.

  6. Restoring Fort Frontenac in 3D: Effective Usage of 3D Technology for Heritage Visualization

    NASA Astrophysics Data System (ADS)

    Yabe, M.; Goins, E.; Jackson, C.; Halbstein, D.; Foster, S.; Bazely, S.

    2015-02-01

    This paper is composed of three elements: 3D modeling, web design, and heritage visualization. The aim is to use computer graphics design to inform and create an interest in historical visualization by rebuilding Fort Frontenac using 3D modeling and interactive design. The final model will be integrated into an interactive website to learn more about the fort's historic importance. It is apparent that using computer graphics can save time and money when it comes to historical visualization. Visitors do not have to travel to the actual archaeological buildings. They can simply use the Web in their own home to learn about this information virtually. Meticulously following historical records to create a sophisticated restoration of archaeological buildings will draw viewers into visualizations, such as the historical world of Fort Frontenac. As a result, it allows the viewers to effectively understand the fort's social system, habits, and historical events.

  7. Success in Opposite Direction: Strategic Culture and the French Experience in Indochina, the Suez, and Algeria, 1945-1962

    DTIC Science & Technology

    2015-05-21

    MAJ Coley D. Tyler. ...American political scientist Jack Snyder introduced strategic culture in 1977 while trying to explain the differences in Soviet and American nuclear...Strategic Cultures Curriculum Project (McLean, VA: SAIC, 2006), 3. ...how belligerents could act in a crisis. The US Army cannot underestimate the

  8. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    NASA Astrophysics Data System (ADS)

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images in the various processing steps, and automatic reconstruction of a 3D city model. The performance and advantages of our GUI visualization system for LiDAR data are demonstrated.

  9. Behaviour of DFT-based approaches to the spin-orbit term of zero-field splitting tensors: a case study of metallocomplexes, M(III)(acac)3 (M = V, Cr, Mn, Fe and Mo).

    PubMed

    Sugisaki, Kenji; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2017-11-15

    Spin-orbit contributions to the zero-field splitting (ZFS) tensor (the D_SO tensor) of M(III)(acac)3 complexes (M = V, Cr, Mn, Fe and Mo; acac = acetylacetonate anion) are evaluated by means of ab initio (a hybrid CASSCF/MRMP2) and DFT (Pederson-Khanna (PK) and natural orbital-based Pederson-Khanna (NOB-PK)) methods, focusing on the behaviour of DFT-based approaches to the D_SO tensors against the valence d-electron configurations of the transition metal ions in octahedral coordination. Both DFT-based approaches reproduce trends in the D tensors. Significantly, the differences between the theoretical and experimental D (D = D_ZZ - (D_XX + D_YY)/2) values are smaller in NOB-PK than in PK, emphasising the usefulness of the natural orbital-based approach to D tensor calculations of transition metal ion complexes. In the case of d^2 and d^4 electronic configurations, the D_SO(NOB-PK) values are considerably underestimated in absolute magnitude compared with the experimental ones. The D_SO tensor analysis based on the orbital region partitioning technique (ORPT) revealed that the D_SO contributions attributed to excitations from the singly occupied region (SOR) to the unoccupied region (UOR) are significantly underestimated in the DFT-based approaches for all the complexes under study. In the case of d^3 and d^5 configurations, the (SOR → UOR) excitations contribute in a nearly isotropic manner, which causes fortuitous error cancellations in the DFT-based D_SO values. These results indicate that more effort to develop DFT frameworks should be directed towards the reproduction of quantitative D_SO tensors of transition metal complexes with various electronic configurations and local symmetries around metal ions.
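The axial parameter quoted in the abstract, D = D_ZZ - (D_XX + D_YY)/2, and its conventional rhombic companion E = (D_XX - D_YY)/2 follow directly from the principal values of the diagonalized, traceless ZFS tensor. A minimal sketch with made-up numbers (not values from the paper):

```python
# Computing the axial ZFS parameter D = D_ZZ - (D_XX + D_YY)/2 and the
# rhombic parameter E = (D_XX - D_YY)/2 from principal tensor values.
# The numeric principal values below are illustrative only.
def zfs_parameters(dxx, dyy, dzz):
    D = dzz - (dxx + dyy) / 2.0
    E = (dxx - dyy) / 2.0
    return D, E

# Traceless set of principal values (sums to zero), made-up, in cm^-1
D, E = zfs_parameters(0.2, 0.6, -0.8)
```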

  10. A Review on Stereoscopic 3D: Home Entertainment for the Twenty First Century

    NASA Astrophysics Data System (ADS)

    Karajeh, Huda; Maqableh, Mahmoud; Masa'deh, Ra'ed

    2014-12-01

    In the last few years, stereoscopic 3D has developed very rapidly and has been employed in many different fields, such as entertainment. Given the importance of the entertainment aspect of stereoscopic 3D (S3D) applications, a review of the current state of S3D development in entertainment technology is conducted. In this paper, a novel survey of stereoscopic entertainment is presented, discussing the significant development of 3D cinema, the major development of 3DTV, and the issues related to 3D video content and 3D video games. Moreover, we review some problems that watching stereoscopic content can cause in the viewer's visual system. Some stereoscopic viewers are not satisfied: they are frustrated by wearing glasses, experience visual fatigue, complain about the unavailability of 3D content, and/or report sickness. Therefore, we discuss stereoscopic visual discomfort and the extent to which viewers experience eye fatigue while watching 3D content or playing 3D games. Solutions suggested in the literature for this problem are discussed.

  11. Simulating Various Terrestrial and Uav LIDAR Scanning Configurations for Understory Forest Structure Modelling

    NASA Astrophysics Data System (ADS)

    Hämmerle, M.; Lukač, N.; Chen, K.-C.; Koma, Zs.; Wang, C.-K.; Anders, K.; Höfle, B.

    2017-09-01

    Information about the 3D structure of understory vegetation is of high relevance in forestry research and management (e.g., for complete biomass estimations). However, it has hardly been investigated systematically with state-of-the-art methods such as static terrestrial laser scanning (TLS) or laser scanning from unmanned aerial vehicle platforms (ULS). A prominent challenge for scanning forests is posed by occlusion, calling for proper TLS scan position or ULS flight line configurations in order to achieve an accurate representation of understory vegetation. The aim of our study is to examine the effect of TLS or ULS scanning strategies on (1) the height of individual understory trees and (2) understory canopy height raster models. We simulate full-waveform TLS and ULS point clouds of a virtual forest plot captured from various combinations of max. 12 TLS scan positions or 3 ULS flight lines. The accuracy of the respective datasets is evaluated against reference values given by the virtually scanned 3D triangle mesh tree models. TLS tree height underestimations range up to 1.84 m (15.30 % of tree height) for single TLS scan positions, but combining three scan positions reduces the underestimation to a maximum of 0.31 m (2.41 %). Combining ULS flight lines also results in improved tree height representation, with a maximum underestimation of 0.24 m (2.15 %). The presented simulation approach offers a complementary source of information for efficient planning of field campaigns aiming at understory vegetation modelling.

  12. VISUAL3D - An EIT network on visualization of geomodels

    NASA Astrophysics Data System (ADS)

    Bauer, Tobias

    2017-04-01

    When it comes to the interpretation of data and the understanding of deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for the integration of different types of data, including new kinds of information (e.g., from new, improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials that aims at bringing together partners with 3D/4D visualization infrastructure and 3D/4D modelling experience. The recently formed network interlinks hardware, software and expert knowledge in model visualization and output. A special focus is the linking of research, education and industry, the integration of multi-disciplinary data, and the visualization of these data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in model visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials as well as external parties will have the possibility to visualize, analyze and validate their geomodels in immersive VR environments. 
The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan Universität Leoben, Slovenian National Building and Civil Engineering Institute, Tallinn University of Technology and Turku University. The infrastructure within the network comprises different types of capturing and visualization hardware, ranging from high resolution cubes, VR walls, VR goggle solutions, high resolution photogrammetry, UAVs, lidar-scanners, and many more.

  13. A web-based solution for 3D medical image visualization

    NASA Astrophysics Data System (ADS)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    In this presentation, we present a web-based 3D medical image visualization solution that enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring the rendered images to an HTML5-capable web browser on the client side. Compared to a traditional local visualization solution, our solution doesn't require users to install extra software or download the whole volume dataset from the PACS server. With this web-based design, users can access the 3D medical image visualization service wherever the internet is available.

  14. AntigenMap 3D: an online antigenic cartography resource.

    PubMed

    Barnett, J Lamar; Yang, Jialiang; Cai, Zhipeng; Zhang, Tong; Wan, Xiu-Feng

    2012-05-01

    Antigenic cartography is a useful technique to visualize and minimize errors in immunological data by projecting antigens to 2D or 3D cartography. However, a 2D cartography may not be sufficient to capture the antigenic relationship from high-dimensional immunological data. AntigenMap 3D presents an online, interactive, and robust 3D antigenic cartography construction and visualization resource. AntigenMap 3D can be applied to identify antigenic variants and vaccine strain candidates for pathogens with rapid antigenic variations, such as influenza A virus. http://sysbio.cvm.msstate.edu/AntigenMap3D

  15. Automatic visualization of 3D geometry contained in online databases

    NASA Astrophysics Data System (ADS)

    Zhang, Jie; John, Nigel W.

    2003-04-01

    In this paper, the application of the Virtual Reality Modeling Language (VRML) for efficient database visualization is analyzed. With the help of JAVA programming, three examples of automatic visualization from a database containing 3-D Geometry are given. The first example is used to create basic geometries. The second example is used to create cylinders with a defined start point and end point. The third example is used to processs data from an old copper mine complex in Cheshire, United Kingdom. Interactive 3-D visualization of all geometric data in an online database is achieved with JSP technology.

  16. Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data

    PubMed Central

    Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.

    2005-01-01

    The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787

  17. An annotation system for 3D fluid flow visualization

    NASA Technical Reports Server (NTRS)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  18. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    NASA Astrophysics Data System (ADS)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING Fraunhofer ITWM (Kaiserslautern, Germany) in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany) developed a commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization experts can more easily transport their findings to non-professional audiences. In STRING pathlets moving with the flow provide an intuition of velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow an advanced method for intelligent, time-dependent seeding of the pathlets is implemented based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool. 
Currently STRING can generate animations of single 2D cuts, either planar or curved surfaces, through 3D simulation domains. To provide a general tool for experts enabling also direct exploration and analysis of large 3D flow fields the software needs to be extended to intuitive as well as interactive visualizations of entire 3D flow domains. The current research concerning this project, which is funded by the Federal Ministry for Economic Affairs and Energy (Germany), is presented.

  19. Denoising and 4D visualization of OCT images

    PubMed Central

    Gargesha, Madhusudhana; Jenkins, Michael W.; Rollins, Andrew M.; Wilson, David L.

    2009-01-01

    We are using Optical Coherence Tomography (OCT) to image structure and function of the developing embryonic heart in avian models. Fast OCT imaging produces very large 3D (2D + time) and 4D (3D volumes + time) data sets, which greatly challenge ones ability to visualize results. Noise in OCT images poses additional challenges. We created an algorithm with a quick, data set specific optimization for reduction of both shot and speckle noise and applied it to 3D visualization and image segmentation in OCT. When compared to baseline algorithms (median, Wiener, orthogonal wavelet, basic non-orthogonal wavelet), a panel of experts judged the new algorithm to give much improved volume renderings concerning both noise and 3D visualization. Specifically, the algorithm provided a better visualization of the myocardial and endocardial surfaces, and the interaction of the embryonic heart tube with surrounding tissue. Quantitative evaluation using an image quality figure of merit also indicated superiority of the new algorithm. Noise reduction aided semi-automatic 2D image segmentation, as quantitatively evaluated using a contour distance measure with respect to an expert segmented contour. In conclusion, the noise reduction algorithm should be quite useful for visualization and quantitative measurements (e.g., heart volume, stroke volume, contraction velocity, etc.) in OCT embryo images. With its semi-automatic, data set specific optimization, we believe that the algorithm can be applied to OCT images from other applications. PMID:18679509

  20. Creating Physical 3D Stereolithograph Models of Brain and Skull

    PubMed Central

    Kelley, Daniel J.; Farhoud, Mohammed; Meyerand, M. Elizabeth; Nelson, David L.; Ramirez, Lincoln F.; Dempsey, Robert J.; Wolf, Alan J.; Alexander, Andrew L.; Davidson, Richard J.

    2007-01-01

    The human brain and skull are three dimensional (3D) anatomical structures with complex surfaces. However, medical images are often two dimensional (2D) and provide incomplete visualization of structural morphology. To overcome this loss in dimension, we developed and validated a freely available, semi-automated pathway to build 3D virtual reality (VR) and hand-held, stereolithograph models. To evaluate whether surface visualization in 3D was more informative than in 2D, undergraduate students (n = 50) used the Gillespie scale to rate 3D VR and physical models of both a living patient-volunteer's brain and the skull of Phineas Gage, a historically famous railroad worker whose misfortune with a projectile tamping iron provided the first evidence of a structure-function relationship in brain. Using our processing pathway, we successfully fabricated human brain and skull replicas and validated that the stereolithograph model preserved the scale of the VR model. Based on the Gillespie ratings, students indicated that the biological utility and quality of visual information at the surface of VR and stereolithograph models were greater than the 2D images from which they were derived. The method we developed is useful to create VR and stereolithograph 3D models from medical images and can be used to model hard or soft tissue in living or preserved specimens. Compared to 2D images, VR and stereolithograph models provide an extra dimension that enhances both the quality of visual information and utility of surface visualization in neuroscience and medicine. PMID:17971879

  1. Quantitation of specific binding ratio in 123I-FP-CIT SPECT: accurate processing strategy for cerebral ventricular enlargement with use of 3D-striatal digital brain phantom.

    PubMed

    Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru

    2018-06-01

    This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123 I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. We bundled DaTView (Southampton method) with the CSF-Mask processing software for SBR. We assessed SBR with the use of various coefficients (f factor) of the CSF-Mask. Specific binding ratios of 1, 2, 3, 4, and 5 corresponded to SDB phantom simulations with true values. Measured SBRs > 50% that were underestimated with EI increased compared with the true SBR and this trend was outstanding at low SBR. The CSF-Mask improved 20% underestimates and brought the measured SBR closer to the true values at an f factor of 1.0 despite an increase in EI. We connected the linear regression function (y = - 3.53x + 1.95; r = 0.95) with the EI and f factor using root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.

  2. Are There Side Effects to Watching 3D Movies? A Prospective Crossover Observational Study on Visually Induced Motion Sickness

    PubMed Central

    Solimini, Angelo G.

    2013-01-01

    Background The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. Methods and Findings A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self administered on a convenience sample of 497 healthy adult volunteers before and after the vision of 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to 3D movie (compared to the increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as response variables pointed out the significant effects of exposure to 3D movie, history of car sickness and headache, after adjusting for gender, age, self reported anxiety level, attention to the movie and show time. Conclusions Seeing 3D movies can increase rating of symptoms of nausea, oculomotor and disorientation, especially in women with susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to pursue a conclusive evidence on the 3D vision effects on spectators. PMID:23418530

  3. Are there side effects to watching 3D movies? A prospective crossover observational study on visually induced motion sickness.

    PubMed

    Solimini, Angelo G

    2013-01-01

    The increasing popularity of commercial movies showing three dimensional (3D) images has raised concern about possible adverse side effects on viewers. A prospective carryover observational study was designed to assess the effect of exposure (3D vs. 2D movie views) on self reported symptoms of visually induced motion sickness. The standardized Simulator Sickness Questionnaire (SSQ) was self administered on a convenience sample of 497 healthy adult volunteers before and after the vision of 2D and 3D movies. Viewers reporting some sickness (SSQ total score>15) were 54.8% of the total sample after the 3D movie compared to 14.1% of total sample after the 2D movie. Symptom intensity was 8.8 times higher than baseline after exposure to 3D movie (compared to the increase of 2 times the baseline after the 2D movie). Multivariate modeling of visually induced motion sickness as response variables pointed out the significant effects of exposure to 3D movie, history of car sickness and headache, after adjusting for gender, age, self reported anxiety level, attention to the movie and show time. Seeing 3D movies can increase rating of symptoms of nausea, oculomotor and disorientation, especially in women with susceptible visual-vestibular system. Confirmatory studies which include examination of clinical signs on viewers are needed to pursue a conclusive evidence on the 3D vision effects on spectators.

  4. The Role of Research Institutions in Building Visual Content for the Geowall

    NASA Astrophysics Data System (ADS)

    Newman, R. L.; Kilb, D.; Nayak, A.; Kent, G.

    2003-12-01

    The advent of the low-cost Geowall (http://www.geowall.org) allows researchers and students to study 3-D geophysical datasets in a collaborative setting. Although 3-D visual objects can aid the understanding of geological principles in the classroom, it is often difficult for staff to develop their own custom visual objects. This is a fundamentally important aspect that research institutions that store large (terabyte) geophysical datasets can address. At Scripps Institution of Oceanography (SIO) we regularly explore gigabyte 3-D visual objects in the SIO Visualization Center (http://siovizcenter.ucsd.edu). Exporting these datasets for use with the Geowall has become routine with current software applications such as IVS's Fledermaus and iView3D. We have developed visualizations that incorporate topographic, bathymetric, and 3-D volumetric crustal datasets to demonstrate fundamental principles of earth science including plate tectonics, seismology, sea-level change, and neotectonics. These visualizations are available for download either via FTP or a website, and have been incorporated into graduate and undergraduate classes at both SIO and the University of California, San Diego. Additionally, staff at the Visualization Center develop content for external schools and colleges such as the Preuss School, a local middle/high school, where a Geowall was installed in February 2003 and curriculum developed for 8th grade students. We have also developed custom visual objects for researchers and educators at diverse education institutions across the globe. At SIO we encourage graduate students and researchers alike to develop visual objects of their datasets through innovative classes and competitions. This not only assists the researchers themselves in understanding their data but also increases the number of visual objects freely available to geoscience educators worldwide.

  5. First seizure while driving (FSWD)--an underestimated phenomenon?

    PubMed

    Pohlmann-Eden, Bernd; Hynick, Nina; Legg, Karen

    2013-07-01

    Seizures while driving are a well known occurrence in established epilepsy and have significant impact on driving privileges. There is no data available on patients who experience their first (diagnosed) seizure while driving (FSWD). Out of 311 patients presenting to the Halifax First Seizure Clinic between 2008 and 2011, 158 patients met the criteria of a first seizure (FS) or drug-naïve, newly diagnosed epilepsy (NDE). A retrospective chart review was conducted. FSWD was evaluated for 1) prevalence, 2) clinical presentation, 3) coping strategies, and 4) length of time driving before seizure occurrence. The prevalence of FSWD was 8.2%. All 13 patients experienced impaired consciousness. Eleven patients had generalized tonic-clonic seizures, one starting with a déjà-vu evolving to visual aura and a complex partial seizure; three directly from visual auras. Two patients had complex partial seizures, one starting with an autonomic seizure. In response to their seizure, patients reported they were i) able to actively stop the car (n=4, three had visual auras), ii) not able to stop the car resulting in accident (n=7), or iii) passenger was able to pull the car over (n=2). One accident was fatal to the other party. Twelve out of 13 patients had been driving for less than one hour. FSWD is frequent and possibly underrecognized. FSWD often lead to accidents, which occur less if preceded by simple partial seizures. Pathophysiological mechanisms remain uncertain; it is still speculative if complex visuo-motor tasks required while driving play a role in this scenario.

  6. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery

    PubMed Central

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10–12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to 2D VM group, with statistical significance in nine tasks.With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant’s MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation. PMID:26347642

  7. 3D visualization of movements can amplify motor cortex activation during subsequent motor imagery.

    PubMed

    Sollfrank, Teresa; Hart, Daniel; Goodsell, Rachel; Foster, Jonathan; Tan, Tele

    2015-01-01

    A repetitive movement practice by motor imagery (MI) can influence motor cortical excitability in the electroencephalogram (EEG). This study investigated if a realistic visualization in 3D of upper and lower limb movements can amplify motor related potentials during subsequent MI. We hypothesized that a richer sensory visualization might be more effective during instrumental conditioning, resulting in a more pronounced event related desynchronization (ERD) of the upper alpha band (10-12 Hz) over the sensorimotor cortices thereby potentially improving MI based brain-computer interface (BCI) protocols for motor rehabilitation. The results show a strong increase of the characteristic patterns of ERD of the upper alpha band components for left and right limb MI present over the sensorimotor areas in both visualization conditions. Overall, significant differences were observed as a function of visualization modality (VM; 2D vs. 3D). The largest upper alpha band power decrease was obtained during MI after a 3-dimensional visualization. In total in 12 out of 20 tasks the end-user of the 3D visualization group showed an enhanced upper alpha ERD relative to 2D VM group, with statistical significance in nine tasks.With a realistic visualization of the limb movements, we tried to increase motor cortex activation during subsequent MI. The feedback and the feedback environment should be inherently motivating and relevant for the learner and should have an appeal of novelty, real-world relevance or aesthetic value (Ryan and Deci, 2000; Merrill, 2007). Realistic visual feedback, consistent with the participant's MI, might be helpful for accomplishing successful MI and the use of such feedback may assist in making BCI a more natural interface for MI based BCI rehabilitation.

  8. Human microbiome visualization using 3D technology.

    PubMed

    Moore, Jason H; Lari, Richard Cowper Sal; Hill, Douglas; Hibberd, Patricia L; Madan, Juliette C

    2011-01-01

    High-throughput sequencing technology has opened the door to the study of the human microbiome and its relationship with health and disease. This is both an opportunity and a significant biocomputing challenge. We present here a 3D visualization methodology and freely-available software package for facilitating the exploration and analysis of high-dimensional human microbiome data. Our visualization approach harnesses the power of commercial video game development engines to provide an interactive medium in the form of a 3D heat map for exploration of microbial species and their relative abundance in different patients. The advantage of this approach is that the third dimension provides additional layers of information that cannot be visualized using a traditional 2D heat map. We demonstrate the usefulness of this visualization approach using microbiome data collected from a sample of premature babies with and without sepsis.

  9. The effects of 3D interactive animated graphics on student learning and attitudes in computer-based instruction

    NASA Astrophysics Data System (ADS)

    Moon, Hye Sun

    Visuals are most extensively used as instructional tools in education to present spatially-based information. Recent computer technology allows the generation of 3D animated visuals to extend the presentation in computer-based instruction. Animated visuals in 3D representation not only possess motivational value that promotes positive attitudes toward instruction but also facilitate learning when the subject matter requires dynamic motion and 3D visual cue. In this study, three questions are explored: (1) how 3D graphics affects student learning and attitude, in comparison with 2D graphics; (2) how animated graphics affects student learning and attitude, in comparison with static graphics; and (3) whether the use of 3D graphics, when they are supported by interactive animation, is the most effective visual cues to improve learning and to develop positive attitudes. A total of 145 eighth-grade students participated in a 2 x 2 factorial design study. The subjects were randomly assigned to one of four computer-based instructions: 2D static; 2D animated; 3D static; and 3D animated. The results indicated that: (1) Students in the 3D graphic condition exhibited more positive attitudes toward instruction than those in the 2D graphic condition. No group differences were found between the posttest score of 3D graphic condition and that of 2D graphic condition. However, students in the 3D graphic condition took less time for information retrieval on posttest than those in the 2D graphic condition. (2) Students in the animated graphic condition exhibited slightly more positive attitudes toward instruction than those in the static graphic condition. No group differences were found between the posttest score of animated graphic condition and that of static graphic condition. However, students in the animated graphic condition took less time for information retrieval on posttest than those in the static graphic condition. 
(3) Students in the 3D animated graphic condition exhibited more positive attitudes toward instruction than those in other treatment conditions (2D static, 2D animated, and 3D static conditions). No group differences were found in the posttest scores among four treatment conditions. However, students in the 3D animated condition took less time for information retrieval on posttest than those in other treatment conditions.

  10. Visualizing topography: Effects of presentation strategy, gender, and spatial ability

    NASA Astrophysics Data System (ADS)

    McAuliffe, Carla

    2003-10-01

    This study investigated the effect of different presentation strategies (2-D static visuals, 3-D animated visuals, and 3-D interactive, animated visuals) and gender on achievement, time-spent-on visual treatment, and attitude during a computer-based science lesson about reading and interpreting topographic maps. The study also examined the relationship of spatial ability and prior knowledge to gender, achievement, and time-spent-on visual treatment. Students enrolled in high school chemistry-physics were pretested and given two spatial ability tests. They were blocked by gender and randomly assigned to one of three levels of presentation strategy or the control group. After controlling for the effects of spatial ability and prior knowledge with analysis of covariance, three significant differences were found between the versions: (a) the 2-D static treatment group scored significantly higher on the posttest than the control group; (b) the 3-D animated treatment group scored significantly higher on the posttest than the control group; and (c) the 2-D static treatment group scored significantly higher on the posttest than the 3-D interactive animated treatment group. Furthermore, the 3-D interactive animated treatment group spent significantly more time on the visual screens than the 2-D static treatment group. Analyses of student attitudes revealed that most students felt the landform visuals in the computer-based program helped them learn, but not in a way they would describe as fun. Significant differences in attitude were found by treatment and by gender. In contrast to findings from other studies, no gender differences were found on either of the two spatial tests given in this study. Cognitive load, cognitive involvement, and solution strategy are offered as three key factors that may help explain the results of this study. 
Implications for instructional design include suggestions about the use of 2-D static, 3-D animated and 3-D interactive animations as well as a recommendation about the inclusion of pretests in similar instructional programs. Areas for future research include investigating the effects of combinations of presentation strategies, continuing to examine the role of spatial ability in science achievement, and gaining cognitive insights about what it is that students do when learning to read and interpret topographic maps.

  11. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    PubMed

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence and the use of a 3D cursor and joy-stick enabled fly through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine the utility in clinical practice.

  12. Comparative analysis of visual outcomes with 4 intraocular lenses: Monofocal, multifocal, and extended range of vision.

    PubMed

    Pedrotti, Emilio; Carones, Francesco; Aiello, Francesco; Mastropasqua, Rodolfo; Bruni, Enrico; Bonacci, Erika; Talli, Pietro; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio

    2018-02-01

    To compare the visual acuity, refractive outcomes, and quality of vision in patients with bilateral implantation of 4 intraocular lenses (IOLs). Department of Neurosciences, Biomedicine and Movement Sciences, Eye Clinic, University of Verona, Verona, and Carones Ophthalmology Center, Milano, Italy. Prospective case series. The study included patients who had bilateral cataract surgery with the implantation of 1 of 4 IOLs as follows: Tecnis 1-piece monofocal (monofocal IOL), Tecnis Symfony extended range of vision (extended-range-of-vision IOL), Restor +2.5 diopter (D) (+2.5 D multifocal IOL), and Restor +3.0 D (+3.0 D multifocal IOL). Visual acuity, refractive outcome, defocus curve, objective optical quality, contrast sensitivity, spectacle independence, and glare perception were evaluated 6 months after surgery. The study comprised 185 patients. The extended-range-of-vision IOL (55 patients) showed better distance visual outcomes than the monofocal IOL (30 patients) and high-addition apodized diffractive-refractive multifocal IOLs (P ≤ .002). The +3.0 D multifocal IOL (50 patients) showed the best near visual outcomes (P < .001). The +2.5 D multifocal IOL (50 patients) and extended-range-of-vision IOL provided significantly better intermediate visual outcomes than the other 2 IOLs, with significantly better vision for a defocus level of -1.5 D (P < .001). Better spectacle independence was shown for the +2.5 D multifocal IOL and extended-range-of-vision IOL (P < .001). The extended-range-of-vision IOL and +2.5 D multifocal IOL provided significantly better intermediate visual restoration after cataract surgery than the monofocal IOL and +3.0 D multifocal IOL, with significantly better quality of vision with the extended-range-of-vision IOL. Copyright © 2018 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  13. Scalable Multi-Platform Distribution of Spatial 3d Contents

    NASA Astrophysics Data System (ADS)

    Klimke, J.; Hagedorn, B.; Döllner, J.

    2013-09-01

Virtual 3D city models provide powerful user interfaces for the communication of 2D and 3D geoinformation. Providing high-quality visualization of massive 3D geoinformation in a scalable, fast, and cost-efficient manner is still a challenging task. Especially for mobile and web-based system environments, the software and hardware configurations of target systems differ significantly, which makes it hard to provide fast, visually appealing renderings of 3D data across a variety of platforms and devices. Current mobile or web-based solutions for 3D visualization usually require raw 3D scene data, such as triangle meshes together with textures, to be delivered from server to client, which severely limits the size and complexity of the models they can handle. In this paper, we introduce a new approach for the provisioning of massive virtual 3D city models on different platforms, namely web browsers, smartphones, and tablets, by means of an interactive map assembled from artificial oblique image tiles. The key concept is to synthesize such images of a virtual 3D city model with a 3D rendering service in a preprocessing step. This service encapsulates model handling and 3D rendering techniques for high-quality visualization of massive 3D models. Generating image tiles with this service shifts the 3D rendering process away from the client side, which provides major advantages: (a) the complexity of the 3D city model data is decoupled from data-transfer complexity; (b) the implementation of client applications is simplified significantly, as 3D rendering is encapsulated on the server side; and (c) 3D city models can easily be deployed for and used by a large number of concurrent users, leading to a high degree of scalability of the overall approach. All core 3D rendering techniques are performed on a dedicated 3D rendering server, and thin-client applications can be implemented compactly for various devices and platforms.
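The tiling step behind this approach can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a pre-rendered oblique image is cut into fixed-size tiles that a thin client can fetch individually.

```python
import numpy as np

def tile_grid(image, tile_size=256):
    """Split a pre-rendered oblique image into fixed-size tiles.

    Returns a dict mapping (row, col) -> tile array; edge tiles are
    zero-padded so every tile has identical dimensions.
    """
    h, w = image.shape[:2]
    rows = -(-h // tile_size)   # ceiling division
    cols = -(-w // tile_size)
    padded = np.zeros((rows * tile_size, cols * tile_size) + image.shape[2:],
                      dtype=image.dtype)
    padded[:h, :w] = image
    return {(r, c): padded[r * tile_size:(r + 1) * tile_size,
                           c * tile_size:(c + 1) * tile_size]
            for r in range(rows) for c in range(cols)}

# Example: a 600x900 RGB rendering yields a 3x4 grid of 256-pixel tiles.
tiles = tile_grid(np.zeros((600, 900, 3), dtype=np.uint8))
```

In a deployment following the paper's idea, each tile would be written out once during preprocessing and served as a static image, so the client never touches the 3D scene data.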

  14. Investigating the Use of 3d Geovisualizations for Urban Design in Informal Settlement Upgrading in South Africa

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Coetzee, S.; Çöltekin, A.

    2016-06-01

    Informal settlements are a common occurrence in South Africa, and to improve in-situ circumstances of communities living in informal settlements, upgrades and urban design processes are necessary. Spatial data and maps are essential throughout these processes to understand the current environment, plan new developments, and communicate the planned developments. All stakeholders need to understand maps to actively participate in the process. However, previous research demonstrated that map literacy was relatively low for many planning professionals in South Africa, which might hinder effective planning. Because 3D visualizations resemble the real environment more than traditional maps, many researchers posited that they would be easier to interpret. Thus, our goal is to investigate the effectiveness of 3D geovisualizations for urban design in informal settlement upgrading in South Africa. We consider all involved processes: 3D modelling, visualization design, and cognitive processes during map reading. We found that procedural modelling is a feasible alternative to time-consuming manual modelling, and can produce high quality models. When investigating the visualization design, the visual characteristics of 3D models and relevance of a subset of visual variables for urban design activities of informal settlement upgrades were qualitatively assessed. The results of three qualitative user experiments contributed to understanding the impact of various levels of complexity in 3D city models and map literacy of future geoinformatics and planning professionals when using 2D maps and 3D models. The research results can assist planners in designing suitable 3D models that can be used throughout all phases of the process.

  15. 3D Visualization of Volcanic Ash Dispersion Prediction with Spatial Information Open Platform in Korea

    NASA Astrophysics Data System (ADS)

    Youn, J.; Kim, T.

    2016-06-01

    Visualization of disaster dispersion predictions enables decision makers and civilians to prepare for a disaster and to reduce damage by showing realistic simulation results. With advances in GIS technology and in volcanic disaster prediction algorithms, predicted disaster dispersions can be displayed with spatial information. However, most volcanic ash dispersion predictions are displayed in 2D. 2D visualization is limited for understanding a realistic dispersion prediction, since height can be represented only by colour. For volcanic ash in particular, 3D visualization of the dispersion prediction is essential, since ash clouds can cause serious aircraft accidents. In this paper, we deal with 3D visualization techniques for volcanic ash dispersion prediction using a spatial information open platform in Korea. First, time-series 3D positions and concentrations of volcanic ash are calculated with the WRF (Weather Research and Forecasting) model and a modified Fall3D algorithm. For 3D visualization, we propose three techniques: 'Cube in the Air', 'Cube in the Cube', and 'Semi-transparent Plane in the Air'. 'Cube in the Air' places semi-transparent cubes whose colours depend on the particle concentration. A large cube is not realistic when zoomed in, so each cube is divided into smaller cubes with an octree algorithm; this is the 'Cube in the Cube' technique. For more realistic visualization, we apply the 'Semi-transparent Volcanic Ash Plane', which shows the ash as fog. The results are displayed in 'V-World', a spatial information open platform implemented by the Korean government. The proposed techniques were adopted in the Volcanic Disaster Response System implemented by the Korean Ministry of Public Safety and Security.
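The 'Cube in the Cube' idea, octree subdivision of a large concentration cube into smaller ones, can be sketched as follows. This is a hypothetical illustration; the concentration function, tolerance, and depth limit are invented for the example, not taken from the paper.

```python
import numpy as np

def subdivide(conc, origin, size, depth, max_depth=3, tol=0.05):
    """Recursively subdivide a cubic cell while the ash-concentration
    field varies too much inside it (octree refinement).

    conc   -- function (x, y, z) -> concentration
    origin -- (x, y, z) corner of the cell
    size   -- edge length of the cell
    Returns a list of (origin, size, mean_concentration) leaf cubes.
    """
    # Sample the eight corners to estimate variation inside the cell.
    corners = [tuple(origin[i] + size * b[i] for i in range(3))
               for b in np.ndindex(2, 2, 2)]
    values = [conc(*p) for p in corners]
    if depth >= max_depth or max(values) - min(values) < tol:
        return [(origin, size, float(np.mean(values)))]
    half = size / 2.0
    leaves = []
    for b in np.ndindex(2, 2, 2):   # recurse into the eight octants
        child = tuple(origin[i] + half * b[i] for i in range(3))
        leaves += subdivide(conc, child, half, depth + 1, max_depth, tol)
    return leaves

# Hypothetical smooth plume: concentration falls off with height z.
cubes = subdivide(lambda x, y, z: np.exp(-z), (0.0, 0.0, 0.0), 1.0, 0)
```

Each leaf cube would then be drawn as a semi-transparent box whose colour encodes its mean concentration, with refinement concentrated where the field changes quickly.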

  16. 3D Visualization Types in Multimedia Applications for Science Learning: A Case Study for 8th Grade Students in Greece

    ERIC Educational Resources Information Center

    Korakakis, G.; Pavlatou, E. A.; Palyvos, J. A.; Spyrellis, N.

    2009-01-01

    This research aims to determine whether the use of specific types of visualization (3D illustration, 3D animation, and interactive 3D animation), combined with narration and text, contributes to the learning process of 13- and 14-year-old students in science courses. The study was carried out with 212 8th grade students in Greece. This…

  17. High resolution renderings and interactive visualization of the 2006 Huntington Beach experiment

    NASA Astrophysics Data System (ADS)

    Im, T.; Nayak, A.; Keen, C.; Samilo, D.; Matthews, J.

    2006-12-01

    The Visualization Center at the Scripps Institution of Oceanography investigates innovative ways to graphically represent interactive 3D virtual landscapes and to produce high-resolution, high-quality renderings of Earth sciences data and of the sensors and instruments used to collect the data. Among the Visualization Center's most recent work is the visualization of the Huntington Beach experiment, a study launched in July 2006 by the Southern California Ocean Observing System (http://www.sccoos.org/) to record and synthesize data of the Huntington Beach coastal region. Researchers and students at the Visualization Center created visual presentations that combine bathymetric data provided by SCCOOS with USGS aerial photography and with 3D polygonal models of sensors created in Maya into an interactive 3D scene using the Fledermaus suite of visualization tools (http://www.ivs3d.com). In addition, the Visualization Center has produced high-definition (HD) animations of SCCOOS sensor instruments (e.g., REMUS, drifters, spray glider, nearshore mooring, OCSD/USGS mooring, and CDIP mooring) using the Maya modeling and animation software, rendered over multiple nodes of the OptIPuter Visualization Cluster at Scripps. These visualizations aim to provide researchers with a broader context of sensor locations relative to geologic characteristics, to serve as an educational resource for informal education settings and for increasing public awareness, and to aid researchers' proposals and presentations. The visualizations are available for download on the Visualization Center website at http://siovizcenter.ucsd.edu/sccoos/hb2006.php.

  18. Use of cues in virtual reality depends on visual feedback.

    PubMed

    Fulvio, Jacqueline M; Rokers, Bas

    2017-11-22

    3D motion perception is of central importance to daily life. However, when tested in laboratory settings, sensitivity to 3D motion signals is found to be poor, leading to the view that heuristics and prior assumptions are critical for 3D motion perception. Here we explore an alternative: sensitivity to 3D motion signals is context-dependent and must be learned based on explicit visual feedback in novel environments. The need for action-contingent visual feedback is well-established in the developmental literature. For example, young kittens that are passively moved through an environment, but unable to move through it themselves, fail to develop accurate depth perception. We find that these principles also obtain in adult human perception. Observers that do not experience visual consequences of their actions fail to develop accurate 3D motion perception in a virtual reality environment, even after prolonged exposure. By contrast, observers that experience the consequences of their actions improve performance based on available sensory cues to 3D motion. Specifically, we find that observers learn to exploit the small motion parallax cues provided by head jitter. Our findings advance understanding of human 3D motion processing and form a foundation for future study of perception in virtual and natural 3D environments.

  19. Foveated model observers to predict human performance in 3D images

    NASA Astrophysics Data System (ADS)

    Lago, Miguel A.; Abbey, Craig K.; Eckstein, Miguel P.

    2017-03-01

    We evaluate whether predicting human observer performance in 3D search requires model observers that take peripheral human visual processing into account (foveated models). We show that two different 3D tasks, free search and location-known detection, influence the relative human visual detectability of two signals of different sizes in synthetic backgrounds mimicking the noise found in 3D digital breast tomosynthesis. One of the signals resembled a microcalcification (a small, bright sphere), while the other was designed to look like a mass (a larger Gaussian blob). We evaluated current standard model observers (Hotelling; Channelized Hotelling; non-prewhitening matched filter with eye filter, NPWE; and non-prewhitening matched filter, NPW) and showed that they incorrectly predict the relative detectability of the two signals in 3D search. We propose a new model observer (3D Foveated Channelized Hotelling Observer) that incorporates the properties of the visual system over a large visual field (fovea and periphery). We show that the foveated model observer can accurately predict the rank order of detectability of the signals in 3D images for each task. Together, these results motivate the use of a new generation of foveated model observers for predicting image quality in search tasks in 3D imaging modalities such as digital breast tomosynthesis and computed tomography.

  20. 3D Visualizations of Abstract DataSets

    DTIC Science & Technology

    2010-08-01

    contrasts no shadows, drop shadows and drop lines. 15. SUBJECT TERMS 3D displays, 2.5D displays, abstract network visualizations, depth perception, human...altitude perception in airspace management and airspace route planning—simulated reality visualizations that employ altitude and heading as well as...cues employed by display designers for depicting real-world scenes on a flat surface can be applied to create a perception of depth for abstract

  1. Integrating 3D Visualization and GIS in Planning Education

    ERIC Educational Resources Information Center

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  2. Sub aquatic 3D visualization and temporal analysis utilizing ArcGIS online and 3D applications

    EPA Science Inventory

    We used 3D Visualization tools to illustrate some complex water quality data we’ve been collecting in the Great Lakes. These data include continuous tow data collected from our research vessel the Lake Explorer II, and continuous water quality data collected from an autono...

  3. A web-based 3D geological information visualization system

    NASA Astrophysics Data System (ADS)

    Song, Renbo; Jiang, Nan

    2013-03-01

    Construction of 3D geological visualization systems has attracted increasing attention in the GIS, computer modeling, simulation, and visualization fields. Such systems not only effectively support geological interpretation and analysis work, but also help raise the level of professional geosciences education. In this paper, an applet-based method is introduced for developing a web-based 3D geological information visualization system. The main aim of this paper is to explore a rapid and low-cost development method for constructing such a system. First, borehole data stored in Excel spreadsheets were extracted and stored in a SQL Server database on a web server. Second, the JDBC data-access component was used to provide access to the database. Third, the user interface was implemented with an applet component embedded in a JSP page, and the 3D viewing and querying functions were implemented with the PickCanvas of Java3D. Last, borehole data acquired from a geological survey were used to test the system, and the test results show that the methods of this paper have practical application value.

  4. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    NASA Astrophysics Data System (ADS)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  5. K-t GRAPPA-accelerated 4D flow MRI of liver hemodynamics: influence of different acceleration factors on qualitative and quantitative assessment of blood flow.

    PubMed

    Stankovic, Zoran; Fink, Jury; Collins, Jeremy D; Semaan, Edouard; Russe, Maximilian F; Carr, James C; Markl, Michael; Langer, Mathias; Jung, Bernd

    2015-04-01

    We sought to evaluate the feasibility of k-t parallel imaging for accelerated 4D flow MRI in the hepatic vascular system by investigating the impact of different acceleration factors. k-t GRAPPA accelerated 4D flow MRI of the liver vasculature was evaluated in 16 healthy volunteers at 3T with acceleration factors R = 3, R = 5, and R = 8 (2.0 × 2.5 × 2.4 mm³, TR = 82 ms), and R = 5 (TR = 41 ms); GRAPPA R = 2 was used as the reference standard. Qualitative flow analysis included grading of 3D streamlines and time-resolved particle traces. Quantitative evaluation assessed velocities, net flow, and wall shear stress (WSS). Significant scan time savings were realized for all acceleration factors compared to standard GRAPPA R = 2 (21-71%) (p < 0.001). Quantification of velocities and net flow offered similar results between k-t GRAPPA R = 3 and R = 5 compared to standard GRAPPA R = 2. Significantly increased leakage artifacts and noise were seen between standard GRAPPA R = 2 and k-t GRAPPA R = 8 (p < 0.001), with significant underestimation of peak velocities and WSS of up to 31% in the hepatic arterial system (p < 0.05). WSS was significantly underestimated by up to 13% in all vessels of the portal venous system for k-t GRAPPA R = 5, while significantly higher values were observed for the same acceleration with higher temporal resolution in two veins (p < 0.05). k-t acceleration of 4D flow MRI is feasible for liver hemodynamic assessment with acceleration factors R = 3 and R = 5, resulting in a scan time reduction of at least 40% with similar quantitation of liver hemodynamics compared with GRAPPA R = 2.

  6. 4D ASL-based MR angiography for visualization of distal arteries and leptomeningeal collateral vessels in moyamoya disease: a comparison of techniques.

    PubMed

    Togao, Osamu; Hiwatashi, Akio; Obara, Makoto; Yamashita, Koji; Momosaka, Daichi; Nishimura, Ataru; Arimura, Koichi; Hata, Nobuhiro; Yoshimoto, Koji; Iihara, Koji; Van Cauteren, Marc; Honda, Hiroshi

    2018-05-08

    To evaluate the performance of four-dimensional pseudo-continuous arterial spin labeling (4D-pCASL)-based angiography using CENTRA-keyhole and view sharing (4D-PACK) in the visualization of flow dynamics in distal cerebral arteries and leptomeningeal anastomosis (LMA) collaterals in moyamoya disease in comparison with contrast inherent inflow-enhanced multiphase angiography (CINEMA), with reference to digital subtraction angiography (DSA). Thirty-two cerebral hemispheres from 19 patients with moyamoya disease (mean age, 29.7 ± 19.6 years; five males, 14 females) underwent both 4D-MR angiography and DSA. Qualitative evaluations included the visualization of anterograde middle cerebral artery (MCA) flow and retrograde flow via LMA collaterals with reference to DSA. Quantitative evaluations included assessments of the contrast-to-noise ratio (CNR) on these vessels. The linear mixed-effect model was used to compare the 4D-PACK and CINEMA methods. The vessel visualization scores were significantly higher with 4D-PACK than with CINEMA in the visualization of anterograde flow for both Observer 1 (CINEMA, 3.53 ± 1.39; 4D-PACK, 4.53 ± 0.80; p < 0.0001) and Observer 2 (CINEMA, 3.50±1.39; 4D-PACK, 4.31 ± 0.86; p = 0.0009). The scores were higher with 4D-PACK than with CINEMA in the visualization of retrograde flow for both Observer 1 (CINEMA, 3.44 ± 1.05; 4D-PACK, 4.47 ± 0.88; p < 0.0001) and Observer 2 (CINEMA, 3.19 ± 1.20; 4D-PACK, 4.38 ± 0.91; p < 0.0001). The maximum CNR in the anterograde flow was higher in 4D-PACK (40.1 ± 16.1, p = 0.0001) than in CINEMA (27.0 ± 16.6). The maximum CNR in the retrograde flow was higher in 4D-PACK (36.1 ± 10.0, p < 0.0001) than in CINEMA (15.4 ± 8.0). The 4D-PACK provided better visualization and higher CNRs in distal cerebral arteries and LMA collaterals compared with CINEMA in patients with this disease. • The 4D-PACK enables good visualization of distal cerebral arteries in moyamoya disease. 
• The 4D-PACK enables direct visualization of leptomeningeal collateral vessels in moyamoya disease. 
• Vessel visualization by 4D-PACK can be useful in assessing cerebral hemodynamics.

  7. Solid object visualization of 3D ultrasound data

    NASA Astrophysics Data System (ADS)

    Nelson, Thomas R.; Bailey, Michael J.

    2000-04-01

    Visualization of volumetric medical data is challenging. Rapid-prototyping (RP) equipment that produces solid prototype models of computer-generated structures is directly applicable to the visualization of medical anatomic data. The purpose of this study was to develop methods for transferring 3D ultrasound (3DUS) data to RP equipment for visualization of patient anatomy. 3DUS data were acquired using research and clinical scanning systems. Scaling information was preserved, and the data were segmented using threshold and local operators to extract features of interest, converted from voxel raster-coordinate format to a set of polygons representing an iso-surface, and transferred to the RP machine to create a solid 3D object. Fabrication required 30 to 60 minutes depending on object size and complexity. After creation, the model could be touched and viewed. A '3D visualization hardcopy device' has advantages for conveying spatial relations compared to visualization on computer display systems. The hardcopy model may be used for teaching or therapy planning. Objects may be produced at the exact dimensions of the original or scaled up (or down) to better match the viewer's reference frame. RP models represent a useful means of communicating important information in a tangible fashion to patients and physicians.

  8. Changing the Learning Curve in Novice Laparoscopists: Incorporating Direct Visualization into the Simulation Training Program.

    PubMed

    Dawidek, Mark T; Roach, Victoria A; Ott, Michael C; Wilson, Timothy D

    A major challenge in laparoscopic surgery is the lack of depth perception. With the development and continued improvement of 3D video technology, the potential benefit of restoring 3D vision to laparoscopy has received substantial attention from the surgical community. Despite this, procedures conducted under 2D vision remain the standard of care, and trainees must become proficient in 2D laparoscopy. This study aims to determine whether incorporating 3D vision into a 2D laparoscopic simulation curriculum accelerates skill acquisition in novices. Postgraduate year-1 surgical specialty residents (n = 15) at the Schulich School of Medicine and Dentistry at Western University were randomized into 1 of 2 groups. The control group practiced the Fundamentals of Laparoscopic Surgery peg-transfer task to proficiency exclusively under standard 2D laparoscopy conditions. The experimental group first practiced peg transfer under 3D conditions, with direct visualization of the working field. Upon reaching proficiency, this group underwent a perceptual switch to standard 2D laparoscopy conditions and once again trained to proficiency. Incorporating 3D direct visualization before training under standard 2D conditions significantly (p < 0.05) reduced the total training time to proficiency, by 10.9 minutes or 32.4%. There was no difference in the total number of repetitions to proficiency. The data were also used to generate learning curves for each respective training protocol. An adaptive learning approach that incorporates 3D direct visualization into a 2D laparoscopic simulation curriculum accelerates skill acquisition. This is in contrast to previous work, possibly owing to the proficiency-based methodology employed, and has implications for resource savings in surgical training. Crown Copyright © 2016. Published by Elsevier Inc. All rights reserved.

  9. The performance & flow visualization studies of three-dimensional (3-D) wind turbine blade models

    NASA Astrophysics Data System (ADS)

    Sutrisno, Prajitno, Purnomo, W., Setyawan B.

    2016-06-01

    Recently, studies on the design of 3-D wind turbine blades have received less attention even though 3-D blade products are widely sold. In contrast, advanced studies of 3-D helicopter blade tips have been conducted rigorously. Modeling studies of wind turbine blades mostly assume that blade spanwise sections behave as independent two-dimensional airfoils, implying that there is no exchange of momentum in the spanwise direction. Moreover, flow visualization experiments are infrequently conducted. Therefore, modeling studies of wind turbine blades with visualization experiments are needed to obtain a better understanding. The purpose of this study is to investigate the performance of 3-D wind turbine blade models with backward and forward sweep and to verify the flow patterns using flow visualization. In this research, the blade models are constructed based on the twist and chord distributions following Schmitz's formula. Forward and backward sweep is added to the rotating blades. The added sweep is expected to enhance or diminish outward flow disturbance or stall-development propagation on the spanwise blade surfaces, giving a better blade design. Some combinations, i.e., blades with backward sweep, provide a better 3-D favorable rotational force of the rotor system. The performance of the 3-D wind turbine system model is measured by a torque meter employing Prony's braking system. Furthermore, the 3-D flow patterns around the rotating blade models are investigated by applying the tuft-visualization technique to study the appearance of laminar, separated, and boundary-layer flow patterns surrounding the 3-dimensional blade system.

  10. How Spatial Abilities and Dynamic Visualizations Interplay When Learning Functional Anatomy with 3D Anatomical Models

    ERIC Educational Resources Information Center

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material…

  11. Affective three-dimensional brain-computer interface created using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-12-01

    To avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we applied a prism array-based display when presenting three-dimensional (3-D) objects. Emotional pictures were used as visual stimuli to increase the signal-to-noise ratios of steady-state visually evoked potentials (SSVEPs), because involuntarily motivated selective attention driven by affective mechanisms can enhance SSVEP amplitudes, thus producing increased interaction efficiency. Ten male and nine female participants voluntarily participated in our experiments. Participants were asked to control objects under three viewing conditions: two-dimensional (2-D), stereoscopic 3-D, and prism. The participants performed each condition in a counter-balanced order. One-way repeated measures analysis of variance showed significant increases in the positive predictive values in the prism condition compared to the 2-D and 3-D conditions. Participants' subjective ratings of realness and engagement were also significantly greater in the prism condition than in the 2-D and 3-D conditions, while the ratings for visual fatigue were significantly lower in the prism condition than in the 3-D condition. The proposed methods are expected to enhance the sense of reality in 3-D space without causing critical visual fatigue. In addition, people who are especially susceptible to stereoscopic 3-D may be able to use the affective brain-computer interface.

  12. Learning-based saliency model with depth information.

    PubMed

    Ma, Chih-Yao; Hang, Hsueh-Ming

    2015-01-01

    Most previous studies on visual saliency focused on two-dimensional (2D) scenes. Due to the rapidly growing three-dimensional (3D) video applications, it is very desirable to know how depth information affects human visual attention. In this study, we first conducted eye-fixation experiments on 3D images. Our fixation data set comprises 475 3D images and 16 subjects. We used a Tobii TX300 eye tracker (Tobii, Stockholm, Sweden) to track the eye movement of each subject. In addition, this database contains 475 computed depth maps. Due to the scarcity of public-domain 3D fixation data, this data set should be useful to the 3D visual attention research community. Then, a learning-based visual attention model was designed to predict human attention. In addition to the popular 2D features, we included the depth map and its derived features. The results indicate that the extra depth information can enhance the saliency estimation accuracy specifically for close-up objects hidden in a complex-texture background. In addition, we examined the effectiveness of various low-, mid-, and high-level features on saliency prediction. Compared with both 2D and 3D state-of-the-art saliency estimation models, our methods show better performance on the 3D test images. The eye-tracking database and the MATLAB source codes for the proposed saliency model and evaluation methods are available on our website.
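The learning step, at its simplest, fits weights that combine a 2D saliency map with a depth-derived feature. The sketch below uses least squares on synthetic maps and is only an illustration of the idea, not the authors' model; all data and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for per-pixel feature maps of one image:
# a 2D saliency map and a depth map.
h, w = 32, 32
sal2d = rng.random((h, w))
depth = rng.random((h, w))
# Fabricated "ground-truth" fixation density: a noisy mix of the two.
fix = 0.6 * sal2d + 0.4 * depth + 0.05 * rng.random((h, w))

# Learn per-feature weights (plus an intercept) by least squares.
X = np.stack([sal2d.ravel(), depth.ravel(), np.ones(h * w)], axis=1)
coef, *_ = np.linalg.lstsq(X, fix.ravel(), rcond=None)

# Predicted saliency map from the learned combination.
pred = (X @ coef).reshape(h, w)
```

A real model of this kind would use many images, richer low-, mid-, and high-level features, and a stronger learner than plain least squares, but the structure, features in, weighted saliency map out, is the same.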

  13. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Data Analysis and Visualization; International Research Training Group ``Visualization of Large and Unstructured Data Sets,'' University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
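User-guided clustering of expression profiles is typically built on a standard algorithm such as k-means. A self-contained sketch (synthetic data; parameters and cluster structure invented for the example):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: X has one row per sample (e.g. one spatial
    location) and one column per measured gene."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster empties.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated synthetic expression clusters, 3 "genes" each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 3)), rng.normal(5, 0.1, (50, 3))])
labels, centers = kmeans(X, k=2)
```

In a framework like the one described, the resulting labels would be linked back to the 3D visualization so that cluster boundaries can be inspected, and the choice of k refined, interactively.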

  14. Incorporating 3-dimensional models in online articles.

    PubMed

    Cevidanes, Lucia H S; Ruellas, Antonio C O; Jomier, Julien; Nguyen, Tung; Pieper, Steve; Budin, Francois; Styner, Martin; Paniagua, Beatriz

    2015-05-01

    The aims of this article are to introduce the capability to view and interact with 3-dimensional (3D) surface models in online publications, and to describe how to prepare surface models for such online 3D visualizations. Three-dimensional image analysis methods include image acquisition, construction of surface models, registration in a common coordinate system, visualization of overlays, and quantification of changes. Cone-beam computed tomography scans were acquired as volumetric images that can be visualized as 3D projected images or used to construct polygonal meshes or surfaces of specific anatomic structures of interest. The anatomic structures of interest in the scans can be labeled with color (3D volumetric label maps), and then the scans are registered in a common coordinate system using a target region as the reference. The registered 3D volumetric label maps can be saved in .obj, .ply, .stl, or .vtk file formats and used for overlays, quantification of differences in each of the 3 planes of space, or color-coded graphic displays of 3D surface distances. All registered 3D surface models in this study were saved in .vtk file format and loaded in the Elsevier 3D viewer. In this study, we describe possible ways to visualize the surface models constructed from cone-beam computed tomography images using 2D and 3D figures. The 3D surface models are available in the article's online version for viewing and downloading using the reader's software of choice. These 3D graphic displays are represented in the print version as 2D snapshots. Overlays and color-coded distance maps can be displayed using the reader's software of choice, allowing graphic assessment of the location and direction of changes or morphologic differences relative to the structure of reference. The interpretation of 3D overlays and quantitative color-coded maps requires basic knowledge of 3D image analysis. 
When submitting manuscripts, authors can now upload 3D models that will allow readers to interact with or download them. Such interaction with 3D models in online articles now will give readers and authors better understanding and visualization of the results. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
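    The color-coded 3D surface distance maps described above reduce, at their core, to a per-vertex closest-point distance between two registered surfaces. A minimal stdlib sketch (function and variable names are illustrative, not from the article's toolchain):

```python
import math

def closest_point_distances(source_pts, target_pts):
    """For each vertex of the source surface, return the distance to the
    nearest vertex of the target surface (brute-force search)."""
    return [min(math.dist(s, t) for t in target_pts) for s in source_pts]

# Toy vertex clouds standing in for two registered surface models.
pre  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
post = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0)]
print(closest_point_distances(pre, post))  # [1.0, 0.0]
```

    In practice these distances are mapped onto a color scale over the mesh; production tools use spatial indexing and point-to-triangle distances rather than this brute-force vertex search.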

  15. Exploratory Climate Data Visualization and Analysis Using DV3D and UVCDAT

    NASA Technical Reports Server (NTRS)

    Maxwell, Thomas

    2012-01-01

    Earth system scientists are being inundated by an explosion of data generated by ever-increasing resolution in both global models and remote sensors. Advanced tools for accessing, analyzing, and visualizing very large and complex climate data are required to maintain rapid progress in Earth system research. To meet this need, NASA, in collaboration with the Ultra-scale Visualization Climate Data Analysis Tools (UVCDAT) consortium, is developing exploratory climate data analysis and visualization tools which provide data analysis capabilities for the Earth System Grid (ESG). This paper describes DV3D, a UV-CDAT package that enables exploratory analysis of climate simulation and observation datasets. DV3D provides user-friendly interfaces for visualization and analysis of climate data at a level appropriate for scientists. It features workflow interfaces, interactive 4D data exploration, hyperwall and stereo visualization, automated provenance generation, and parallel task execution. DV3D's integration with CDAT's climate data management system (CDMS) and other climate data analysis tools provides a wide range of high performance climate data analysis operations. DV3D expands the scientists' toolbox by incorporating a suite of rich new exploratory visualization and analysis methods for addressing the complexity of climate datasets.

  16. 4D phase contrast flow imaging for in-stent flow visualization and assessment of stent patency in peripheral vascular stents--a phantom study.

    PubMed

    Bunck, Alexander C; Jüttner, Alena; Kröger, Jan Robert; Burg, Matthias C; Kugel, Harald; Niederstadt, Thomas; Tiemann, Klaus; Schnackenburg, Bernhard; Crelier, Gerard R; Heindel, Walter; Maintz, David

    2012-09-01

    4D phase contrast flow imaging is increasingly used to study the hemodynamics in various vascular territories and pathologies. The aim of this study was to assess the feasibility and validity of MRI based 4D phase contrast flow imaging for the evaluation of in-stent blood flow in 17 commonly used peripheral stents. 17 different peripheral stents were implanted into an MR-compatible flow phantom. In-stent visibility, maximal velocity and flow visualization were assessed and estimates of in-stent patency obtained from 4D phase contrast flow data sets were compared to conventional 3D contrast-enhanced magnetic resonance angiography (CE-MRA) as well as 2D PC flow measurements. In all but 3 of the tested stents, time-resolved 3D particle traces could be visualized inside the stent lumen. Quality of 4D flow visualization and CE-MRA images depended on stent type and stent orientation relative to the magnetic field. Compared to the visible lumen area determined by 3D CE-MRA, estimates of lumen patency derived from 4D flow measurements were significantly higher and less dependent on stent type. A higher number of stents could be assessed for in-stent patency by 4D phase contrast flow imaging (n=14) than by 2D phase contrast flow imaging (n=10). 4D phase contrast flow imaging in peripheral vascular stents is feasible and appears advantageous over conventional 3D contrast-enhanced MR angiography and 2D phase contrast flow imaging. It allows for in-stent flow visualization and flow quantification with varying quality depending on stent type. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
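    The in-stent patency estimates compared above amount to an area ratio: the lumen area still visible in the image data over the nominal stent lumen. A hedged sketch with made-up numbers (not values from the study):

```python
def lumen_patency(visible_area_mm2, nominal_area_mm2):
    """Patency estimate as the fraction of the nominal stent lumen
    that remains visible in the image data."""
    return visible_area_mm2 / nominal_area_mm2

# Illustrative only: 4D flow typically recovered more of the lumen than
# CE-MRA in stents that cause susceptibility artifacts.
print(round(lumen_patency(42.0, 50.0), 2))  # 0.84
```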

  17. Ray-based approach to integrated 3D visual communication

    NASA Astrophysics Data System (ADS)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in the next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach offers a simple and straightforward solution to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of the tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.
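    A common neutral representation for such ray sets is the classic two-plane parameterization: each ray is encoded by its intersections (u, v) and (s, t) with two parallel reference planes. A minimal sketch of that encoding (plane positions and names are illustrative, not from the paper):

```python
def ray_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Parameterize a light ray by its intersections with two parallel
    planes z = z_uv and z = z_st (the two-plane (u, v, s, t) form)."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    tu = (z_uv - oz) / dz
    ts = (z_st - oz) / dz
    return (ox + tu * dx, oy + tu * dy,   # (u, v)
            ox + ts * dx, oy + ts * dy)   # (s, t)

# A ray from the origin traveling along +z with a slight x-tilt:
print(ray_two_plane((0.0, 0.0, 0.0), (0.5, 0.0, 1.0)))  # (0.0, 0.0, 0.5, 0.0)
```

    Rendering a novel view then reduces to looking up (or interpolating) the stored radiance of the rays that pass through the virtual camera.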

  18. The TINS Lecture. The parietal association cortex in depth perception and visual control of hand action.

    PubMed

    Sakata, H; Taira, M; Kusunoki, M; Murata, A; Tanaka, Y

    1997-08-01

    Recent neurophysiological studies in alert monkeys have revealed that the parietal association cortex plays a crucial role in depth perception and visually guided hand movement. The following five classes of parietal neurons covering various aspects of these functions have been identified: (1) depth-selective visual-fixation (VF) neurons of the inferior parietal lobule (IPL), representing egocentric distance; (2) depth-movement sensitive (DMS) neurons of V5A and the ventral intraparietal (VIP) area representing direction of linear movement in 3-D space; (3) depth-rotation-sensitive (RS) neurons of V5A and the posterior parietal (PP) area representing direction of rotary movement in space; (4) visually responsive manipulation-related neurons (visual-dominant or visual-and-motor type) of the anterior intraparietal (AIP) area, representing 3-D shape or orientation (or both) of objects for manipulation; and (5) axis-orientation-selective (AOS) and surface-orientation-selective (SOS) neurons in the caudal intraparietal sulcus (cIPS) sensitive to binocular disparity and representing the 3-D orientation of the longitudinal axes and flat surfaces, respectively. Some AOS and SOS neurons are selective in both orientation and shape. Thus the dorsal visual pathway is divided into at least two subsystems, V5A, PP and VIP areas for motion vision and V6, LIP and cIPS areas for coding position and 3-D features. The cIPS sends the signals of 3-D features of objects to the AIP area, which is reciprocally connected to the ventral premotor (F5) area and plays an essential role in matching hand orientation and shaping with 3-D objects for manipulation.

  19. Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.

    PubMed

    Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn

    2016-12-21

    The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies – Three.js, D3.js and PHP – as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.

  20. Web-based hybrid-dimensional Visualization and Exploration of Cytological Localization Scenarios.

    PubMed

    Kovanci, Gökhan; Ghaffar, Mehmood; Sommer, Björn

    2016-10-01

    The CELLmicrocosmos 4.2 PathwayIntegration (CmPI) is a tool which provides hybrid-dimensional visualization and analysis of intracellular protein and gene localizations in the context of a virtual 3D environment. The tool is based on Java/Java3D/JOGL and provides a standalone application compatible with all relevant operating systems. However, it requires Java and the local installation of the software. Here we present the prototype of an alternative web-based visualization approach, using Three.js and D3.js. In this way it is possible to visualize and explore CmPI-generated localization scenarios including networks mapped to 3D cell components by just providing a URL to a collaboration partner. This publication describes the integration of the different technologies - Three.js, D3.js and PHP - as well as an application case: a localization scenario of the citrate cycle. The CmPI web viewer is available at: http://CmPIweb.CELLmicrocosmos.org.

  1. Real-time dose calculation and visualization for the proton therapy of ocular tumours

    NASA Astrophysics Data System (ADS)

    Pfeiffer, Karsten; Bendl, Rolf

    2001-03-01

    A new real-time dose calculation and visualization was developed as part of the new 3D treatment planning tool OCTOPUS for proton therapy of ocular tumours within a national research project together with the Hahn-Meitner Institut Berlin. The implementation resolves the common separation between parameter definition, dose calculation and evaluation and allows a direct examination of the expected dose distribution while adjusting the treatment parameters. The new tool allows the therapist to move the desired dose distribution under visual control in 3D to the appropriate place. The visualization of the resulting dose distribution as a 3D surface model, on any 2D slice or on the surface of specified ocular structures is done automatically when adapting parameters during the planning process. In addition, approximate dose volume histograms may be calculated with little extra time. The dose distribution is calculated and visualized in 200 ms with an accuracy of 6% for the 3D isodose surfaces and 8% for other objects. This paper discusses the advantages and limitations of this new approach.
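    The approximate dose-volume histograms mentioned above are, in cumulative form, just the fraction of voxels receiving at least each dose level. A minimal stdlib sketch (toy numbers, not from OCTOPUS):

```python
def cumulative_dvh(dose_values, thresholds):
    """Cumulative dose-volume histogram: for each dose threshold, the
    fraction of voxels receiving at least that dose."""
    n = len(dose_values)
    return [sum(d >= t for d in dose_values) / n for t in thresholds]

doses = [10, 20, 30, 40, 50, 60, 70, 80]   # Gy, toy voxel doses
print(cumulative_dvh(doses, [0, 45, 65]))  # [1.0, 0.5, 0.25]
```

    Real planning systems evaluate this over structure-masked voxel sets and many dose bins, but the per-threshold counting shown here is the underlying operation.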

  2. Affective SSVEP BCI to effectively control 3D objects by using a prism array-based display

    NASA Astrophysics Data System (ADS)

    Mun, Sungchul; Park, Min-Chul

    2014-06-01

    3D objects with depth information can provide many benefits to users in education, surgery, and interactions. In particular, many studies have been done to enhance sense of reality in 3D interaction. Viewing and controlling stereoscopic 3D objects with crossed or uncrossed disparities, however, can cause visual fatigue due to the vergence-accommodation conflict generally accepted in 3D research fields. In order to avoid the vergence-accommodation mismatch and provide a strong sense of presence to users, we apply a prism array-based display to presenting 3D objects. Emotional pictures were used as visual stimuli in control panels to increase information transfer rate and reduce false positives in controlling 3D objects. Involuntarily motivated selective attention by affective mechanism can enhance steady-state visually evoked potential (SSVEP) amplitude and lead to increased interaction efficiency. More attentional resources are allocated to affective pictures with high valence and arousal levels than to normal visual stimuli such as white-and-black oscillating squares and checkerboards. Among representative BCI control components (i.e., event-related potentials (ERP), event-related (de)synchronization (ERD/ERS), and SSVEP), SSVEP-based BCI was chosen for the following reasons. It shows high information transfer rates, users can learn to control the BCI system within a few minutes, and only a few electrodes are required to obtain brainwave signals reliable enough to capture users' intentions. The proposed BCI methods are expected to enhance sense of reality in 3D space without causing critical visual fatigue to occur. In addition, people who are very susceptible to (auto) stereoscopic 3D may be able to use the affective BCI.
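    An SSVEP-based BCI typically scores each candidate flicker frequency by the signal power at that frequency and selects the strongest. A minimal sketch using the Goertzel algorithm on synthetic data (the abstract does not specify the authors' detection method; this is one standard approach):

```python
import math

def goertzel_power(samples, target_freq, sample_rate):
    """Signal power at one frequency bin (Goertzel algorithm), a common
    way an SSVEP-based BCI scores each flicker frequency."""
    k = round(len(samples) * target_freq / sample_rate)
    w = 2.0 * math.pi * k / len(samples)
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Synthetic "EEG": a pure 12 Hz flicker response sampled at 256 Hz.
fs, f = 256, 12.0
sig = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
# The 12 Hz bin should dominate a competing 15 Hz candidate:
assert goertzel_power(sig, 12.0, fs) > goertzel_power(sig, 15.0, fs)
```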

  3. "We Put on the Glasses and Moon Comes Closer!" Urban Second Graders Exploring the Earth, the Sun and Moon through 3D Technologies in a Science and Literacy Unit

    ERIC Educational Resources Information Center

    Isik-Ercan, Zeynep; Zeynep Inan, Hatice; Nowak, Jeffrey A.; Kim, Beomjin

    2014-01-01

    This qualitative case study describes (a) the ways 3D visualization, coupled with other science and literacy experiences, supported young children's first exploration of the Earth-Sun-Moon system and (b) the perspectives of classroom teachers and children on using 3D visualization. We created three interactive 3D software modules that simulate day…

  4. DspaceOgreTerrain 3D Terrain Visualization Tool

    NASA Technical Reports Server (NTRS)

    Myint, Steven; Jain, Abhinandan; Pomerantz, Marc I.

    2012-01-01

    DspaceOgreTerrain is an extension to the DspaceOgre 3D visualization tool that supports real-time visualization of various terrain types, including digital elevation maps, planets, and meshes. DspaceOgreTerrain supports creating 3D representations of terrains and placing them in a scene graph. The 3D representations allow for a continuous level of detail, GPU-based rendering, and overlaying graphics like wheel tracks and shadows. It supports reading data from the SimScape terrain-modeling library. DspaceOgreTerrain solves the problem of displaying the results of simulations that involve very large terrains. In the past, it has been used to visualize simulations of vehicle traverses on Lunar and Martian terrains. These terrains were made up of billions of vertices and would not have been renderable in real-time without using a continuous level of detail rendering technique.

  5. Creating 3D visualizations of MRI data: A brief guide.

    PubMed

    Madan, Christopher R

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D 'glass brain' rendering can sometimes be difficult to interpret, such renderings are useful in showing a more global representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study's findings.
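    One simple building block behind see-through "glass brain" views is the maximum intensity projection: each output pixel keeps the brightest voxel along its ray. A stdlib sketch on a toy volume (the article's actual rendering pipeline is more involved):

```python
def mip_axial(volume):
    """Maximum intensity projection of a [z][y][x] volume along z:
    each output pixel keeps the brightest voxel on its ray."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(nz)) for x in range(nx)]
            for y in range(ny)]

vol = [[[0, 1], [2, 0]],
       [[5, 0], [0, 3]]]   # two 2x2 slices
print(mip_axial(vol))      # [[5, 1], [2, 3]]
```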

  6. Creating 3D visualizations of MRI data: A brief guide

    PubMed Central

    Madan, Christopher R.

    2015-01-01

    While magnetic resonance imaging (MRI) data is itself 3D, it is often difficult to adequately present the results in papers and slides in 3D. As a result, findings of MRI studies are often presented in 2D instead. A solution is to create figures that include perspective and can convey 3D information; such figures can sometimes be produced by standard functional magnetic resonance imaging (fMRI) analysis packages and related specialty programs. However, many options cannot provide functionality such as visualizing activation clusters that are both cortical and subcortical (i.e., a 3D glass brain), the production of several statistical maps with an identical perspective in the 3D rendering, or animated renderings. Here I detail an approach for creating 3D visualizations of MRI data that satisfies all of these criteria. Though a 3D ‘glass brain’ rendering can sometimes be difficult to interpret, such renderings are useful in showing a more global representation of the results, whereas the traditional slices show a more local view. Combined, presenting both 2D and 3D representations of MR images can provide a more comprehensive view of the study’s findings. PMID:26594340

  7. Volumetric 3D display using a DLP projection engine

    NASA Astrophysics Data System (ADS)

    Geng, Jason

    2012-03-01

    In this article, we describe a volumetric 3D display system based on the high speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationship or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, volumetric 3D display technologies to be discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" on a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position toward omni-directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological depth cues and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.

  8. Heavy Ion and Proton-Induced Single Event Upset Characteristics of a 3D NAND Flash Memory

    NASA Technical Reports Server (NTRS)

    Chen, Dakai; Wilcox, Edward; Ladbury, Raymond; Seidleck, Christina; Kim, Hak; Phan, Anthony; Label, Kenneth

    2017-01-01

    We evaluated the effects of heavy ion and proton irradiation for a 3D NAND flash. The 3D NAND showed similar single-event upset (SEU) sensitivity to a planar NAND of identical density in multi-level cell (MLC) storage mode. The 3D NAND showed significantly reduced SEU susceptibility in single-level-cell (SLC) storage mode. Additionally, the 3D NAND showed less multiple-bit upset susceptibility than the planar NAND, with fewer upset bits per byte and smaller cross sections overall. However, the 3D architecture exhibited angular sensitivities for both base and face angles, reflecting the anisotropic nature of the SEU vulnerability in space. Furthermore, the SEU cross section decreased with increasing fluence for both the 3D NAND and the Micron 16 nm planar NAND, which suggests that typical heavy ion test fluences will underestimate the upset rate during a space mission. These unique characteristics introduce complexity to traditional ground irradiation test procedures.
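    The SEU cross section reported in such tests is the observed upset count divided by particle fluence (and, per bit, by the number of bits exposed). A sketch with purely illustrative numbers showing the fluence-dependence effect the abstract warns about:

```python
def seu_cross_section(upsets, fluence_cm2, bits=1):
    """Per-bit SEU cross section in cm^2: observed upsets divided by
    particle fluence and by the number of bits exposed."""
    return upsets / (fluence_cm2 * bits)

# Toy numbers: the same part tested at two fluences. A cross section
# that shrinks as fluence grows means a single high-fluence test point
# underestimates the on-orbit upset rate.
low  = seu_cross_section(120, 1e7, bits=1e9)   # lower-fluence run
high = seu_cross_section(600, 1e8, bits=1e9)   # higher-fluence run
print(low > high)  # True
```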

  9. When the display matters: A multifaceted perspective on 3D geovisualizations

    NASA Astrophysics Data System (ADS)

    Juřík, Vojtěch; Herman, Lukáš; Šašinka, Čeněk; Stachoň, Zdeněk; Chmelík, Jiří

    2017-04-01

    This study explores the influence of stereoscopic (real) 3D and monoscopic (pseudo) 3D visualization on the human ability to reckon altitude information in noninteractive and interactive 3D geovisualizations. A two-phase experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other one pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in processing of geographical data, were tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands and the amount of the participant's motor activity performed during interaction with geovisualization. The interface was created using a Motion Capture system, Wii Remote Controller, widescreen projection and the passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of the landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  10. Volume-rendering on a 3D hyperwall: A molecular visualization platform for research, education and outreach.

    PubMed

    MacDougall, Preston J; Henze, Christopher E; Volkov, Anatoliy

    2016-11-01

    We present a unique platform for molecular visualization and design that uses novel subatomic feature detection software in tandem with 3D hyperwall visualization technology. We demonstrate the fleshing-out of pharmacophores in drug molecules, as well as reactive sites in catalysts, focusing on subatomic features. Topological analysis with picometer resolution, in conjunction with interactive volume-rendering of the Laplacian of the electronic charge density, leads to new insight into docking and catalysis. Visual data-mining is done efficiently and in parallel using a 4×4 3D hyperwall (a tiled array of 3D monitors driven independently by slave GPUs but displaying high-resolution, synchronized and functionally-related images). The visual texture of images for a wide variety of molecular systems is intuitive to experienced chemists but also appealing to neophytes, making the platform simultaneously useful as a tool for advanced research as well as for pedagogical and STEM education outreach purposes. Copyright © 2016. Published by Elsevier Inc.
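    The Laplacian field being volume-rendered above is, on a discrete grid, computed with a finite-difference stencil. A minimal stdlib sketch of the 6-point stencil on a [z][y][x] grid (the platform's actual charge-density pipeline is far more sophisticated):

```python
def laplacian_3d(f, h=1.0):
    """Discrete Laplacian of a scalar field on a [z][y][x] grid using
    the 6-point stencil; interior points only (boundaries left at 0)."""
    nz, ny, nx = len(f), len(f[0]), len(f[0][0])
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(1, nz - 1):
        for y in range(1, ny - 1):
            for x in range(1, nx - 1):
                out[z][y][x] = (f[z+1][y][x] + f[z-1][y][x] +
                                f[z][y+1][x] + f[z][y-1][x] +
                                f[z][y][x+1] + f[z][y][x-1] -
                                6.0 * f[z][y][x]) / h**2
    return out

# f = x^2 has Laplacian 2 everywhere; check at an interior grid point.
g = [[[float(x * x) for x in range(3)] for _ in range(3)] for _ in range(3)]
print(laplacian_3d(g)[1][1][1])  # 2.0
```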

  11. Immersive Visual Analytics for Transformative Neutron Scattering Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steed, Chad A; Daniel, Jamison R; Drouhard, Margaret

    The ORNL Spallation Neutron Source (SNS) provides the most intense pulsed neutron beams in the world for scientific research and development across a broad range of disciplines. SNS experiments produce large volumes of complex data that are analyzed by scientists with varying degrees of experience using 3D visualization and analysis systems. However, it is notoriously difficult to achieve proficiency with 3D visualizations. Because 3D representations are key to understanding the neutron scattering data, scientists are unable to analyze their data in a timely fashion resulting in inefficient use of the limited and expensive SNS beam time. We believe a more intuitive interface for exploring neutron scattering data can be created by combining immersive virtual reality technology with high performance data analytics and human interaction. In this paper, we present our initial investigations of immersive visualization concepts as well as our vision for an immersive visual analytics framework that could lower the barriers to 3D exploratory data analysis of neutron scattering data at the SNS.

  12. Three-dimensional Visualization of Ultrasound Backscatter Statistics by Window-modulated Compounding Nakagami Imaging.

    PubMed

    Zhou, Zhuhuang; Wu, Shuicai; Lin, Man-Yen; Fang, Jui; Liu, Hao-Li; Tsui, Po-Hsiang

    2018-05-01

    In this study, the window-modulated compounding (WMC) technique was integrated into three-dimensional (3D) ultrasound Nakagami imaging for improving the spatial visualization of backscatter statistics. A 3D WMC Nakagami image was produced by summing and averaging a number of 3D Nakagami images (number of frames denoted as N) formed using sliding cubes with varying side lengths ranging from 1 to N times the transducer pulse. To evaluate the performance of the proposed 3D WMC Nakagami imaging method, agar phantoms with scatterer concentrations ranging from 2 to 64 scatterers/mm³ were made, and six stages of fatty liver (zero, one, two, four, six, and eight weeks) were induced in rats by methionine-choline-deficient diets (three rats for each stage, total n = 18). A mechanical scanning system with a 5-MHz focused single-element transducer was used for ultrasound radiofrequency data acquisition. The experimental results showed that 3D WMC Nakagami imaging was able to characterize different scatterer concentrations. Backscatter statistics were visualized with various numbers of frames; N = 5 reduced the estimation error of 3D WMC Nakagami imaging in visualizing the backscatter statistics. Compared with conventional 3D Nakagami imaging, 3D WMC Nakagami imaging improved the image smoothness without significant image resolution degradation, and it can thus be used for describing different stages of fatty liver in rats.
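    Two operations underlie the method above: estimating the Nakagami shape parameter m inside each sliding window, and compounding the resulting maps by averaging. A stdlib sketch of both, using the standard moment-based m estimator (function names are illustrative):

```python
def nakagami_m(envelope):
    """Moment-based Nakagami shape parameter m = E[R^2]^2 / Var(R^2),
    estimated from backscattered-envelope samples inside one window."""
    r2 = [r * r for r in envelope]
    mean = sum(r2) / len(r2)
    var = sum((v - mean) ** 2 for v in r2) / len(r2)
    return mean * mean / var

def wmc_average(maps):
    """Window-modulated compounding: average N parameter maps computed
    with different window sizes, pixel by pixel."""
    n = len(maps)
    return [sum(vals) / n for vals in zip(*maps)]

# Compound two toy 2-pixel parameter maps:
print(wmc_average([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```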

  13. Partially converted stereoscopic images and the effects on visual attention and memory

    NASA Astrophysics Data System (ADS)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study contained two experimental examinations of cognitive activities such as visual attention and memory in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. The visual attention and impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was intersected between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared.
The results of Experiment 2 showed that the correct response rate in the partial 3D condition was significantly higher with the recognition task than in the other conditions. These results showed that partially converted 3D images tended to have a visual attraction and affect viewer's memory.

  14. Clinical evaluation of accommodation and ocular surface stability relevant to visual asthenopia with 3D displays

    PubMed Central

    2014-01-01

    Background To validate the association between accommodation and visual asthenopia by measuring objective accommodative amplitude with the Optical Quality Analysis System (OQAS®, Visiometrics, Terrassa, Spain), and to investigate associations among accommodation, ocular surface instability, and visual asthenopia while viewing 3D displays. Methods Fifteen normal adults without any ocular disease or surgical history watched the same 3D and 2D displays for 30 minutes. Accommodative ability, ocular protection index (OPI), and total ocular symptom scores were evaluated before and after viewing the 3D and 2D displays. Accommodative ability was evaluated by the near point of accommodation (NPA) and OQAS to ensure reliability. The OPI was calculated by dividing the tear breakup time (TBUT) by the interblink interval (IBI). The changes in accommodative ability, OPI, and total ocular symptom scores after viewing 3D and 2D displays were evaluated. Results Accommodative ability evaluated by NPA and OQAS, OPI, and total ocular symptom scores changed significantly after 3D viewing (p = 0.005, 0.003, 0.006, and 0.003, respectively), but yielded no difference after 2D viewing. The objective measurement by OQAS verified the decrease of accommodative ability while viewing 3D displays. The change of NPA, OPI, and total ocular symptom scores after 3D viewing had a significant correlation (p < 0.05), implying direct associations among these factors. Conclusions The decrease of accommodative ability after 3D viewing was validated by both subjective and objective methods in our study. Further, the deterioration of accommodative ability and ocular surface stability may be causative factors of visual asthenopia in individuals viewing 3D displays. PMID:24612686
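    The ocular protection index defined above is a simple ratio; values below 1.0 indicate the tear film breaks up before the next blink. A sketch with illustrative values (not from the study):

```python
def ocular_protection_index(tbut_s, ibi_s):
    """OPI = tear breakup time / interblink interval, both in seconds.
    OPI < 1.0 means the tear film breaks up before the next blink."""
    return tbut_s / ibi_s

# Illustrative: a 4 s TBUT against a 5 s interblink interval.
print(ocular_protection_index(4.0, 5.0))  # 0.8
```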

  15. Discovering hidden relationships between renal diseases and regulated genes through 3D network visualizations

    PubMed Central

    2010-01-01

    Background In a recent study, two-dimensional (2D) network layouts were used to visualize and quantitatively analyze the relationship between chronic renal diseases and regulated genes. The results revealed complex relationships between disease type, gene specificity, and gene regulation type, which led to important insights about the underlying biological pathways. Here we describe an attempt to extend our understanding of these complex relationships by reanalyzing the data using three-dimensional (3D) network layouts, displayed through 2D and 3D viewing methods. Findings The 3D network layout (displayed through the 3D viewing method) revealed that genes implicated in many diseases (non-specific genes) tended to be predominantly down-regulated, whereas genes regulated in a few diseases (disease-specific genes) tended to be up-regulated. This new global relationship was quantitatively validated through comparison to 1000 random permutations of networks of the same size and distribution. Our new finding appeared to be the result of using specific features of the 3D viewing method to analyze the 3D renal network. Conclusions The global relationship between gene regulation and gene specificity is the first clue from human studies that there exist common mechanisms across several renal diseases, which suggest hypotheses for the underlying mechanisms. Furthermore, the study suggests hypotheses for why the 3D visualization helped to make salient a new regularity that was difficult to detect in 2D. Future research that tests these hypotheses should enable a more systematic understanding of when and how to use 3D network visualizations to reveal complex regularities in biological networks. PMID:21070623
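    The quantitative validation against 1000 random permutations follows the generic permutation-test recipe: recompute the statistic under shuffled labels and count how often chance matches the observation. A sketch of that recipe with a toy statistic (the authors' actual network statistic is not specified here):

```python
import random

def permutation_p_value(stat, data, labels, n_perm=1000, seed=42):
    """One-sided permutation p-value: fraction of label shuffles whose
    statistic is at least as extreme as the observed one."""
    observed = stat(data, labels)
    rng = random.Random(seed)
    shuffled = list(labels)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if stat(data, shuffled) >= observed:
            count += 1
    return count / n_perm

# Toy statistic: mean up-regulation score of the 'disease-specific' genes.
data   = [0.9, 0.8, 0.7, 0.1, 0.2, 0.0]   # up-regulation scores
labels = [1, 1, 1, 0, 0, 0]               # 1 = disease-specific
stat = lambda d, l: sum(x for x, s in zip(d, l) if s) / sum(l)
print(permutation_p_value(stat, data, labels) < 0.1)  # True
```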

  16. Potential dosimetric benefit of dose-warping based 4D planning compared to conventional 3D planning in liver stereotactic body radiotherapy (SBRT)

    NASA Astrophysics Data System (ADS)

    Yeo, U. J.; Taylor, M. L.; Kron, T.; Pham, D.; Siva, S.; Franich, R. D.

    2013-06-01

    Respiratory motion induces dosimetric uncertainties for thoracic and abdominal cancer radiotherapy (RT) due to deforming and moving anatomy. This study investigates the extent of dosimetric differences between conventional 3D treatment planning and path-integrated 4D treatment planning in liver stereotactic body radiotherapy (SBRT). Respiratory-correlated 4DCT image sets with 10 phases were acquired for patients with liver tumours. Path-integrated 4D dose accumulation was performed using dose-warping techniques based on deformable image registration. Dose-volume histogram analysis demonstrated that the 3D planning approach overestimated doses to targets by up to 24% and underestimated dose to normal liver by ~4.5%, compared to the 4D planning methodology. Therefore, 4D planning has the potential to quantify such issues of under- and/or over-dosage and improve treatment accuracy.
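The path-integrated accumulation step amounts to warping each phase dose onto a reference anatomy and summing with the time weight of that phase. A minimal sketch, where `warp` stands in for the deformable-image-registration mapping and all names are illustrative:

```python
import numpy as np

def accumulate_4d_dose(phase_doses, warps, weights=None):
    # Map the dose grid of each respiratory phase onto the reference phase
    # (deformable image registration in the real workflow) and sum with the
    # fraction of the breathing cycle spent in that phase.
    n = len(phase_doses)
    if weights is None:
        weights = [1.0 / n] * n  # equal time weighting of the phases
    total = np.zeros_like(phase_doses[0], dtype=float)
    for dose, warp, w in zip(phase_doses, warps, weights):
        total += w * warp(dose)
    return total

# With identity warps this reduces to a plain time-weighted average.
identity = lambda d: d
doses = [np.full((4, 4), 2.0), np.full((4, 4), 4.0)]
print(accumulate_4d_dose(doses, [identity, identity])[0, 0])  # → 3.0
```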

  17. CAIPIRINHA accelerated SPACE enables 10-min isotropic 3D TSE MRI of the ankle for optimized visualization of curved and oblique ligaments and tendons.

    PubMed

    Kalia, Vivek; Fritz, Benjamin; Johnson, Rory; Gilson, Wesley D; Raithel, Esther; Fritz, Jan

    2017-09-01

    To test the hypothesis that a fourfold CAIPIRINHA accelerated, 10-min, high-resolution, isotropic 3D TSE MRI prototype protocol of the ankle yields equal or better image quality than a 20-min 2D TSE standard protocol. Following internal review board approval and informed consent, 3-Tesla MRI of the ankle was obtained in 24 asymptomatic subjects including 10-min 3D CAIPIRINHA SPACE TSE prototype and 20-min 2D TSE standard protocols. Outcome variables included image quality and visibility of anatomical structures using 5-point Likert scales. Non-parametric statistical testing was used. P values ≤0.001 were considered significant. Edge sharpness, contrast resolution, uniformity, noise, fat suppression and magic angle effects were without statistical difference on 2D and 3D TSE images (p > 0.035). Fluid was mildly brighter on intermediate-weighted 2D images (p < 0.001), whereas 3D images had substantially less partial volume, chemical shift and no pulsatile-flow artifacts (p < 0.001). Oblique and curved planar 3D images resulted in mildly-to-substantially improved visualization of joints, spring, bifurcate, syndesmotic, collateral and sinus tarsi ligaments, and tendons (p < 0.001, respectively). 3D TSE MRI with CAIPIRINHA acceleration enables high-spatial-resolution oblique and curved planar MRI of the ankle, visualizing ligaments, tendons and joints equally well or better than the more time-consuming anisotropic 2D TSE MRI. • High-resolution 3D TSE MRI improves visualization of ankle structures. • Limitations of current 3D TSE MRI include long scan times. • 3D CAIPIRINHA SPACE now allows fourfold-accelerated data acquisition. • 3D CAIPIRINHA SPACE enables high-spatial-resolution ankle MRI within 10 min. • 10-min 3D CAIPIRINHA SPACE produces equal-or-better quality than 20-min 2D TSE.

  18. Influence of Gsd for 3d City Modeling and Visualization from Aerial Imagery

    NASA Astrophysics Data System (ADS)

    Alrajhi, Muhamad; Alam, Zafare; Afroz Khan, Mohammad; Alobeid, Abdalla

    2016-06-01

    The Ministry of Municipal and Rural Affairs (MOMRA) aims to establish the solid infrastructure required for 3D city modelling, supporting decision making to set a mark in urban development. MOMRA is responsible for large-scale mapping at the 1:1,000, 1:2,500, 1:10,000 and 1:20,000 scales, at 10 cm, 20 cm and 40 cm GSD, with Aerial Triangulation data. 3D city models are increasingly used for the presentation, exploration, and evaluation of urban and architectural designs. The visualization and animation capabilities of upcoming 3D geo-information technologies empower architects, urban planners, and authorities to visualize and analyze urban and architectural designs in the context of the existing situation. To make use of this possibility, a 3D city model first has to be created, for which MOMRA uses Aerial Triangulation data and aerial imagery. The main concern for 3D city modelling in the Kingdom of Saudi Arabia arises from its uneven surfaces and undulations. Real-time 3D visualization and interactive exploration support planning processes by providing multiple stakeholders such as decision makers, architects, urban planners, authorities, citizens or investors with a three-dimensional model. Apart from advanced visualization, these 3D city models can be helpful for dealing with natural hazards and provide various possibilities for handling exotic conditions through better and more advanced viewing infrastructure. Riyadh on one side is 5700 m above sea level while on the other hand Abha city is at 2300 m; this uneven terrain represents a drastic change of surface across the Kingdom, for which 3D city models provide valuable solutions with all possible opportunities. 
In this research paper, aerial imagery of different GSD (Ground Sample Distance: 7.5 cm, 10 cm, 20 cm and 40 cm) is used with Aerial Triangulation for 3D visualization in different regions of the Kingdom, to check which scale yields better results while remaining cost-manageable. The comparison test is carried out in the Bentley environment to determine the best possible results obtained through operating different batch processes.

  19. 3-D interactive visualisation tools for Hi spectral line imaging

    NASA Astrophysics Data System (ADS)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2017-06-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is often complex in nature. Here we present SlicerAstro, an open-source extension of 3DSlicer, a multi-platform open source software package for visualization and medical image processing, which we developed for the inspection and analysis of HI spectral line data. We describe its initial capabilities, including 3-D filtering, 3-D selection and comparative modelling.

  20. Spatial Visualization by Realistic 3D Views

    ERIC Educational Resources Information Center

    Yue, Jianping

    2008-01-01

    In this study, the popular Purdue Spatial Visualization Test-Visualization by Rotations (PSVT-R) in isometric drawings was recreated with CAD software that allows 3D solid modeling and rendering to provide more realistic pictorial views. Both the original and the modified PSVT-R tests were given to students and their scores on the two tests were…

  1. Spatial Reasoning with External Visualizations: What Matters Is What You See, Not whether You Interact

    ERIC Educational Resources Information Center

    Keehner, Madeleine; Hegarty, Mary; Cohen, Cheryl; Khooshabeh, Peter; Montello, Daniel R.

    2008-01-01

    Three experiments examined the effects of interactive visualizations and spatial abilities on a task requiring participants to infer and draw cross sections of a three-dimensional (3D) object. The experiments manipulated whether participants could interactively control a virtual 3D visualization of the object while performing the task, and…

  2. 3D surface reconstruction and visualization of the Drosophila wing imaginal disc at cellular resolution

    NASA Astrophysics Data System (ADS)

    Bai, Linge; Widmann, Thomas; Jülicher, Frank; Dahmann, Christian; Breen, David

    2013-01-01

    Quantifying and visualizing the shape of developing biological tissues provide information about the morphogenetic processes in multicellular organisms. The size and shape of biological tissues depend on the number, size, shape, and arrangement of the constituting cells. To better understand the mechanisms that guide tissues into their final shape, it is important to investigate the cellular arrangement within tissues. Here we present a data processing pipeline to generate 3D volumetric surface models of epithelial tissues, as well as geometric descriptions of the tissues' apical cell cross-sections. The data processing pipeline includes image acquisition, editing, processing and analysis, 2D cell mesh generation, 3D contour-based surface reconstruction, cell mesh projection, followed by geometric calculations and color-based visualization of morphological parameters. As a first application of these procedures, we have constructed a 3D volumetric surface model at cellular resolution of the wing imaginal disc of Drosophila melanogaster. The ultimate goal of the reported effort is to produce tools for the creation of detailed 3D geometric models of the individual cells in epithelial tissues. To date, 3D volumetric surface models of the whole wing imaginal disc have been created, and the apicolateral cell boundaries have been identified, allowing for the calculation and visualization of cell parameters, e.g. apical cross-sectional area of cells. The calculation and visualization of morphological parameters show position-dependent patterns of cell shape in the wing imaginal disc. Our procedures should offer a general data processing pipeline for the construction of 3D volumetric surface models of a wide variety of epithelial tissues.

  3. Improving the visualization of 3D ultrasound data with 3D filtering

    NASA Astrophysics Data System (ADS)

    Shamdasani, Vijay; Bae, Unmin; Managuli, Ravi; Kim, Yongmin

    2005-04-01

    3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. 
Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
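The moving-average trick mentioned above makes the k×k×k boxcar separable: three 1-D running means computed from cumulative sums, so the per-voxel cost no longer grows with the kernel volume. A minimal NumPy sketch (function names are illustrative):

```python
import numpy as np

def running_mean(a, k, axis):
    # 1-D moving average along one axis using cumulative sums; edges are
    # handled by replicating the boundary voxels.
    pad = [(0, 0)] * a.ndim
    pad[axis] = (k // 2, k // 2)
    ap = np.pad(a, pad, mode="edge")
    c = np.cumsum(ap, axis=axis)
    zero = np.zeros_like(np.take(c, [0], axis=axis))
    c = np.concatenate([zero, c], axis=axis)
    upper = np.take(c, range(k, c.shape[axis]), axis=axis)
    lower = np.take(c, range(0, c.shape[axis] - k), axis=axis)
    return (upper - lower) / k

def boxcar_3d(volume, k=3):
    # Separable k*k*k boxcar: apply the running mean along each of the three
    # axes in turn (e.g. k=7 before gradient computation and k=3 before
    # compositing, as suggested in the abstract).
    out = volume.astype(float)
    for axis in range(out.ndim):
        out = running_mean(out, k, axis)
    return out
```

A constant volume passes through unchanged, and an interior voxel receives the exact mean of its k×k×k neighborhood.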

  4. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective

    PubMed Central

    Gillebert, Céline R.; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T.; Orban, Guy A.

    2015-01-01

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. 
SIGNIFICANCE STATEMENT Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction, most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. We applied insights from fundamental visual neuroscience to analyze 3D shape perception in PCA. 3D shape processing was affected beyond what could be accounted for by lower-order processing deficits. For shading and disparity, this was related to volume loss in regions previously implicated in 3D shape processing in the intact human and nonhuman primate brain. Typical amnestic-dominant AD patients also exhibited 3D shape deficits. Advanced visual neuroscience provides insight into the pathogenesis of PCA that also bears relevance for vision in typical AD. PMID:26377458

  5. 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display.

    PubMed

    Fan, Zhencheng; Weng, Yitong; Chen, Guowen; Liao, Hongen

    2017-07-01

    Three-dimensional (3D) visualization of preoperative and intraoperative medical information becomes more and more important in minimally invasive surgery. We develop a 3D interactive surgical visualization system using mobile spatial information acquisition and autostereoscopic display for surgeons to observe surgical target intuitively. The spatial information of regions of interest (ROIs) is captured by the mobile device and transferred to a server for further image processing. Triangular patches of intraoperative data with texture are calculated with a dimension-reduced triangulation algorithm and a projection-weighted mapping algorithm. A point cloud selection-based warm-start iterative closest point (ICP) algorithm is also developed for fusion of the reconstructed 3D intraoperative image and the preoperative image. The fusion images are rendered for 3D autostereoscopic display using integral videography (IV) technology. Moreover, 3D visualization of medical image corresponding to observer's viewing direction is updated automatically using mutual information registration method. Experimental results show that the spatial position error between the IV-based 3D autostereoscopic fusion image and the actual object was 0.38±0.92mm (n=5). The system can be utilized in telemedicine, operating education, surgical planning, navigation, etc. to acquire spatial information conveniently and display surgical information intuitively. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Voxel Datacubes for 3D Visualization in Blender

    NASA Astrophysics Data System (ADS)

    Gárate, Matías

    2017-05-01

    The growth of computational astrophysics and the complexity of multi-dimensional data sets evidences the need for new versatile visualization tools for both the analysis and presentation of the data. In this work, we show how to use the open-source software Blender as a three-dimensional (3D) visualization tool to study and visualize numerical simulation results, focusing on astrophysical hydrodynamic experiments. With a datacube as input, the software can generate a volume rendering of the 3D data, show the evolution of a simulation in time, and do a fly-around camera animation to highlight the points of interest. We explain the process to import simulation outputs into Blender using the voxel data format, and how to set up a visualization scene in the software interface. This method allows scientists to perform a complementary visual analysis of their data and display their results in an appealing way, both for outreach and science presentations.

  7. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    NASA Astrophysics Data System (ADS)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  8. Evaluation of vision training using 3D play game

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Ho; Kwon, Soon-Chul; Son, Kwang-Chul; Lee, Seung-Hyun

    2015-03-01

    The present study aimed to examine the vision-training effect of watching 3D video images (a 3D video shooting game in this study), focusing on accommodative facility and vergence facility. Both facilities, which are scales used to measure human visual performance, are very important factors for leading a comfortable and easy life. The study was conducted on 30 participants in their 20s through 30s (19 males and 11 females, aged 24.53 ± 2.94 years) who could watch 3D video images and play the 3D game. Their accommodative and vergence facilities were measured before and after they played the 2D and 3D games. Accommodative facility improved after both the 2D and 3D games, with greater improvement immediately after the 3D game than after the 2D game. Likewise, vergence facility improved after both games, again with greater improvement after the 3D game. In addition, accommodative facility improved to a greater extent than vergence facility. While studies have so far focused, from the human-factors perspective, on the adverse effects of 3D contents on the balance of visual accommodation and convergence, the present study is expected to broaden the applicable scope of 3D contents by utilizing their visual benefit for vision training.

  9. Distributed 3D Information Visualization - Towards Integration of the Dynamic 3D Graphics and Web Services

    NASA Astrophysics Data System (ADS)

    Vucinic, Dean; Deen, Danny; Oanta, Emil; Batarilo, Zvonimir; Lacor, Chris

    This paper focuses on visualization and manipulation of graphical content in distributed network environments. The developed graphical middleware and 3D desktop prototypes were specialized for situational awareness. This research was done in the LArge Scale COllaborative decision support Technology (LASCOT) project, which explored and combined software technologies to support a human-centred decision support system for crisis management (earthquake, tsunami, flooding, airplane or oil-tanker incidents, chemical, radio-active or other pollutants spreading, etc.). The state-of-the-art review performed did not identify any publicly available large-scale distributed application of this kind; existing proprietary solutions rely on conventional technologies and 2D representations. Our challenge was to apply the latest available technologies, such as Java3D, X3D and SOAP, compatible with average computer graphics hardware. The selected technologies are integrated, and we demonstrate the flow of data originating from heterogeneous data sources, interoperability across different operating systems, and 3D visual representations that enhance end-user interaction.

  10. A Case Study in Astronomical 3D Printing: The Mysterious η Carinae

    NASA Astrophysics Data System (ADS)

    Madura, Thomas I.

    2017-05-01

    Three-dimensional (3D) printing moves beyond interactive 3D graphics and provides an excellent tool for both visual and tactile learners, since 3D printing can now easily communicate complex geometries and full color information. Some limitations of interactive 3D graphics are also alleviated by 3D printable models, including issues of limited software support, portability, accessibility, and sustainability. We describe the motivations, methods, and results of our work on using 3D printing (1) to visualize and understand the η Car Homunculus nebula and central binary system and (2) for astronomy outreach and education, specifically, with visually impaired students. One new result we present is the ability to 3D print full-color models of η Car’s colliding stellar winds. We also demonstrate how 3D printing has helped us communicate our improved understanding of the detailed structure of η Car’s Homunculus nebula and central binary colliding stellar winds, and their links to each other. Attached to this article are full-color 3D printable files of both a red-blue Homunculus model and the η Car colliding stellar winds at orbital phase 1.045. 3D printing could prove to be vital to how astronomers reach out and share their work with each other, the public, and new audiences.

  11. Evaluation of the new restandardized Abbott Architect 25-OH Vitamin D assay in vitamin D-insufficient and vitamin D-supplemented individuals.

    PubMed

    Annema, Wijtske; Nowak, Albina; von Eckardstein, Arnold; Saleh, Lanja

    2017-09-19

    Recently, Abbott Diagnostics has restandardized the Architect 25(OH)D assay against the NIST SRM 2972. We have evaluated the analytical and clinical performance of the restandardized Architect 25(OH)D assay and compared its performance with a NIST-traceable liquid chromatography-tandem mass spectrometry (LC-MS/MS) method and the Roche total 25(OH)D assay in vitamin D-insufficient individuals before and after vitamin D₃ supplementation. Frozen serum samples were obtained from 88 healthy subjects with self-perceived fatigue and vitamin D insufficiency (<50 nmol L⁻¹) who were randomized to receive a single 100 000 IU dose of vitamin D₃ (n = 48) or placebo (n = 40). Total 25(OH)D concentrations were measured before and 4 weeks after supplementation by the restandardized Architect 25(OH)D assay, LC-MS/MS, and the Roche assay. The Architect 25(OH)D assay showed an intra- and inter-assay imprecision of <5%. Comparison of the Architect assay with the LC-MS/MS method showed a good correlation in both vitamin D-insufficient and vitamin D-supplemented subjects, however, with a negative mean bias of 17.4% and 8.9%, respectively. As compared to the Roche assay, the Abbott assay underestimated 25(OH)D results in insufficient subjects (<50 nmol L⁻¹) with a mean negative bias of 17.1%; this negative bias turned into a positive bias in supplemented subjects. Overall there was moderate agreement between the Architect 25(OH)D method and LC-MS/MS in classifying vitamin D-insufficient and -supplemented individuals into different vitamin D states. The routine use of the restandardized Architect assay results in a slight underestimation of circulating total 25(OH)D levels at lower concentrations and thus potential misclassification of vitamin D status. © 2017 Wiley Periodicals, Inc.
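The mean bias figures above are percentage differences relative to the reference method, averaged over paired samples. A minimal sketch with hypothetical values (function name and numbers are illustrative):

```python
def mean_percent_bias(test, reference):
    # Mean of (test - reference) / reference * 100 over paired samples;
    # a negative value indicates underestimation by the test method.
    diffs = [(t - r) / r * 100.0 for t, r in zip(test, reference)]
    return sum(diffs) / len(diffs)

# Hypothetical paired 25(OH)D results (nmol/L): immunoassay vs. LC-MS/MS
architect = [33.0, 41.0, 37.0]
lcmsms = [40.0, 50.0, 45.0]
print(round(mean_percent_bias(architect, lcmsms), 1))  # negative → underestimation
```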

  12. Real-time catheter localization and visualization using three-dimensional echocardiography

    NASA Astrophysics Data System (ADS)

    Kozlowski, Pawel; Bandaru, Raja Sekhar; D'hooge, Jan; Samset, Eigil

    2017-03-01

    Real-time three-dimensional transesophageal echocardiography (RT3D-TEE) is increasingly used during minimally invasive cardiac surgeries (MICS). In many cath labs, RT3D-TEE is already one of the requisite tools for image guidance during MICS. However, the visualization of the catheter is not always satisfactory, making 3D-TEE challenging to use as the only modality for guidance. We propose a novel technique for better visualization of the catheter along with the cardiac anatomy using TEE alone, exploiting both beamforming and post-processing methods. We extended our earlier method, called Delay and Standard Deviation (DASD) beamforming, to 3D in order to enhance specular reflections. The beamformed image was further post-processed by the Frangi filter to segment the catheter. Multi-variate visualization techniques enabled us to render both the standard tissue image and the DASD beamformed image simultaneously on a clinical ultrasound scanner. A frame rate of 15 FPS was achieved.

  13. Introduction of 3D Printing Technology in the Classroom for Visually Impaired Students

    ERIC Educational Resources Information Center

    Jo, Wonjin; I, Jang Hee; Harianto, Rachel Ananda; So, Ji Hyun; Lee, Hyebin; Lee, Heon Ju; Moon, Myoung-Woon

    2016-01-01

    The authors investigate how 3D printing technology could be utilized for instructional materials that allow visually impaired students to have full access to high-quality instruction in history class. Researchers from the 3D Printing Group of the Korea Institute of Science and Technology (KIST) provided the Seoul National School for the Blind with…

  14. Developing a 3D Game Design Authoring Package to Assist Students' Visualization Process in Design Thinking

    ERIC Educational Resources Information Center

    Kuo, Ming-Shiou; Chuang, Tsung-Yen

    2013-01-01

    The teaching of 3D digital game design requires the development of students' meta-skills, from story creativity to 3D model construction, and even the visualization process in design thinking. The characteristics a good game designer should possess have been identified as including redesigning things, creative thinking and the ability to…

  15. Sockeye: A 3D Environment for Comparative Genomics

    PubMed Central

    Montgomery, Stephen B.; Astakhova, Tamara; Bilenky, Mikhail; Birney, Ewan; Fu, Tony; Hassel, Maik; Melsopp, Craig; Rak, Marcin; Robertson, A. Gordon; Sleumer, Monica; Siddiqui, Asim S.; Jones, Steven J.M.

    2004-01-01

    Comparative genomics techniques are used in bioinformatics analyses to identify the structural and functional properties of DNA sequences. As the amount of available sequence data steadily increases, the ability to perform large-scale comparative analyses has become increasingly relevant. In addition, the growing complexity of genomic feature annotation means that new approaches to genomic visualization need to be explored. We have developed a Java-based application called Sockeye that uses three-dimensional (3D) graphics technology to facilitate the visualization of annotation and conservation across multiple sequences. This software uses the Ensembl database project to import sequence and annotation information from several eukaryotic species. A user can additionally import their own custom sequence and annotation data. Individual annotation objects are displayed in Sockeye by using custom 3D models. Ensembl-derived and imported sequences can be analyzed by using a suite of multiple and pair-wise alignment algorithms. The results of these comparative analyses are also displayed in the 3D environment of Sockeye. By using the Java3D API to visualize genomic data in a 3D environment, we are able to compactly display cross-sequence comparisons. This provides the user with a novel platform for visualizing and comparing genomic feature organization. PMID:15123592

  16. Is EQ-5D-5L Better Than EQ-5D-3L? A Head-to-Head Comparison of Descriptive Systems and Value Sets from Seven Countries.

    PubMed

    Janssen, Mathieu F; Bonsel, Gouke J; Luo, Nan

    2018-06-01

    This study describes the first empirical head-to-head comparison of EQ-5D-3L (3L) and EQ-5D-5L (5L) value sets for multiple countries. A large multinational dataset, including 3L and 5L data for eight patient groups and a student cohort, was used to compare 3L versus 5L value sets for Canada, China, England/UK (5L/3L, respectively), Japan, The Netherlands, South Korea and Spain. We used distributional analyses and two methods exploring discriminatory power: relative efficiency as assessed by the F statistic, and an area under the curve for the receiver-operating characteristics approach. Differences in outcomes were explored by separating descriptive system effects from valuation effects, and by exploring distributional location effects. In terms of distributional evenness, efficiency of scale use and the face validity of the resulting distributions, 5L was superior, leading to an increase in sensitivity and precision in health status measurement. When compared with 5L, 3L systematically overestimated health problems and consequently underestimated utilities. This led to bias, i.e. over- or underestimations of discriminatory power. We conclude that 5L provides more precise measurement at individual and group levels, both in terms of descriptive system data and utilities. The increased sensitivity and precision of 5L is likely to be generalisable to longitudinal studies, such as in intervention designs. Hence, we recommend the use of the 5L across applications, including economic evaluation, clinical and public health studies. The evaluative framework proved to be useful in assessing preference-based instruments and might be useful for future work in the development of descriptive systems or health classifications.
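The area-under-the-curve comparison of discriminatory power can be illustrated with the Mann-Whitney formulation of the ROC AUC, here applied to hypothetical utility scores for two known groups (all values are illustrative, not from the study):

```python
def roc_auc(cases, controls):
    # Probability that a randomly chosen case scores higher than a randomly
    # chosen control; ties count one half (Mann-Whitney U / (n1 * n2)).
    wins = 0.0
    for c in cases:
        for k in controls:
            if c > k:
                wins += 1.0
            elif c == k:
                wins += 0.5
    return wins / (len(cases) * len(controls))

# Hypothetical utilities: a more granular descriptive system produces fewer
# ties between groups and hence a higher AUC.
print(roc_auc([0.62, 0.71, 0.80], [0.45, 0.55, 0.62]))
```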

  17. Graphics to H.264 video encoding for 3D scene representation and interaction on mobile devices using region of interest

    NASA Astrophysics Data System (ADS)

    Le, Minh Tuan; Nguyen, Congdu; Yoon, Dae-Il; Jung, Eun Ku; Jia, Jie; Kim, Hae-Kwang

    2007-12-01

In this paper, we propose a method of 3D-graphics-to-video encoding and streaming that is embedded into a remote interactive 3D visualization system for rapidly representing a 3D scene on mobile devices without having to download it from the server. In particular, a 3D-graphics-to-video framework is presented that increases the visual quality of regions of interest (ROI) of the video by allocating more bits to the ROI during H.264 video encoding. The ROI are identified by projecting 3D objects onto a 2D plane during rasterization. The system allows users to navigate the 3D scene and interact with objects of interest to query their descriptions. We developed an adaptive media streaming server that can provide an adaptive video stream, in terms of object-based quality, to the client according to the user's preferences and the variation of network bandwidth. Results show that with ROI mode selection, the PSNR of the test samples changes only slightly while the visual quality of the objects of interest increases markedly.
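The ROI-weighted bit allocation described above can be sketched as a per-macroblock quantization-parameter (QP) offset: macroblocks covered by the ROI mask get a lower QP (finer quantization, more bits). This is a minimal illustrative sketch in plain Python; the function name `roi_qp_map`, the fixed QP offset, and the 16-pixel macroblock size are assumptions, not the authors' encoder settings.

```python
# Sketch: assign a lower QP (finer quantization, more bits) to macroblocks
# covered by a region of interest, as in ROI-aware H.264 encoding.
def roi_qp_map(roi_mask, base_qp=32, roi_offset=-6, mb_size=16):
    """roi_mask: 2D list of 0/1 pixels; returns a per-macroblock QP grid."""
    h = len(roi_mask)
    w = len(roi_mask[0])
    qp_grid = []
    for my in range(0, h, mb_size):
        row = []
        for mx in range(0, w, mb_size):
            # A macroblock counts as ROI if any pixel inside it is in the mask.
            in_roi = any(
                roi_mask[y][x]
                for y in range(my, min(my + mb_size, h))
                for x in range(mx, min(mx + mb_size, w))
            )
            qp = base_qp + (roi_offset if in_roi else 0)
            row.append(max(0, min(51, qp)))  # H.264 QP range is 0..51
        qp_grid.append(row)
    return qp_grid
```

A real encoder would feed this grid into its rate-control loop; the sketch only shows the spatial bit-allocation idea.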

  18. The Hologram in My Hand: How Effective is Interactive Exploration of 3D Visualizations in Immersive Tangible Augmented Reality?

    PubMed

    Bach, Benjamin; Sicat, Ronell; Beyer, Johanna; Cordeil, Maxime; Pfister, Hanspeter

    2018-01-01

    We report on a controlled user study comparing three visualization environments for common 3D exploration. Our environments differ in how they exploit natural human perception and interaction capabilities. We compare an augmented-reality head-mounted display (Microsoft HoloLens), a handheld tablet, and a desktop setup. The novel head-mounted HoloLens display projects stereoscopic images of virtual content into a user's real world and allows for interaction in-situ at the spatial position of the 3D hologram. The tablet is able to interact with 3D content through touch, spatial positioning, and tangible markers, however, 3D content is still presented on a 2D surface. Our hypothesis is that visualization environments that match human perceptual and interaction capabilities better to the task at hand improve understanding of 3D visualizations. To better understand the space of display and interaction modalities in visualization environments, we first propose a classification based on three dimensions: perception, interaction, and the spatial and cognitive proximity of the two. Each technique in our study is located at a different position along these three dimensions. We asked 15 participants to perform four tasks, each task having different levels of difficulty for both spatial perception and degrees of freedom for interaction. Our results show that each of the tested environments is more effective for certain tasks, but that generally the desktop environment is still fastest and most precise in almost all cases.

  19. Evaluating the effect of three-dimensional visualization on force application and performance time during robotics-assisted mitral valve repair.

    PubMed

    Currie, Maria E; Trejos, Ana Luisa; Rayman, Reiza; Chu, Michael W A; Patel, Rajni; Peters, Terry; Kiaii, Bob B

    2013-01-01

    The purpose of this study was to determine the effect of three-dimensional (3D) binocular, stereoscopic, and two-dimensional (2D) monocular visualization on robotics-assisted mitral valve annuloplasty versus conventional techniques in an ex vivo animal model. In addition, we sought to determine whether these effects were consistent between novices and experts in robotics-assisted cardiac surgery. A cardiac surgery test-bed was constructed to measure forces applied during mitral valve annuloplasty. Sutures were passed through the porcine mitral valve annulus by the participants with different levels of experience in robotics-assisted surgery and tied in place using both robotics-assisted and conventional surgery techniques. The mean time for both the experts and the novices using 3D visualization was significantly less than that required using 2D vision (P < 0.001). However, there was no significant difference in the maximum force applied by the novices to the mitral valve during suturing (P = 0.7) and suture tying (P = 0.6) using either 2D or 3D visualization. The mean time required and forces applied by both the experts and the novices were significantly less using the conventional surgical technique than when using the robotic system with either 2D or 3D vision (P < 0.001). Despite high-quality binocular images, both the experts and the novices applied significantly more force to the cardiac tissue during 3D robotics-assisted mitral valve annuloplasty than during conventional open mitral valve annuloplasty. This finding suggests that 3D visualization does not fully compensate for the absence of haptic feedback in robotics-assisted cardiac surgery.

  20. Automatic extraction and visualization of object-oriented software design metrics

    NASA Astrophysics Data System (ADS)

    Lakshminarayana, Anuradha; Newman, Timothy S.; Li, Wei; Talburt, John

    2000-02-01

Software visualization is a graphical representation of software characteristics and behavior. Certain modes of software visualization can be useful in isolating problems and identifying unanticipated behavior. In this paper we present a new approach to aid understanding of object-oriented software through 3D visualization of software metrics that can be extracted from the design phase of software development. The focus of the paper is a metric extraction method and a new collection of glyphs for multi-dimensional metric visualization. Our approach utilizes the extensibility interface of a popular CASE tool to access and automatically extract the metrics from Unified Modeling Language class diagrams. Following the extraction of the design metrics, a 3D visualization of these metrics is generated for each class in the design, utilizing intuitively meaningful 3D glyphs that are representative of the ensemble of metrics. Extraction and visualization of design metrics can aid software developers in the early study and understanding of design complexity.

  1. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography.

    PubMed

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Ake; Winter, Reidar

    2009-08-25

Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, the feasibility of visual assessment (eyeballing) is superior. To date there are only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95 respectively) without any significant bias (-0.5 +/- 3.7% and -0.2 +/- 2.9% respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards a smaller variability using TP in comparison to 2D; this was, however, not statistically significant.
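The agreement statistics quoted above (r = 0.91, bias -0.5 +/- 3.7%) are a Pearson correlation and a Bland-Altman-style mean difference with its standard deviation. A minimal sketch of both computations in plain Python; the sample EF values in the usage test are invented for illustration and are not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def bland_altman_bias(xs, ys):
    """Mean paired difference (bias) and its sample standard deviation,
    the form behind a report like '-0.5 +/- 3.7%'."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, sd
```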

  2. TLS for generating multi-LOD of 3D building model

    NASA Astrophysics Data System (ADS)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

The popularity of Terrestrial Laser Scanners (TLS) for capturing three-dimensional (3D) objects has led to their wide use in various applications. Development in 3D modelling has also led people to visualize the environment in 3D. Visualization of objects in a city environment in 3D can be useful for many applications. However, different applications require different kinds of 3D models. Since buildings are important objects, CityGML has defined a standard for 3D building models at four different levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and for modelling the resulting point cloud are explored. TLS is used to capture all the building details needed to generate multiple LODs. In previous works, this task has usually involved the integration of several sensors. In this research, however, the point cloud from TLS alone is processed to generate the LOD3 model; LOD2 and LOD1 are then generalized from the resulting LOD3 model. The result of this research is a guiding process for generating multi-LOD 3D building models from TLS data, starting from LOD3. Lastly, the visualization of the multi-LOD model is also shown.

  3. Dedicated ultrasound speckle tracking to study tendon displacement

    NASA Astrophysics Data System (ADS)

    Korstanje, Jan-Wiebe H.; Selles, Ruud W.; Stam, Henk J.; Hovius, Steven E. R.; Bosch, Johan G.

    2009-02-01

Ultrasound can be used to study tendon and muscle movement. However, quantification is mostly based on manual tracking of anatomical landmarks such as the musculotendinous junction, limiting the applicability to a small number of muscle-tendon units. The aim of this study was to quantify tendon displacement without employing anatomical landmarks, using dedicated speckle tracking in long B-mode image sequences. We devised a dedicated two-dimensional multikernel block-matching scheme with subpixel accuracy to handle large displacements over long sequences. Images were acquired with a Philips iE33 with a 7 MHz linear array and a VisualSonics Vevo 770 using a 40 MHz mechanical probe. We displaced the flexor digitorum superficialis of two pig cadaver forelegs at three different velocities (4, 10 and 16 mm/s) over three distances (5, 10, 15 mm). As a reference, we manually determined the total displacement of an injected hyperechogenic bullet in the tendons. We automatically tracked tendon parts with and without markers and compared the results to the true displacement. Using the iE33, mean tissue displacement underestimations for the three different velocities were 2.5 +/- 1.0%, 1.7 +/- 1.1% and 0.7 +/- 0.4%. Using the Vevo 770, mean tissue displacement underestimations were 0.8 +/- 1.3%, 0.6 +/- 0.3% and 0.6 +/- 0.3%. Marker tracking displacement underestimations were only slightly smaller, showing limited tracking drift for non-marker tendon tissue as well as for markers. This study showed that our dedicated speckle tracking can quantify extensive tendon displacement at physiological velocities without anatomical landmarks, with good accuracy, for different types of ultrasound configurations. This technique allows tracking of a much larger range of muscle-tendon units than by using anatomical landmarks.
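The core of a block-matching speckle tracker like the one described above is a template search maximizing a similarity score (commonly normalized cross-correlation) over a local window. The sketch below shows that core in plain Python on toy 2D arrays; the function names, the search radius, and the omission of subpixel refinement and multiple kernels are simplifying assumptions, not the authors' implementation.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized 2D blocks (lists of lists)."""
    fa = [v for row in a for v in row]
    fb = [v for row in b for v in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db) if da and db else 0.0

def best_match(frame, template, top, left, search=4):
    """Search a window around (top, left) for the template's best NCC position."""
    th, tw = len(template), len(template[0])
    best = (-2.0, top, left)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + th > len(frame) or x + tw > len(frame[0]):
                continue  # candidate block would fall outside the image
            block = [row[x:x + tw] for row in frame[y:y + th]]
            score = ncc(block, template)
            if score > best[0]:
                best = (score, y, x)
    return best  # (score, row, col) of the best-matching block
```

Tracking a tendon then amounts to repeating this search frame-to-frame and accumulating the displacements.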

  4. An investigation of visual selection priority of objects with texture and crossed and uncrossed disparities

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérôme; Wyckens, Emmanuel; Le Meur, Olivier

    2014-02-01

The aim of this research is to understand the difference in visual attention to 2D and 3D content depending on texture and amount of depth. Two experiments were conducted using an eye-tracker and a 3DTV display. Collected fixation data were used to build saliency maps and to analyze the differences between 2D and 3D conditions. In the first experiment, 51 observers participated in the test. Using scenes that contained objects with crossed disparity, it was discovered that such objects are the most salient, even if observers experience discomfort due to the high level of disparity. The goal of the second experiment was to determine whether depth is a determinative factor for visual attention. During the experiment, 28 observers watched scenes that contained objects with crossed and uncrossed disparities. We evaluated features influencing the saliency of the objects in stereoscopic conditions by using content with low-level visual features. Using univariate tests of significance (MANOVA), we found that texture is more important than depth for the selection of objects. Objects with crossed disparity are significantly more important for selection processes when compared to 2D. However, objects with uncrossed disparity have the same influence on visual attention as 2D objects. Analysis of eye movements indicated that there is no difference in saccade length. Fixation durations were significantly higher in stereoscopic conditions for low-level stimuli than in 2D. We believe that these experiments can help to refine existing models of visual attention for 3D content.
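Building a saliency map from fixation data, as mentioned above, is commonly done by accumulating a Gaussian kernel at each fixation point and normalizing the result. A minimal sketch in plain Python; the function name `saliency_map` and the kernel width `sigma` are illustrative assumptions, not the authors' parameters.

```python
import math

def saliency_map(fixations, width, height, sigma=2.0):
    """Build a saliency map by accumulating a Gaussian at each fixation point
    (a common way to turn eye-tracker fixations into a continuous map).
    fixations: iterable of (x, y) pixel coordinates."""
    grid = [[0.0] * width for _ in range(height)]
    radius = int(3 * sigma)  # truncate the Gaussian at 3 sigma
    for fx, fy in fixations:
        for y in range(max(0, fy - radius), min(height, fy + radius + 1)):
            for x in range(max(0, fx - radius), min(width, fx + radius + 1)):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma * sigma))
    peak = max(max(row) for row in grid)
    if peak > 0:  # normalize to [0, 1]
        grid = [[v / peak for v in row] for row in grid]
    return grid
```

Maps built this way for the 2D and 3D viewing conditions can then be compared pointwise or via correlation.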

  5. 3D Visualization Development of SIUE Campus

    NASA Astrophysics Data System (ADS)

    Nellutla, Shravya

Geographic Information Systems (GIS) have progressed from traditional map-making to modern technology where information can be created, edited, managed and analyzed. Like any other model, maps are simplified representations of the real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three-dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real-world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, a web server, web applications and client-server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop a simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online are used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of a 3D campus for Southern Illinois University Edwardsville is demonstrated.

  6. Improved Visualization of Intracranial Vessels with Intraoperative Coregistration of Rotational Digital Subtraction Angiography and Intraoperative 3D Ultrasound

    PubMed Central

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

Introduction Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping using rotational digital subtraction angiography as a reference. Methods We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Results Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 and successfully reconstructed in 18 patients (69%) despite clip-related artefacts. The overlap between 3D-ioUS aneurysm volume and preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Conclusions Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although spatial resolution is lower than for standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography. PMID:25803318

  7. Improved visualization of intracranial vessels with intraoperative coregistration of rotational digital subtraction angiography and intraoperative 3D ultrasound.

    PubMed

    Podlesek, Dino; Meyer, Tobias; Morgenstern, Ute; Schackert, Gabriele; Kirsch, Matthias

    2015-01-01

Ultrasound can visualize and update the vessel status in real time during cerebral vascular surgery. We studied the depiction of parent vessels and aneurysms with a high-resolution 3D intraoperative ultrasound imaging system during aneurysm clipping using rotational digital subtraction angiography as a reference. We analyzed 3D intraoperative ultrasound in 39 patients with cerebral aneurysms to visualize the aneurysm intraoperatively and the nearby vascular tree before and after clipping. Simultaneous coregistration of preoperative subtraction angiography data with 3D intraoperative ultrasound was performed to verify the anatomical assignment. Intraoperative ultrasound detected 35 of 43 aneurysms (81%) in 39 patients. Thirty-nine intraoperative ultrasound measurements were matched with rotational digital subtraction angiography and were successfully reconstructed during the procedure. In 7 patients, the aneurysm was partially visualized by 3D-ioUS or was not in the field of view. Post-clipping intraoperative ultrasound was obtained in 26 and successfully reconstructed in 18 patients (69%) despite clip-related artefacts. The overlap between 3D-ioUS aneurysm volume and preoperative rDSA aneurysm volume resulted in a mean accuracy of 0.71 (Dice coefficient). Intraoperative coregistration of 3D intraoperative ultrasound data with preoperative rotational digital subtraction angiography is possible with high accuracy. It allows the immediate visualization of vessels beyond the microscopic field, as well as parallel assessment of blood velocity, aneurysm and vascular tree configuration. Although spatial resolution is lower than for standard angiography, the method provides an excellent vascular overview, advantageous interpretation of 3D-ioUS and immediate intraoperative feedback on the vascular status. A prerequisite for understanding vascular intraoperative ultrasound is image quality and a successful match with preoperative rotational digital subtraction angiography.
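The 0.71 overlap accuracy reported above is a Dice coefficient, defined as 2|A∩B| / (|A| + |B|) for two segmented volumes. A minimal sketch on binary voxel sequences in plain Python (the flat-list representation is a simplifying assumption; real volumes would be 3D arrays):

```python
def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for two binary volumes
    given as flat, equal-length sequences of 0/1 voxels."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect overlap
```

For example, volumes [1, 1, 1, 0] and [1, 1, 0, 0] share two voxels out of 3 + 2 labeled, giving 2·2/5 = 0.8.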

  8. 3D Immersive Visualization with Astrophysical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2017-01-01

We present the refinement of a new 3D immersion technique for astrophysical data visualization. Methodology to create 360 degree spherical panoramas is reviewed. The 3D software package Blender, coupled with Python and the Google Spatial Media module, is used to create the final data products. Data can be viewed interactively on a mobile phone or tablet or in a web browser. The technique can be applied to different kinds of astronomical data, including 3D stellar and galaxy catalogs, images, and planetary maps.

  9. Spherical Panorama Visualization of Astronomical Data with Blender and Python

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-06-01

    We describe methodology to generate 360 degree spherical panoramas of both 2D and 3D data. The techniques apply to a variety of astronomical data types - all sky maps, 2D and 3D catalogs as well as planetary surface maps. The results can be viewed in a desktop browser or interactively with a mobile phone or tablet. Static displays or panoramic video renderings of the data can be produced. We review the Python code and usage of the 3D Blender software for projecting maps onto 3D surfaces and the various tools for distributing visualizations.
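The projection underlying such 360 degree panoramas is typically equirectangular: longitude maps linearly to image column and latitude to image row. A minimal sketch of that mapping in plain Python (the function name and the axis convention with +z as "up" are illustrative assumptions, not the authors' code):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit 3D viewing direction to pixel coordinates in an
    equirectangular (360 x 180 degree) panorama:
    longitude -> column, latitude -> row (row 0 = zenith)."""
    lon = math.atan2(y, x)                    # -pi .. pi
    lat = math.asin(max(-1.0, min(1.0, z)))   # -pi/2 .. pi/2, clamped for safety
    col = (lon + math.pi) / (2 * math.pi) * (width - 1)
    row = (math.pi / 2 - lat) / math.pi * (height - 1)
    return col, row
```

Rendering a panorama then amounts to inverting this mapping per output pixel and sampling the scene (or a sky map) along the resulting direction.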

  10. Visualizing planetary data by using 3D engines

    NASA Astrophysics Data System (ADS)

    Elgner, S.; Adeli, S.; Gwinner, K.; Preusker, F.; Kersten, E.; Matz, K.-D.; Roatsch, T.; Jaumann, R.; Oberst, J.

    2017-09-01

We examined 3D gaming engines for their usefulness in visualizing large planetary image data sets. These tools allow us to include recent developments in the field of computer graphics in our scientific visualization systems and to present data products interactively and in higher quality than before. We have started to set up the first applications that will make use of virtual reality (VR) equipment.

  11. 3D Visualization for Phoenix Mars Lander Science Operations

    NASA Technical Reports Server (NTRS)

    Edwards, Laurence; Keely, Leslie; Lees, David; Stoker, Carol

    2012-01-01

Planetary surface exploration missions present considerable operational challenges in the form of substantial communication delays, limited communication windows, and limited communication bandwidth. 3D visualization software was developed and delivered to the 2008 Phoenix Mars Lander (PML) mission. The components of the system include an interactive 3D visualization environment called Mercator, terrain reconstruction software called the Ames Stereo Pipeline, and a server providing distributed access to terrain models. The software was successfully utilized during the mission for science analysis, site understanding, and science operations activity planning. A terrain server was implemented that provided distribution of terrain models from a central repository to clients running the Mercator software. The Ames Stereo Pipeline generates accurate, high-resolution, texture-mapped, 3D terrain models from stereo image pairs. These terrain models can then be visualized within the Mercator environment. The central cross-cutting goal for these tools is to provide an easy-to-use, high-quality, full-featured visualization environment that enhances the mission science team's ability to develop low-risk, productive science activity plans. In addition, for the Mercator and Viz visualization environments, extensibility and adaptability to different missions and application areas are key design goals.

  12. Enhanced visualization of MR angiogram with modified MIP and 3D image fusion

    NASA Astrophysics Data System (ADS)

    Kim, JongHyo; Yeon, Kyoung M.; Han, Man Chung; Lee, Dong Hyuk; Cho, Han I.

    1997-05-01

We have developed a 3D image processing and display technique that includes image resampling, modification of MIP, volume rendering, and fusion of the MIP image with a volume-rendered image. This technique facilitates visualization of the 3D spatial relationship between vasculature and surrounding organs by overlaying the MIP image on the volume-rendered image of the organ. We applied this technique to MR brain image data to produce an MR angiogram overlaid on a 3D volume-rendered image of the brain. The MIP technique was used to visualize the vasculature of the brain, and volume rendering was used to visualize the other structures of the brain. The two images are fused, after adjustment of the contrast and brightness levels of each image so that both the vasculature and the brain structure are well visualized, either by selecting the maximum value of each image or by assigning a different color table to each image. The resulting image visualizes both the brain structure and the vasculature simultaneously, allowing physicians to inspect their relationship more easily. The presented technique will be useful for surgical planning in neurosurgery.
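The maximum-value fusion variant described above reduces to a per-pixel maximum of the two images after intensity normalization. A minimal sketch in plain Python on toy 2D arrays; the function names and min-max normalization are illustrative assumptions (the authors adjust contrast and brightness interactively rather than with a fixed formula):

```python
def normalize(img):
    """Scale a 2D image (list of lists of numbers) to the range [0, 1]."""
    lo = min(min(row) for row in img)
    hi = max(max(row) for row in img)
    span = (hi - lo) or 1.0  # avoid division by zero for flat images
    return [[(v - lo) / span for v in row] for row in img]

def fuse_max(mip, rendered):
    """Fuse a MIP vessel image with a volume rendering by taking the
    per-pixel maximum of the two normalized images."""
    a, b = normalize(mip), normalize(rendered)
    return [[max(x, y) for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
```

Because bright vessels dominate the maximum wherever they are present, the vasculature stays visible on top of the rendered anatomy.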

  13. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wong, S.T.C.

The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now entering surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and that all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies, such as holography and virtual reality, that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  14. A Three Pronged Approach for Improved Data Understanding: 3-D Visualization, Use of Gaming Techniques, and Intelligent Advisory Agents

    DTIC Science & Technology

    2006-10-01

Only citation fragments of this record were retrieved, including: C. Cruz-Neira, D. J. Sandin, T. A. DeFanti, R. V. Kenyon and J. C. Hart, "The CAVE: Audio Visual Experience Automatic Virtual Environment," SIGGRAPH '92.

  15. GeoBuilder: a geometric algorithm visualization and debugging system for 2D and 3D geometric computing.

    PubMed

    Wei, Jyh-Da; Tsai, Ming-Hung; Lee, Gen-Cher; Huang, Jeng-Hung; Lee, Der-Tsai

    2009-01-01

    Algorithm visualization is a unique research topic that integrates engineering skills such as computer graphics, system programming, database management, computer networks, etc., to facilitate algorithmic researchers in testing their ideas, demonstrating new findings, and teaching algorithm design in the classroom. Within the broad applications of algorithm visualization, there still remain performance issues that deserve further research, e.g., system portability, collaboration capability, and animation effect in 3D environments. Using modern technologies of Java programming, we develop an algorithm visualization and debugging system, dubbed GeoBuilder, for geometric computing. The GeoBuilder system features Java's promising portability, engagement of collaboration in algorithm development, and automatic camera positioning for tracking 3D geometric objects. In this paper, we describe the design of the GeoBuilder system and demonstrate its applications.

  16. Visual and visually mediated haptic illusions with Titchener's ⊥.

    PubMed

    Landwehr, Klaus

    2014-05-01

    For a replication and expansion of a previous experiment of mine, 14 newly recruited participants provided haptic and verbal estimates of the lengths of the two lines that make up Titchener's ⊥. The stimulus was presented at two different orientations (frontoparallel vs. horizontal) and rotated in steps of 45 deg around 2π. Haptically, the divided line of the ⊥ was generally underestimated, especially at a horizontal orientation. Verbal judgments also differed according to presentation condition and to which line was the target, with the overestimation of the undivided line ranging between 6.2 % and 15.3 %. The results are discussed with reference to the two-visual-systems theory of perception and action, neuroscientific accounts, and also recent historical developments (the use of handheld touchscreens, in particular), because the previously reported "haptic induction effect" (the scaling of haptic responses to the divided line of the ⊥, depending on the length of the undivided one) did not replicate.

  17. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    NASA Astrophysics Data System (ADS)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.

  18. Seeing music: The perception of melodic 'ups and downs' modulates the spatial processing of visual stimuli.

    PubMed

    Romero-Rivas, Carlos; Vera-Constán, Fátima; Rodríguez-Cuadrado, Sara; Puigcerver, Laura; Fernández-Prieto, Irune; Navarra, Jordi

    2018-05-10

    Musical melodies have "peaks" and "valleys". Although the vertical component of pitch and music is well-known, the mechanisms underlying its mental representation still remain elusive. We show evidence regarding the importance of previous experience with melodies for crossmodal interactions to emerge. The impact of these crossmodal interactions on other perceptual and attentional processes was also studied. Melodies including two tones with different frequency (e.g., E4 and D3) were repeatedly presented during the study. These melodies could either generate strong predictions (e.g., E4-D3-E4-D3-E4-[D3]) or not (e.g., E4-D3-E4-E4-D3-[?]). After the presentation of each melody, the participants had to judge the colour of a visual stimulus that appeared in a position that was, according to the traditional vertical connotations of pitch, either congruent (e.g., high-low-high-low-[up]), incongruent (high-low-high-low-[down]) or unpredicted with respect to the melody. Behavioural and electroencephalographic responses to the visual stimuli were obtained. Congruent visual stimuli elicited faster responses at the end of the experiment than at the beginning. Additionally, incongruent visual stimuli that broke the spatial prediction generated by the melody elicited larger P3b amplitudes (reflecting 'surprise' responses). Our results suggest that the passive (but repeated) exposure to melodies elicits spatial predictions that modulate the processing of other sensory events. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. Correlation between ICDAS and histology: Differences between stereomicroscopy and microradiography with contrast solution as histological techniques

    PubMed Central

    Campos, Samara de Azevedo Gomes; Vieira, Maria Lúcia Oliveira

    2017-01-01

    Detection of occlusal caries with visual examination using ICDAS correlates strongly with histology under stereomicroscopy (SM), but dentin aspects under SM are ambiguous regarding mineral content. Thus, our aim was to test two null hypotheses: SM and microradiography result in similar correlations between ICDAS and histology; and SM and microradiography result in similar positive (PPV) and negative predictive values (NPV) of ICDAS cut-off 1–2 (scores 0–2 as sound) with histological threshold D3 (demineralization in the inner third of dentin). Occlusal surfaces of extracted permanent teeth (n = 115) were scored using ICDAS. Undemineralized ground sections were histologically scored using both SM without contrast solution and microradiography after immersion in Thoulet’s solution 1.47 for 24 h (MRC). Correlation between ICDAS and histology differed between SM (0.782) and MRC (0.511) (p = 0.0002), with a large effect size “q” of 0.49 (95% CI: 0.638/0.338). For ICDAS cut-off 1–2 and D3, PPV from MRC (0.56) was higher than that from SM (0.28) (p < 0.00001; effect size h = 0.81), and NPV from MRC (0.72) was lower than that from SM (1.00) (p < 0.00001; effect size h = 1.58). In conclusion, SM overestimated the correlation between ICDAS and lesion depth, and underestimated the number of occlusal surfaces with ICDAS cut-off 1–2 and deep dentin demineralization. PMID:28841688
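
    The effect size “q” reported for the difference between the two correlations follows Cohen's convention: the difference of Fisher z-transformed correlation coefficients. A quick check reproduces the reported value from the two correlations given in the abstract:

```python
import math

def cohens_q(r1, r2):
    """Cohen's q for comparing two correlation coefficients:
    q = z(r1) - z(r2), with Fisher's z-transform z(r) = atanh(r)."""
    return math.atanh(r1) - math.atanh(r2)

# Correlations between ICDAS and histology reported for SM and MRC:
q = cohens_q(0.782, 0.511)
print(round(q, 2))  # 0.49
```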

  20. Preoperative evaluation of venous systems with 3-dimensional contrast-enhanced magnetic resonance venography in brain tumors: comparison with time-of-flight magnetic resonance venography and digital subtraction angiography.

    PubMed

    Lee, Jong-Myung; Jung, Shin; Moon, Kyung-Sub; Seo, Jeong-Jin; Kim, In-Young; Jung, Tae-Young; Lee, Jung-Kil; Kang, Sam-Suk

    2005-08-01

    Recent developments in magnetic resonance (MR) technology now enable MR venography, providing 3-dimensional (3D) images of intracranial venous structures. The purpose of this study was to assess the usefulness of 3D contrast-enhanced MR venography (CE MRV) in the evaluation of the intracranial venous system for surgical planning of brain tumors. Forty patients underwent 3D CE MRV; 25 patients also underwent 2-dimensional (2D) time-of-flight (TOF) MR venography in axial and sagittal planes, and 10 patients underwent digital subtraction angiography. We determined the number of visualized sinuses and cortical veins. The degree of visualization of the intracranial venous system on 3D CE MRV was compared with that of 2D TOF MR venography, with digital subtraction angiography as the standard. We also assessed the value of 3D CE MRV in the preoperative investigation of sinus occlusion and localization of cortical draining veins. Superficial cortical veins and the dural sinuses were better visualized on 3D CE MRV than on 2D TOF MR venography. Both MR venographic techniques visualized the superior sagittal sinus, lateral sinus, sigmoid sinus, straight sinus, and internal cerebral vein, and provided more detailed information by showing obstructed sinuses in brain tumors. Only 3D CE MRV showed superficial cortical draining veins. However, it was difficult to accurately evaluate the presence of cortical collateral venous drainage. Although we do not yet advocate MR venography to replace conventional angiography as the imaging standard for brain tumors, 3D CE MRV can be regarded as a valuable diagnostic method, at least for evaluating the status of the major sinuses and localizing cortical draining veins.

  1. Enabling Real-Time Volume Rendering of Functional Magnetic Resonance Imaging on an iOS Device.

    PubMed

    Holub, Joseph; Winer, Eliot

    2017-12-01

    Powerful non-invasive imaging technologies like computed tomography (CT), ultrasound, and magnetic resonance imaging (MRI) are used daily by medical professionals to diagnose and treat patients. While 2D slice viewers have long been the standard, many tools allowing 3D representations of digital medical data are now available. The newest imaging advancement, functional MRI (fMRI) technology, has changed medical imaging from viewing static to dynamic physiology (4D) over time, particularly to study brain activity. Add this to the rapid adoption of mobile devices for everyday work and the need to visualize fMRI data on tablets or smartphones arises. However, there are few mobile tools available to visualize 3D MRI data, let alone 4D fMRI data. Building volume rendering tools on mobile devices to visualize 3D and 4D medical data is challenging given the limited computational power of the devices. This paper describes research that explored the feasibility of performing real-time 3D and 4D volume raycasting on a tablet device. The prototype application was tested on a 9.7" iPad Pro using two different fMRI datasets of brain activity. The results show that mobile raycasting is able to achieve between 20 and 40 frames per second for traditional 3D datasets, depending on the sampling interval, and up to 9 frames per second for 4D data. While the prototype application did not always achieve true real-time interaction, these results clearly demonstrated that visualizing 3D and 4D digital medical data is feasible with a properly constructed software framework.
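
    The paper does not spell out its raycasting loop, but the core of a typical volume raycaster is front-to-back alpha compositing of samples along each ray, with early-ray termination as a standard optimization. A minimal single-ray sketch, with an illustrative (assumed) transfer function:

```python
def composite_ray(samples, transfer):
    """Front-to-back alpha compositing along one ray.

    samples  -- scalar values at successive sample points along the ray
    transfer -- maps a scalar to a (color, alpha) pair in [0, 1]
    Early-ray termination stops once the ray is nearly opaque.
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        color += (1.0 - alpha) * a * c   # add light not yet occluded
        alpha += (1.0 - alpha) * a       # accumulate opacity
        if alpha > 0.99:                 # early-ray termination
            break
    return color, alpha

# Toy grayscale transfer function: brighter voxels are more opaque.
tf = lambda s: (s, 0.5 * s)
c, a = composite_ray([0.2, 0.8, 0.9, 0.9], tf)
assert 0.0 <= a <= 1.0
```

    Shortening the sampling interval raises quality at the cost of frame rate, which matches the abstract's observation that mobile frame rates depend on the sampling interval.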

  2. Intermediate Cognitive Phenotypes in Bipolar Disorder

    PubMed Central

    Langenecker, Scott A.; Saunders, Erika F.H.; Kade, Allison M.; Ransom, Michael T.; McInnis, Melvin G.

    2013-01-01

    Background Intermediate cognitive phenotypes (ICPs) are measurable and quantifiable states that may be objectively assessed in a standardized method, and can be integrated into association studies, including genetic, biochemical, clinical, and imaging based correlates. The present study used neuropsychological measures as ICPs, with factor scores in executive functioning, attention, memory, fine motor function, and emotion processing, similar to prior work in schizophrenia. Methods Healthy control subjects (HC, n=34) and euthymic (E, n=66), depressed (D, n=43), or hypomanic/mixed (HM, n=13) patients with bipolar disorder (BD) were assessed with neuropsychological tests. These were from eight domains consistent with previous literature; auditory memory, visual memory, processing speed with interference resolution, verbal fluency and processing speed, conceptual reasoning and set-shifting, inhibitory control, emotion processing, and fine motor dexterity. Results Of the eight factor scores, the HC group outperformed the E group in three (Processing Speed with Interference Resolution, Visual Memory, Fine Motor Dexterity), the D group in seven (all except Inhibitory Control), and the HM group in four (Inhibitory Control, Processing Speed with Interference Resolution, Fine Motor Dexterity, and Auditory Memory). Limitations The HM group was relatively small, thus effects of this phase of illness may have been underestimated. Effects of medication could not be fully controlled without a randomized, double-blind, placebo-controlled study. Conclusions Use of the factor scores can assist in determining ICPs for BD and related disorders, and may provide more specific targets for development of new treatments. We highlight strong ICPs (Processing Speed with Interference Resolution, Visual Memory, Fine Motor Dexterity) for further study, consistent with the existing literature. PMID:19800130

  3. Energy Requirement Assessment and Water Turnover in Japanese College Wrestlers Using the Doubly Labeled Water Method.

    PubMed

    Sagayama, Hiroyuki; Kondo, Emi; Shiose, Keisuke; Yamada, Yosuke; Motonaga, Keiko; Ouchi, Shiori; Kamei, Akiko; Osawa, Takuya; Nakajima, Kohei; Takahashi, Hideyuki; Higaki, Yasuki; Tanaka, Hiroaki

    2017-01-01

    Estimated energy requirements (EERs) are important for sports based on body weight classifications to aid in weight management. The basis for establishing EERs varies and includes self-reported energy intake (EI), predicted energy expenditure, and measured daily energy expenditure. Currently, however, no studies have been performed with male wrestlers using the highly accurate and precise doubly labeled water (DLW) method to estimate energy and fluid requirements. The primary aim of this study was to compare total energy expenditure (TEE) measured with the DLW method against self-reported EI in collegiate wrestlers during a normal training period. The secondary aims were to measure the water turnover and the physical activity level (PAL) of the athletes, and to examine the accuracy of two currently used equations to predict EER. Ten healthy males (age, 20.4±0.5 y) belonging to the East-Japan college league participated in this study. TEE was measured using the DLW method, and EI was assessed with self-reported dietary records for ~1 wk. There was a significant difference between TEE (17.9±2.5 MJ·d⁻¹ [4,283±590 kcal·d⁻¹]) and self-reported EI (14.4±3.3 MJ·d⁻¹ [3,446±799 kcal·d⁻¹]), a difference of 19%. The water turnover was 4.61±0.73 L·d⁻¹. The measured PAL (2.6±0.3) was higher than the two predicted values during the training season, and thus the two EER prediction equations produced underestimated values relative to DLW. We found that previous EERs underestimated requirements in collegiate wrestlers and that those estimates should be revised.

  4. 3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Koeva, M. N.

    2016-06-01

    Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are discussed. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; and (3) 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information, such as interactive maps, satellite imagery, sound, video and object-specific information, are described.
This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.

  5. Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"

    ERIC Educational Resources Information Center

    Wu, Bing; Klatzky, Roberta L.; Stetten, George

    2010-01-01

    The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…

  6. Bedside assistance in freehand ultrasonic diagnosis by real-time visual feedback of 3D scatter diagram of pulsatile tissue-motion

    NASA Astrophysics Data System (ADS)

    Fukuzawa, M.; Kawata, K.; Nakamori, N.; Kitsunezuka, Y.

    2011-03-01

    By real-time visual feedback of a 3D scatter diagram of pulsatile tissue motion, freehand ultrasonic diagnosis of neonatal ischemic diseases has been assisted at the bedside. The 2D ultrasonic movie was taken with a conventional ultrasonic apparatus (ATL HDI5000) and 5-7 MHz ultrasonic probes fitted with a compact tilt sensor to measure the probe orientation. Real-time 3D visualization was realized by developing an extended version of the PC-based visualization system. The software was originally developed on the DirectX platform and optimized with the streaming SIMD extensions. The 3D scatter diagram of the latest pulsatile tissues was continuously generated and visualized as a projection image, together with the ultrasonic movie of the current section, at more than 15 fps. It revealed the 3D structure of pulsatile tissues such as the middle and posterior cerebral arteries, the circle of Willis and the cerebellar arteries, whose blood flow is of great interest to pediatricians because asphyxiated and/or low-birth-weight neonates have a high risk of ischemic diseases such as hypoxic-ischemic encephalopathy and periventricular leukomalacia. Since the pulsatile tissue motion is due to local blood flow, it can be concluded that the system developed in this work is very useful for assisting freehand ultrasonic diagnosis of ischemic diseases in the neonatal cranium.
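
    The abstract does not detail how the 2D movie pixels and the tilt-sensor readings are fused into a 3D scatter diagram. As a hedged sketch, if the tilt sensor reports a single rotation of the scan plane about the probe's lateral axis, each in-plane point can be placed in 3D as follows (the frame convention and single-axis assumption are illustrative, not taken from the paper):

```python
import math

def section_point_to_3d(u, v, tilt_deg):
    """Place an in-plane point of a freehand 2D section into 3D.

    u        -- lateral coordinate within the scan plane
    v        -- depth coordinate within the scan plane
    tilt_deg -- assumed single rotation of the plane about the
                probe's lateral (x) axis, from the tilt sensor
    Returns (x, y, z) in a probe-fixed frame.
    """
    t = math.radians(tilt_deg)
    # The depth axis of the rotated plane splits into y and z components.
    return (u, v * math.sin(t), v * math.cos(t))

p = section_point_to_3d(3.0, 4.0, 30.0)
# The rotation preserves the point's distance from the tilt axis:
assert abs(math.hypot(p[1], p[2]) - 4.0) < 1e-9
```

    Accumulating such points over many tilted sections, keeping only pixels flagged as pulsatile, would yield the kind of 3D scatter diagram the system projects in real time.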

  7. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    NASA Astrophysics Data System (ADS)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
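
    The horizontal image translation step reduces to simple arithmetic: shifting the two views by half the fixated object's disparity, in opposite directions, brings that object onto the screen plane while shifting all other disparities uniformly. A sketch of the arithmetic (not the authors' code):

```python
def rezero_disparity(xl, xr, d_fix):
    """Horizontal image translation for stereoscopic comfort.

    xl, xr -- x-coordinates of a feature in the left/right images (px)
    d_fix  -- screen disparity of the fixated object (px)
    Shifts the left image by -d_fix/2 and the right by +d_fix/2 so the
    fixated object lands on the screen plane (zero disparity).
    """
    return xl - d_fix / 2.0, xr + d_fix / 2.0

# Fixated object seen at x=110 (left) and x=100 (right): disparity 10 px.
nl, nr = rezero_disparity(110.0, 100.0, d_fix=110.0 - 100.0)
assert nl - nr == 0.0   # object-of-interest now at screen depth

# Another object at disparity 4 px keeps its relative depth ordering:
ol, orr = rezero_disparity(104.0, 100.0, d_fix=10.0)
assert ol - orr == -6.0  # all disparities shifted uniformly by -10 px
```

    In the online system described above, d_fix would come from sampling the disparity map at the gaze position reported by the eye tracker.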

  9. Subjective experiences of watching stereoscopic Avatar and U2 3D in a cinema

    NASA Astrophysics Data System (ADS)

    Pölönen, Monika; Salmimaa, Marja; Takatalo, Jari; Häkkinen, Jukka

    2012-01-01

    A stereoscopic 3-D version of the film Avatar was shown to 85 people who subsequently answered questions related to sickness, visual strain, stereoscopic image quality, and sense of presence. Viewing Avatar for 165 min induced some symptoms of visual strain and sickness, but the symptom levels remained low. A comparison between Avatar and previously published results for the film U2 3D showed that sickness and visual strain levels were similar despite the films' runtimes. The genre of the film had a significant effect on the viewers' opinions and sense of presence. Avatar, which has been described as a combination of action, adventure, and sci-fi genres, was experienced as more immersive and engaging than the music documentary U2 3D. However, participants in both studies were immersed, focused, and absorbed in watching the stereoscopic 3-D (S3-D) film and were pleased with the film environments. The results also showed that previous stereoscopic 3-D experience significantly reduced the amount of reported eye strain and complaints about the weight of the viewing glasses.

  10. Breast sentinel lymph node navigation with three-dimensional computed tomography-lymphography: a 12-year study.

    PubMed

    Yamamoto, Shigeru; Suga, Kazuyoshi; Maeda, Kazunari; Maeda, Noriko; Yoshimura, Kiyoshi; Oka, Masaaki

    2016-05-01

    To evaluate the utility of three-dimensional (3D) computed tomography (CT)-lymphography (LG) breast sentinel lymph node navigation in our institute. Between 2002 and 2013, we preoperatively identified sentinel lymph nodes (SLNs) in 576 clinically node-negative breast cancer patients with T1 and T2 breast cancer using 3D CT-LG method. SLN biopsy (SLNB) was performed in 557 of 576 patients using both the images of 3D CT-LG for guidance and the blue dye method. Using 3D CT-LG, SLNs were visualized in 569 (99%) of 576 patients. Of 569 patients, both lymphatic draining ducts and SLNs from the peritumoral and periareolar areas were visualized in 549 (96%) patients. Only SLNs without lymphatic draining ducts were visualized in 20 patients. Drainage lymphatic pathways visualized with 3D CT-LG (549 cases) were classified into four patterns: single route/single SLN (355 cases, 65%), multiple routes/single SLN (59 cases, 11%) single route/multiple SLNs (62 cases, 11%) and multiple routes/multiple SLNs (73 cases, 13%). SLNs were detected in 556 (99.8%) of 557 patients during SLNB. CT-LG is useful for preoperative visualization of SLNs and breast lymphatic draining routes. This preoperative method should contribute greatly to the easy detection of SLNs during SLNB.

  11. Observational Constraints on Glyoxal Production from Isoprene Oxidation and Its Contribution to Organic Aerosol over the Southeast United States

    NASA Technical Reports Server (NTRS)

    Li, Jingyi; Mao, Jingqiu; Min, Kyung-Eun; Washenfelder, Rebecca A.; Brown, Steven S.; Kaiser, Jennifer; Keutsch, Frank N.; Volkamer, Rainer; Wolfe, Glenn M.; Hanisco, Thomas F.

    2016-01-01

    We use a 0-D photochemical box model and a 3-D global chemistry-climate model, combined with observations from the NOAA Southeast Nexus (SENEX) aircraft campaign, to understand the sources and sinks of glyoxal over the Southeast United States. Box model simulations suggest a large difference in glyoxal production among three isoprene oxidation mechanisms (AM3ST, AM3B, and Master Chemical Mechanism (MCM) v3.3.1). These mechanisms are then implemented into a 3-D global chemistry-climate model. Comparison with field observations shows that the average vertical profile of glyoxal is best reproduced by AM3ST with an effective reactive uptake coefficient γglyx of 2 × 10⁻³ and AM3B without heterogeneous loss of glyoxal. The two mechanisms lead to 0-0.8 μg m⁻³ secondary organic aerosol (SOA) from glyoxal in the boundary layer of the Southeast U.S. in summer. We consider this to be the lower limit for the contribution of glyoxal to SOA, as sources of glyoxal other than isoprene are not included in our model. In addition, we find that AM3B shows better agreement on both formaldehyde and the correlation between glyoxal and formaldehyde (RGF = [GLYX]/[HCHO]), resulting from the suppression of δ-isoprene peroxy radicals (δ-ISOPO2). We also find that MCM v3.3.1 may underestimate glyoxal production from isoprene oxidation, in part due to an underestimated yield from the reaction of isoprene epoxydiol (IEPOX) peroxy radicals with HO2. Our work highlights that the gas-phase production of glyoxal represents a large uncertainty in quantifying its contribution to SOA.
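
    For context, a reactive uptake coefficient γ is conventionally converted into a first-order heterogeneous loss rate via k = γ·ω̄·A/4, where ω̄ is the mean molecular speed and A the aerosol surface area density. The sketch below uses the paper's γ for glyoxal but an assumed, purely illustrative surface area density:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def mean_speed(T, M):
    """Mean molecular speed (m/s); T in K, molar mass M in kg/mol."""
    return math.sqrt(8.0 * R * T / (math.pi * M))

def k_het(gamma, T, M, surf_area):
    """First-order heterogeneous loss rate (s^-1):
    k = gamma * mean_speed * A / 4, where A is the aerosol surface
    area density (m^2 of surface per m^3 of air)."""
    return gamma * mean_speed(T, M) * surf_area / 4.0

# Glyoxal (M = 58.04 g/mol) at 298 K with gamma = 2e-3 as in the paper,
# and an ASSUMED boundary-layer surface area density of 1e-4 m^-1:
k = k_het(2e-3, 298.0, 0.05804, 1e-4)
lifetime_h = 1.0 / k / 3600.0  # lifetime against uptake, hours
```

    With these illustrative numbers the uptake lifetime comes out on the order of several hours, i.e. fast enough to compete with gas-phase sinks, which is why γ matters so much for the SOA estimate.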

  15. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, two problems arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range-dependent statistical bias; when filtering, this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
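
    The range-dependent bias is easy to reproduce: triangulated range Z = f·b/d is convex in the disparity d, so zero-mean disparity noise inflates the expected range (Jensen's inequality), and a second-order series expansion predicts the size of the effect. A self-contained Monte Carlo sketch (not the paper's estimator), with arbitrary illustrative units:

```python
import random

random.seed(0)
fb = 1.0        # focal length * baseline (arbitrary units)
d_true = 0.01   # true disparity -> true range fb/d_true = 100
sigma = 0.001   # disparity noise, 10% of d_true (long-range regime)

# Triangulate many noisy disparity measurements of the same point.
est = [fb / (d_true + random.gauss(0.0, sigma)) for _ in range(200_000)]
mean_range = sum(est) / len(est)

# Second-order series expansion of the bias: E[Z] ~ Z * (1 + sigma^2/d^2)
predicted = (fb / d_true) * (1.0 + (sigma / d_true) ** 2)
```

    The Monte Carlo mean lands near the predicted 101 rather than the true 100, i.e. about a 1% over-estimate at a 10% disparity noise level; subtracting the series-expansion term is the kind of correction the paper develops.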

  16. An overview of 3D software visualization.

    PubMed

    Teyseyre, Alfredo R; Campo, Marcelo R

    2009-01-01

    Software visualization studies techniques and methods for graphically representing different aspects of software. Its main goal is to enhance, simplify and clarify the mental representation a software engineer has of a computer system. For many years, visualization in 2D space was actively studied, but in the last decade researchers have begun to explore new 3D representations for visualizing software. In this article, we present an overview of current research in the area, describing several major aspects: visual representations, interaction issues, evaluation methods and development tools. We also survey some representative tools that support different tasks, e.g., software maintenance and comprehension, requirements validation and algorithm animation for educational purposes, among others. Finally, we conclude by identifying future research directions.

  17. Advanced Visualization of Experimental Data in Real Time Using LiveView3D

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    LiveView3D is a software application that imports and displays a variety of wind tunnel derived data in an interactive virtual environment in real time. LiveView3D combines the use of streaming video fed into a three-dimensional virtual representation of the test configuration with networked communications to the test facility Data Acquisition System (DAS). This unified approach to real time data visualization provides a unique opportunity to comprehend very large sets of diverse forms of data in a real time situation, as well as in post-test analysis. This paper describes how LiveView3D has been implemented to visualize diverse forms of aerodynamic data gathered during wind tunnel experiments, most notably at the NASA Langley Research Center Unitary Plan Wind Tunnel (UPWT). Planned future developments of the LiveView3D system are also addressed.

  18. Recent results in visual servoing

    NASA Astrophysics Data System (ADS)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots or aerial robots, but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning or mobile-target-tracking tasks can be implemented by controlling from one to all of the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the available image measurements, allowing the desired degrees of freedom to be controlled. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. Depending on the selected visual features, the behavior of the system will have particular properties with respect to stability, robustness to noise or to calibration errors, the robot's 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field by the Lagadic group at IRISA/INRIA Rennes. Several application results will also be described.
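
    As a minimal illustration of the kind of control law at the heart of visual servoing (a textbook sketch, not code from the talk), consider a single point feature observed under pure lateral camera translation at an assumed known depth Z. The feature kinematics are x_dot = -v_x / Z, so the proportional law v_x = λ·Z·(x − x*) drives the feature error to zero exponentially:

```python
def servo(x0, x_star, Z=1.0, gain=0.5, dt=0.1, steps=200):
    """Simulate a 1-DOF image-based visual servoing loop.

    x0     -- initial image coordinate of the point feature
    x_star -- desired image coordinate
    Z      -- assumed depth of the point (m)
    The error (x - x_star) decays by a factor (1 - gain*dt) per step.
    """
    x = x0
    for _ in range(steps):
        vx = gain * Z * (x - x_star)   # control law: v = lambda * Z * e
        x += -vx / Z * dt              # feature kinematics: x_dot = -vx/Z
    return x

x_final = servo(x0=0.4, x_star=0.1)
assert abs(x_final - 0.1) < 1e-3   # feature error has converged
```

    Full 6-DOF servoing replaces this scalar gain with the pseudo-inverse of an interaction matrix stacking one such relation per feature, but the exponential-decay behavior of the error is the same.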

  19. Pot/Lid Illusion

    PubMed Central

    Kennedy, John M.

    2016-01-01

    A new everyday visual size illusion is presented—the Pot/Lid illusion. Observers choose an unduly large lid for a pot. We ask whether the optic slant of the pot brim would increase its apparent size or if vision underestimates the size of tilted lids. PMID:27698990

  20. Scientific Visualization Made Easy for the Scientist

    NASA Astrophysics Data System (ADS)

    Westerhoff, M.; Henderson, B.

    2002-12-01

    amira is an application program for creating 3D visualizations and geometric models of 3D image data sets from various application areas, e.g. medicine, biology, biochemistry, chemistry, physics, and engineering. It has seen significant adoption in the marketplace since becoming commercially available in 2000. The rapid adoption has expanded the features requested by the user base and broadened the scope of the amira product offering, which includes amira Standard; amiraDev, used by users to extend the product's capabilities; amiraMol, used for molecular visualization; amiraDeconv, used to improve the quality of image data; and amiraVR, used in immersive VR environments. amira allows the user to construct a visualization tailored to his or her needs without requiring any programming knowledge. It also allows 3D objects to be represented as grids suitable for numerical simulations, notably as triangular surfaces and volumetric tetrahedral grids. The amira application also provides methods to generate such grids from voxel data representing an image volume, and it includes a general-purpose interactive 3D viewer. amiraDev provides an application-programming interface (API) that allows the user to add new components by C++ programming. amira supports many import formats, including a 'raw' format allowing immediate access to native uniform data sets. amira uses the power and speed of the OpenGL and Open Inventor graphics libraries and 3D graphics accelerators to give access to over 145 modules for processing, probing, analyzing, and visualizing data. The amiraMol extension adds powerful tools for molecular visualization to the existing amira platform. amiraMol contains support for standard molecular file formats, and tools for visualization and analysis of static molecules as well as molecular trajectories (time series). amiraDeconv adds tools for the deconvolution of 3D microscopic images. 
Deconvolution is the process of increasing image quality and resolution by computationally compensating for artifacts of the recording process. amiraDeconv supports 3D wide-field microscopy as well as 3D confocal microscopy. It offers both non-blind and blind image deconvolution algorithms. Non-blind deconvolution uses an individually measured point spread function, while blind algorithms work on the basis of only a few recording parameters (such as numerical aperture or zoom factor). amiraVR is a specialized and extended version of the amira visualization system dedicated to use in immersive installations, such as large-screen stereoscopic projections, CAVE, or Holobench systems. Among other features, it supports multi-threaded multi-pipe rendering, head-tracking, advanced 3D interaction concepts, and 3D menus allowing interaction with any amira object in the same way as on the desktop. With its unique set of features, amiraVR is both a VR (Virtual Reality)-ready application for scientific and medical visualization in immersive environments and a development platform for building VR applications.

  1. The quasiparticle band structure of zincblende and rocksalt ZnO.

    PubMed

    Dixit, H; Saniz, R; Lamoen, D; Partoens, B

    2010-03-31

    We present the quasiparticle band structure of ZnO in its zincblende (ZB) and rocksalt (RS) phases at the Γ point, calculated within the GW approximation. The effect of the p-d hybridization on the quasiparticle corrections to the band gap is discussed. We compare three systems: ZB-ZnO, which shows strong p-d hybridization and has a direct band gap; RS-ZnO, which is also hybridized but includes inversion symmetry and therefore has an indirect band gap; and ZB-ZnS, which shows weaker hybridization due to the change of chemical species from oxygen to sulfur. The quasiparticle corrections are calculated with different numbers of valence electrons in the Zn pseudopotential. We find that the Zn(20+) pseudopotential is essential for an adequate treatment of the exchange interaction in the self-energy. The calculated GW band gaps are 2.47 eV and 4.27 eV for the ZB and RS phases, respectively. The ZB-ZnO band gap is underestimated by ∼0.8 eV compared to the experimental value of 3.27 eV. The RS-ZnO band gap compares well with the experimental value of 4.5 eV. The underestimation for ZB-ZnO is correlated with the strong p-d hybridization. The GW band gap for ZnS is 3.57 eV, compared to the experimental value of 3.8 eV.
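    For orientation (a standard textbook relation, not quoted from the paper): in such GW calculations the quasiparticle energies are obtained by correcting the Kohn-Sham eigenvalues with the difference between the self-energy Σ = iGW and the DFT exchange-correlation potential,

    ```latex
    E^{\mathrm{QP}}_{n\mathbf{k}} = \varepsilon^{\mathrm{DFT}}_{n\mathbf{k}}
      + \langle \psi_{n\mathbf{k}} \,|\, \Sigma\!\left(E^{\mathrm{QP}}_{n\mathbf{k}}\right) - V_{xc} \,|\, \psi_{n\mathbf{k}} \rangle .
    ```

    The exchange (Fock-like) part of Σ is what the abstract argues requires the Zn(20+) pseudopotential, i.e. treating the semicore 3s and 3p shells together with the 3d and 4s electrons as valence.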

  2. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    PubMed Central

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants' performance on the Shepard and Metzler (1971) mental rotation (MR) task across three types of visual presentation environments: traditional 2D non-immersive (2DNI), 3D non-immersive (3DNI; anaglyphic glasses), and 3DI (head-mounted display with position and head-orientation tracking). In Experiment 2, we examined how the use of different backgrounds affected MR processes within the 3DI environment. In Experiment 3, we compared electroencephalogram data recorded while participants were mentally rotating visual-spatial images presented in 3DI vs. 2DNI environments. Overall, the findings of the three experiments suggest that visual-spatial processing is different in immersive and non-immersive environments, and that immersive environments may require different image encoding and transformation strategies than the two non-immersive environments. Specifically, in a non-immersive environment, participants may utilize a scene-based frame of reference and allocentric encoding, whereas immersive environments may encourage the use of a viewer-centered frame of reference and egocentric encoding. These findings also suggest that MR performed in laboratory conditions using a traditional 2D computer screen may not reflect spatial processing as it would occur in the real world. PMID:22908003

  3. A Unified Air-Sea Visualization System: Survey on Gridding Structures

    NASA Technical Reports Server (NTRS)

    Anand, Harsh; Moorhead, Robert

    1995-01-01

    The goal is to develop a Unified Air-Sea Visualization System (UASVS) to enable the rapid fusion of observational, archival, and model data for verification and analysis. To design and develop UASVS, modelers were polled to determine the gridding structures and visualization systems used, and their needs with respect to visual analysis. A basic UASVS requirement is to allow a modeler to explore multiple data sets within a single environment, or to interpolate multiple datasets onto one unified grid. From this survey, the UASVS should be able to visualize 3D scalar/vector fields; render isosurfaces; visualize arbitrary slices of the 3D data; visualize data defined on spectral element grids with the minimum number of interpolation stages; render contours; produce 3D vector plots and streamlines; provide unified visualization of satellite images, observations and model output overlays; display the visualization on a projection of the user's choice; implement functions so the user can derive diagnostic values; animate the data to see the time evolution; animate ocean and atmosphere at different rates; store the record of cursor movement, smooth the path, and animate a window around the moving path; repeatedly start and stop the visual time-stepping; generate VHS tape animations; work on a variety of workstations; and allow visualization across clusters of workstations and scalable high-performance computer systems.

  4. Forecasting and visualization of wildfires in a 3D geographical information system

    NASA Astrophysics Data System (ADS)

    Castrillón, M.; Jorge, P. A.; López, I. J.; Macías, A.; Martín, D.; Nebot, R. J.; Sabbagh, I.; Quintana, F. M.; Sánchez, J.; Sánchez, A. J.; Suárez, J. P.; Trujillo, A.

    2011-03-01

    This paper describes a wildfire forecasting application based on a 3D virtual environment and a fire simulation engine. A novel open-source framework is presented for the development of 3D graphics applications over large geographic areas, offering high-performance 3D visualization and powerful interaction tools for the Geographic Information Systems (GIS) community. The application includes a remote module that allows simultaneous connections of several users for monitoring a real wildfire event. The system is able to make a realistic composition of what is really happening in the area of the wildfire with dynamic 3D objects and the location of human and material resources in real time, providing a new perspective for analyzing the wildfire information. The user can simulate and visualize the propagation of a fire on the terrain, integrating spatial information on topography and vegetation types with weather and wind data. The application communicates with a remote web service that is in charge of the simulation task. The user may specify several parameters through a friendly interface before the application sends the information to the remote server responsible for carrying out the wildfire forecast using the FARSITE simulation model. During the process, the server connects to different external resources to obtain up-to-date meteorological data. The client application implements a realistic 3D visualization of the fire evolution on the landscape. A Level-Of-Detail (LOD) strategy contributes to improving the performance of the visualization system.

  5. Visual Spatial Attention Training Improves Spatial Attention and Motor Control for Unilateral Neglect Patients.

    PubMed

    Wang, Wei; Ji, Xiangtong; Ni, Jun; Ye, Qian; Zhang, Sicong; Chen, Wenli; Bian, Rong; Yu, Cui; Zhang, Wenting; Shen, Guangyu; Machado, Sergio; Yuan, Tifei; Shan, Chunlei

    2015-01-01

    The aim was to compare the effect of visual spatial training on spatial attention with its effect on motor control, and to correlate improvement in spatial attention with motor control progress after visual spatial training in subjects with unilateral spatial neglect (USN). Nine cases with USN after right cerebral stroke were randomly divided into a conventional treatment + visual spatial attention group and a conventional treatment group. The conventional treatment + visual spatial attention group received conventional rehabilitation therapy (physical and occupational therapy) and visual spatial attention training (optokinetic stimulation and right half-field eye patching). The conventional treatment group was treated with conventional rehabilitation training (physical and occupational therapy) only. All patients were assessed by the behavioral inattention test (BIT), the Fugl-Meyer Assessment of motor function (FMA), the equilibrium coordination test (ECT) and the non-equilibrium coordination test (NCT) before and after 4 weeks of treatment. Total scores in both groups (without/with visual spatial attention) improved significantly after treatment (BIT: P=0.021/P=0.000, d=1.667/d=2.116, power=0.69/power=0.98, 95%CI[-0.8839,45.88]/95%CI[16.96,92.64]; FMA: P=0.002/P=0.000, d=2.521/d=2.700, power=0.93/power=0.98, 95%CI[5.707,30.79]/95%CI[16.06,53.94]; ECT: P=0.002/P=0.000, d=2.031/d=1.354, power=0.90/power=0.17, 95%CI[3.380,42.61]/95%CI[-1.478,39.08]; NCT: P=0.013/P=0.000, d=1.124/d=1.822, power=0.41/power=0.56, 95%CI[-7.980,37.48]/95%CI[4.798,43.60]). Between the two groups, the group with visual spatial attention improved significantly more in BIT (P=0.003, d=3.103, power=1, 95%CI[15.68,48.92]), FMA of the upper extremity (P=0.006, d=2.771, power=1, 95%CI[5.061,20.14]) and NCT (P=0.010, d=2.214, power=0.81-0.90, 95%CI[3.018,15.88]). 
Correlative analysis shows that the change in BIT scores is positively correlated with the change in FMA total score (r=0.77, P<0.01), FMA of the upper extremity (r=0.81, P<0.01), and NCT (r=0.78, P<0.01). Four weeks of visual spatial training improved spatial attention as well as motor control functions in hemineglect patients. The improvement in motor function is positively correlated with the progress of visual spatial functions after visual spatial attention training.
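    The d values reported above are effect sizes; assuming they follow the usual Cohen's d convention (an interpretation on our part, not stated in the abstract), a pooled-standard-deviation Cohen's d for two groups can be computed as:

    ```python
    import math

    def cohens_d(group1, group2):
        """Cohen's d with pooled standard deviation: the standardized
        difference between two group means."""
        n1, n2 = len(group1), len(group2)
        m1 = sum(group1) / n1
        m2 = sum(group2) / n2
        v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variance
        v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
        pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
        return (m1 - m2) / pooled_sd
    ```

    Values around 0.2, 0.5, and 0.8 are conventionally read as small, medium, and large effects, so the d values above (1.1-3.1) indicate large effects in small samples.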

  6. Superiority of a functional leukocyte adhesiveness/aggregation test over the white blood cell count to discriminate between mild and significant inflammatory response in patients with acute bacterial infections.

    PubMed

    Rogowski, Ori; Rotstein, Rivka; Zeltzer, David; Misgav, Sarit; Justo, Daniel; Avitzour, Daniel; Mardi, Tamar; Serov, Jacob; Arber, Nadir; Berliner, Shlomo; Shapira, Itzhak

    2002-01-01

    Electronic cell counters may underestimate the white blood cell count (WBCC) in the presence of aggregated leukocytes. In the present study we focused on the possibility of using a functional, as opposed to an anatomic, count to circumvent this potential underestimation. A model of bacterial infection was used because of the importance of leukocytosis in the physician's clinical decision-making process. There were 35 patients with low C-reactive protein (CRP) concentrations (0.5-4.9 mg/dL), 45 with intermediate (5-9.9 mg/dL), and 120 with relatively high (>10 mg/dL) CRP concentrations. A significant (P=0.008) difference was noted between the state of leukocyte adhesiveness/aggregation in the peripheral blood of individuals with low CRP concentrations (3.5%+/-4.3%) and those with high CRP concentrations (7.4%+/-8%), while there was no significant difference in the respective number of WBCs per cubic millimeter (11,600 +/- 5,500 and 14,000 +/- 7,200, respectively). We raise the possibility that a functional test might be superior to an anatomic count in patients with acute bacterial infection and a significant acute-phase response. Copyright 2002 Wiley-Liss, Inc.

  7. High Performance Computing and Cutting-Edge Analysis Can Open New Realms

    Science.gov Websites

    March 1, 2018. Two people looking at 3D interactive graphical data in the Visualization Center … capabilities to visualize complex, 3D images of the wakes from multiple wind turbines so that we can better …

  8. Relativistic compression and expansion of experiential time in the left and right space.

    PubMed

    Vicario, Carmelo Mario; Pecoraro, Patrizia; Turriziani, Patrizia; Koch, Giacomo; Caltagirone, Carlo; Oliveri, Massimiliano

    2008-03-05

    Time, space and numbers are closely linked in the physical world. However, the relativistic-like effects of spatial and magnitude factors on time perception remain poorly investigated. Here we investigated whether duration judgments of digit visual stimuli are biased depending on the side of space where the stimuli are presented and on the magnitude of the stimulus itself. Different groups of healthy subjects performed duration judgment tasks on various types of visual stimuli. In the first two experiments, visual stimuli consisted of digit pairs (1 and 9), presented in the centre of the screen or in the right and left space. In a third experiment, visual stimuli consisted of black circles. The duration of the reference stimulus was fixed at 300 ms. Subjects had to indicate the relative duration of the test stimulus compared with the reference one. The main results showed that, regardless of digit magnitude, the duration of stimuli presented in the left hemispace is underestimated and that of stimuli presented in the right hemispace is overestimated. On the other hand, in the midline position, duration judgments are affected by the numerical magnitude of the presented stimulus, with time underestimation for stimuli of low magnitude and time overestimation for stimuli of high magnitude. These results argue for strict interactions between space, time and magnitude representation in the human brain.

  9. Visually estimated ejection fraction by two dimensional and triplane echocardiography is closely correlated with quantitative ejection fraction by real-time three dimensional echocardiography

    PubMed Central

    Shahgaldi, Kambiz; Gudmundsson, Petri; Manouras, Aristomenis; Brodin, Lars-Åke; Winter, Reidar

    2009-01-01

    Background Visual assessment of left ventricular ejection fraction (LVEF) is often used in clinical routine despite general recommendations to use quantitative biplane Simpson's (BPS) measurements. Even though quantitative methods are well validated and for many reasons preferable, visual assessment ('eyeballing') is superior in feasibility. There are to date only sparse data comparing visual EF assessment with quantitative methods. The aim of this study was to compare visual EF assessment by two-dimensional echocardiography (2DE) and triplane echocardiography (TPE) using quantitative real-time three-dimensional echocardiography (RT3DE) as the reference method. Methods Thirty patients were enrolled in the study. Eyeballing EF was assessed using apical 4- and 2-chamber views and TP mode by two experienced readers blinded to all clinical data. The measurements were compared to quantitative RT3DE. Results There was an excellent correlation between eyeballing EF by 2D and TP vs 3DE (r = 0.91 and 0.95, respectively) without any significant bias (-0.5 ± 3.7% and -0.2 ± 2.9%, respectively). Intraobserver variability was 3.8% for eyeballing 2DE, 3.2% for eyeballing TP and 2.3% for quantitative 3D-EF. Interobserver variability was 7.5% for eyeballing 2D and 8.4% for eyeballing TP. Conclusion Visual estimation of LVEF both using 2D and TP by an experienced reader correlates well with quantitative EF determined by RT3DE. There is an apparent trend towards smaller variability using TP in comparison to 2D; this was, however, not statistically significant. PMID:19706183
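    For reference (this is the textbook definition, not the study's measurement software): quantitative ejection fraction is derived from the end-diastolic volume (EDV) and end-systolic volume (ESV) as EF = (EDV − ESV)/EDV. A minimal sketch:

    ```python
    def ejection_fraction(edv_ml, esv_ml):
        """Left ventricular ejection fraction (%) from end-diastolic (EDV)
        and end-systolic (ESV) volumes: EF = (EDV - ESV) / EDV * 100."""
        if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
            raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
        return (edv_ml - esv_ml) / edv_ml * 100.0
    ```

    The methods differ only in how the volumes are obtained: BPS estimates them from two orthogonal 2D planes, whereas RT3DE measures them from the full 3D data set.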

  10. 3D Shape Perception in Posterior Cortical Atrophy: A Visual Neuroscience Perspective.

    PubMed

    Gillebert, Céline R; Schaeverbeke, Jolien; Bastin, Christine; Neyens, Veerle; Bruffaerts, Rose; De Weer, An-Sofie; Seghers, Alexandra; Sunaert, Stefan; Van Laere, Koen; Versijpt, Jan; Vandenbulcke, Mathieu; Salmon, Eric; Todd, James T; Orban, Guy A; Vandenberghe, Rik

    2015-09-16

    Posterior cortical atrophy (PCA) is a rare focal neurodegenerative syndrome characterized by progressive visuoperceptual and visuospatial deficits, most often due to atypical Alzheimer's disease (AD). We applied insights from basic visual neuroscience to analyze 3D shape perception in humans affected by PCA. Thirteen PCA patients and 30 matched healthy controls participated, together with two patient control groups with diffuse Lewy body dementia (DLBD) and an amnestic-dominant phenotype of AD, respectively. The hierarchical study design consisted of 3D shape processing for 4 cues (shading, motion, texture, and binocular disparity) with corresponding 2D and elementary feature extraction control conditions. PCA and DLBD exhibited severe 3D shape-processing deficits and AD to a lesser degree. In PCA, deficient 3D shape-from-shading was associated with volume loss in the right posterior inferior temporal cortex. This region coincided with a region of functional activation during 3D shape-from-shading in healthy controls. In PCA patients who performed the same fMRI paradigm, response amplitude during 3D shape-from-shading was reduced in this region. Gray matter volume in this region also correlated with 3D shape-from-shading in AD. 3D shape-from-disparity in PCA was associated with volume loss slightly more anteriorly in posterior inferior temporal cortex as well as in ventral premotor cortex. The findings in right posterior inferior temporal cortex and right premotor cortex are consistent with neurophysiologically based models of the functional anatomy of 3D shape processing. However, in DLBD, 3D shape deficits rely on mechanisms distinct from inferior temporal structural integrity. Posterior cortical atrophy (PCA) is a neurodegenerative syndrome characterized by progressive visuoperceptual dysfunction and most often an atypical presentation of Alzheimer's disease (AD) affecting the ventral and dorsal visual streams rather than the medial temporal system. 
We applied insights from fundamental visual neuroscience to analyze 3D shape perception in PCA. 3D shape processing was affected beyond what could be accounted for by lower-order processing deficits. For shading and disparity, this was related to volume loss in regions previously implicated in 3D shape processing in the intact human and nonhuman primate brain. Typical amnestic-dominant AD patients also exhibited 3D shape deficits. Advanced visual neuroscience provides insight into the pathogenesis of PCA that also bears relevance for vision in typical AD. Copyright © 2015 Gillebert, Schaeverbeke et al.

  11. Goal-Directed Grasping: The Dimensional Properties of an Object Influence the Nature of the Visual Information Mediating Aperture Shaping

    ERIC Educational Resources Information Center

    Holmes, Scott A.; Heath, Matthew

    2013-01-01

    An issue of continued debate in the visuomotor control literature surrounds whether a 2D object serves as a representative proxy for a 3D object in understanding the nature of the visual information supporting grasping control. In an effort to reconcile this issue, we examined the extent to which aperture profiles for grasping 2D and 3D objects…

  12. The viewpoint-specific failure of modern 3D displays in laparoscopic surgery.

    PubMed

    Sakata, Shinichiro; Grove, Philip M; Hill, Andrew; Watson, Marcus O; Stevenson, Andrew R L

    2016-11-01

    Surgeons conventionally assume the optimal viewing position during 3D laparoscopic surgery and may not be aware of the potential hazards to team members positioned across different suboptimal viewing positions. The first aim of this study was to map the viewing positions within a standard operating theatre where individuals may experience visual ghosting (i.e. double vision images) from crosstalk. The second aim was to characterize the standard viewing positions adopted by instrument nurses and surgical assistants during laparoscopic pelvic surgery and report the associated levels of visual ghosting and discomfort. In experiment 1, 15 participants viewed a laparoscopic 3D display from 176 different viewing positions around the screen. In experiment 2, 12 participants (randomly assigned to four clinically relevant viewing positions) viewed laparoscopic suturing in a simulation laboratory. In both experiments, we measured the intensity of visual ghosting. In experiment 2, participants also completed the Simulator Sickness Questionnaire. We mapped locations within the dimensions of a standard operating theatre at which visual ghosting may result during 3D laparoscopy. Head height relative to the bottom of the image and large horizontal eccentricities away from the surface normal were important contributors to high levels of visual ghosting. Conventional viewing positions adopted by instrument nurses yielded high levels of visual ghosting and severe discomfort. The conventional viewing positions adopted by surgical team members during laparoscopic pelvic operations are suboptimal for viewing 3D laparoscopic displays, and even short periods of viewing can yield high levels of discomfort.

  13. The advantage of CT scans and 3D visualizations in the analysis of three child mummies from the Graeco-Roman Period.

    PubMed

    Villa, Chiara; Davey, Janet; Craig, Pamela J G; Drummer, Olaf H; Lynnerup, Niels

    2015-01-01

    Three child mummies from the Graeco-Roman Period (332 BCE - c. 395 CE) were examined using CT scans and 3D visualizations generated with Vitrea 2 and MIMICS graphic workstations, with the aim of comparing the results with previous X-ray examinations performed by Dawson and Gray in 1968. Although the previous analyses reported that the children had been excerebrated and eviscerated, no evidence of incisions or breaches of the cranial cavity was found; 3D visualizations showed the brain and the internal organs to be in situ. A larger number of post-mortem skeletal injuries were identified, such as dislocation of the mandible, ribs, and vertebrae, probably sustained at the time of the embalming procedure. Various radio-opaque granular particles were observed throughout the bodies (internally and externally) and could be explained as the presence of natron, used as an external desiccating agent by the embalmers, or as adipocerous alteration, a natural alteration of body fat. Age at death was estimated using 3D visualizations of the teeth, the state of fusion of the vertebrae and the presence of the secondary centers of the long bones: two mummies died at the age of 4 years ± 12 months, the third at the age of 6 years ± 24 months. Hyperdontia (polydontia), a dental anomaly, could also be identified in one child using 3D visualizations of the teeth: two supernumerary teeth were found behind the maxillary permanent central incisors, which had not been noticed in Dawson and Gray's X-ray analysis. In conclusion, CT-scan investigations and especially 3D visualizations are important tools in the non-invasive analysis of mummies and, in this case, provided revised and additional information compared to X-ray examination alone.

  14. Web-based three-dimensional geo-referenced visualization

    NASA Astrophysics Data System (ADS)

    Lin, Hui; Gong, Jianhua; Wang, Freeman

    1999-12-01

    This paper addresses several approaches to implementing web-based, three-dimensional (3-D), geo-referenced visualization. The discussion focuses on the relationship between multi-dimensional data sets and applications, as well as on thick/thin client and heavy/light server structures. Two models of data sets are addressed in this paper. One is the use of traditional 3-D data formats such as 3D Studio Max, Open Inventor 2.0, Vis5D and OBJ. The other is modelled with a web-based language such as VRML. Also, traditional languages such as C and C++, as well as web-based programming tools such as Java, Java3D and ActiveX, can be used for developing applications. The strengths and weaknesses of each approach are elaborated. Four practical solutions for developing web-based, real-time interactive and explorative visualization applications are presented: VRML with Java, Java with Java3D, VRML with ActiveX, and Java wrapper classes (Java and C/C++).

  15. Systematic Parameterization, Storage, and Representation of Volumetric DICOM Data.

    PubMed

    Fischer, Felix; Selver, M Alper; Gezer, Sinem; Dicle, Oğuz; Hillen, Walter

    Tomographic medical imaging systems produce hundreds to thousands of slices, enabling three-dimensional (3D) analysis. Radiologists process these images through various tools and techniques in order to generate 3D renderings for applications such as surgical planning, medical education, and volumetric measurements. To save and store these visualizations, current systems use snapshots or video exporting, which prevents further optimization and requires the storage of significant additional data. The Grayscale Softcopy Presentation State extension of the Digital Imaging and Communications in Medicine (DICOM) standard resolves this issue for two-dimensional (2D) data by introducing an extensive set of parameters, namely 2D Presentation States (2DPR), that describe how an image should be displayed. 2DPR allows storing these parameters instead of the parameter-applied images, which would cause unnecessary duplication of the image data. Since there is currently no corresponding extension for 3D data, in this study a DICOM-compliant object called 3D Presentation States (3DPR) is proposed for the parameterization and storage of 3D medical volumes. To accomplish this, the 3D medical visualization process is divided into four tasks, namely pre-processing, segmentation, post-processing, and rendering. The important parameters of each task are determined. Special focus is given to the compression of segmented data, parameterization of the rendering process, and DICOM-compliant implementation of the 3DPR object. The use of 3DPR was tested in a radiology department on three clinical cases, which required multiple segmentations and visualizations during the radiologists' workflow. The results show that 3DPR can effectively simplify the workload of physicians by directly regenerating 3D renderings without repeating intermediate tasks, increase efficiency by preserving all user interactions, and provide efficient storage as well as transfer of visualized data.
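    As an illustration only (the actual 3DPR object is DICOM-encoded; the class and field names below are hypothetical, not from the paper), the core idea of storing a small, serializable parameter set covering the four tasks instead of the rendered images can be sketched as:

    ```python
    import json
    from dataclasses import dataclass, field, asdict

    @dataclass
    class PresentationState3D:
        """Hypothetical parameter set mirroring the four 3D visualization
        tasks: pre-processing, segmentation, post-processing, rendering."""
        preprocessing: dict = field(default_factory=dict)   # e.g. window/level, filters
        segmentation: dict = field(default_factory=dict)    # e.g. label-map reference
        postprocessing: dict = field(default_factory=dict)  # e.g. smoothing settings
        rendering: dict = field(default_factory=dict)       # e.g. camera, transfer function

        def to_json(self) -> str:
            # Store this record instead of a snapshot of the rendering.
            return json.dumps(asdict(self), sort_keys=True)

        @classmethod
        def from_json(cls, text: str) -> "PresentationState3D":
            # Reload the parameters to regenerate the rendering later.
            return cls(**json.loads(text))
    ```

    Storing such a record alongside the original series lets a workstation regenerate the exact 3D view on demand, which is the duplication-avoiding behavior the paper describes for 3DPR.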

  16. 3D Visualization of Global Ocean Circulation

    NASA Astrophysics Data System (ADS)

    Nelson, V. G.; Sharma, R.; Zhang, E.; Schmittner, A.; Jenny, B.

    2015-12-01

    Advanced 3D visualization techniques are seldom used to explore the dynamic behavior of ocean circulation. Streamlines are an effective method for visualization of flow, and they can be designed to clearly show the dynamic behavior of a fluidic system. We employ vector field editing and extraction software to examine the topology of velocity vector fields generated by a 3D global circulation model coupled to a one-layer atmosphere model simulating preindustrial and last glacial maximum (LGM) conditions. This results in a streamline-based visualization along multiple density isosurfaces on which we visualize points of vertical exchange and the distribution of properties such as temperature and biogeochemical tracers. Previous work involving this model examined the change in the energetics driving overturning circulation and mixing between simulations of LGM and preindustrial conditions. This visualization elucidates the relationship between locations of vertical exchange and mixing, as well as demonstrates the effects of circulation and mixing on the distribution of tracers such as carbon isotopes.
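    A streamline is obtained by integrating the velocity field from a seed point; a minimal sketch (illustrative only, not the toolchain used in the abstract) traces one streamline of a steady 3D field with fixed-step fourth-order Runge-Kutta integration:

    ```python
    import numpy as np

    def trace_streamline(velocity, seed, step=0.1, n_steps=100):
        """Integrate a steady velocity field v(x) from a seed point with
        fixed-step RK4, returning the streamline as an (n_steps+1, 3) array."""
        pts = [np.asarray(seed, dtype=float)]
        for _ in range(n_steps):
            x = pts[-1]
            k1 = velocity(x)
            k2 = velocity(x + 0.5 * step * k1)
            k3 = velocity(x + 0.5 * step * k2)
            k4 = velocity(x + step * k3)
            pts.append(x + (step / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4))
        return np.array(pts)
    ```

    Seeding many such traces on a density isosurface and coloring them by a tracer value (temperature, carbon isotopes) gives the kind of flow portrait the abstract describes.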

  17. MT3DMS: A Modular Three-Dimensional Multispecies Transport Model for Simulation of Advection, Dispersion, and Chemical Reactions of Contaminants in Groundwater Systems; Documentation and User’s Guide

    DTIC Science & Technology

    1999-12-01

    addition, the data files saved in the POINT format can include an optional header which is compatible with Amtec Engineering's 2-D and 3-D visualization… ".DAT" file so that the file can be used directly by Amtec Engineering's 2-D and 3-D visualization package Tecplot. The ARRAY and POINT formats are…

  18. OnSight: Multi-platform Visualization of the Surface of Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.

    2017-12-01

    A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.

  19. Effect of astigmatism on visual acuity in eyes with a diffractive multifocal intraocular lens.

    PubMed

    Hayashi, Ken; Manabe, Shin-Ichi; Yoshida, Motoaki; Hayashi, Hideyuki

    2010-08-01

To examine the effect of astigmatism on visual acuity at various distances in eyes with a diffractive multifocal intraocular lens (IOL). Hayashi Eye Hospital, Fukuoka, Japan. In this study, eyes had implantation of a diffractive multifocal IOL with a +3.00 diopter (D) addition (add) (AcrySof ReSTOR SN6AD1), a diffractive multifocal IOL with a +4.00 D add (AcrySof ReSTOR SN6AD3), or a monofocal IOL (AcrySof SN60WF). Astigmatism was simulated by adding cylindrical lenses of various diopters (0.00, 0.50, 1.00, 1.50, 2.00), after which distance-corrected acuity was measured at various distances. At most distances, the mean visual acuity in the multifocal IOL groups decreased in proportion to the added astigmatism. With astigmatism of 0.00 D and 0.50 D, distance-corrected near visual acuity (DCNVA) in the +4.00 D group and distance-corrected intermediate visual acuity (DCIVA) and DCNVA in the +3.00 D group were significantly better than in the monofocal group; the corrected distance visual acuity (CDVA) was similar. The DCNVA with astigmatism of 1.00 D was better in the 2 multifocal groups; however, with astigmatism of 1.50 D and 2.00 D, the CDVA and DCIVA at 0.5 m in the multifocal groups were significantly worse than in the monofocal group, although the DCNVA was similar. With astigmatism of 1.00 D or greater, the mean CDVA and DCNVA in the multifocal groups did not reach useful levels (20/40). The presence of astigmatism in eyes with a diffractive multifocal IOL compromised all distance visual acuities, suggesting the need to correct astigmatism of greater than 1.00 D. No author has a financial or proprietary interest in any material or method mentioned. Copyright 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  20. Visualizing SPH Cataclysmic Variable Accretion Disk Simulations with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Wood, Matthew A.

    2015-01-01

    We present innovative ways to use Blender, a 3D graphics package, to visualize smoothed particle hydrodynamics particle data of cataclysmic variable accretion disks. We focus on methods using shape key data constructs to increase data I/O and manipulation speed. The implementation of the methods outlined allows for compositing of the various visualization layers into a final animation. Viewing the disk in 3D from different angles allows for a visual analysis of the physical system and orbits. The techniques have a wide-ranging set of applications in astronomical visualization, including both observational and theoretical data.

  1. Telerobotic Haptic Exploration in Art Galleries and Museums for Individuals with Visual Impairments.

    PubMed

    Park, Chung Hyuk; Ryu, Eun-Seok; Howard, Ayanna M

    2015-01-01

    This paper presents a haptic telepresence system that enables visually impaired users to explore locations with rich visual observation such as art galleries and museums by using a telepresence robot, a RGB-D sensor (color and depth camera), and a haptic interface. The recent improvement on RGB-D sensors has enabled real-time access to 3D spatial information in the form of point clouds. However, the real-time representation of this data in the form of tangible haptic experience has not been challenged enough, especially in the case of telepresence for individuals with visual impairments. Thus, the proposed system addresses the real-time haptic exploration of remote 3D information through video encoding and real-time 3D haptic rendering of the remote real-world environment. This paper investigates two scenarios in haptic telepresence, i.e., mobile navigation and object exploration in a remote environment. Participants with and without visual impairments participated in our experiments based on the two scenarios, and the system performance was validated. In conclusion, the proposed framework provides a new methodology of haptic telepresence for individuals with visual impairments by providing an enhanced interactive experience where they can remotely access public places (art galleries and museums) with the aid of haptic modality and robotic telepresence.

  2. Solar System Visualization (SSV) Project

    NASA Technical Reports Server (NTRS)

    Todd, Jessida L.

    2005-01-01

    The Solar System Visualization (SSV) project aims at enhancing scientific and public understanding through visual representations and modeling procedures. The SSV project's objectives are to (1) create new visualization technologies, (2) organize science observations and models, and (3) visualize science results and mission plans. The SSV project currently supports the Mars Exploration Rovers (MER) mission, the Mars Reconnaissance Orbiter (MRO), and Cassini. In support of these missions, the SSV team has produced pan and zoom animations of large mosaics to reveal details of surface features and topography, created 3D animations of science instruments and procedures, formed 3-D anaglyphs from left and right stereo pairs, and animated registered multi-resolution mosaics to provide context for microscopic images.

  3. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning.

    PubMed

    Gee, Carole T

    2013-11-01

    As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction.

  4. MRI segmentation by active contours model, 3D reconstruction, and visualization

    NASA Astrophysics Data System (ADS)

    Lopez-Hernandez, Juan M.; Velasquez-Aguilar, J. Guadalupe

    2005-02-01

    Advances in 3D data modelling methods are making them increasingly popular in the areas of biology, chemistry and medical applications. The Nuclear Magnetic Resonance Imaging (NMRI) technique has progressed at a spectacular rate over the past few years, and its uses have spread over many applications throughout the body in both anatomical and functional investigations. In this paper we present the application of Zernike polynomials to a 3D mesh model of the head, using contours acquired from cross-sectional slices by active contour model extraction, and we propose visualization with OpenGL 3D Graphics of the 2D-3D (slice-surface) information for diagnostic aid in medical applications.

  5. SpreaD3: Interactive Visualization of Spatiotemporal History and Trait Evolutionary Processes.

    PubMed

    Bielejec, Filip; Baele, Guy; Vrancken, Bram; Suchard, Marc A; Rambaut, Andrew; Lemey, Philippe

    2016-08-01

    Model-based phylogenetic reconstructions increasingly consider spatial or phenotypic traits in conjunction with sequence data to study evolutionary processes. Alongside parameter estimation, visualization of ancestral reconstructions represents an integral part of these analyses. Here, we present a complete overhaul of the spatial phylogenetic reconstruction of evolutionary dynamics software, now called SpreaD3 to emphasize the use of data-driven documents, as an analysis and visualization package that primarily complements Bayesian inference in BEAST (http://beast.bio.ed.ac.uk, last accessed 9 May 2016). The integration of JavaScript D3 libraries (www.d3.org, last accessed 9 May 2016) offers novel interactive web-based visualization capacities that are not restricted to spatial traits and extend to any discrete or continuously valued trait for any organism of interest. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. [Computer-assisted operational planning for pediatric abdominal surgery. 3D-visualized MRI with volume rendering].

    PubMed

    Günther, P; Tröger, J; Holland-Cunz, S; Waag, K L; Schenk, J P

    2006-08-01

    Exact surgical planning is necessary for complex operations of pathological changes in anatomical structures of the pediatric abdomen. 3D visualization and computer-assisted operational planning based on CT data are being increasingly used for difficult operations in adults. To minimize radiation exposure and for better soft tissue contrast, sonography and MRI are the preferred diagnostic methods in pediatric patients. Because of manifold difficulties, 3D visualization of these MRI data has not been realized so far, even though the field of embryonal malformations and tumors could benefit from it. A newly developed and modified, powerful raycasting-based 3D volume rendering software package (VG Studio Max 1.2) for the planning of pediatric abdominal surgery is presented. With the help of specifically developed algorithms, a useful surgical planning system is demonstrated. Thanks to its easy handling and high-quality visualization with an enormous gain of information, the presented system is now an established part of routine surgical planning.

  7. Data Visualization for ESM and ELINT: Visualizing 3D and Hyper Dimensional Data

    DTIC Science & Technology

    2011-06-01

    technique to present multiple 2D views was devised by D. Asimov. He assembled multiple two-dimensional scatter plot views of the hyper dimensional...Viewing Multidimensional Data", D. Asimov, SIAM Journal on Scientific and Statistical Computing, vol. 6, pp. 128-143, 1985. [2] "High-Dimensional

  8. 4-D drive-through visualization of I-280 for review of proposed signing.

    DOT National Transportation Integrated Search

    1998-10-01

    The primary objective of this work was to produce a simulated 4D drive-through of a portion of highway (I-280 through Newark, NJ) for which proposed traffic-generator signing had to be reviewed. A 4D visualization was produced that combined 3D geo...

  9. 3D gaze tracking system for NVidia 3D Vision®.

    PubMed

    Wibirama, Sunu; Hamamoto, Kazuhiko

    2013-01-01

    Inappropriate parallax settings in stereoscopic content generally cause visual fatigue and visual discomfort. To optimize three-dimensional (3D) effects in stereoscopic content while taking health issues into account, understanding how a user gazes in the 3D direction in virtual space is currently an important research topic. In this paper, we report the development of a novel 3D gaze tracking system for Nvidia 3D Vision(®) for use with a desktop stereoscopic display. We suggest an optimized geometric method to accurately measure the position of a virtual 3D object. Our experimental results show that the proposed system achieved better accuracy than the conventional geometric method, with average errors of 0.83 cm, 0.87 cm, and 1.06 cm in the X, Y, and Z dimensions, respectively.
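
    The abstract does not spell out the geometric method itself; a common baseline for estimating a 3D gaze point from binocular eye rays is the midpoint of the shortest segment between the two (generally skew) gaze rays. A minimal sketch under that assumption, with the eye geometry and all names purely illustrative:

    ```python
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def gaze_point_3d(p1, d1, p2, d2):
        """Midpoint of the shortest segment between two gaze rays.

        p1, p2: left/right eye positions; d1, d2: gaze direction vectors
        (need not be unit length). Solves for the parameters t1, t2 that
        minimize the distance between (p1 + t1*d1) and (p2 + t2*d2),
        then averages the two closest points.
        """
        w0 = tuple(a - b for a, b in zip(p1, p2))
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b  # approaches 0 when the rays are parallel
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        q1 = tuple(p + t1 * x for p, x in zip(p1, d1))
        q2 = tuple(p + t2 * x for p, x in zip(p2, d2))
        return tuple((u + v) / 2 for u, v in zip(q1, q2))

    # Eyes 6 cm apart, both fixating a target 50 cm ahead (units: cm).
    target = gaze_point_3d((-3, 0, 0), (3, 0, 50), (3, 0, 0), (-3, 0, 50))
    ```

    A production system would also need to handle the near-parallel case, where `denom` vanishes and no reliable fixation depth exists.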

  10. Contextual Cueing Effect in Spatial Layout Defined by Binocular Disparity

    PubMed Central

    Zhao, Guang; Zhuang, Qian; Ma, Jie; Tu, Shen; Liu, Qiang; Sun, Hong-jin

    2017-01-01

    Repeated visual context induces higher search efficiency, revealing a contextual cueing effect that depends on the association between the target and its visual context. In this study, participants performed a visual search task in which search items were presented with depth information defined by binocular disparity. When the 3-dimensional (3D) configurations were repeated over blocks, the contextual cueing effect was obtained (Experiment 1). When depth information varied chaotically over repeated configurations, visual search was not facilitated and the contextual cueing effect was largely disrupted (Experiment 2). However, when the search items were given a tiny random displacement in the 2-dimensional (2D) plane while the depth information was held constant, the contextual cueing was preserved (Experiment 3). We conclude that the contextual cueing effect is robust in contexts defined by 3D space with stereoscopic information and, more importantly, that the visual system prioritizes stereoscopic information in the learning of spatial information when depth information is available. PMID:28912739

  12. Specialized Computer Systems for Environment Visualization

    NASA Astrophysics Data System (ADS)

    Al-Oraiqat, Anas M.; Bashkov, Evgeniy A.; Zori, Sergii A.

    2018-06-01

    The need for real-time image generation of landscapes arises in various fields, as part of tasks solved by virtual and augmented reality systems as well as geographic information systems. Such systems provide opportunities for collecting, storing, analyzing and graphically visualizing geographic data. Algorithmic and hardware/software tools for increasing the realism and efficiency of environment visualization in 3D visualization systems are proposed. This paper discusses a modified path tracing algorithm with a two-level hierarchy of bounding volumes that finds intersections with Axis-Aligned Bounding Boxes. The proposed algorithm eliminates branching and is hence more suitable for implementation on multi-threaded CPUs and GPUs. A modified ROAM algorithm is used to solve the problem of qualitative visualization of reliefs and landscapes. The algorithm is implemented on parallel systems: clusters and Compute Unified Device Architecture networks. Results show that the implementation on MPI clusters is more efficient than on Graphics Processing Units/Graphics Processing Clusters and allows real-time synthesis. The organization and algorithms of a parallel GPU system for 3D pseudo-stereo image/video synthesis are proposed. After analyzing which stages can be realized on a parallel GPU architecture, 3D pseudo-stereo synthesis is performed. An experimental prototype of a specialized hardware-software system for 3D pseudo-stereo imaging and video was developed on the CPU/GPU. The experimental results show that the proposed adaptation of 3D pseudo-stereo imaging to the architecture of GPU systems is efficient, and that it accelerates the computational procedures of 3D pseudo-stereo synthesis for the anaglyph and anamorphic formats of the 3D stereo frame without performing optimization procedures. The acceleration averages 11 and 54 times for the test GPUs.
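
    The slab test against an Axis-Aligned Bounding Box that the abstract alludes to is a path-tracing staple; the paper's exact variant is not given, so the helper below is an illustrative sketch of the standard branch-free form, where per-axis min/max operations replace divergent conditionals:

    ```python
    def ray_hits_aabb(origin, inv_dir, box_min, box_max):
        """Slab test: does the ray origin + t*dir (t >= 0) hit the box?

        inv_dir holds precomputed 1/dir components. Using min/max instead
        of per-axis branches is what makes the test GPU-friendly.
        """
        t_near, t_far = 0.0, float("inf")
        for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
            t1 = (lo - o) * inv
            t2 = (hi - o) * inv
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
        return t_near <= t_far

    def inverse(direction):
        # IEEE infinities make axis-parallel rays fall out correctly,
        # though the ray-on-slab-boundary case (0 * inf = NaN) needs
        # extra care in production code.
        return tuple(1.0 / d if d != 0.0 else float("inf") for d in direction)

    hit = ray_hits_aabb((0, 0, -5), inverse((0, 0, 1)), (-1, -1, -1), (1, 1, 1))
    miss = ray_hits_aabb((5, 0, -5), inverse((0, 0, 1)), (-1, -1, -1), (1, 1, 1))
    ```

    In a two-level bounding-volume hierarchy, this test gates descent: only boxes the ray hits are opened, and only leaf boxes trigger primitive intersection.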

  13. A client–server framework for 3D remote visualization of radiotherapy treatment space

    PubMed Central

    Santhanam, Anand P.; Min, Yugang; Dou, Tai H.; Kupelian, Patrick; Low, Daniel A.

    2013-01-01

    Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes a precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may not get addressed. In this paper, we present a framework to enable remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using a GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments. PMID:23440605

  14. Effective 3-D shape discrimination survives retinal blur.

    PubMed

    Norman, J Farley; Beers, Amanda M; Holmin, Jessica S; Boswell, Alexandria M

    2010-08-01

    A single experiment evaluated observers' ability to visually discriminate 3-D object shape, where the 3-D structure was defined by motion, texture, Lambertian shading, and occluding contours. The observers' vision was degraded to varying degrees by blurring the experimental stimuli, using 2.0-, 2.5-, and 3.0-diopter convex lenses. The lenses reduced the observers' acuity from -0.091 LogMAR (in the no-blur conditions) to 0.924 LogMAR (in the conditions with the most blur; 3.0-diopter lenses). This visual degradation, although producing severe reductions in visual acuity, had only small (but significant) effects on the observers' ability to discriminate 3-D shape. The observers' shape discrimination performance was facilitated by the objects' rotation in depth, regardless of the presence or absence of blur. Our results indicate that accurate global shape discrimination survives a considerable amount of retinal blur.

  15. Java 3D Interactive Visualization for Astrophysics

    NASA Astrophysics Data System (ADS)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  16. Three-dimensional visualization of the craniofacial patient: volume segmentation, data integration and animation.

    PubMed

    Enciso, R; Memon, A; Mah, J

    2003-01-01

    The research goal at the Craniofacial Virtual Reality Laboratory of the School of Dentistry in conjunction with the Integrated Media Systems Center, School of Engineering, University of Southern California, is to develop computer methods to accurately visualize patients in three dimensions using advanced imaging and data acquisition devices such as cone-beam computerized tomography (CT) and mandibular motion capture. Data from these devices were integrated for three-dimensional (3D) patient-specific visualization, modeling and animation. Generic methods are in development that can be used with common CT image format (DICOM), mesh format (STL) and motion data (3D position over time). This paper presents preliminary descriptive studies on: 1) segmentation of the lower and upper jaws with two types of CT data--(a) traditional whole head CT data and (b) the new dental Newtom CT; 2) manual integration of accurate 3D tooth crowns with the segmented lower jaw 3D model; 3) realistic patient-specific 3D animation of the lower jaw.

  17. Data Cube Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.; Gárate, Matías

    2017-06-01

    With the increasing data acquisition rates from observational and computational astrophysics, new tools are needed to study and visualize data. We present a methodology for rendering 3D data cubes using the open-source 3D software Blender. By importing processed observations and numerical simulations through the Voxel Data format, we are able to use the Blender interface and Python API to create high-resolution animated visualizations. We review the methods for data import, animation, and camera movement, and present examples of this methodology. The 3D rendering of data cubes gives scientists the ability to create appealing displays that can be used for both scientific presentations as well as public outreach.
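
    Blender's legacy Voxel Data texture reads a simple binary layout: to the best of our knowledge, a .bvox file carries four little-endian uint32 values (nx, ny, nz, nframes) followed by float32 samples in [0, 1], but treat that layout, and the file and variable names below, as assumptions to verify against the Blender documentation. A sketch of exporting a synthetic data cube:

    ```python
    import math
    import struct

    def write_bvox(path, cube):
        """Write cube[z][y][x] floats in [0, 1] as a single-frame .bvox file
        (assumed layout: uint32 header nx, ny, nz, nframes, then float32 data)."""
        nz, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
        with open(path, "wb") as f:
            f.write(struct.pack("<4I", nx, ny, nz, 1))  # dims + frame count
            for plane in cube:
                for row in plane:
                    f.write(struct.pack("<%df" % nx, *row))

    # Illustrative stand-in for an observed cube: a centered Gaussian blob.
    n = 16
    blob = [[[math.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2 + (z - n / 2) ** 2) / 20.0)
              for x in range(n)] for y in range(n)] for z in range(n)]
    write_bvox("blob.bvox", blob)
    ```

    Inside Blender the file would then be assigned to a volume material's Voxel Data texture, after which camera paths and keyframes animate the rendered cube.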

  18. The OpenEarth Framework (OEF) for the 3D Visualization of Integrated Earth Science Data

    NASA Astrophysics Data System (ADS)

    Nadeau, David; Moreland, John; Baru, Chaitan; Crosby, Chris

    2010-05-01

    Data integration is increasingly important as we strive to combine data from disparate sources and assemble better models of the complex processes operating at the Earth's surface and within its interior. These data are often large, multi-dimensional, and subject to differing conventions for data structures, file formats, coordinate spaces, and units of measure. When visualized, these data require differing, and sometimes conflicting, conventions for visual representations, dimensionality, symbology, and interaction. All of this makes the visualization of integrated Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data integration and visualization suite of applications and libraries being developed by the GEON project at the University of California, San Diego, USA. Funded by the NSF, the project is leveraging virtual globe technology from NASA's WorldWind to create interactive 3D visualization tools that combine and layer data from a wide variety of sources to create a holistic view of features at, above, and beneath the Earth's surface. The OEF architecture is open, cross-platform, modular, and based upon Java. The OEF's modular approach to software architecture yields an array of mix-and-match software components for assembling custom applications. Available modules support file format handling, web service communications, data management, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats used in the field. Each one imports data into a general-purpose common data model supporting multidimensional regular and irregular grids, topography, feature geometry, and more. Data within these data models may be manipulated, combined, reprojected, and visualized. The OEF's visualization features support a variety of conventional and new visualization techniques for looking at topography, tomography, point clouds, imagery, maps, and feature geometry. 
3D data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery, maps, and data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers within a common 3D coordinate space. Data management within the OEF handles and hides the inevitable quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Heuristics are used to extract necessary metadata used to guide data and visual operations. Derived data representations are computed to better support fluid interaction and visualization while the original data is left unchanged in its original form. Data is cached for better memory and network efficiency, and all visualization makes use of 3D graphics hardware support found on today's computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.

  19. Multiplexing in the primate motion pathway.

    PubMed

    Huk, Alexander C

    2012-06-01

    This article begins by reviewing recent work on 3D motion processing in the primate visual system. Some of these results suggest that 3D motion signals may be processed in the same circuitry already known to compute 2D motion signals. Such "multiplexing" has implications for the study of visual cortical circuits and neural signals. A more explicit appreciation of multiplexing--and the computations required for demultiplexing--may enrich the study of the visual system by emphasizing the importance of a structured and balanced "encoding/decoding" framework. In addition to providing a fresh perspective on how successive stages of visual processing might be approached, multiplexing also raises caveats about the value of "neural correlates" for understanding neural computation.

  20. Advances in Visualization of 3D Time-Dependent CFD Solutions

    NASA Technical Reports Server (NTRS)

    Lane, David A.; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Numerical simulations of complex 3D time-dependent (unsteady) flows are becoming increasingly feasible because of the progress in computing systems. Unfortunately, many existing flow visualization systems were developed for time-independent (steady) solutions and do not adequately depict solutions from unsteady flow simulations. Furthermore, most systems only handle one time step of the solutions individually and do not consider the time-dependent nature of the solutions. For example, instantaneous streamlines are computed by tracking the particles using one time step of the solution. However, for streaklines and timelines, particles need to be tracked through all time steps. Streaklines can reveal quite different information about the flow than those revealed by instantaneous streamlines. Comparisons of instantaneous streamlines with dynamic streaklines are shown. For a complex 3D flow simulation, it is common to generate a grid system with several millions of grid points and to have tens of thousands of time steps. The disk requirement for storing the flow data can easily be tens of gigabytes. Visualizing solutions of this magnitude is a challenging problem with today's computer hardware technology. Even interactive visualization of one time step of the flow data can be a problem for some existing flow visualization systems because of the size of the grid. Current approaches for visualizing complex 3D time-dependent CFD solutions are described. The flow visualization system developed at NASA Ames Research Center to compute time-dependent particle traces from unsteady CFD solutions is described. The system computes particle traces (streaklines) by integrating through the time steps. This system has been used by several NASA scientists to visualize their CFD time-dependent solutions. The flow visualization capabilities of this system are described, and visualization results are shown.
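
    The distinction the abstract draws can be made concrete with a toy 2D advection sketch (forward Euler on a synthetic velocity field; the actual system integrates through CFD grid solutions): a streakline releases a new particle from the seed at every time step and advects all of them through the time-varying field, while an instantaneous streamline integrates through a single frozen time step.

    ```python
    def advect(p, v, t, dt):
        """One forward-Euler step through the time-dependent field v(p, t)."""
        vx, vy = v(p, t)
        return (p[0] + vx * dt, p[1] + vy * dt)

    def streakline(seed, v, t0, n_steps, dt):
        """Release a particle from `seed` each step; advect every particle."""
        particles = []
        for k in range(n_steps):
            t = t0 + k * dt
            particles.append(seed)  # newly released particle
            particles = [advect(p, v, t, dt) for p in particles]
        return particles  # particles[0] is the oldest release

    def streamline(seed, v, t_frozen, n_steps, dt):
        """Instantaneous streamline: integrate ONE time step, held fixed."""
        pts = [seed]
        for _ in range(n_steps):
            pts.append(advect(pts[-1], v, t_frozen, dt))
        return pts

    # In a steady field the two curves coincide; in unsteady flow they diverge.
    steady = lambda p, t: (1.0, 0.0)
    streak = streakline((0.0, 0.0), steady, 0.0, 10, 0.1)
    ```

    Real solvers replace the Euler step with higher-order integration and interpolate velocities in both space and time between stored solution snapshots, which is why the disk footprint of unsteady data matters so much.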

  1. CTViz: A tool for the visualization of transport in nanocomposites.

    PubMed

    Beach, Benjamin; Brown, Joshua; Tarlton, Taylor; Derosa, Pedro A

    2016-05-01

    A visualization tool (CTViz) for charge transport processes in 3-D hybrid materials (nanocomposites) was developed, inspired by the need for a graphical application to assist in code debugging and data presentation of an existing in-house code. As the simulation code grew, troubleshooting problems grew increasingly difficult without an effective way to visualize 3-D samples and charge transport in those samples. CTViz is able to produce publication and presentation quality visuals of the simulation box, as well as static and animated visuals of the paths of individual carriers through the sample. CTViz was designed to provide a high degree of flexibility in the visualization of the data. A feature that characterizes this tool is the use of shade and transparency levels to highlight important details in the morphology or in the transport paths by hiding or dimming elements of little relevance to the current view. This is fundamental for the visualization of 3-D systems with complex structures. The code presented here provides these required capabilities, but has gone beyond the original design and could be used as is or easily adapted for the visualization of other particulate transport where transport occurs on discrete paths. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Two-dimensional simulation of eccentric photorefraction images for ametropes: factors influencing the measurement.

    PubMed

    Wu, Yifei; Thibos, Larry N; Candy, T Rowan

    2018-05-07

    Eccentric photorefraction and Purkinje image tracking are used to estimate refractive state and eye position simultaneously. Beyond vision screening, they provide insight into typical and atypical visual development. Systematic analysis of the effect of refractive error and spectacles on photorefraction data is needed to gauge the accuracy and precision of the technique. Simulation of two-dimensional, double-pass eccentric photorefraction was performed (Zemax). The inward pass included appropriate light sources, lenses and a single surface pupil plane eye model to create an extended retinal image that served as the source for the outward pass. Refractive state, as computed from the luminance gradient in the image of the pupil captured by the model's camera, was evaluated for a range of refractive errors (-15D to +15D), pupil sizes (3 mm to 7 mm) and two sets of higher-order monochromatic aberrations. Instrument calibration was simulated using -8D to +8D trial lenses at the spectacle plane for: (1) vertex distances from 3 mm to 23 mm, (2) uncorrected and corrected hyperopic refractive errors of +4D and +7D, and (3) uncorrected and corrected astigmatism of 4D at four different axes. Empirical calibration of a commercial photorefractor was also compared with a wavefront aberrometer for human eyes. The pupil luminance gradient varied linearly with refractive state for defocus less than approximately 4D (5 mm pupil). For larger errors, the gradient magnitude saturated and then reduced, leading to under-estimation of refractive state. Additional inaccuracy (up to 1D for 8D of defocus) resulted from spectacle magnification in the pupil image, which would reduce precision in situations where vertex distance is variable. The empirical calibration revealed a constant offset between the two clinical instruments. Computational modelling demonstrates the principles and limitations of photorefraction to help users avoid potential measurement errors. 
Factors that could cause clinically significant errors in photorefraction estimates include high refractive error, vertex distance and magnification effects of a spectacle lens, increased higher-order monochromatic aberrations, and changes in primary spherical aberration with accommodation. The impact of these errors increases with increasing defocus. © 2018 The Authors Ophthalmic & Physiological Optics © 2018 The College of Optometrists.
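The linear-then-saturating relationship between defocus and the pupil luminance gradient reported above can be sketched as a toy model. The slope and the behaviour beyond saturation here are illustrative assumptions, not the paper's fitted values, and the reported decline of the gradient beyond saturation is omitted for simplicity:

```python
def pupil_gradient(defocus_d, slope=0.25, linear_range=4.0):
    """Toy luminance-gradient model: linear in refractive state up to
    roughly +/-4 D (for a 5 mm pupil, per the abstract), then saturating.
    The slope is an arbitrary illustration, not a fitted constant."""
    sign = 1.0 if defocus_d >= 0 else -1.0
    return sign * slope * min(abs(defocus_d), linear_range)

def inferred_refraction(gradient, slope=0.25):
    """Invert the linear calibration; beyond the linear range this
    under-estimates the true refractive error."""
    return gradient / slope
```

Under these assumptions an 8 D error produces the same gradient as a 4 D error, so inverting the calibration under-estimates it by 4 D, mirroring the saturation effect described in the abstract.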

  3. Virtual reality and 3D animation in forensic visualization.

    PubMed

    Ma, Minhua; Zheng, Huiru; Lallie, Harjinder

    2010-09-01

Computer-generated three-dimensional (3D) animation is an ideal medium for accurately visualizing crime or accident scenes for viewers and in the courtroom. Based upon factual data, forensic animations can reproduce the scene and demonstrate the activity at various points in time. The use of computer animation techniques to reconstruct crime scenes is beginning to replace traditional illustrations, photographs, and verbal descriptions, and is becoming popular in today's forensics. This article integrates work in the areas of 3D graphics, computer vision, motion tracking, natural language processing, and forensic computing to investigate the state of the art in forensic visualization. It identifies and reviews areas where new applications of 3D digital technologies and artificial intelligence could be used to enhance particular phases of forensic visualization to create 3D models and animations automatically and quickly. Having discussed the relationships between major crime types and level of detail in corresponding forensic animations, we recognized that high level-of-detail animation involving human characters, which is appropriate for many major crime types but has had limited use in courtrooms, could be useful for crime investigation. © 2010 American Academy of Forensic Sciences.

  4. Applications of 3D visualization : peer exchange summary report : Raleigh, North Carolina July 8-9, 2009

    DOT National Transportation Integrated Search

    2009-11-01

    This report provides a summary of a 1.5-day peer exchange held in July 2009 focusing on select transportation agencies' applications of 3D visualization techniques. FHWA's Office of Interstate and Border Planning sponsored the peer exchange.

  5. Distributed GPU Computing in GIScience

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Yang, C.; Huang, Q.; Li, J.; Sun, M.

    2013-12-01

Geoscientists strive to discover principles and patterns hidden inside ever-growing Big Data for scientific discovery. To achieve this objective, more capable computing resources are required to process, analyze and visualize Big Data (Ferreira et al., 2003; Li et al., 2013). Current CPU-based computing techniques cannot promptly meet the computing challenges posed by the increasing volume of datasets from different domains, such as social media, earth observation and environmental sensing (Li et al., 2013). Meanwhile, CPU-based computing resources structured as clusters or supercomputers are costly. In the past several years, as GPU-based technology has matured in both capability and performance, GPU-based computing has emerged as a new computing paradigm. Compared to the traditional microprocessor, the modern GPU, as a compelling alternative, offers outstanding parallel processing capability with cost-effectiveness and efficiency (Owens et al., 2008), although it was initially designed for graphical rendering in the visualization pipeline. This presentation reports a distributed GPU computing framework for integrating GPU-based computing within a distributed environment. Within this framework, 1) on each single computer, both GPU-based and CPU-based computing resources can be fully utilized to improve the performance of visualizing and processing Big Data; 2) within a network environment, a variety of computers can be used to build a virtual supercomputer supporting CPU-based and GPU-based computing in a distributed environment; 3) GPUs, as graphics-targeted devices, are used to greatly improve rendering efficiency in distributed geo-visualization, especially for 3D/4D visualization. Key words: Geovisualization, GIScience, Spatiotemporal Studies. References: 1. Ferreira de Oliveira, M. C., & Levkowitz, H. (2003). From visual data exploration to visual data mining: A survey. IEEE Transactions on Visualization and Computer Graphics, 9(3), 378-394. 2. Li, J., Jiang, Y., Yang, C., Huang, Q., & Rice, M. (2013). Visualizing 3D/4D Environmental Data Using Many-core Graphics Processing Units (GPUs) and Multi-core Central Processing Units (CPUs). Computers & Geosciences, 59(9), 78-89. 3. Owens, J. D., Houston, M., Luebke, D., Green, S., Stone, J. E., & Phillips, J. C. (2008). GPU computing. Proceedings of the IEEE, 96(5), 879-899.

  6. How 3D immersive visualization is changing medical diagnostics

    NASA Astrophysics Data System (ADS)

    Koning, Anton H. J.

    2011-03-01

Originally the only way to look inside the human body without opening it up was by means of two-dimensional (2D) images obtained using X-ray equipment. The fact that human anatomy is inherently three dimensional leads to ambiguities in interpretation and problems of occlusion. Three-dimensional (3D) imaging modalities such as CT, MRI and 3D ultrasound remove these drawbacks and are now part of routine medical care. While most hospitals 'have gone digital', meaning that the images are no longer printed on film, the images are still being viewed on 2D screens. Viewed this way, valuable depth information is lost, and some interactions become unnecessarily complex or even unfeasible. Using a virtual reality (VR) system to present volumetric data means that depth information is presented to the viewer and 3D interaction is made possible. At the Erasmus MC we have developed V-Scope, an immersive volume visualization system for visualizing a variety of (bio-)medical volumetric datasets, ranging from 3D ultrasound, via CT and MRI, to confocal microscopy, OPT and 3D electron-microscopy data. In this talk we will address the advantages of such a system for both medical diagnostics and (bio)medical research.

  7. A Web-based Visualization System for Three Dimensional Geological Model using Open GIS

    NASA Astrophysics Data System (ADS)

    Nemoto, T.; Masumoto, S.; Nonogaki, S.

    2017-12-01

A three-dimensional geological model is important information in various fields such as environmental assessment, urban planning, resource development, waste management and disaster mitigation. In this study, we have developed a web-based visualization system for 3D geological models using free and open-source software. The system has been successfully implemented by integrating the web mapping engine MapServer and the geographic information system GRASS. MapServer plays the role of mapping horizontal cross sections of the 3D geological model and a topographic map. GRASS provides the core components for management, analysis and image processing of the geological model. Online access to GRASS functions has been enabled using PyWPS, an implementation of the Open Geospatial Consortium (OGC) WPS (Web Processing Service) standard. The system has two main functions. The two-dimensional visualization function allows users to generate horizontal and vertical cross sections of the 3D geological model. These images are delivered via the WMS (Web Map Service) and WPS OGC standards. Horizontal cross sections are overlaid on the topographic map. A vertical cross section is generated by clicking a start point and an end point on the map. The three-dimensional visualization function allows users to visualize geological boundary surfaces and a panel diagram, viewing them from various angles by mouse operation. WebGL is utilized for 3D visualization; it is a web technology that brings hardware-accelerated 3D graphics to the browser without installing additional software. The geological boundary surfaces can be downloaded to incorporate the geologic structure into CAD designs and models for various simulations. This study was supported by JSPS KAKENHI Grant Number JP16K00158.
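Cross-section images in a system like this are requested with the standard OGC WMS GetMap parameters. The sketch below builds such a request URL; the base URL and layer name are hypothetical placeholders, not the actual service endpoints:

```python
from urllib.parse import urlencode

def wms_getmap_url(base_url, layer, bbox, width=512, height=512,
                   srs="EPSG:4326", fmt="image/png"):
    """Build a WMS 1.1.1 GetMap request URL from standard OGC parameters.

    `base_url` and `layer` are placeholders for whatever the MapServer
    instance actually exposes; `bbox` is (minx, miny, maxx, maxy)."""
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetMap",
        "LAYERS": layer,
        "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width,
        "HEIGHT": height,
        "SRS": srs,
        "FORMAT": fmt,
    }
    return base_url + "?" + urlencode(params)
```

Fetching the resulting URL from a WMS-compliant server returns the rendered cross-section as a PNG that a browser client can overlay on the topographic map.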

  8. Mental practice with interactive 3D visual aids enhances surgical performance.

    PubMed

Yiasemidou, Marina; Glassman, Daniel; Mushtaq, Faisal; Athanasiou, Christos; Mon-Williams, Mark; Jayne, David; Miskovic, Danilo

    2017-10-01

Evidence suggests that Mental Practice (MP) could be used to finesse surgical skills. However, MP is cognitively demanding and may depend on the ability of individuals to produce mental images. In this study, we hypothesised that the provision of interactive 3D visual aids during MP could facilitate surgical skill performance. 20 surgical trainees were case-matched to one of three different preparation methods prior to performing a simulated Laparoscopic Cholecystectomy (LC). Two intervention groups underwent a 25-minute MP session; one with interactive 3D visual aids depicting the relevant surgical anatomy (3D-MP group, n = 5) and one without (MP-Only, n = 5). A control group (n = 10) watched a didactic video of a real LC. Scores relating to technical performance and safety were recorded by a surgical simulator. The control group took longer to complete the procedure relative to the 3D-MP group (p = .002). The number of movements was also statistically different across groups (p = .001), with the 3D-MP group making fewer movements relative to controls (p = .001). Likewise, the control group moved further in comparison to the 3D-MP and MP-Only groups (p = .004). No reliable differences were observed for safety metrics. These data provide evidence for the potential value of MP in improving performance. Furthermore, they suggest that interactive 3D visual aids during MP could enhance performance beyond the benefits of MP alone. These findings pave the way for future RCTs on surgical preparation and performance.

  9. Effects of emotional valence and three-dimensionality of visual stimuli on brain activation: an fMRI study.

    PubMed

    Dores, A R; Almeida, I; Barbosa, F; Castelo-Branco, M; Monteiro, L; Reis, M; de Sousa, L; Caldas, A Castro

    2013-01-01

Examining changes in brain activation linked with emotion-inducing stimuli is essential to the study of emotions. Given the ecological potential of techniques such as virtual reality (VR), it is important to inspect whether brain activation in response to emotional stimuli can be modulated by the three-dimensional (3D) properties of the images. The current study sought to test whether the activation of brain areas involved in the emotional processing of scenarios of different valences can be modulated by 3D. The focus was therefore on the interaction effect between emotion-inducing stimuli of different emotional valences (pleasant, unpleasant and neutral) and visualization types (2D, 3D); main effects were also analyzed. The effects of emotional valence and visualization type and their interaction were analyzed through a 3 × 2 repeated-measures ANOVA. Post-hoc t-tests were performed under a ROI-analysis approach. The results show increased brain activation for the 3D affective-inducing stimuli in comparison with the same stimuli in 2D scenarios, mostly in cortical and subcortical regions related to emotional processing, in addition to visual processing regions. This study has the potential to clarify the brain mechanisms involved in the processing of emotional stimuli (scenarios' valence) and their interaction with three-dimensionality.

  10. Audio-Visual Perception of 3D Cinematography: An fMRI Study Using Condition-Based and Computation-Based Analyses

    PubMed Central

    Ogawa, Akitoshi; Bordier, Cecile; Macaluso, Emiliano

    2013-01-01

    The use of naturalistic stimuli to probe sensory functions in the human brain is gaining increasing interest. Previous imaging studies examined brain activity associated with the processing of cinematographic material using both standard “condition-based” designs, as well as “computational” methods based on the extraction of time-varying features of the stimuli (e.g. motion). Here, we exploited both approaches to investigate the neural correlates of complex visual and auditory spatial signals in cinematography. In the first experiment, the participants watched a piece of a commercial movie presented in four blocked conditions: 3D vision with surround sounds (3D-Surround), 3D with monaural sound (3D-Mono), 2D-Surround, and 2D-Mono. In the second experiment, they watched two different segments of the movie both presented continuously in 3D-Surround. The blocked presentation served for standard condition-based analyses, while all datasets were submitted to computation-based analyses. The latter assessed where activity co-varied with visual disparity signals and the complexity of auditory multi-sources signals. The blocked analyses associated 3D viewing with the activation of the dorsal and lateral occipital cortex and superior parietal lobule, while the surround sounds activated the superior and middle temporal gyri (S/MTG). The computation-based analyses revealed the effects of absolute disparity in dorsal occipital and posterior parietal cortices and of disparity gradients in the posterior middle temporal gyrus plus the inferior frontal gyrus. The complexity of the surround sounds was associated with activity in specific sub-regions of S/MTG, even after accounting for changes of sound intensity. These results demonstrate that the processing of naturalistic audio-visual signals entails an extensive set of visual and auditory areas, and that computation-based analyses can track the contribution of complex spatial aspects characterizing such life-like stimuli. 
PMID:24194828

  12. Multifield-graphs: an approach to visualizing correlations in multifield scalar data.

    PubMed

    Sauber, Natascha; Theisel, Holger; Seidel, Hans-Peter

    2006-01-01

    We present an approach to visualizing correlations in 3D multifield scalar data. The core of our approach is the computation of correlation fields, which are scalar fields containing the local correlations of subsets of the multiple fields. While the visualization of the correlation fields can be done using standard 3D volume visualization techniques, their huge number makes selection and handling a challenge. We introduce the Multifield-Graph to give an overview of which multiple fields correlate and to show the strength of their correlation. This information guides the selection of informative correlation fields for visualization. We use our approach to visually analyze a number of real and synthetic multifield datasets.
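A correlation field of the kind described can be sketched with a simple local Pearson estimator over a cubic neighbourhood. This is an unoptimised illustration of the idea; the paper's exact estimator and windowing choices may differ:

```python
import numpy as np

def correlation_field(a, b, radius=1):
    """Local Pearson correlation of two 3D scalar fields.

    For each voxel, correlate the values of `a` and `b` inside a cubic
    neighbourhood of the given radius (clipped at the volume borders).
    Direct triple-loop sketch; real implementations would vectorise."""
    assert a.shape == b.shape
    out = np.zeros(a.shape)
    nx, ny, nz = a.shape
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                sl = (slice(max(i - radius, 0), i + radius + 1),
                      slice(max(j - radius, 0), j + radius + 1),
                      slice(max(k - radius, 0), k + radius + 1))
                va, vb = a[sl].ravel(), b[sl].ravel()
                denom = va.std() * vb.std()
                cov = np.mean((va - va.mean()) * (vb - vb.mean()))
                out[i, j, k] = cov / denom if denom > 0 else 0.0
    return out
```

Two fields related by a positive linear transform yield a correlation field of ones, while unrelated fields produce values near zero, which is exactly the signal a Multifield-Graph would summarise per field pair.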

  13. A Web platform for the interactive visualization and analysis of the 3D fractal dimension of MRI data.

    PubMed

    Jiménez, J; López, A M; Cruz, J; Esteban, F J; Navas, J; Villoslada, P; Ruiz de Miras, J

    2014-10-01

    This study presents a Web platform (http://3dfd.ujaen.es) for computing and analyzing the 3D fractal dimension (3DFD) from volumetric data in an efficient, visual and interactive way. The Web platform is specially designed for working with magnetic resonance images (MRIs) of the brain. The program estimates the 3DFD by calculating the 3D box-counting of the entire volume of the brain, and also of its 3D skeleton. All of this is done in a graphical, fast and optimized way by using novel technologies like CUDA and WebGL. The usefulness of the Web platform presented is demonstrated by its application in a case study where an analysis and characterization of groups of 3D MR images is performed for three neurodegenerative diseases: Multiple Sclerosis, Intrauterine Growth Restriction and Alzheimer's disease. To the best of our knowledge, this is the first Web platform that allows the users to calculate, visualize, analyze and compare the 3DFD from MRI images in the cloud. Copyright © 2014 Elsevier Inc. All rights reserved.
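The 3D box-counting estimate at the core of the platform can be written in a few lines. This direct sketch assumes power-of-two box sizes and any-occupancy counting, and ignores the CUDA/WebGL optimisations the platform uses:

```python
import numpy as np

def box_counting_dimension(volume, sizes=(1, 2, 4, 8)):
    """Estimate the 3D box-counting fractal dimension of a binary volume.

    Counts occupied boxes N(s) at each box size s, then fits
    log N(s) ~ -D log s; the returned D is minus the fitted slope."""
    counts = []
    for s in sizes:
        n = 0
        for i in range(0, volume.shape[0], s):
            for j in range(0, volume.shape[1], s):
                for k in range(0, volume.shape[2], s):
                    if volume[i:i + s, j:j + s, k:k + s].any():
                        n += 1
        counts.append(n)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

As a sanity check, a fully occupied volume gives dimension 3 and a single occupied plane gives dimension 2; brain masks and skeletons fall in between.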

  14. 3D documentation of the Petalaindera: digital heritage preservation methods using 3D laser scanner and photogrammetry

    NASA Astrophysics Data System (ADS)

    Sharif, Harlina Md; Hazumi, Hazman; Hafizuddin Meli, Rafiq

    2018-01-01

3D imaging technologies have undergone a massive revolution in recent years. Despite this rapid development, documentation of 3D cultural assets in Malaysia still relies very much upon conventional techniques such as measured drawings and manual photogrammetry. There has been very little progress towards exploring new methods or advanced technologies to convert 3D cultural assets into 3D visual representations and visualization models that are easily accessible for information sharing. In recent years, however, the advent of computer vision (CV) algorithms has made it possible to reconstruct the 3D geometry of objects from image sequences taken with digital cameras, which are then processed by web services and freeware applications. This paper presents a completed stage of an exploratory study that investigates the potential of using CV automated image-based open-source software and web services to reconstruct and replicate cultural assets. By selecting an intricate wooden boat, the Petalaindera, this study attempts to evaluate the efficiency of CV systems and compare it with the application of 3D laser scanning, which is known for its accuracy, efficiency and high cost. The final aim of this study is to compare the visual accuracy of 3D models generated by the CV system with 3D models produced by 3D scanning and manual photogrammetry for an intricate subject such as the Petalaindera, with the objective of exploring cost-effective methods that could provide fundamental guidelines on the best-practice approach for digital heritage in Malaysia.

  15. Virtual Reality in Neurointervention.

    PubMed

    Ong, Chin Siang; Deib, Gerard; Yesantharao, Pooja; Qiao, Ye; Pakpoor, Jina; Hibino, Narutoshi; Hui, Ferdinand; Garcia, Juan R

    2018-06-01

    Virtual reality (VR) allows users to experience realistic, immersive 3D virtual environments with the depth perception and binocular field of view of real 3D settings. Newer VR technology has now allowed for interaction with 3D objects within these virtual environments through the use of VR controllers. This technical note describes our preliminary experience with VR as an adjunct tool to traditional angiographic imaging in the preprocedural workup of a patient with a complex pseudoaneurysm. Angiographic MRI data was imported and segmented to create 3D meshes of bilateral carotid vasculature. The 3D meshes were then projected into VR space, allowing the operator to inspect the carotid vasculature using a 3D VR headset as well as interact with the pseudoaneurysm (handling, rotation, magnification, and sectioning) using two VR controllers. 3D segmentation of a complex pseudoaneurysm in the distal cervical segment of the right internal carotid artery was successfully performed and projected into VR. Conventional and VR visualization modes were equally effective in identifying and classifying the pathology. VR visualization allowed the operators to manipulate the dataset to achieve a greater understanding of the anatomy of the parent vessel, the angioarchitecture of the pseudoaneurysm, and the surface contours of all visualized structures. This preliminary study demonstrates the feasibility of utilizing VR for preprocedural evaluation in patients with anatomically complex neurovascular disorders. This novel visualization approach may serve as a valuable adjunct tool in deciding patient-specific treatment plans and selection of devices prior to intervention.

  16. A 3D contact analysis approach for the visualization of the electrical contact asperities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roussos, Constantinos C.; Swingler, Jonathan

The electrical contact is an important phenomenon that should be taken into consideration to achieve better performance and long-term reliability in device design. Based upon this importance, the electrical contact interface has been visualized as a "3D Contact Map" and used to investigate the contact asperities. The contact asperities describe the structures above and below the contact spots (the contact spots define the 3D contact map) towards the two conductors which make up the contact system. Analysing the contact asperities requires the discretization of the 3D microstructures of the contact system into voxels. A contact analysis approach has been developed and introduced in this paper which shows the way to the 3D visualization of the contact asperities of a given contact system. For the discretization of the 3D microstructure of the contact system into voxels, the X-ray Computed Tomography (CT) method is used to collect data from a 250 V, 16 A rated AC single-pole rocker switch, which serves as the contact system for investigation.

  19. A mathematical formula to estimate in vivo thyroid volume from two-dimensional ultrasonography.

    PubMed

    Trimboli, Pierpaolo; Ruggieri, Massimo; Fumarola, Angela; D'Alò, Michele; Straniero, Andrea; Maiuolo, Amelia; Ulisse, Salvatore; D'Armiento, Massimino

    2008-08-01

The determination of thyroid volume (TV) is required for the management of thyroid diseases. Since two-dimensional ultrasonography (2D-US) has become the accepted method for the assessment of TV (2D-US-TV), we verified whether it accurately assesses postsurgically measured TV (PS-TV). In 92 patients who underwent total thyroidectomy by conventional cervicotomy, 2D-US-TV obtained by the ellipsoid volume formula was compared to PS-TV, determined by Archimedes' principle. Mean 2D-US-TV (23.9 +/- 14.8 mL) was significantly lower than mean PS-TV (33.4 +/- 20.1 mL). Underestimation was observed in 77% of cases and was related to gland multinodularity and/or nodular involvement of the isthmus, while 2D-US-TV matched PS-TV in the remaining 21 cases (23%). A mathematical formula to estimate PS-TV from 2D-US-TV was derived using a linear model (Calculated-TV = [1.24 x 2D-US-TV] + 3.66). Use of the Calculated-TV (mean value 33.4 +/- 18.3 mL) significantly (p < 0.01) increased the number of cases matching PS-TV from 21 (23%) to 31 (34%), significantly (p < 0.01) decreased the percentage of cases in which PS-TV was underestimated from 77% to 27%, and reduced the range of disagreement from 245% to 92%. This study shows that 2D-US does not provide an accurate estimation of TV and suggests that the estimate can be improved by a mathematical model different from the ellipsoid model. If confirmed in prospective studies, this may contribute to a more appropriate management of thyroid diseases.
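The abstract's correction can be written directly as code. The π/6 factor in the ellipsoid formula is the conventional one and is an assumption here, since the abstract does not spell out its constant; the linear coefficients are taken from the abstract:

```python
import math

def us_thyroid_volume_ml(lobes_cm):
    """Ellipsoid-formula 2D-US thyroid volume, summed over lobes given
    as (length, width, depth) tuples in cm. The pi/6 factor is the
    conventional ellipsoid correction (assumed, not stated above)."""
    return sum(math.pi / 6.0 * l * w * d for l, w, d in lobes_cm)

def calculated_tv_ml(us_tv_ml):
    """Corrected volume from the paper's linear model:
    Calculated-TV = 1.24 * 2D-US-TV + 3.66 (mL)."""
    return 1.24 * us_tv_ml + 3.66
```

For the reported mean 2D-US-TV of 23.9 mL the correction gives roughly 33.3 mL, in line with the mean postsurgical volume of 33.4 mL quoted above.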

  20. Report of the 1988 2-D Intercomparison Workshop, chapter 3

    NASA Technical Reports Server (NTRS)

    Jackman, Charles H.; Brasseur, Guy; Soloman, Susan; Guthrie, Paul D.; Garcia, Rolando; Yung, Yuk L.; Gray, Lesley J.; Tung, K. K.; Ko, Malcolm K. W.; Isaken, Ivar

    1989-01-01

Several factors contribute to the errors encountered. With the exception of the line-by-line model, all of the models employ simplifying assumptions that place fundamental limits on their accuracy and range of validity. For example, all 2-D modeling groups use the diffusivity factor approximation. This approximation produces little error in tropospheric H2O and CO2 cooling rates, but can produce significant errors in CO2 and O3 cooling rates at the stratopause. All models suffer from fundamental uncertainties in the shapes and strengths of spectral lines. Thermal flux algorithms being used in 2-D tracer transport models produce cooling rates that differ by as much as 40 percent for the same input model atmosphere. Disagreements of this magnitude are important since the thermal cooling rates must be subtracted from the almost-equal solar heating rates to derive the net radiative heating rates and the 2-D model diabatic circulation. For much of the annual cycle, the net radiative heating rates are comparable in magnitude to the cooling rate differences described. Many of the models underestimate the cooling rates in the middle and lower stratosphere. The consequences of these errors for the net heating rates and the diabatic circulation will depend on their meridional structure, which was not tested here. Other models underestimate the cooling near 1 mbar. Such errors pose potential problems for future interactive ozone assessment studies, since they could produce artificially high temperatures and increased O3 destruction at these levels. These concerns suggest that a great deal of work is needed to improve the performance of thermal cooling rate algorithms used in the 2-D tracer transport models.
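For reference, the diffusivity factor approximation shared by these models replaces the angular integration of the beam transmittance with a single slant path. This is the standard textbook form, not a formula taken from this report:

```latex
T_f(u) \;=\; 2\int_0^1 T\!\left(\frac{u}{\mu}\right)\mu\,d\mu \;\approx\; T(r\,u), \qquad r \approx 1.66,
```

where $T$ is the beam transmittance, $T_f$ the flux transmittance, $u$ the absorber amount, $\mu$ the cosine of the zenith angle, and $r$ the diffusivity factor.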

  1. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    PubMed

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution with 3D PSIF-DWI, we collected imaging data of 20 cavernous regions (both sides) in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing the optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized, and 12 trochlear nerves and 6 abducens nerves were partially identified. We also present preliminary clinical experience in two cases with pituitary adenomas. The anatomical relationship between the tumor and the cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended using 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high-resolution "cranial nerve imaging", visualizing the whole length of the cranial nerves including the segments within flowing blood, as in the cavernous sinus region.

  2. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    NASA Astrophysics Data System (ADS)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

As more and more CT/MR studies are acquired with larger volumes of data, more and more radiologists and clinicians would like to use a PACS workstation to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy for developing a 3D image display component offering not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and quality of the 3D reconstruction. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  3. Exploring the Impact of Visual Complexity Levels in 3d City Models on the Accuracy of Individuals' Orientation and Cognitive Maps

    NASA Astrophysics Data System (ADS)

    Rautenbach, V.; Çöltekin, A.; Coetzee, S.

    2015-08-01

    In this paper we report results from a qualitative user experiment (n=107) designed to contribute to understanding the impact of various levels of complexity (mainly based on levels of detail, i.e., LoD) in 3D city models, specifically on the participants' orientation and cognitive (mental) maps. The experiment consisted of a number of tasks motivated by spatial cognition theory where participants (among other things) were given orientation tasks, and in one case also produced sketches of a path they `travelled' in a virtual environment. The experiments were conducted in groups, where individuals provided responses on an answer sheet. The preliminary results based on descriptive statistics and qualitative sketch analyses suggest that very little information (i.e., a low LoD model of a smaller area) might have a negative impact on the accuracy of cognitive maps constructed based on a virtual experience. Building an accurate cognitive map is an inherently desired effect of the visualizations in planning tasks, thus the findings are important for understanding how to develop better-suited 3D visualizations such as 3D city models. In this study, we specifically discuss the suitability of different levels of visual complexity for development planning (urban planning), one of the domains where 3D city models are most relevant.

  4. Map-Reading Skill Development with 3D Technologies

    ERIC Educational Resources Information Center

    Carbonell Carrera, Carlos; Avarvarei, Bogdan Vlad; Chelariu, Elena Liliana; Draghia, Lucia; Avarvarei, Simona Catrinel

    2017-01-01

    Landforms often are represented on maps using abstract cartographic techniques that the reader must interpret for successful three-dimensional terrain visualization. New technologies in 3D landscape representation, both digital and tangible, offer the opportunity to visualize terrain in new ways. The results of a university student workshop, in…

  5. Role of Interaction in Enhancing the Epistemic Utility of 3D Mathematical Visualizations

    ERIC Educational Resources Information Center

    Liang, Hai-Ning; Sedig, Kamran

    2010-01-01

    Many epistemic activities, such as spatial reasoning, sense-making, problem solving, and learning, are information-based. In the context of epistemic activities involving mathematical information, learners often use interactive 3D mathematical visualizations (MVs). However, performing such activities is not always easy. Although it is generally…

  6. Subjective and objective evaluation of visual fatigue on viewing 3D display continuously

    NASA Astrophysics Data System (ADS)

    Wang, Danli; Xie, Yaohua; Yang, Xinpan; Lu, Yang; Guo, Anxiang

    2015-03-01

    In recent years, three-dimensional (3D) displays have become increasingly popular in many fields. Although they provide a better viewing experience, they also cause problems such as visual fatigue. Subjective or objective methods are usually applied to discrete viewing processes to evaluate visual fatigue; however, little research has combined subjective and objective indicators in an entirely continuous viewing process. In this paper, we propose a method to evaluate visual fatigue in real time, both subjectively and objectively. Subjects watched stereo content on a polarized 3D display continuously. Visual reaction time (VRT), critical flicker frequency (CFF), punctum maximum accommodation (PMA) and subjective scores of visual fatigue were collected before and after viewing. During the viewing process, the subjects rated their visual fatigue whenever it changed, without interrupting viewing. At the same time, the blink frequency (BF) and percentage of eye closure (PERCLOS) of each subject were recorded for comparison with a previous study. The results show that subjective visual fatigue and PERCLOS increase with time and are greater in a continuous viewing process than in a discrete one, and that BF increased with time during continuous viewing. The visual fatigue also induced significant changes in VRT, CFF and PMA.
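    The two objective eye measures used here, BF and PERCLOS, are simple to derive from a per-frame eye-state series. A minimal sketch (the function name and the boolean closure series are assumptions for illustration; the abstract does not describe the actual measurement pipeline):

```python
def blink_metrics(closed, fps):
    """Compute blink frequency (blinks/min) and PERCLOS from a
    per-frame eye-closure series sampled at `fps` frames per second."""
    # A blink onset is a transition from open (False) to closed (True).
    blinks = sum(1 for prev, cur in zip(closed, closed[1:])
                 if cur and not prev)
    duration_min = len(closed) / fps / 60.0
    perclos = sum(closed) / len(closed)   # fraction of frames with eyes closed
    return blinks / duration_min, perclos

# 10 s at 10 fps, with two blinks of 3 frames each
closed = [False] * 100
for i in (20, 21, 22, 60, 61, 62):
    closed[i] = True
bf, perclos = blink_metrics(closed, fps=10)
print(bf, perclos)  # 12.0 blinks/min, PERCLOS = 0.06
```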

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venencia, C; Pino, M; Caussa, L

    Purpose: The purpose of this work was to quantify the dosimetric impact of the Monte Carlo (MC) dose calculation algorithm compared to Pencil Beam (PB) for spine SBRT delivered with HybridARC (HA) and sliding-window IMRT (dMLC). Methods: A 6 MV beam (1000 MU/min) produced by a Novalis TX (BrainLAB-Varian) equipped with HDMLC was used. HA uses 1 arc plus 8 IMRT beams (arc weight between 60-40%) and dMLC uses 15 beams. Plans were calculated using iPlan v.4.5.3 (BrainLAB) and the treatment dose prescription was 27 Gy in 3 fractions. Dose calculation was done by PB (4 mm spatial resolution) with heterogeneity correction and by MC dose-to-water (4 mm spatial resolution and 4% mean variance). PTV and spinal cord doses were compared. The study was done on 12 patients. The IROC spine phantom was used to validate HA and quantify dose variation between the PB and MC algorithms. Results: The differences between PB and MC for PTV D98%, D95%, Dmean and D2% were 2.6% [-5.1, 6.8], 0.1% [-4.2, 5.4], 0.9% [-1.5, 3.8] and 2.4% [-0.5, 8.3]. The differences between PB and MC for spinal cord Dmax, D1.2cc and D0.35cc were 5.3% [-6.4, 18.4], 9% [-7.0, 17.0] and 7.6% [-0.6, 14.8], respectively. The IROC spine phantom showed a PTV TLD dose variation of 0.98% for PB and 1.01% for MC. The axial and sagittal film-plane gamma index (5%/3 mm) was 95% and 97% for PB and 95% and 99% for MC. Conclusion: PB slightly underestimates the dose to the PTV. For the spinal cord, PB underestimates the dose, and dose differences could be as high as 18%, which could have unexpected clinical impact. The CI showed no variation between PB and MC for either treatment modality, and the treatment modality had no impact on the differences between dose calculation algorithms. Following the IROC pass-fail criteria, the treatment acceptance requirement was fulfilled for both PB and MC.

  8. An image-guided planning system for endosseous oral implants.

    PubMed

    Verstreken, K; Van Cleynenbreugel, J; Martens, K; Marchal, G; van Steenberghe, D; Suetens, P

    1998-10-01

    A preoperative planning system for oral implant surgery was developed which takes as input computed tomographies (CTs) of the jaws. Two-dimensional (2-D) reslices of these axial CT slices, orthogonal to a curve following the jaw arch, are computed and shown together with three-dimensional (3-D) surface-rendered models of the bone and computer-aided design (CAD)-like implant models. A technique was developed for scanning and visualizing any existing removable prosthesis together with the bone structures. Evaluation of the planning done with the system shows a difference between 2-D and 3-D planning methods. Validation studies measured the benefits of the 3-D approach by comparing plans made in 2-D mode only with those further adjusted using the full 3-D visualization capabilities of the system. The benefits of a 3-D approach are most evident when a prosthesis is involved in the planning. For the majority of the patients, clinically important adjustments and optimizations to the 2-D plans were made once 3-D visualization was enabled, effectively resulting in a better plan. The alterations related to bone quality and quantity (p < 0.05), biomechanics (p < 0.005), and esthetics (p < 0.005), and were so obvious that the 3-D plan stood out clearly (p < 0.005). The improvements often avoid complications such as mandibular nerve damage, sinus perforations, fenestrations, or dehiscences.

  9. Techniques for efficient, real-time, 3D visualization of multi-modality cardiac data using consumer graphics hardware.

    PubMed

    Levin, David; Aladl, Usaf; Germano, Guido; Slomka, Piotr

    2005-09-01

    We exploit consumer graphics hardware to perform real-time processing and visualization of high-resolution, 4D cardiac data. We have implemented real-time, realistic volume rendering, interactive 4D motion segmentation of cardiac data, visualization of multi-modality cardiac data and 3D display of multiple series cardiac MRI. We show that an ATI Radeon 9700 Pro can render a 512x512x128 cardiac Computed Tomography (CT) study at 0.9 to 60 frames per second (fps) depending on rendering parameters and that 4D motion based segmentation can be performed in real-time. We conclude that real-time rendering and processing of cardiac data can be implemented on consumer graphics cards.

  10. 3D visualization of solar wind ion data from the Chang'E-1 exploration

    NASA Astrophysics Data System (ADS)

    Zhang, Tian; Sun, Yankui; Tang, Zesheng

    2011-10-01

    Chang'E-1 (abbreviation CE-1), China's first Moon-orbiting spacecraft launched in 2007, carried equipment called the Solar Wind Ion Detector (abbreviation SWID), which sent back tens of gigabytes of solar wind ion differential number flux data. These data are essential for furthering our understanding of the cislunar space environment. However, to fully comprehend and analyze these data presents considerable difficulties, not only because of their huge size (57 GB), but also because of their complexity. Therefore, a new 3D visualization method is developed to give a more intuitive representation than traditional 1D and 2D visualizations, and in particular to offer a better indication of the direction of the incident ion differential number flux and the relative spatial position of CE-1 with respect to the Sun, the Earth, and the Moon. First, a coordinate system named Selenocentric Solar Ecliptic (SSE) which is more suitable for our goal is chosen, and solar wind ion differential number flux vectors in SSE are calculated from Geocentric Solar Ecliptic System (GSE) and Moon Center Coordinate (MCC) coordinates of the spacecraft, and then the ion differential number flux distribution in SSE is visualized in 3D space. This visualization method is integrated into an interactive visualization analysis software tool named vtSWIDs, developed in MATLAB, which enables researchers to browse through numerous records and manipulate the visualization results in real time. The tool also provides some useful statistical analysis functions, and can be easily expanded.
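    The coordinate conversion described above can be sketched as a translation to the Moon's center followed by a change of basis. This is a hedged sketch only: it assumes SSE has +X pointing from the Moon toward the Sun and +Z toward ecliptic north, with all input positions already expressed in GSE; the function and variable names are illustrative, not from the paper.

```python
import math

def sub(a, b): return [x - y for x, y in zip(a, b)]
def norm(a): return math.sqrt(sum(x * x for x in a))
def unit(a): n = norm(a); return [x / n for x in a]
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def gse_to_sse(r_gse, moon_gse, sun_gse):
    """Express a GSE position vector in an assumed Moon-centered SSE frame."""
    x = unit(sub(sun_gse, moon_gse))      # +X: Moon -> Sun
    y = unit(cross([0.0, 0.0, 1.0], x))   # +Y in the ecliptic plane
    z = cross(x, y)                       # completes the right-handed triad
    d = sub(r_gse, moon_gse)              # translate origin to Moon center
    return [dot(x, d), dot(y, d), dot(z, d)]

# Toy numbers (km): spacecraft 2000 km sunward of the Moon
print(gse_to_sse([4.02e5, 0.0, 0.0], [4.0e5, 0.0, 0.0], [1.496e8, 0.0, 0.0]))
# [2000.0, 0.0, 0.0]
```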

  11. Prevalence of atherogenic dyslipidemia in primary care patients at moderate-very high risk of cardiovascular disease. Cardiovascular risk perception.

    PubMed

    Plana, Nuria; Ibarretxe, Daiana; Cabré, Anna; Ruiz, Emilio; Masana, Lluis

    2014-01-01

    Atherogenic dyslipidemia is an important risk factor for cardiovascular disease. We aim to determine atherogenic dyslipidemia prevalence in primary care patients at moderate-very high cardiovascular risk and its associated cardiovascular risk perception in Spain. This cross-sectional study included 1137 primary care patients. Patients had previous cardiovascular disease, diabetes mellitus, SCORE risk ≥ 3, severe hypertension or dyslipidemia. Atherogenic dyslipidemia was defined as low HDL-C (<40 mg/dL [males], <50 mg/dL [females]) and elevated triglycerides (≥ 150 mg/dL). A visual analog scale was used to define a perceived cardiovascular disease risk score. Mean age was 63.9 ± 9.7 years (64.6% males). The mean BMI was 29.1 ± 4.3 kg/m², and mean waist circumference 104.2 ± 12.7 cm (males), and 97.2 ± 14.0 cm (females). 29.4% were smokers, 76.4% had hypertension, 48.0% were diabetics, 24.7% had previous myocardial infarction, and 17.8% peripheral arterial disease. European guidelines classified 83.6% at very high cardiovascular risk. Recommended HDL-C levels were achieved by 50.1% of patients and 37.3% had triglycerides in the reference range. Target LDL-C was achieved by 8.8%. The overall atherogenic dyslipidemia prevalence was 27.1% (34.1% in diabetics). This prevalence in patients achieving target LDL-C was 21.4%. Cardiovascular risk perceived by patients was 4.3/10, while primary care physicians scored 5.7/10. When LDL-C levels are controlled, atherogenic dyslipidemia is more prevalent in those patients at highest cardiovascular risk and with diabetes. This highlights the importance of intervention strategies to prevent the residual vascular risk in this population. Both patients and physicians underestimated cardiovascular risk. Copyright © 2014 Sociedad Española de Arteriosclerosis. Published by Elsevier España. All rights reserved.
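    The definition used in the study translates directly into a classification rule; a minimal sketch (the function name is illustrative, the thresholds are those stated in the abstract):

```python
def atherogenic_dyslipidemia(hdl_mg_dl, tg_mg_dl, male):
    """Atherogenic dyslipidemia per the study's definition: low HDL-C
    (<40 mg/dL in males, <50 mg/dL in females) together with elevated
    triglycerides (>= 150 mg/dL)."""
    low_hdl = hdl_mg_dl < (40 if male else 50)
    return low_hdl and tg_mg_dl >= 150

print(atherogenic_dyslipidemia(38, 180, male=True))    # True
print(atherogenic_dyslipidemia(45, 180, male=True))    # False: HDL-C adequate
print(atherogenic_dyslipidemia(45, 180, male=False))   # True: female cutoff
```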

  12. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data.

    PubMed

    Ying, Gui-Shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-04-01

    To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field in the elderly. When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI -0.03 to 0.32 D, p = 0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28 D, p = 0.03). Standard regression for visual field data from both eyes provided biased (generally underestimated) estimates of the standard error and smaller p-values, while analysis of the worse eye provided larger p-values than mixed effects models and marginal models. In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision.
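    The central point, that ignoring inter-eye correlation understates standard errors, can be illustrated with simulated data. This is not the paper's SAS analysis; all parameters below are made up for the sketch, which compares a naive SE (all eyes treated as independent) against a cluster-aware SE computed from per-patient means.

```python
import random, math

random.seed(0)
n_patients = 500
eyes = []
for _ in range(n_patients):
    u = random.gauss(0, 1.0)                        # shared patient effect
    eyes.append([u + random.gauss(0, 0.5) for _ in range(2)])  # two eyes

flat = [y for pair in eyes for y in pair]
mean = sum(flat) / len(flat)

# Naive SE: all 2n eyes treated as independent observations.
var_flat = sum((y - mean) ** 2 for y in flat) / (len(flat) - 1)
se_naive = math.sqrt(var_flat / len(flat))

# Cluster-aware SE: average within each patient first, then SE across patients.
pm = [sum(p) / 2 for p in eyes]
var_pm = sum((m - mean) ** 2 for m in pm) / (n_patients - 1)
se_cluster = math.sqrt(var_pm / n_patients)

print(se_naive < se_cluster)  # True: the naive analysis understates uncertainty
```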

  13. 3D visualization techniques for the STEREO-mission

    NASA Astrophysics Data System (ADS)

    Wiegelmann, T.; Podlipnik, B.; Inhester, B.; Feng, L.; Ruan, P.

    The forthcoming STEREO mission will observe the Sun from two different viewpoints. We expect about 2 GB of data per day, which calls for suitable data presentation techniques. A key feature of STEREO is that it will provide, for the first time, a 3D view of the Sun and the solar corona. In our normal environment we see objects three-dimensionally because the light from real 3D objects needs different travel times to reach our left and right eyes. As a consequence, our eyes see slightly different images, which gives us information about the depth of objects and a corresponding 3D impression. Techniques for the 3D visualization of scientific and other data on paper, TV, computer screens, cinema, etc. are well known, e.g., the two-colour anaglyph technique, shutter glasses, polarization filters and head-mounted displays. We discuss the advantages and disadvantages of these techniques and how they can be applied to STEREO data. The 3D visualization techniques are not limited to visual images but can also be used to show the reconstructed coronal magnetic field and the energy and helicity distribution. In advance of STEREO, we test the method with data from SOHO, which provides different viewpoints through solar rotation. This restricts the analysis to structures that remain stationary for several days; real STEREO data will not be affected by these limitations, however.
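    The two-colour anaglyph technique mentioned above can be sketched in a few lines: the left-eye image supplies the red channel and the right-eye image the green and blue channels. The image representation here (nested lists of RGB tuples) is a hypothetical simplification for illustration.

```python
def anaglyph(left, right):
    """Combine left/right stereo views into a red-cyan anaglyph:
    red from the left-eye image, green and blue from the right-eye image."""
    return [[(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left, right)]

# Tiny 1x2 example images
left  = [[(200, 10, 10), (180, 20, 20)]]
right = [[(30, 120, 130), (40, 110, 140)]]
print(anaglyph(left, right))
# [[(200, 120, 130), (180, 110, 140)]]
```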

  14. 3D visualization of the scoliotic spine: longitudinal studies, data acquisition, and radiation dosage constraints

    NASA Astrophysics Data System (ADS)

    Kalvin, Alan D.; Adler, Roy L.; Margulies, Joseph Y.; Tresser, Charles P.; Wu, Chai W.

    1999-05-01

    Decision making in the treatment of scoliosis is typically based on longitudinal studies that involve imaging and visualizing the progressive degeneration of a patient's spine over a period of years. Some patients will need surgery if their spinal deformation exceeds a certain degree of severity. Currently, surgeons rely on 2D measurements obtained from x-rays to quantify spinal deformation. Working only with 2D measurements clearly limits the surgeon's ability to infer 3D spinal pathology. Standard CT scanning is not a practical way to obtain 3D spinal measurements of scoliotic patients, because it would expose the patient to a prohibitively high dose of radiation. We have developed two new CT-based methods of 3D spinal visualization that produce 3D models of the spine by integrating a very small number of axial CT slices with CT scout data. In the first method, the scout data are converted to sinogram data and then processed by a tomographic image reconstruction algorithm. In the second method, the vertebral boundaries are detected in the scout data, and these edges are then used as linear constraints to determine 2D convex hulls of the vertebrae.
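    The convex-hull step of the second method can be sketched with Andrew's monotone chain algorithm, a standard choice; the paper does not state which hull algorithm was used, and the edge points below are hypothetical.

```python
def convex_hull(points):
    """2D convex hull via Andrew's monotone chain; returns the hull
    vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): >0 means a left turn
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Hypothetical detected vertebral edge points (interior points are dropped)
edges = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1), (1, 2)]
print(convex_hull(edges))  # [(0, 0), (4, 0), (4, 3), (0, 3)]
```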

  15. [3D-visualization by MRI for surgical planning of Wilms tumors].

    PubMed

    Schenk, J P; Waag, K-L; Graf, N; Wunsch, R; Jourdan, C; Behnisch, W; Tröger, J; Günther, P

    2004-10-01

    To improve the surgical planning of kidney tumors in childhood (Wilms tumor, mesoblastic nephroma), after radiologic verification of the presumptive diagnosis, using interactive colored 3D animation of MRI data. In 7 children (1 boy, 6 girls) with a mean age of 3 years (1 month to 11 years), the MRI database (DICOM) was processed with a raycasting-based 3D volume-rendering software (VG Studio Max 1.1/Volume Graphics). The abdominal MRI sequences (coronal STIR, coronal T1 TSE, transverse T1/T2 TSE, sagittal T2 TSE, transverse and coronal T1 TSE post contrast) were obtained with a 0.5 T unit in 4-6 mm slices. Additionally, phase-contrast MR angiography was used to delineate the large abdominal and retroperitoneal vessels. A notebook computer was used to demonstrate the 3D visualization for surgical planning before and during the surgical procedure. In all 7 cases, the surgical approach was influenced by the interactive 3D animation, and the information was found useful for surgical planning. Above all, the 3D visualization demonstrates the mass effect of the Wilms tumor and its anatomical relationship to the renal hilum and the rest of the kidney, as well as the topographic relationship of the tumor to the critical vessels. One rupture of the tumor capsule occurred as a surgical complication. For the surgeon, transferring the anatomical situation from MRI to the surgical situs has become much easier. For the surgical planning of Wilms tumors, 3D visualization with 3D animation of the situs helps to transfer important information from the pediatric radiologist to the pediatric surgeon and optimizes surgical preparation. A reduction in complications is to be expected.

  16. Perceived reachability in hemispace.

    PubMed

    Gabbard, Carl; Ammar, Diala; Rodrigues, Luis

    2005-07-01

    A common observation in studies comparing perceived (imagined) with actual movement in a reaching paradigm is a tendency to overestimate reach. In the studies noted, reaching tasks have generally been presented near the midline. In the present study, strong right-handers were asked to judge the reachability of visual targets projected onto a table surface at the midline and in the right (RVF) and left visual fields (LVF). Midline results support those of previous studies, showing an overestimation bias. In contrast, participants tended to underestimate their reachability in the RVF and LVF. These findings are discussed from the perspective of actor 'confidence' (a cognitive state), possibly associated with visual information, perceived ability, and perceived task demands.

  17. Revisiting flow maps: a classification and a 3D alternative to visual clutter

    NASA Astrophysics Data System (ADS)

    Gu, Yuhang; Kraak, Menno-Jan; Engelhardt, Yuri

    2018-05-01

    Flow maps have long served to explore movement by representing origin-destination (OD) data. Due to recent developments in data collection techniques, the amount of movement data is increasing dramatically. With such huge amounts of data, visual clutter in flow maps becomes a challenge. This paper revisits flow maps, provides an overview of the characteristics of OD data and proposes a classification system for flow maps. To deal with visual clutter, 3D flow maps are proposed as a potential alternative to 2D flow maps.

  18. Interactive access and management for four-dimensional environmental data sets using McIDAS

    NASA Technical Reports Server (NTRS)

    Hibbard, William L.; Tripoli, Gregory J.

    1991-01-01

    Significant accomplishments in the following areas are presented: (1) enhancements to the visualization of 5-D data sets (VIS-5D); (2) development of the visualization of global images (VIS-GI) application; (3) design of the Visualization for Algorithm Development (VIS-AD) System; and (4) numerical modeling applications. The focus of current research and future research plans is presented and the following topics are addressed: (1) further enhancements to VIS-5D; (2) generalization and enhancement of the VIS-GI application; (3) the implementation of the VIS-AD System; and (4) plans for modeling applications.

  19. Astronomy Data Visualization with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2015-08-01

    We present innovative methods and techniques for using Blender, a 3D software package, in the visualization of astronomical data. N-body simulations, data cubes, galaxy and stellar catalogs, and planetary surface maps can be rendered in high-quality videos for exploratory data analysis. Blender's API is Python based, making it advantageous for use in astronomy with flexible libraries like astroPy. Examples are exhibited that showcase the features of the software in astronomical visualization paradigms. 2D and 3D voxel texture applications, animations, camera movement, and composite renders are introduced to the astronomer's toolkit, along with how they mesh with different forms of data.

  20. Do you see what I see? A comparative investigation of the Delboeuf illusion in humans (Homo sapiens), rhesus monkeys (Macaca mulatta), and capuchin monkeys (Cebus apella).

    PubMed

    Parrish, Audrey E; Brosnan, Sarah F; Beran, Michael J

    2015-10-01

    Studying visual illusions is critical to understanding typical visual perception. We investigated whether rhesus monkeys (Macaca mulatta) and capuchin monkeys (Cebus apella) perceived the Delboeuf illusion in a similar manner to human adults (Homo sapiens). To test this, in Experiment 1, we presented monkeys and humans with a relative discrimination task that required subjects to choose the larger of 2 central dots that were sometimes encircled by concentric rings. As predicted, humans demonstrated evidence of the Delboeuf illusion, overestimating central dots when small rings surrounded them and underestimating the size of central dots when large rings surrounded them. However, monkeys did not show evidence of the illusion. To rule out an alternate explanation, in Experiment 2, we presented all species with an absolute classification task that required them to classify a central dot as "small" or "large." We presented a range of ring sizes to determine whether the Delboeuf illusion would occur for any dot-to-ring ratios. Here, we found evidence of the Delboeuf illusion in all 3 species. Humans and monkeys underestimated central dot size to a progressively greater degree with progressively larger rings. The Delboeuf illusion now has been extended to include capuchin monkeys and rhesus monkeys, and through such comparative investigations we can better evaluate hypotheses regarding illusion perception among nonhuman animals. (c) 2015 APA, all rights reserved.

  1. Tactical decisions for changeable cuttlefish camouflage: visual cues for choosing masquerade are relevant from a greater distance than visual cues used for background matching.

    PubMed

    Buresch, Kendra C; Ulmer, Kimberly M; Cramer, Corinne; McAnulty, Sarah; Davison, William; Mäthger, Lydia M; Hanlon, Roger T

    2015-10-01

    Cuttlefish use multiple camouflage tactics to evade their predators. Two common tactics are background matching (resembling the background to hinder detection) and masquerade (resembling an uninteresting or inanimate object to impede detection or recognition). We investigated how the distance and orientation of visual stimuli affected the choice of these two camouflage tactics. In the current experiments, cuttlefish were presented with three visual cues: a 2D horizontal floor, a 2D vertical wall, and a 3D object. Each was placed at several distances: directly beneath (in a circle whose diameter was one body length, BL); at zero BL (0BL; i.e., directly beside, but not beneath, the cuttlefish); at 1BL; and at 2BL. Cuttlefish continued to respond to 3D visual cues from a greater distance than to a horizontal or vertical stimulus. It appears that background matching is chosen when visual cues are relevant only in the immediate benthic surroundings. However, for masquerade, objects located multiple body lengths away remained relevant for the choice of camouflage. © 2015 Marine Biological Laboratory.

  2. Efficient LBM visual simulation on face-centered cubic lattices.

    PubMed

    Petkov, Kaloian; Qiu, Feng; Fan, Zhe; Kaufman, Arie E; Mueller, Klaus

    2009-01-01

    The Lattice Boltzmann method (LBM) for visual simulation of fluid flow generally employs cubic Cartesian (CC) lattices such as the D3Q13 and D3Q19 lattices for the particle transport. However, the CC lattices lead to suboptimal representation of the simulation space. We introduce the face-centered cubic (FCC) lattice, fD3Q13, for LBM simulations. Compared to the CC lattices, the fD3Q13 lattice creates a more isotropic sampling of the simulation domain and its single lattice speed (i.e., link length) simplifies the computations and data storage. Furthermore, the fD3Q13 lattice can be decomposed into two independent interleaved lattices, one of which can be discarded, which doubles the simulation speed. The resulting LBM simulation can be efficiently mapped to the GPU, further increasing the computational performance. We show the numerical advantages of the FCC lattice on channeled flow in 2D and the flow-past-a-sphere benchmark in 3D. In both cases, the comparison is against the corresponding CC lattices using the analytical solutions for the systems as well as velocity field visualizations. We also demonstrate the performance advantages of the fD3Q13 lattice for interactive simulation and rendering of hot smoke in an urban environment using thermal LBM.
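    The decomposition into two independent interleaved lattices follows because every FCC link vector changes x + y + z by an even amount, so the parity of x + y + z is conserved and the two parity classes never exchange particles. A quick check of this property (a sketch, not the authors' GPU implementation):

```python
from itertools import permutations

# The 12 FCC link directions: all permutations of (+/-1, +/-1, 0).
links = sorted({p for s1 in (1, -1) for s2 in (1, -1)
                  for p in permutations((s1, s2, 0))})
print(len(links))  # 12

# Every link changes x + y + z by 0 or +/-2, so the parity of x + y + z
# is invariant under streaming: the lattice splits into two sublattices.
print(all((dx + dy + dz) % 2 == 0 for dx, dy, dz in links))  # True
```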

  3. Applying microCT and 3D visualization to Jurassic silicified conifer seed cones: A virtual advantage over thin-sectioning

    PubMed Central

    Gee, Carole T.

    2013-01-01

    • Premise of the study: As an alternative to conventional thin-sectioning, which destroys fossil material, high-resolution X-ray computed tomography (also called microtomography or microCT) integrated with scientific visualization, three-dimensional (3D) image segmentation, size analysis, and computer animation is explored as a nondestructive method of imaging the internal anatomy of 150-million-year-old conifer seed cones from the Late Jurassic Morrison Formation, USA, and of recent and other fossil cones. • Methods: MicroCT was carried out on cones using a General Electric phoenix v|tome|x s 240D, and resulting projections were processed with visualization software to produce image stacks of serial single sections for two-dimensional (2D) visualization, 3D segmented reconstructions with targeted structures in color, and computer animations. • Results: If preserved in differing densities, microCT produced images of internal fossil tissues that showed important characters such as seed phyllotaxy or number of seeds per cone scale. Color segmentation of deeply embedded seeds highlighted the arrangement of seeds in spirals. MicroCT of recent cones was even more effective. • Conclusions: This is the first paper on microCT integrated with 3D segmentation and computer animation applied to silicified seed cones, which resulted in excellent 2D serial sections and segmented 3D reconstructions, revealing features requisite to cone identification and understanding of strobilus construction. PMID:25202495
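    The density-based separation of tissues that this microCT approach relies on can be sketched as a simple threshold segmentation. This is a toy illustration with hypothetical attenuation values, not the visualization software's actual pipeline.

```python
def segment(volume, lo, hi):
    """Return a binary mask selecting voxels whose density lies in [lo, hi)."""
    return [[[lo <= v < hi for v in row] for row in plane] for plane in volume]

# Hypothetical 2x2x2 volume of attenuation values
volume = [[[0.2, 0.9], [1.4, 0.8]],
          [[0.9, 1.5], [0.3, 1.1]]]
seeds = segment(volume, 0.8, 1.2)   # assumed "seed tissue" density band
count = sum(v for plane in seeds for row in plane for v in row)
print(count)  # 4 voxels fall in the seed density band
```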

  4. Advances in visual representation of molecular potentials.

    PubMed

    Du, Qi-Shi; Huang, Ri-Bo; Chou, Kuo-Chen

    2010-06-01

    The recent advances in visual representations of molecular properties in 3D space are summarized, and their applications in molecular modeling and rational drug design are introduced. The visual representation methods provide detailed insights into protein-ligand interactions, and hence can play a major role in elucidating the structure or reactivity of a biomolecular system. Three newly developed computation and visualization methods for studying the physical and chemical properties of molecules are introduced, covering electrostatic potential, lipophilicity potential and excess chemical potential. The newest application examples of visual representations in structure-based rational drug design are presented. The 3D electrostatic potentials, calculated using the empirical method (EM-ESP), in which the classical Coulomb equation and traditional atomic partial charges are discarded, are highly consistent with results from higher-level quantum chemical methods. The 3D lipophilicity potentials, computed by the heuristic molecular lipophilicity potential method based on the principles of quantum mechanics and statistical mechanics, are more accurate and reliable than those obtained using traditional empirical methods. The 3D excess chemical potentials, derived by the reference interaction site model-hypernetted chain theory, provide a new tool for computational chemistry and molecular modeling. For structure-based drug design, the visual representations of molecular properties will play a significant role in practical applications. It is anticipated that the new advances in computational chemistry will stimulate the development of molecular modeling methods, further enriching the visual representation techniques for rational drug design, as well as other relevant fields in life science.

  5. New Technologies for Acquisition and 3-D Visualization of Geophysical and Other Data Types Combined for Enhanced Understandings and Efficiencies of Oil and Gas Operations, Deepwater Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Thomson, J. A.; Gee, L. J.; George, T.

    2002-12-01

    This presentation shows results of a visualization method used to display and analyze multiple data types in a geospatially referenced three-dimensional (3-D) space. The integrated data types include sonar and seismic geophysical data, pipeline and geotechnical engineering data, and 3-D facilities models. Visualization of these data collectively in proper 3-D orientation yields insights and synergistic understandings not previously obtainable. Key technological components of the method are: 1) high-resolution geophysical data obtained using a newly developed autonomous underwater vehicle (AUV), 2) 3-D visualization software that delivers correctly positioned display of multiple data types and full 3-D flight navigation within the data space and 3) a highly immersive visualization environment (HIVE) where multidisciplinary teams can work collaboratively to develop enhanced understandings of geospatially complex data relationships. The initial study focused on an active deepwater development area in the Green Canyon protraction area, Gulf of Mexico. Here several planned production facilities required detailed, integrated data analysis for design and installation purposes. To meet the challenges of tight budgets and short timelines, an innovative new method was developed based on the combination of newly developed technologies. Key benefits of the method include enhanced understanding of geologically complex seabed topography and marine soils yielding safer and more efficient pipeline and facilities siting. Environmental benefits include rapid and precise identification of potential locations of protected deepwater biological communities for avoidance and protection during exploration and production operations. In addition, the method allows data presentation and transfer of learnings to an audience outside the scientific and engineering team. This includes regulatory personnel, marine archaeologists, industry partners and others.

  6. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    PubMed

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes increasingly challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software exists that is suited to the visualization of meteorological data, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports users in proving and disproving existing hypotheses and in discovering new insights. In addition, the application has been used at public events to communicate research results.

  7. MEVA - An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices

    PubMed Central

    Helbig, Carolin; Bilke, Lars; Bauer, Hans-Stefan; Böttinger, Michael; Kolditz, Olaf

    2015-01-01

Background To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes increasingly challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software exists that is suited to the visualization of meteorological data, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Methods and Results Instead of attempting to develop yet another visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application supports users in proving and disproving existing hypotheses and in discovering new insights. In addition, the application has been used at public events to communicate research results. PMID:25915061

  8. LiveView3D: Real Time Data Visualization for the Aerospace Testing Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; Fleming, Gary A.

    2006-01-01

    This paper addresses LiveView3D, a software package and associated data visualization system for use in the aerospace testing environment. The LiveView3D system allows researchers to graphically view data from numerous wind tunnel instruments in real time in an interactive virtual environment. The graphical nature of the LiveView3D display provides researchers with an intuitive view of the measurement data, making it easier to interpret the aerodynamic phenomenon under investigation. LiveView3D has been developed at the NASA Langley Research Center and has been applied in the Langley Unitary Plan Wind Tunnel (UPWT). This paper discusses the capabilities of the LiveView3D system, provides example results from its application in the UPWT, and outlines features planned for future implementation.

  9. Memory and visual search in naturalistic 2D and 3D environments

    PubMed Central

    Li, Chia-Ling; Aivar, M. Pilar; Kit, Dmitry M.; Tong, Matthew H.; Hayhoe, Mary M.

    2016-01-01

    The role of memory in guiding attention allocation in daily behaviors is not well understood. In experiments with two-dimensional (2D) images, there is mixed evidence about the importance of memory. Because the stimulus context in laboratory experiments and daily behaviors differs extensively, we investigated the role of memory in visual search, in both two-dimensional (2D) and three-dimensional (3D) environments. A 3D immersive virtual apartment composed of two rooms was created, and a parallel 2D visual search experiment composed of snapshots from the 3D environment was developed. Eye movements were tracked in both experiments. Repeated searches for geometric objects were performed to assess the role of spatial memory. Subsequently, subjects searched for realistic context objects to test for incidental learning. Our results show that subjects learned the room-target associations in 3D but less so in 2D. Gaze was increasingly restricted to relevant regions of the room with experience in both settings. Search for local contextual objects, however, was not facilitated by early experience. Incidental fixations to context objects do not necessarily benefit search performance. Together, these results demonstrate that memory for global aspects of the environment guides search by restricting allocation of attention to likely regions, whereas task relevance determines what is learned from the active search experience. Behaviors in 2D and 3D environments are comparable, although there is greater use of memory in 3D. PMID:27299769

  10. 3D printing meets computational astrophysics: deciphering the structure of η Carinae's inner colliding winds

    NASA Astrophysics Data System (ADS)

    Madura, T. I.; Clementel, N.; Gull, T. R.; Kruip, C. J. H.; Paardekooper, J.-P.

    2015-06-01

We present the first 3D prints of output from a supercomputer simulation of a complex astrophysical system, the colliding stellar winds in the massive (≳120 M⊙), highly eccentric (e ~ 0.9) binary star system η Carinae. We demonstrate the methodology used to incorporate 3D interactive figures into a PDF (Portable Document Format) journal publication and the benefits of using 3D visualization and 3D printing as tools to analyse data from multidimensional numerical simulations. Using a consumer-grade 3D printer (MakerBot Replicator 2X), we successfully printed 3D smoothed particle hydrodynamics simulations of η Carinae's inner (r ~ 110 au) wind-wind collision interface at multiple orbital phases. The 3D prints and visualizations reveal important, previously unknown 'finger-like' structures at orbital phases shortly after periastron (φ ~ 1.045) that protrude radially outwards from the spiral wind-wind collision region. We speculate that these fingers are related to instabilities (e.g. thin-shell, Rayleigh-Taylor) that arise at the interface between the radiatively cooled layer of dense post-shock primary-star wind and the fast (3000 km s⁻¹), adiabatic post-shock companion-star wind. The success of our work and the easy identification of previously unrecognized physical features highlight the important role 3D printing and interactive graphics can play in the visualization and understanding of complex 3D time-dependent numerical simulations of astrophysical phenomena.
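Turning simulation output into a printable object ultimately means emitting a triangle mesh in a format the printer toolchain accepts, typically STL. As an illustration (not the authors' actual pipeline), a minimal sketch of an ASCII STL writer for an arbitrary triangle list:

```python
import io
import math

def write_ascii_stl(triangles, name="model"):
    """Serialize a list of triangles (each three (x, y, z) vertices)
    to an ASCII STL string suitable for consumer 3D printers."""
    def normal(a, b, c):
        # Cross product of two edge vectors gives the facet normal.
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        n = [u[1]*v[2] - u[2]*v[1],
             u[2]*v[0] - u[0]*v[2],
             u[0]*v[1] - u[1]*v[0]]
        length = math.sqrt(sum(x * x for x in n)) or 1.0
        return [x / length for x in n]

    out = io.StringIO()
    out.write(f"solid {name}\n")
    for a, b, c in triangles:
        nx, ny, nz = normal(a, b, c)
        out.write(f"  facet normal {nx:.6e} {ny:.6e} {nz:.6e}\n")
        out.write("    outer loop\n")
        for vx, vy, vz in (a, b, c):
            out.write(f"      vertex {vx:.6e} {vy:.6e} {vz:.6e}\n")
        out.write("    endloop\n  endfacet\n")
    out.write(f"endsolid {name}\n")
    return out.getvalue()

# A single example facet in the z = 0 plane (illustrative geometry).
tri = [((0, 0, 0), (1, 0, 0), (0, 1, 0))]
stl_text = write_ascii_stl(tri, name="wind_shell")
```

A real workflow would first extract an isosurface (e.g. by marching cubes) from the SPH density field and feed the resulting triangles to such a writer.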

  11. Stereo chromatic contrast sensitivity model to blue-yellow gratings.

    PubMed

    Yang, Jiachen; Lin, Yancong; Liu, Yun

    2016-03-07

As a fundamental metric of the human visual system (HVS), the contrast sensitivity function (CSF) is typically measured with sinusoidal gratings at detection threshold for the psychophysically defined cardinal channels: luminance, red-green, and blue-yellow. Chromatic CSF, a quick and valid index of human visual performance and of various retinal diseases in two-dimensional (2D) space, cannot be directly applied to the measurement of human stereo visual performance, and no existing perception model considers the influence of the chromatic CSF of inclined planes on depth perception in three-dimensional (3D) space. The main aim of this research is to extend traditional chromatic contrast sensitivity characteristics to 3D space and to build a model applicable in 3D space, for example for strengthening the stereo quality of 3D images. This research also attempts to build a vision model or method to test human visual characteristics related to stereo blindness. In this study, a CRT screen was rotated clockwise and anti-clockwise to form inclined planes. Four inclined planes were selected to investigate human chromatic vision in 3D space, and the contrast threshold of each inclined plane was measured with 18 observers. Stimuli were isoluminant blue-yellow sinusoidal gratings. Horizontal spatial frequencies ranged from 0.05 to 5 c/d. Contrast sensitivity was calculated as the inverse of the pooled cone contrast threshold. Based on the relationship between the spatial frequency of an inclined plane and the horizontal spatial frequency, the chromatic contrast sensitivity characteristics in 3D space were modeled from the experimental data. The results show that the proposed model predicts human chromatic contrast sensitivity characteristics in 3D space well.
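The geometric link such a model relies on can be sketched simply: tilting the grating plane away from fronto-parallel foreshortens the pattern, raising its effective horizontal spatial frequency, while sensitivity is by definition the inverse of the measured threshold. A small sketch (the projection rule is an assumption here, not the paper's fitted model):

```python
import math

def retinal_frequency(plane_freq_cpd, tilt_deg):
    """Foreshortening of a grating on a plane tilted away from
    fronto-parallel raises its effective horizontal spatial frequency
    (simple projection model, assumed for illustration)."""
    return plane_freq_cpd / math.cos(math.radians(tilt_deg))

def contrast_sensitivity(threshold):
    """Sensitivity is defined as the inverse of the contrast threshold."""
    return 1.0 / threshold

# A 1 c/d grating on a plane tilted 60 degrees projects to 2 c/d.
f = retinal_frequency(1.0, 60.0)
s = contrast_sensitivity(0.02)   # a 2% threshold gives sensitivity 50
```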

  12. My Corporis Fabrica Embryo: An ontology-based 3D spatio-temporal modeling of human embryo development.

    PubMed

    Rabattu, Pierre-Yves; Massé, Benoit; Ulliana, Federico; Rousset, Marie-Christine; Rohmer, Damien; Léon, Jean-Claude; Palombi, Olivier

    2015-01-01

Embryology is a complex morphologic discipline involving a set of entangled mechanisms that are sometimes difficult to understand and to visualize. Recent computer-based techniques, ranging from geometrical to physically based modeling, are used to assist the visualization and simulation of virtual humans in numerous domains such as surgical simulation and learning. On the other hand, ontology-based approaches to knowledge representation are increasingly and successfully adopted in the life sciences to formalize biological entities and phenomena, thanks to a declarative approach for expressing and reasoning over symbolic information. 3D models and ontologies are two complementary ways to describe biological entities that have remained largely separate. Indeed, while many ontologies provide a unified formalization of anatomy and embryology, they remain purely descriptive and make access to the anatomical content of complex 3D embryology models and simulations difficult. In this work, we present a novel ontology describing the development of human embryology with deforming 3D models. Beyond describing how organs and structures are composed, our ontology integrates a procedural description of their 3D representations, their temporal deformation, and relations with respect to their development. We also created inference rules to express complex connections between entities. The result is a unified description of both the knowledge of organ deformation and the organs' 3D representations, enabling dynamic visualization of embryo deformation during the Carnegie stages. Through a simplified ontology containing representative entities linked to spatial position and temporal process information, we illustrate the added value of such a declarative approach for interactive simulation and visualization of 3D embryos. Combining ontologies and 3D models enables a declarative description of different embryological models that captures the complexity of human developmental anatomy. Visualizing embryos with 3D geometric models and their animated deformations paves the way towards hypothesis-driven applications, and can also be used to assist the learning of this complex knowledge. http://www.mycorporisfabrica.org/.
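The idea of pairing declarative knowledge with temporal 3D content can be sketched with a toy fact base: entities carry part-of assertions and Carnegie-stage intervals, and simple rules infer transitive composition and stage-dependent presence. Entity names and stage ranges below are illustrative, not taken from the actual ontology:

```python
# Toy ontology-style fact base: each entity has a part-of assertion and
# a Carnegie-stage interval during which it exists.
facts = {
    "embryo":     {"part_of": None,     "stages": (1, 23)},
    "heart":      {"part_of": "embryo", "stages": (9, 23)},
    "heart_tube": {"part_of": "heart",  "stages": (9, 12)},
}

def ancestors(entity):
    """Inference rule: derive all entities that `entity` is
    (transitively) part of by following part-of links."""
    chain = []
    parent = facts[entity]["part_of"]
    while parent is not None:
        chain.append(parent)
        parent = facts[parent]["part_of"]
    return chain

def present_at(entity, stage):
    """True if the entity exists at the given Carnegie stage,
    i.e. the stage falls inside its declared interval."""
    lo, hi = facts[entity]["stages"]
    return lo <= stage <= hi

parts_of_embryo = ancestors("heart_tube")   # ["heart", "embryo"]
visible = present_at("heart_tube", 10)      # True at stage 10
```

A renderer can use `present_at` to decide which 3D models to show at a given stage, while `ancestors` answers compositional queries declaratively.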

  13. 3D Flow Visualization Using Texture Advection

    NASA Technical Reports Server (NTRS)

    Kao, David; Zhang, Bing; Kim, Kwansik; Pang, Alex; Moran, Pat (Technical Monitor)

    2001-01-01

    Texture advection is an effective tool for animating and investigating 2D flows. In this paper, we discuss how this technique can be extended to 3D flows. In particular, we examine the use of 3D and 4D textures on 3D synthetic and computational fluid dynamics flow fields.
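The core of texture advection is a semi-Lagrangian step: each output pixel samples the texture at its upstream position in the flow, so repeated steps smear the texture into flow-aligned streaks. A minimal 2D sketch in NumPy (nearest-neighbour sampling, uniform flow; an illustration, not the authors' implementation):

```python
import numpy as np

def advect_texture(tex, vx, vy, dt=1.0):
    """One semi-Lagrangian advection step: each pixel samples the
    texture at the upstream position x - dt*v, producing flow-aligned
    streaks when iterated. Nearest-neighbour sampling keeps it short."""
    h, w = tex.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_x = np.clip(np.rint(xs - dt * vx), 0, w - 1).astype(int)
    src_y = np.clip(np.rint(ys - dt * vy), 0, h - 1).astype(int)
    return tex[src_y, src_x]

rng = np.random.default_rng(0)
tex = rng.random((64, 64))      # input noise texture
vx = np.ones((64, 64))          # uniform rightward flow
vy = np.zeros((64, 64))
out = advect_texture(tex, vx, vy)
```

Extending this to 3D means a third coordinate and a 3D (or time-varying 4D) texture, which is exactly where the memory and rendering challenges the paper discusses arise.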

  14. How spatial abilities and dynamic visualizations interplay when learning functional anatomy with 3D anatomical models.

    PubMed

    Berney, Sandra; Bétrancourt, Mireille; Molinari, Gaëlle; Hoyek, Nady

    2015-01-01

    The emergence of dynamic visualizations of three-dimensional (3D) models in anatomy curricula may be an adequate solution for spatial difficulties encountered with traditional static learning, as they provide direct visualization of change throughout the viewpoints. However, little research has explored the interplay between learning material presentation formats, spatial abilities, and anatomical tasks. First, to understand the cognitive challenges a novice learner would be faced with when first exposed to 3D anatomical content, a six-step cognitive task analysis was developed. Following this, an experimental study was conducted to explore how presentation formats (dynamic vs. static visualizations) support learning of functional anatomy, and affect subsequent anatomical tasks derived from the cognitive task analysis. A second aim was to investigate the interplay between spatial abilities (spatial visualization and spatial relation) and presentation formats when the functional anatomy of a 3D scapula and the associated shoulder flexion movement are learned. Findings showed no main effect of the presentation formats on performances, but revealed the predictive influence of spatial visualization and spatial relation abilities on performance. However, an interesting interaction between presentation formats and spatial relation ability for a specific anatomical task was found. This result highlighted the influence of presentation formats when spatial abilities are involved as well as the differentiated influence of spatial abilities on anatomical tasks. © 2015 American Association of Anatomists.

  15. A high-level 3D visualization API for Java and ImageJ.

    PubMed

    Schmid, Benjamin; Schindelin, Johannes; Cardona, Albert; Longair, Mark; Heisenberg, Martin

    2010-05-21

Current imaging methods such as magnetic resonance imaging (MRI), confocal microscopy, electron microscopy (EM) or selective plane illumination microscopy (SPIM) yield three-dimensional (3D) data sets in need of appropriate computational methods for their analysis. Reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. It enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that hides the low-level details lets software development efforts concentrate on the algorithm implementation. Our framework enables biomedical image software with 3D visualization capabilities to be built with very little effort. We offer the source code and convenient binary packages, along with extensive documentation, at http://3dviewer.neurofly.de.
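To give a flavour of what "high-level access to volume rendering" hides, consider the simplest volume-rendering baseline, maximum intensity projection (MIP), which collapses a 3D stack to a 2D image by taking the brightest voxel along each viewing ray. A NumPy sketch (illustrative only; the framework itself renders via Java 3D):

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Collapse a 3D volume to a 2D image by taking the brightest
    voxel along each ray (axis): the classic MIP baseline that
    full volume renderers generalize with transfer functions."""
    return volume.max(axis=axis)

# Synthetic 3D stack: a single bright voxel in a dark volume.
vol = np.zeros((8, 16, 16))
vol[3, 5, 7] = 1.0
mip = max_intensity_projection(vol, axis=0)   # 16x16 image
```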

  16. Binocular coordination in response to stereoscopic stimuli

    NASA Astrophysics Data System (ADS)

    Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.

    2009-02-01

Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment, participants read sentences that contained a stereoscopically presented target word: half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D, and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli; 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. There is also sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
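The depth-appropriate vergence changes measured here follow directly from viewing geometry: the vergence angle for a fixation distance d and inter-pupillary distance IPD is θ = 2·atan(IPD/(2d)). A small sketch (the 63 mm IPD is a typical adult value, assumed here for illustration):

```python
import math

def vergence_deg(ipd_m, distance_m):
    """Vergence angle (degrees) for eyes with inter-pupillary distance
    `ipd_m` fixating a point `distance_m` straight ahead:
    theta = 2 * atan(ipd / (2 * d))."""
    return math.degrees(2.0 * math.atan2(ipd_m / 2.0, distance_m))

# Moving fixation from 1.0 m to 0.5 m roughly doubles the required
# vergence; this is the depth-appropriate eye-movement change that
# real 3D (but not 2D) stimuli elicited during saccades.
near = vergence_deg(0.063, 0.5)
far = vergence_deg(0.063, 1.0)
```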

  17. 3D Visual Proxemics: Recognizing Human Interactions in 3D from a Single Image (Open Access)

    DTIC Science & Technology

    2013-06-28

accurate tracking and identity associations of people’s motions in videos. Proxemics is a subfield of anthropology that involves study of people...cinematography where the shot composition and camera viewpoint is optimized for visual weight [1]. In cinema, a shot is either a long shot, a medium

  18. APPLICATION OF COMPUTER-AIDED TOMOGRAPHY TO VISUALIZE AND QUANTIFY BIOGENIC STRUCTURES IN MARINE SEDIMENTS

    EPA Science Inventory

We used computer-aided tomography (CT) for 3D visualization and 2D analysis of marine sediment cores from 3 stations (at 10, 75 and 118 m depths) with different environmental impact. Biogenic structures such as tubes and burrows were quantified and compared among st...

  19. Interactive Classification of Construction Materials: Feedback Driven Framework for Annotation and Analysis of 3d Point Clouds

    NASA Astrophysics Data System (ADS)

    Hess, M. R.; Petrovic, V.; Kuester, F.

    2017-08-01

Digital documentation of cultural heritage structures is increasingly common through the application of different imaging techniques. Many works have focused on the application of laser scanning and photogrammetry for the acquisition of three-dimensional (3D) geometry detailing cultural heritage sites and structures. With an abundance of these 3D data assets, there must be a digital environment where the data can be visualized and analyzed. Presented here is a feedback-driven visualization framework that seamlessly enables interactive exploration and manipulation of massive point cloud data. The focus of this work is the classification of different building materials, with the goal of building more accurate as-built information models of historical structures. User-defined functions have been tested within the interactive point cloud visualization framework to evaluate automated and semi-automated classification of 3D point data. These functions include decisions based on observed color, laser intensity, normal vector, or local surface geometry. Multiple case studies are presented to demonstrate the flexibility and utility of the point cloud visualization framework in achieving classification objectives.
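A user-defined classification function of the kind described, with decisions based on observed color, laser intensity, and normal vector, can be sketched as vectorized thresholds over per-point attributes. Class names and thresholds below are illustrative, not those used in the case studies:

```python
import numpy as np

def classify_points(colors, intensity, normals):
    """Toy rule-based material labels for a point cloud: thresholds on
    observed colour (RGB in [0,1]), laser return intensity, and normal
    direction. Thresholds and class names are illustrative assumptions."""
    labels = np.full(len(colors), "unknown", dtype=object)
    reddish = colors[:, 0] > colors[:, 2]   # red channel dominates blue
    bright = intensity > 0.6                # strong laser return
    upward = normals[:, 2] > 0.9            # near-horizontal surface
    labels[reddish & ~upward] = "brick"
    labels[~reddish & bright] = "stone"
    labels[upward] = "floor"
    return labels

colors = np.array([[0.8, 0.3, 0.2], [0.4, 0.4, 0.7], [0.5, 0.5, 0.5]])
intensity = np.array([0.5, 0.9, 0.2])
normals = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
labels = classify_points(colors, intensity, normals)
```

In an interactive framework the analyst would adjust these thresholds with immediate visual feedback rather than hard-code them.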

  20. Transparent 3D Visualization of Archaeological Remains in Roman Site in Ankara-Turkey with Ground Penetrating Radar Method

    NASA Astrophysics Data System (ADS)

    Kadioglu, S.

    2009-04-01

Anatolia has always been a point of transit, a bridge between West and East, and a home for ideas moving in all directions. In the Roman and post-Roman periods the role of Anatolia in general, and of Ancyra (the Roman name of Ankara) in particular, was therefore of the greatest importance. The visible archaeological remains of the Roman period in Ankara today are the Roman Bath, the Gymnasium, the Temple of Augustus and Rome, a street, the Theatre, and the city defence wall. Caesar Augustus, the first Roman Emperor, conquered Asia Minor in 25 BC. A marble temple was then built in Ancyra, the administrative capital of the province and today the capital of the Turkish Republic, Ankara. This monument was consecrated to the Emperor and to the Goddess Roma, and is supposed to have been built over an earlier temple dedicated to Kybele and Men between 25-20 BC. After the death of Augustus in 14 AD, a copy of the text of "Res Gestae Divi Augusti" was inscribed on the interior of the pronaos in Latin, and a Greek translation is also present on an exterior wall of the cella. In the 5th century, the temple was converted into a church by the Byzantines. The aim of this study is to locate buried archaeological remains at the Augustus temple, the Roman Bath, and the governorship agora in the Ulus district. These remains were imaged with transparent three-dimensional (3D) visualization of ground penetrating radar (GPR) data. Parallel two-dimensional (2D) GPR profiles were acquired in the study areas, and a 3D data volume was then built from the parallel 2D profiles. A simplified amplitude-colour range and an appropriate opacity function were constructed, and transparent 3D images were obtained to highlight buried remains. Interactive interpretation was carried out using sub-blocks of the transparent 3D volume. The opacity function coefficients were increased as deeper sub-blocks were visualized, so that the amplitudes of the electromagnetic wave field were controlled by changing the opacity coefficients with depth. The transparent 3D visualization made it possible to identify the archaeological remains in their true locations and depths within a 3D volume. According to the visualization results, in the governorship agora the broken Roman street was identified at 4 m depth under remnants of the Ottoman, Seljuk and Byzantine periods, respectively, and a colonnaded portico was detected in the governorship garden. Excavations supported the 3D imaging results. At the Augustus temple, very complex remnant structures including cubbies were detected in front of the east wall, and remnant walls very near the surface continued to considerable depth in the 3D image. The transparent 3D visualization results agreed with the excavation results at the Augustus temple.
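The depth-dependent opacity adjustment described in the abstract can be sketched as a transfer function whose coefficient grows with depth, so that attenuated deep reflections remain visible alongside strong shallow ones (the constants are illustrative assumptions):

```python
import numpy as np

def opacity(amplitude, depth_m, k0=0.5, gain_per_m=0.3):
    """Depth-compensated opacity transfer function: the opacity
    coefficient grows with depth so attenuated deep reflections stay
    visible, mirroring the interactive sub-block adjustment described
    in the text. k0 and gain_per_m are illustrative constants."""
    k = k0 * (1.0 + gain_per_m * depth_m)
    return np.clip(k * np.abs(amplitude), 0.0, 1.0)

# The same reflection amplitude is rendered more opaque at depth.
shallow = opacity(0.8, depth_m=0.0)   # coefficient 0.5 -> opacity 0.40
deep = opacity(0.8, depth_m=4.0)      # coefficient 1.1 -> opacity 0.88
```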

  1. A generalized 3D framework for visualization of planetary data.

    NASA Astrophysics Data System (ADS)

    Larsen, K. W.; De Wolfe, A. W.; Putnam, B.; Lindholm, D. M.; Nguyen, D.

    2016-12-01

As the volume and variety of data returned from planetary exploration missions continue to expand, new tools and technologies are needed to explore the data and answer questions about the formation and evolution of the solar system. We have developed a 3D visualization framework that enables the exploration of planetary data from multiple instruments on the MAVEN mission to Mars. This framework not only provides the opportunity for cross-instrument visualization, but is extended to include model data as well, helping to bridge the gap between theory and observation. This is made possible through the use of new web technologies, namely LATIS, a data server that can stream data and spacecraft ephemerides to a web browser, and Cesium, a Javascript library for 3D globes. The common visualization framework we have developed is flexible and modular so that it can easily be adapted for additional missions. In addition to demonstrating the combined data and modeling capabilities of the system for the MAVEN mission, we will display the first ever near real-time 'QuickLook', interactive, 4D data visualization for the Magnetospheric Multiscale Mission (MMS). In this application, data from all four spacecraft can be manipulated and visualized as soon as the data is ingested into the MMS Science Data Center, less than one day after collection.

  2. Breast tumour visualization using 3D quantitative ultrasound methods

    NASA Astrophysics Data System (ADS)

    Gangeh, Mehrdad J.; Raheem, Abdul; Tadayyon, Hadi; Liu, Simon; Hadizad, Farnoosh; Czarnota, Gregory J.

    2016-04-01

Breast cancer is one of the most common cancer types, accounting for 29% of all cancer cases. Early detection and treatment have a crucial impact on improving the survival of affected patients. Ultrasound (US) is a non-ionizing, portable, inexpensive, real-time imaging modality for screening and quantifying breast cancer. Due to these attractive attributes, the last decade has witnessed many studies on using quantitative ultrasound (QUS) methods in tissue characterization. However, these studies have mainly been limited to 2-D QUS methods using hand-held US (HHUS) scanners. With the availability of automated breast ultrasound (ABUS) technology, this study is the first to develop 3-D QUS methods for the ABUS visualization of breast tumours. Using an ABUS system, unlike with a manual 2-D HHUS device, the patient's whole breast was scanned in an automated manner. The acquired frames were then examined, and a region of interest (ROI) was selected in each frame where tumour was identified. Standard 2-D QUS methods were used to compute spectral and backscatter coefficient (BSC) parametric maps on the selected ROIs. Next, the computed 2-D parameters were mapped to a Cartesian 3-D space, interpolated, and rendered to provide a transparent color-coded visualization of the entire breast tumour. Such 3-D visualization can potentially be used for further analysis of breast tumours in terms of their size and extension. Moreover, the 3-D volumetric scans can be used for tissue characterization and for categorizing breast tumours as benign or malignant by quantifying the computed parametric maps over the whole tumour volume.
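The mapping step, stacking per-frame 2-D parametric maps into a 3-D volume and interpolating between acquired frames, can be sketched in NumPy (linear interpolation along the scan axis; a simple stand-in for the full interpolation and rendering pipeline):

```python
import numpy as np

def stack_and_interpolate(maps_2d, upsample=2):
    """Assemble per-frame 2D parametric maps into a 3D volume and
    linearly interpolate extra slices between acquired frames along
    the scan axis."""
    vol = np.stack(maps_2d, axis=0)                 # (n, h, w)
    n = vol.shape[0]
    zs = np.linspace(0, n - 1, (n - 1) * upsample + 1)
    lo = np.floor(zs).astype(int)                   # lower bracketing frame
    hi = np.minimum(lo + 1, n - 1)                  # upper bracketing frame
    t = (zs - lo)[:, None, None]                    # interpolation weight
    return (1 - t) * vol[lo] + t * vol[hi]

# Two acquired 4x4 parametric maps with constant values 0.0 and 1.0.
maps = [np.full((4, 4), v) for v in (0.0, 1.0)]
volume = stack_and_interpolate(maps, upsample=2)    # 3 slices: 0.0, 0.5, 1.0
```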

  3. Usefulness of real-time three-dimensional ultrasonography in percutaneous nephrostomy: an animal study.

    PubMed

    Hongzhang, Hong; Xiaojuan, Qin; Shengwei, Zhang; Feixiang, Xiang; Yujie, Xu; Haibing, Xiao; Gallina, Kazobinka; Wen, Ju; Fuqing, Zeng; Xiaoping, Zhang; Mingyue, Ding; Huageng, Liang; Xuming, Zhang

    2018-05-17

To evaluate the effect of real-time three-dimensional (3D) ultrasonography (US) in guiding percutaneous nephrostomy (PCN), a hydronephrosis model was devised in which the ureters of 16 beagles were obstructed. The beagles were divided equally into groups 1 and 2. In group 1, PCN was performed under real-time 3D US guidance, while in group 2 PCN was guided by two-dimensional (2D) US. Visualization of the needle tract, puncture time, and number of puncture attempts were recorded for the two groups. In group 1, the score for visualization of the needle tract, puncture time, and number of puncture attempts were 3, 7.3 ± 3.1 s, and 1, respectively. In group 2, the respective results were 1.4 ± 0.5, 21.4 ± 5.8 s, and 2.1 ± 0.6. Visualization of the needle tract in group 1 was superior to that in group 2, and both puncture time and number of puncture attempts were lower in group 1 than in group 2. Real-time 3D US-guided PCN is superior to 2D US-guided PCN in terms of visualization of the needle tract and the targeted pelvicalyceal system, leading to quicker puncture. Real-time 3D US-guided puncture of the kidney holds great promise for clinical implementation in PCN. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.

  4. Visualization of electronic density

    DOE PAGES

    Grosso, Bastien; Cooper, Valentino R.; Pine, Polina; ...

    2015-04-22

An atom’s volume depends on its electronic density. Although this density can only be evaluated exactly for hydrogen-like atoms, there are many excellent numerical algorithms and packages to calculate it for other materials. 3D visualization of charge density is challenging, especially when several molecular/atomic levels are intertwined in space. We explore several approaches to 3D charge density visualization, including the extension of an anaglyphic stereo visualization application based on the AViz package to larger structures such as nanotubes. We describe the motivations for and potential applications of these tools in answering interesting questions about nanotube properties.

  5. Movement-based estimation and visualization of space use in 3D for wildlife ecology and conservation

    USGS Publications Warehouse

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fu-Wen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research.
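
    The paper's estimators are movement-based and account for trajectory structure; as a much simpler illustration of the underlying idea, here is a plain 3D kernel density estimate over hypothetical telemetry fixes, with a 95% density-contour "home range" (all values invented):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Hypothetical telemetry fixes: x, y (metres) and z (altitude/depth)
xyz = rng.normal(loc=[0.0, 0.0, 50.0], scale=[100.0, 100.0, 10.0], size=(500, 3))

kde = gaussian_kde(xyz.T)  # density over (x, y, z)

# Evaluate on a coarse grid and find the 95% "home range" density threshold
grid = np.mgrid[-300:300:30j, -300:300:30j, 20:80:30j].reshape(3, -1)
dens = kde(grid)
order = np.argsort(dens)[::-1]
cum = np.cumsum(dens[order]) / dens.sum()
threshold = dens[order][np.searchsorted(cum, 0.95)]
inside = dens >= threshold  # voxels forming the 95% volume
```

The voxels flagged by `inside` form the 3D volume that a 2D analysis would flatten into a planar home range, losing the vertical component the authors argue for.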

  6. Movement-Based Estimation and Visualization of Space Use in 3D for Wildlife Ecology and Conservation

    PubMed Central

    Tracey, Jeff A.; Sheppard, James; Zhu, Jun; Wei, Fuwen; Swaisgood, Ronald R.; Fisher, Robert N.

    2014-01-01

    Advances in digital biotelemetry technologies are enabling the collection of bigger and more accurate data on the movements of free-ranging wildlife in space and time. Although many biotelemetry devices record 3D location data with x, y, and z coordinates from tracked animals, the third z coordinate is typically not integrated into studies of animal spatial use. Disregarding the vertical component may seriously limit understanding of animal habitat use and niche separation. We present novel movement-based kernel density estimators and computer visualization tools for generating and exploring 3D home ranges based on location data. We use case studies of three wildlife species – giant panda, dugong, and California condor – to demonstrate the ecological insights and conservation management benefits provided by 3D home range estimation and visualization for terrestrial, aquatic, and avian wildlife research. PMID:24988114

  7. Genre Matters: A Comparative Study on the Entertainment Effects of 3D in Cinematic Contexts

    NASA Astrophysics Data System (ADS)

    Ji, Qihao; Lee, Young Sun

    2014-09-01

    Built upon prior comparative studies of 3D and 2D films, the current project investigates the effects of 2D and 3D presentation on viewers' enjoyment, narrative engagement, presence, involvement, and flow across three movie genres (action/fantasy vs. drama vs. documentary). Through a 2 × 3 mixed factorial design, participants (n = 102) were assigned to one of two viewing conditions (2D or 3D) and watched three 15-min film segments. Results suggested that both visual production methods are equally effective at eliciting enjoyment, narrative engagement, involvement, flow and presence; no effect of visual production method was found. In addition, by examining genre effects in both 3D and 2D conditions, we found that 3D works better for action movies than for documentaries in eliciting viewers' enjoyment and presence; similarly, it improves viewers' narrative engagement substantially more for documentaries than for dramas. Implications and limitations are discussed in detail.

  8. 3D Data Mapping and Real-Time Experiment Control and Visualization in Brain Slices.

    PubMed

    Navarro, Marco A; Hibbard, Jaime V K; Miller, Michael E; Nivin, Tyler W; Milescu, Lorin S

    2015-10-20

    Here, we propose two basic concepts that can streamline electrophysiology and imaging experiments in brain slices and enhance data collection and analysis. The first idea is to interface the experiment with a software environment that provides a 3D scene viewer in which the experimental rig, the brain slice, and the recorded data are represented to scale. Within the 3D scene viewer, the user can visualize a live image of the sample and 3D renderings of the recording electrodes with real-time position feedback. Furthermore, the user can control the instruments and visualize their status in real time. The second idea is to integrate multiple types of experimental data into a spatial and temporal map of the brain slice. These data may include low-magnification maps of the entire brain slice, for spatial context, or any other type of high-resolution structural and functional image, together with time-resolved electrical and optical signals. The entire data collection can be visualized within the 3D scene viewer. These concepts can be applied to any other type of experiment in which high-resolution data are recorded within a larger sample at different spatial and temporal coordinates. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  9. 360-degree 3D transvaginal ultrasound system for high-dose-rate interstitial gynaecological brachytherapy needle guidance

    NASA Astrophysics Data System (ADS)

    Rodgers, Jessica R.; Surry, Kathleen; D'Souza, David; Leung, Eric; Fenster, Aaron

    2017-03-01

    Treatment for gynaecological cancers often includes brachytherapy; in particular, in high-dose-rate (HDR) interstitial brachytherapy, hollow needles are inserted into the tumour and surrounding area through a template in order to deliver the radiation dose. Currently, there is no standard modality for visualizing needles intra-operatively, despite the need for precise needle placement in order to deliver the optimal dose and avoid nearby organs, including the bladder and rectum. While three-dimensional (3D) transrectal ultrasound (TRUS) imaging has been proposed for 3D intra-operative needle guidance, anterior needles tend to be obscured by shadowing created by the template's vaginal cylinder. We have developed a 360-degree 3D transvaginal ultrasound (TVUS) system that uses a conventional two-dimensional side-fire TRUS probe rotated inside a hollow vaginal cylinder made from a sonolucent plastic (TPX). The system was validated using grid and sphere phantoms in order to test the geometric accuracy of the distance and volumetric measurements in the reconstructed image. To test the potential for visualizing needles, an agar phantom mimicking the geometry of the female pelvis was used. Needles were inserted into the phantom and then imaged using the 3D TVUS system. The needle trajectories and tip positions in the 3D TVUS scan were compared to their expected values and the needle tracks visualized in magnetic resonance images. Based on this initial study, 360-degree 3D TVUS imaging through a sonolucent vaginal cylinder is a feasible technique for intra-operatively visualizing needles during HDR interstitial gynaecological brachytherapy.

  10. [Visual perception abilities in children with reading disabilities].

    PubMed

    Werpup-Stüwe, Lina; Petermann, Franz

    2015-05-01

    Visual perceptual abilities are increasingly neglected in research on reading disabilities. This study measured the visual perceptual abilities of children with reading disabilities. The visual perceptual abilities of 35 children with specific reading disorder and 30 controls were compared using the German version of the Developmental Test of Visual Perception – Adolescent and Adult (DTVP-A). 11 % of the children with specific reading disorder showed clinically relevant performance on the DTVP-A. The perceptual abilities of the two groups differed significantly. No significant group differences remained after controlling for general IQ or the Perceptual Reasoning Index, but they did remain after controlling for the Verbal Comprehension, Working Memory, and Processing Speed Indices. The number of children with reading difficulties who also suffer from visual perceptual disorders has been underestimated. For this reason, visual perceptual abilities should always be tested when making a reading disorder diagnosis. IQ-test profiles of children suffering from both reading and visual perceptual disorders should be interpreted carefully.

  11. Data Fusion and Visualization with the OpenEarth Framework (OEF)

    NASA Astrophysics Data System (ADS)

    Nadeau, D. R.; Baru, C.; Fouch, M. J.; Crosby, C. J.

    2010-12-01

    Data fusion is an increasingly important problem to solve as we strive to integrate data from multiple sources and build better models of the complex processes operating at the Earth’s surface and its interior. These data are often large, multi-dimensional, and subject to differing conventions for file formats, data structures, coordinate spaces, units of measure, and metadata organization. When visualized, these data require differing, and often conflicting, conventions for visual representations, dimensionality, icons, color schemes, labeling, and interaction. These issues make the visualization of fused Earth science data particularly difficult. The OpenEarth Framework (OEF) is an open-source data fusion and visualization suite of software being developed at the San Diego Supercomputer Center at the University of California, San Diego. Funded by the NSF, the project is leveraging virtual globe technology from NASA’s WorldWind to create interactive 3D visualization tools that combine layered data from a variety of sources to create a holistic view of features at, above, and beneath the Earth’s surface. The OEF architecture is cross-platform, multi-threaded, modular, and based upon Java. The OEF’s modular approach yields a collection of compatible mix-and-match components for assembling custom applications. Available modules support file format handling, web service communications, data management, data filtering, user interaction, and 3D visualization. File parsers handle a variety of formal and de facto standard file formats. Each one imports data into a general-purpose data representation that supports multidimensional grids, topography, points, lines, polygons, images, and more. From there, these data may be manipulated, merged, filtered, reprojected, and visualized. Visualization features support conventional and new visualization techniques for looking at topography, tomography, maps, and feature geometry.
3D grid data such as seismic tomography may be sliced by multiple oriented cutting planes and isosurfaced to create 3D skins that trace feature boundaries within the data. Topography may be overlaid with satellite imagery along with data such as gravity and magnetics measurements. Multiple data sets may be visualized simultaneously using overlapping layers and a common 3D+time coordinate space. Data management within the OEF handles and hides the quirks of differing file formats, web protocols, storage structures, coordinate spaces, and metadata representations. Derived data are computed automatically to support interaction and visualization, while the original data are left unchanged in their original form. Data are cached for better memory and network efficiency, and all visualization is accelerated by the 3D graphics hardware found on today’s computers. The OpenEarth Framework project is currently prototyping the software for use in the visualization and integration of continental-scale geophysical data being produced by EarthScope-related research in the Western US. The OEF is providing researchers with new ways to display and interrogate their data and is anticipated to be a valuable tool for future EarthScope-related research.
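
    The slicing described above can be illustrated with a toy numpy grid (the scalar field and iso-level are invented for the example; a production isosurface would use marching cubes rather than this crude voxel-tolerance "skin"):

```python
import numpy as np

# Hypothetical tomography-like scalar field on a 64^3 grid
grid = np.fromfunction(
    lambda i, j, k: np.sin(i / 8) + np.cos(j / 8) + k / 64,
    (64, 64, 64),
)

# An axis-aligned cutting plane through the volume at depth index 32
slice_z = grid[:, :, 32]

# Voxels lying close to an iso-level, tracing a feature boundary "skin"
level = 1.0
skin = np.abs(grid - level) < 0.05
```

Oriented (non-axis-aligned) cutting planes, as in the OEF, would additionally require resampling the grid along an arbitrary plane normal.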

  12. 3DView: Space physics data visualizer

    NASA Astrophysics Data System (ADS)

    Génot, V.; Beigbeder, L.; Popescu, D.; Dufourg, N.; Gangloff, M.; Bouchemit, M.; Caussarieu, S.; Toniutti, J.-P.; Durand, J.; Modolo, R.; André, N.; Cecconi, B.; Jacquey, C.; Pitout, F.; Rouillard, A.; Pinto, R.; Erard, S.; Jourdane, N.; Leclercq, L.; Hess, S.; Khodachenko, M.; Al-Ubaidi, T.; Scherf, M.; Budnik, E.

    2018-04-01

    3DView creates visualizations of space physics data in their original 3D context. Time series, vectors, dynamic spectra, celestial body maps, magnetic field or flow lines, and 2D cuts in simulation cubes are among the variety of data representation enabled by 3DView. It offers direct connections to several large databases and uses VO standards; it also allows the user to upload data. 3DView's versatility covers a wide range of space physics contexts.

  13. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    ERIC Educational Resources Information Center

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  14. Correcting Distance Estimates by Interacting With Immersive Virtual Environments: Effects of Task and Available Sensory Information

    ERIC Educational Resources Information Center

    Waller, David; Richardson, Adam R.

    2008-01-01

    The tendency to underestimate egocentric distances in immersive virtual environments (VEs) is not well understood. However, previous research (A. R. Richardson & D. Waller, 2007) has demonstrated that a brief period of interaction with the VE prior to making distance judgments can effectively eliminate subsequent underestimation. Here the authors…

  15. Comparison of three-dimensional visualization techniques for depicting the scala vestibuli and scala tympani of the cochlea by using high-resolution MR imaging.

    PubMed

    Hans, P; Grant, A J; Laitt, R D; Ramsden, R T; Kassner, A; Jackson, A

    1999-08-01

    Cochlear implantation requires introduction of a stimulating electrode array into the scala vestibuli or scala tympani. Although these structures can be separately identified on many high-resolution scans, it is often difficult to ascertain whether these channels are patent throughout their length. The aim of this study was to determine whether an optimized combination of an imaging protocol and a visualization technique allows routine 3D rendering of the scala vestibuli and scala tympani. A submillimeter T2 fast spin-echo imaging sequence was designed to optimize the performance of 3D visualization methods. The spatial resolution was determined experimentally using primary images and 3D surface and volume renderings from eight healthy subjects. These data were used to develop the imaging sequence and to compare the quality and signal-to-noise dependency of four data visualization algorithms: maximum intensity projection, ray casting with transparent voxels, ray casting with opaque voxels, and isosurface rendering. The ability of these methods to produce 3D renderings of the scala tympani and scala vestibuli was also examined. The imaging technique was used in five patients with sensorineural deafness. Visualization techniques produced optimal results in combination with an isotropic volume imaging sequence. Clinicians preferred the isosurface-rendered images to other 3D visualizations. Both isosurface and ray casting displayed the scala vestibuli and scala tympani throughout their length. Abnormalities were shown in three patients, and in one of these, a focal occlusion of the scala tympani was confirmed at surgery. Three-dimensional images of the scala vestibuli and scala tympani can be routinely produced. The combination of an MR sequence optimized for use with isosurface rendering or ray-casting algorithms can produce 3D images with greater spatial resolution and anatomic detail than has been possible previously.
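
    Of the four visualization algorithms compared, maximum intensity projection is the simplest to sketch. A toy numpy example, with an invented bright channel standing in for a fluid-filled scala (values and geometry are not from the study):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical T2-weighted volume: dim background tissue...
volume = rng.normal(100.0, 5.0, size=(64, 64, 64))
# ...with a bright fluid-filled channel running through it
volume[30:34, 30:34, :] = 400.0

# Maximum intensity projection (MIP) along z: each output pixel keeps
# the brightest voxel encountered along its projection ray
mip = volume.max(axis=2)
```

Because MIP keeps only the single brightest voxel per ray, it discards depth ordering, which is one reason the clinicians in this study preferred isosurface renderings for tracing the scalae in 3D.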

  16. Perceptualization of geometry using intelligent haptic and visual sensing

    NASA Astrophysics Data System (ADS)

    Weng, Jianguang; Zhang, Hui

    2013-01-01

    We present a set of paradigms for investigating geometric structures using haptic and visual sensing. Our principal test cases include smoothly embedded shapes such as knotted curves in 3D and knotted surfaces in 4D, which contain massive self-intersections when projected one dimension lower. One can exploit a touch-responsive 3D interactive probe to haptically override this conflicting evidence in the rendered images, forcing continuity in the haptic representation to emphasize the true topology. We exploited predictive haptic guidance, a "computer-simulated hand" with supplementary force suggestion, to support intelligent exploration of geometric shapes, smoothing exploration and maximizing the probability of recognition. The cognitive load can be reduced further by enabling attention-driven visual sensing during haptic exploration. Our methods combine to reveal the full richness of the haptic exploration of geometric structures, and to overcome the limitations of traditional 4D visualization.

  17. Immersive Visual Data Analysis For Geoscience Using Commodity VR Hardware

    NASA Astrophysics Data System (ADS)

    Kreylos, O.; Kellogg, L. H.

    2017-12-01

    Immersive visualization using virtual reality (VR) display technology offers tremendous benefits for the visual analysis of complex three-dimensional data like those commonly obtained from geophysical and geological observations and models. Unlike "traditional" visualization, which has to project 3D data onto a 2D screen for display, VR can side-step this projection and display 3D data directly, in a pseudo-holographic (head-tracked stereoscopic) form, and does therefore not suffer the distortions of relative positions, sizes, distances, and angles that are inherent in 2D projection. As a result, researchers can apply their spatial reasoning skills to virtual data in the same way they can to real objects or environments. The UC Davis W.M. Keck Center for Active Visualization in the Earth Sciences (KeckCAVES, http://keckcaves.org) has been developing VR methods for data analysis since 2005, but the high cost of VR displays has been preventing large-scale deployment and adoption of KeckCAVES technology. The recent emergence of high-quality commodity VR, spearheaded by the Oculus Rift and HTC Vive, has fundamentally changed the field. With KeckCAVES' foundational VR operating system, Vrui, now running natively on the HTC Vive, all KeckCAVES visualization software, including 3D Visualizer, LiDAR Viewer, Crusta, Nanotech Construction Kit, and ProtoShop, is now available to small labs, single researchers, and even home users. LiDAR Viewer and Crusta have been used for rapid response to geologic events including earthquakes and landslides, to visualize the impacts of sea-level rise, to investigate reconstructed paleo-oceanographic masses, and for exploration of the surface of Mars. The Nanotech Construction Kit is being used to explore the phases of carbon in Earth's deep interior, while ProtoShop can be used to construct and investigate protein structures.

  18. Trans3D: a free tool for dynamical visualization of EEG activity transmission in the brain.

    PubMed

    Blinowski, Grzegorz; Kamiński, Maciej; Wawer, Dariusz

    2014-08-01

    The problem of functional connectivity in the brain is in the focus of attention nowadays, since it is crucial for understanding information processing in the brain. A large repertoire of connectivity measures has been devised, some of them capable of estimating time-varying directed connectivity. Hence, there is a need for a dedicated software tool for visualizing the propagation of electrical activity in the brain. To this aim, the Trans3D application was developed. It is an open access tool based on widely available libraries and supporting Windows XP/Vista/7(™), Linux and Mac environments. Trans3D can create animations of activity propagation between electrodes/sensors, which can be placed by the user on the scalp/cortex of a 3D model of the head. Various interactive graphic functions for manipulating and visualizing components of the 3D model and input data are available. An application of the Trans3D tool has helped to elucidate the dynamics of the phenomena of information processing in motor and cognitive tasks, which otherwise would have been very difficult to observe. Trans3D is available at: http://www.eeg.pl/. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. High-resolution gadolinium-enhanced 3D MRA of the infrapopliteal arteries. Lessons for improving bolus-chase peripheral MRA.

    PubMed

    Hood, Maureen N; Ho, Vincent B; Foo, Thomas K F; Marcos, Hani B; Hess, Sandra L; Choyke, Peter L

    2002-09-01

    Peripheral magnetic resonance angiography (MRA) is growing in use. However, methods of performing peripheral MRA vary widely and continue to be optimized, especially for improvement in illustration of infrapopliteal arteries. The main purpose of this project was to identify imaging factors that can improve arterial visualization in the lower leg using bolus chase peripheral MRA. Eighteen healthy adults were imaged on a 1.5T MR scanner. The calf was imaged using conventional three-station bolus chase three-dimensional (3D) MRA, two dimensional (2D) time-of-flight (TOF) MRA and single-station Gadolinium (Gd)-enhanced 3D MRA. Observer comparisons of vessel visualization, signal to noise ratios (SNR), contrast to noise ratios (CNR) and spatial resolution comparisons were performed. Arterial SNR and CNR were similar for all three techniques. However, arterial visualization was dramatically improved on dedicated, arterial-phase Gd-enhanced 3D MRA compared with the multi-station bolus chase MRA and 2D TOF MRA. This improvement was related to optimization of Gd-enhanced 3D MRA parameters (fast injection rate of 2 mL/sec, high spatial resolution imaging, the use of dedicated phased array coils, elliptical centric k-space sampling and accurate arterial phase timing for image acquisition). The visualization of the infrapopliteal arteries can be substantially improved in bolus chase peripheral MRA if voxel size, contrast delivery, and central k-space data acquisition for arterial enhancement are optimized. Improvements in peripheral MRA should be directed at these parameters.
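
    The SNR and CNR comparisons reported above follow the standard region-of-interest definitions; a sketch with invented ROI values (these numbers are illustrative, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical pixel samples from three ROIs in an MRA image (arbitrary units)
artery = rng.normal(900.0, 30.0, size=50)   # arterial lumen ROI
muscle = rng.normal(300.0, 30.0, size=50)   # background muscle ROI
noise_sd = float(np.std(rng.normal(0.0, 20.0, size=200)))  # air ROI std dev

snr = artery.mean() / noise_sd                     # signal-to-noise ratio
cnr = (artery.mean() - muscle.mean()) / noise_sd   # contrast-to-noise ratio
```

As the abstract notes, similar SNR/CNR across techniques can coexist with very different subjective vessel visualization, which is why observer scoring was reported alongside these metrics.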

  20. Discovering new methods of data fusion, visualization, and analysis in 3D immersive environments for hyperspectral and laser altimetry data

    NASA Astrophysics Data System (ADS)

    Moore, C. A.; Gertman, V.; Olsoy, P.; Mitchell, J.; Glenn, N. F.; Joshi, A.; Norpchen, D.; Shrestha, R.; Pernice, M.; Spaete, L.; Grover, S.; Whiting, E.; Lee, R.

    2011-12-01

    Immersive virtual reality environments such as the IQ-Station or CAVE° (Cave Automated Virtual Environment) offer new and exciting ways to visualize and explore scientific data and are powerful research and educational tools. Combining remote sensing data from a range of sensor platforms in immersive 3D environments can enhance the spectral, textural, spatial, and temporal attributes of the data, which enables scientists to interact and analyze the data in ways never before possible. Visualization and analysis of large remote sensing datasets in immersive environments requires software customization for integrating LiDAR point cloud data with hyperspectral raster imagery, the generation of quantitative tools for multidimensional analysis, and the development of methods to capture 3D visualizations for stereographic playback. This study uses hyperspectral and LiDAR data acquired over the China Hat geologic study area near Soda Springs, Idaho, USA. The data are fused into a 3D image cube for interactive data exploration and several methods of recording and playback are investigated that include: 1) creating and implementing a Virtual Reality User Interface (VRUI) patch configuration file to enable recording and playback of VRUI interactive sessions within the CAVE and 2) using the LiDAR and hyperspectral remote sensing data and GIS data to create an ArcScene 3D animated flyover, where left- and right-eye visuals are captured from two independent monitors for playback in a stereoscopic player. These visualizations can be used as outreach tools to demonstrate how integrated data and geotechnology techniques can help scientists see, explore, and more adequately comprehend scientific phenomena, both real and abstract.

  1. Investigating the role of chemical and physical processes on organic aerosol modelling with CAMx in the Po Valley during a winter episode

    NASA Astrophysics Data System (ADS)

    Meroni, A.; Pirovano, G.; Gilardoni, S.; Lonati, G.; Colombi, C.; Gianelle, V.; Paglione, M.; Poluzzi, V.; Riva, G. M.; Toppetti, A.

    2017-12-01

    Traditional aerosol mechanisms underestimate the observed organic aerosol concentration, mainly because of missing information on secondary organic aerosol (SOA) formation and processing. In this study we evaluate the chemistry and transport model CAMx during a one-month winter episode (February 2013) over a 5 km resolution domain covering the whole Po Valley (Northern Italy). This work aims at investigating the effects of chemical and physical atmospheric processing on modelling results and, in particular, at evaluating the CAMx sensitivity to organic aerosol (OA) modelling schemes: we compare the recent 1.5D-VBS algorithm (CAMx-VBS) with the traditional Odum 2-product model (CAMx-SOAP). Additionally, a thorough diagnostic analysis of the reproduction of meteorology, precursors and aerosol components was intended to point out strengths and weaknesses of the modelling system and guide its improvement. Firstly, we evaluated model performance for criteria PM concentrations. PM10 concentration was underestimated by CAMx-SOAP and even more so by CAMx-VBS, with the latter showing a bias between -4.7 and -7.1 μg m-3. PM2.5 model performance was somewhat better than for PM10, with a mean bias ranging between -0.5 μg m-3 at rural sites and -5.5 μg m-3 at urban and suburban sites. CAMx performance for OA was clearly worse than for the other PM compounds (negative bias between -40% and -75%). Comparison of model results with OA sources identified by PMF analysis shows that the VBS scheme underestimates freshly emitted organic aerosol while SOAP overestimates it. The VBS scheme correctly reproduces biomass burning (BBOA) contributions to primary OA (POA) concentrations. In contrast, VBS slightly underestimates the contribution from fossil-fuel combustion (HOA), indicating that POA emissions related to road transport are either underestimated or assigned to higher volatility classes.
The VBS scheme under-predicts SOA too, but to a lesser extent than CAMx-SOAP. SOA underestimation can be related to corresponding underestimation of either aging processes or precursor emissions. This indicates that improvements in the emission inventories for semi- and intermediate-volatility organic compounds are needed for further progress. Finally, the comparison between modelled and observed SOA sources points out the urgency of including OA processing in the particle water phase in SOA formation mechanisms, to reconcile model results and observations.

  2. Visualizing Terrestrial and Aquatic Systems in 3D

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  3. Advanced Data Visualization in Astrophysics: The X3D Pathway

    NASA Astrophysics Data System (ADS)

    Vogt, Frédéric P. A.; Owen, Chris I.; Verdes-Montenegro, Lourdes; Borthakur, Sanchayeeta

    2016-02-01

    Most modern astrophysical data sets are multi-dimensional; a characteristic that can nowadays generally be conserved and exploited scientifically during the data reduction/simulation and analysis cascades. However, the same multi-dimensional data sets are systematically cropped, sliced, and/or projected to printable two-dimensional diagrams at the publication stage. In this article, we introduce the concept of the “X3D pathway” as a means of simplifying and easing the access to data visualization and publication via three-dimensional (3D) diagrams. The X3D pathway exploits the facts that (1) the X3D 3D file format lies at the center of a product tree that includes interactive HTML documents, 3D printing, and high-end animations, and (2) all high-impact-factor and peer-reviewed journals in astrophysics are now published (some exclusively) online. We argue that the X3D standard is an ideal vector for sharing multi-dimensional data sets because it provides direct access to a range of different data visualization techniques, is fully open source, and is a well-defined standard from the International Organization for Standardization. Unlike other earlier propositions to publish multi-dimensional data sets via 3D diagrams, the X3D pathway is not tied to specific software (prone to rapid and unexpected evolution), but instead is compatible with a range of open-source software already in use by our community. The interactive HTML branch of the X3D pathway is also actively supported by leading peer-reviewed journals in the field of astrophysics. Finally, this article provides interested readers with a detailed set of practical astrophysical examples designed to act as a stepping stone toward the implementation of the X3D pathway for any other data set.
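
    To show the file format at the center of the proposed product tree, here is a minimal X3D scene built and validated in Python (this particular scene, a single red sphere, is an invented example rather than one from the article):

```python
import xml.etree.ElementTree as ET

# Minimal X3D document: Scene > Shape > (Appearance, geometry node)
x3d_scene = """<?xml version="1.0" encoding="UTF-8"?>
<X3D profile="Interchange" version="3.3">
  <Scene>
    <Shape>
      <Appearance><Material diffuseColor="1 0 0"/></Appearance>
      <Sphere radius="1.0"/>
    </Shape>
  </Scene>
</X3D>
"""

# X3D is plain XML, so any XML parser can check well-formedness
root = ET.fromstring(x3d_scene.encode("utf-8"))
```

Because X3D is an ISO-standardized XML encoding, the same file can feed an interactive HTML viewer (e.g. via X3DOM), a 3D printing pipeline, or an animation tool, which is the crux of the "pathway" argument.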

  4. Novel Visualization of Large Health Related Data Sets

    DTIC Science & Technology

    2015-03-01

    …Health Record Data: A Systematic Review (McPeek Hinz E, Borland D, Shah H, West V, Hammond WE); Temporal Visualization of Diabetes Mellitus via Hemoglobin A1c Levels (Borland D, McPeek Hinz E, West V, Hammond WE); Demonstration of Temporal Visualization of Diabetes Mellitus via Hemoglobin A1c Levels; Multivariate Visualization of System-Wide National Health Service Data Using Radial Coordinates. (Copies in Appendix.)

  5. Blindness and partial sight in an elderly population.

    PubMed Central

    Gibson, J M; Lavery, J R; Rosenthal, A R

    1986-01-01

    A cross sectional, prevalence survey of eye disease in the population over 75 years old of Melton Mowbray has been used to examine the accuracy and completeness of the Blind and Partially Sighted Registers. The Blind Register had high sensitivity and specificity but was found to underestimate the prevalence of blindness by a factor of 1.1. The Partially Sighted Register had high specificity, but the sensitivity was only 50% and it underestimated the prevalence of partial sight by a factor of 1.5. Seven persons eligible for registration, but previously not registered, were found, two as blind and five as partially sighted. This represented 21% of the registrable visually impaired population. PMID:3756128
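
    The register-accuracy figures above follow the usual 2×2 screening definitions; a sketch with invented counts (the abstract reports the derived rates, not the underlying cell counts):

```python
# Hypothetical 2x2 counts: register status vs. survey-confirmed blindness
tp, fn = 20, 2      # blind: on the register / missed by the register
fp, tn = 1, 450     # not blind: wrongly registered / correctly absent

sensitivity = tp / (tp + fn)   # fraction of true cases the register catches
specificity = tn / (tn + fp)   # fraction of non-cases correctly absent

# Underestimation factor: true prevalence relative to registered prevalence
underestimate = (tp + fn) / (tp + fp)
```

A register with high specificity but only moderate sensitivity, as found here for partial sight, inflates the underestimation factor because the denominator (registered cases) misses half the true cases.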

  6. Visual fatigue modeling for stereoscopic video shot based on camera motion

    NASA Astrophysics Data System (ADS)

    Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing

    2014-11-01

    As three-dimensional television (3-DTV) and 3D movies become popular, visual discomfort limits further applications of 3D display technology. Visual discomfort from stereoscopic video arises from conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale and comfort zone are analyzed. According to the human visual system (HVS), viewers only need to converge their eyes on specific objects when cameras and background are static; relative motion must be considered for other camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The degree of visual fatigue is predicted using multiple linear regression combined with subjective evaluation. Consequently, each factor reflects the characteristics of the scene, and a total visual fatigue score can be computed with the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
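
    The fatigue score is fitted by multiple linear regression against subjective ratings. A sketch of that fitting step with invented per-shot features and weights (the paper's actual factors and coefficients are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
# Hypothetical per-shot features: spatial structure, motion scale,
# comfort-zone violation (all normalized to [0, 1])
X = rng.uniform(0.0, 1.0, size=(n, 3))
true_w = np.array([1.5, 2.0, 3.0])          # invented "ground truth" weights
# Subjective fatigue ratings with rater noise
y = X @ true_w + 0.5 + rng.normal(0.0, 0.05, size=n)

# Multiple linear regression: fatigue ~ w . features + b
A = np.hstack([X, np.ones((n, 1))])          # append intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
```

With enough rated shots, the fitted `coef` recovers the per-factor weights, and `predicted` is the total fatigue score the abstract describes.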

  7. Visual Search Load Effects on Age-Related Cognitive Decline: Evidence From the Yakumo Longitudinal Study.

    PubMed

    Hatta, Takeshi; Kato, Kimiko; Hotta, Chie; Higashikawa, Mari; Iwahara, Akihiko; Hatta, Taketoshi; Hatta, Junko; Fujiwara, Kazumi; Nagahara, Naoko; Ito, Emi; Hamajima, Nobuyuki

    2017-01-01

    The validity of Bucur and Madden's (2010) proposal that age-related decline is particularly pronounced in executive function measures rather than in elementary perceptual speed measures was examined via the Yakumo Study longitudinal database. Their proposal suggests that cognitive load differentially affects cognitive abilities in older adults. To address it, linear regression coefficients of 104 participants were calculated individually for the digit cancellation task 1 (D-CAT1), where participants search for a given single digit, and the D-CAT3, where they search for 3 digits simultaneously. It can therefore be conjectured that the D-CAT1 primarily represents elementary perceptual speed and a low visual search load, whereas the D-CAT3 primarily represents executive function and a high visual search load. Regression coefficients from age 65 to 75 for the D-CAT3 showed a significantly steeper decline than those for the D-CAT1, and a large number of participants showed this tendency. These results support the proposal by Bucur and Madden (2010) and suggest that the degree of cognitive load affects age-related cognitive decline.
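
The per-participant analysis, fitting a regression of task score on age for each task and comparing the slopes, can be sketched like this; the scores below are invented for illustration only:

```python
import numpy as np

# Hypothetical yearly scores for one participant, ages 65-75.
ages = np.arange(65, 76)
dcat1 = np.array([42, 41, 41, 40, 40, 39, 39, 38, 38, 37, 37], float)  # 1-digit search
dcat3 = np.array([30, 29, 27, 26, 25, 23, 22, 21, 19, 18, 16], float)  # 3-digit search

# Linear regression coefficient for each task: score change per year of age.
slope1 = np.polyfit(ages, dcat1, 1)[0]
slope3 = np.polyfit(ages, dcat3, 1)[0]
```

A steeper (more negative) slope for the D-CAT3 than the D-CAT1 is the pattern the study reports for most participants.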

  8. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990

  9. From Vesalius to Virtual Reality: How Embodied Cognition Facilitates the Visualization of Anatomy

    ERIC Educational Resources Information Center

    Jang, Susan

    2010-01-01

    This study examines the facilitative effects of embodiment of a complex internal anatomical structure through three-dimensional ("3-D") interactivity in a virtual reality ("VR") program. Since Shepard and Metzler's influential 1971 study, it has been known that 3-D objects (e.g., multiple-armed cube or external body parts) are visually and…

  10. Proteopedia: 3D Visualization and Annotation of Transcription Factor-DNA Readout Modes

    ERIC Educational Resources Information Center

    Dantas Machado, Ana Carolina; Saleebyan, Skyler B.; Holmes, Bailey T.; Karelina, Maria; Tam, Julia; Kim, Sharon Y.; Kim, Keziah H.; Dror, Iris; Hodis, Eran; Martz, Eric; Compeau, Patricia A.; Rohs, Remo

    2012-01-01

    3D visualization assists in identifying diverse mechanisms of protein-DNA recognition that can be observed for transcription factors and other DNA binding proteins. We used Proteopedia to illustrate transcription factor-DNA readout modes with a focus on DNA shape, which can be a function of either nucleotide sequence (Hox proteins) or base pairing…

  11. Web-Based Interactive 3D Visualization as a Tool for Improved Anatomy Learning

    ERIC Educational Resources Information Center

    Petersson, Helge; Sinkvist, David; Wang, Chunliang; Smedby, Orjan

    2009-01-01

    Despite a long tradition, conventional anatomy education based on dissection is declining. This study tested a new virtual reality (VR) technique for anatomy learning based on virtual contrast injection. The aim was to assess whether students value this new three-dimensional (3D) visualization method as a learning tool and what value they gain…

  12. Visualization of postoperative anterior cruciate ligament reconstruction bone tunnels

    PubMed Central

    2011-01-01

    Background and purpose Non-anatomic bone tunnel placement is the most common cause of a failed ACL reconstruction. Accurate and reproducible methods to visualize and document bone tunnel placement are therefore important. We evaluated the reliability of standard radiographs, CT scans, and a 3-dimensional (3D) virtual reality (VR) approach in visualizing and measuring ACL reconstruction bone tunnel placement. Methods 50 consecutive patients who underwent single-bundle ACL reconstructions were evaluated postoperatively by standard radiographs, CT scans, and 3D VR images. Tibial and femoral tunnel positions were measured by 2 observers using the traditional methods of Amis, Aglietti, Hoser, Stäubli, and the method of Benereau for the VR approach. Results The tunnel was visualized in 50–82% of the standard radiographs and in 100% of the CT scans and 3D VR images. Using the intraclass correlation coefficient (ICC), the inter- and intraobserver agreement was between 0.39 and 0.83 for the standard femoral and tibial radiographs. CT scans showed an ICC range of 0.49–0.76 for the inter- and intraobserver agreement. The agreement in 3D VR was almost perfect, with an ICC of 0.83 for the femur and 0.95 for the tibia. Interpretation CT scans and 3D VR images are more reliable in assessing postoperative bone tunnel placement following ACL reconstruction than standard radiographs. PMID:21999625
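
The agreement statistic used here, the intraclass correlation coefficient, is computed from a subjects-by-raters table of measurements. A minimal one-way random-effects ICC(1,1) sketch follows; note this is one of several ICC variants, and the study does not state which form it used:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1).
    ratings: (n_subjects, k_raters) array of measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    # Between-subject and within-subject mean squares (one-way ANOVA).
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Perfect inter-observer agreement gives an ICC of 1; the 0.83 (femur) and 0.95 (tibia) values for the VR method correspond to very low within-subject variance.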

  13. Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer

    PubMed Central

    Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki

    2007-01-01

    A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere, then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources. PMID:18974802
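
The core layout idea, placing each vertex of the bipartite graph on the surface of a sphere, can be illustrated by radially normalizing embedding vectors. This is a deliberate simplification: the actual SE algorithm optimizes the layout from the occurrence relationships, whereas here the input vectors are arbitrary:

```python
import numpy as np

def embed_on_sphere(vectors, radius=1.0):
    """Project item-embedding vectors onto a sphere of the given radius
    by radial normalization (zero vectors are left at the origin)."""
    norms = np.linalg.norm(vectors, axis=1, keepdims=True)
    return radius * vectors / np.where(norms > 0, norms, 1.0)

rng = np.random.default_rng(0)
points = embed_on_sphere(rng.standard_normal((20, 3)))  # 20 items in 3-D
```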

  14. Visualization of Documents and Concepts in Neuroinformatics with the 3D-SE Viewer.

    PubMed

    Naud, Antoine; Usui, Shiro; Ueda, Naonori; Taniguchi, Tatsuki

    2007-01-01

    A new interactive visualization tool is proposed for mining text data from various fields of neuroscience. Applications to several text datasets are presented to demonstrate the capability of the proposed interactive tool to visualize complex relationships between pairs of lexical entities (with some semantic contents) such as terms, keywords, posters, or papers' abstracts. Implemented as a Java applet, this tool is based on the spherical embedding (SE) algorithm, which was designed for the visualization of bipartite graphs. Items such as words and documents are linked on the basis of occurrence relationships, which can be represented in a bipartite graph. These items are visualized by embedding the vertices of the bipartite graph on spheres in a three-dimensional (3-D) space. The main advantage of the proposed visualization tool is that 3-D layouts can convey more information than planar or linear displays of items or graphs. Different kinds of information extracted from texts, such as keywords, indexing terms, or topics are visualized, allowing interactive browsing of various fields of research featured by keywords, topics, or research teams. A typical use of the 3D-SE viewer is quick browsing of topics displayed on a sphere, then selecting one or several item(s) displays links to related terms on another sphere representing, e.g., documents or abstracts, and provides direct online access to the document source in a database, such as the Visiome Platform or the SfN Annual Meeting. Developed as a Java applet, it operates as a tool on top of existing resources.

  15. SCEC-VDO: A New 3-Dimensional Visualization and Movie Making Software for Earth Science Data

    NASA Astrophysics Data System (ADS)

    Milner, K. R.; Sanskriti, F.; Yu, J.; Callaghan, S.; Maechling, P. J.; Jordan, T. H.

    2016-12-01

    Researchers and undergraduate interns at the Southern California Earthquake Center (SCEC) have created a new 3-dimensional (3D) visualization software tool called SCEC Virtual Display of Objects (SCEC-VDO). SCEC-VDO is written in Java and uses the Visualization Toolkit (VTK) backend to render 3D content. SCEC-VDO offers advantages over existing 3D visualization software for viewing georeferenced data beneath the Earth's surface. Many popular visualization packages, such as Google Earth, restrict the user to views of the Earth from above, obstructing views of geological features such as faults and earthquake hypocenters at depth. SCEC-VDO allows the user to view data both above and below the Earth's surface at any angle. It includes tools for viewing global earthquakes from the U.S. Geological Survey, faults from the SCEC Community Fault Model, and results from the latest SCEC models of earthquake hazards in California including UCERF3 and RSQSim. Its object-oriented plugin architecture allows for the easy integration of new regional and global datasets, regardless of the science domain. SCEC-VDO also features rich animation capabilities, allowing users to build a timeline with keyframes of camera position and displayed data. The software is built with the concept of statefulness, allowing for reproducibility and collaboration using an xml file. A prior version of SCEC-VDO, which began development in 2005 under the SCEC Undergraduate Studies in Earthquake Information Technology internship, used the now unsupported Java3D library. Replacing Java3D with the widely supported and actively developed VTK libraries not only ensures that SCEC-VDO can continue to function for years to come, but allows for the export of 3D scenes to web viewers and popular software such as Paraview. SCEC-VDO runs on all recent 64-bit Windows, Mac OS X, and Linux systems with Java 8 or later. 
More information, including downloads, tutorials, and example movies created fully within SCEC-VDO is available here: http://scecvdo.usc.edu

  16. Radiological assessment of breast density by visual classification (BI-RADS) compared to automated volumetric digital software (Quantra): implications for clinical practice.

    PubMed

    Regini, Elisa; Mariscotti, Giovanna; Durando, Manuela; Ghione, Gianluca; Luparia, Andrea; Campanino, Pier Paolo; Bianchi, Caterina Chiara; Bergamasco, Laura; Fonio, Paolo; Gandini, Giovanni

    2014-10-01

    This study was done to assess breast density on digital mammography and digital breast tomosynthesis according to the visual Breast Imaging Reporting and Data System (BI-RADS) classification, to compare visual assessment with the Quantra software for automated density measurement, and to establish the role of the software in clinical practice. We analysed 200 digital mammograms performed in 2D and 3D modality, 100 of which were positive for breast cancer and 100 negative. Radiological density was assessed with the BI-RADS classification; a Quantra density cut-off value was sought on the 2D images only to discriminate between BI-RADS categories 1-2 and 3-4. Breast density was correlated with age, use of hormone therapy, and increased risk of disease. The agreement between the 2D and 3D assessments of BI-RADS density was high (K 0.96). A cut-off value of 21% best discriminates between BI-RADS categories 1-2 and 3-4. Breast density was negatively correlated with age (r = -0.44) and positively with use of hormone therapy (p = 0.0004). Quantra density was higher in breasts with cancer than in healthy breasts. There is no clear difference between the visual assessments of density on 2D and 3D images. Use of the automated system requires the adoption of a cut-off value (set at 21%) to effectively discriminate BI-RADS 1-2 from 3-4, and could be useful in clinical practice.
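
Applying a fixed volumetric-density cut-off to separate BI-RADS 1-2 from 3-4 amounts to simple thresholding, after which sensitivity and specificity can be tallied. Only the 21% cut-off comes from the study; the density values and categories below are invented:

```python
import numpy as np

# Hypothetical Quantra volumetric density (%) with visual BI-RADS category.
density = np.array([12.0, 18.5, 19.0, 23.0, 28.0, 35.0])
birads  = np.array([1,    2,    3,    3,    3,    4])

CUTOFF = 21.0  # study's reported threshold separating BI-RADS 1-2 from 3-4

dense_pred = density > CUTOFF   # predicted "dense" (BI-RADS 3-4)
dense_true = birads >= 3        # visually assessed "dense"

sensitivity = (dense_pred & dense_true).sum() / dense_true.sum()
specificity = (~dense_pred & ~dense_true).sum() / (~dense_true).sum()
```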

  17. Evaluation of methods to assess physical activity in free-living conditions.

    PubMed

    Leenders, N Y; Sherman, W M; Nagaraja, H N; Kien, C L

    2001-07-01

    The purpose of this study was to compare different methods of measuring physical activity (PA) in women against the doubly labeled water (DLW) method. Thirteen subjects participated in a 7-d protocol during which total daily energy expenditure (TDEE) was measured with DLW. Body composition, basal metabolic rate (BMR), and peak oxygen consumption were also measured. Physical activity-related energy expenditure (PAEE) was then calculated by subtracting measured BMR and the estimated thermic effect of food from TDEE. Simultaneously, over the 7 d, PA was assessed via a 7-d Physical Activity Recall questionnaire (PAR), and subjects wore, secured at the waist, a Tritrac-R3D (Madison, WI), a Computer Science Application Inc. activity monitor (CSA; Shalimar, FL), and a Yamax Digi Walker-500 (Tokyo, Japan). Pearson product-moment correlations were calculated to determine the relationships among the different methods for estimating PAEE. Paired t-tests with appropriate adjustments were used to compare the different methods with DLW-PAEE. There was no significant difference between PAEE determined from PAR and DLW. The differences between the two methods ranged from -633 to 280 kcal.d(-1). Compared with DLW, PAEE determined from CSA, Tritrac, and Yamax was significantly underestimated by 59% (-495 kcal.d(-1)), 35% (-320 kcal.d(-1)) and 59% (-497 kcal.d(-1)), respectively. VO2peak explained 43% of the variation in DLW-PAEE. Although the group average for PAR-PAEE agreed with DLW-PAEE, the methods differed for individual subjects. PAEE determined by the Tritrac, CSA, and Yamax significantly underestimated free-living PAEE in women.
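
The PAEE derivation above is plain arithmetic: TDEE minus measured BMR minus an estimated thermic effect of food. A sketch follows; the 10% TEF fraction is a common convention and the kcal figures are illustrative, not the study's individual data:

```python
def paee(tdee_kcal, bmr_kcal, tef_fraction=0.10):
    """Physical activity-related energy expenditure: TDEE minus measured
    BMR minus an estimated thermic effect of food (assumed 10% of TDEE)."""
    return tdee_kcal - bmr_kcal - tef_fraction * tdee_kcal

# Illustrative example of a device underestimating the DLW-derived value.
dlw = paee(2400, 1350)            # 2400 - 1350 - 240 = 810 kcal/day
device = dlw - 495                # CSA's mean underestimate was -495 kcal/day
underestimate_pct = 100 * (dlw - device) / dlw
```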

  18. Cytoscape tools for the web age: D3.js and Cytoscape.js exporters

    PubMed Central

    Ono, Keiichiro; Demchak, Barry; Ideker, Trey

    2014-01-01

    In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to popular JavaScript library readable formats. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as basis for complex data visualization applications for bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity enables researchers more flexibility than with static images, thereby greatly improving the quality of insights researchers can gain from them. PMID:25520778
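
Cytoscape.js consumes networks as JSON "elements": node and edge objects, each wrapping a `data` dictionary. An exporter therefore mostly serializes graph structure into that shape. A minimal Python sketch of the idea (not the actual Java exporter code, which also carries table data and styles):

```python
import json

def to_cytoscape_js(nodes, edges):
    """Serialize a node list and (source, target) edge list into the
    Cytoscape.js elements format: objects wrapping a 'data' dictionary."""
    elements = [{"data": {"id": n}} for n in nodes]
    elements += [
        {"data": {"id": f"{s}-{t}", "source": s, "target": t}}
        for s, t in edges
    ]
    return json.dumps({"elements": elements}, indent=2)

network_json = to_cytoscape_js(["a", "b", "c"], [("a", "b"), ("b", "c")])
```

The resulting string can be passed directly to a Cytoscape.js instance in a browser-based viewer.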

  19. Cytoscape tools for the web age: D3.js and Cytoscape.js exporters.

    PubMed

    Ono, Keiichiro; Demchak, Barry; Ideker, Trey

    2014-01-01

    In this paper we present new data export modules for Cytoscape 3 that can generate network files for Cytoscape.js and D3.js. Cytoscape.js exporter is implemented as a core feature of Cytoscape 3, and D3.js exporter is available as a Cytoscape 3 app. These modules enable users to seamlessly export network and table data sets generated in Cytoscape to popular JavaScript library readable formats. In addition, we implemented template web applications for browser-based interactive network visualization that can be used as basis for complex data visualization applications for bioinformatics research. Example web applications created with these tools demonstrate how Cytoscape works in modern data visualization workflows built with traditional desktop tools and emerging web-based technologies. This interactivity enables researchers more flexibility than with static images, thereby greatly improving the quality of insights researchers can gain from them.

  20. Development of a new software for analyzing 3-D fracture network

    NASA Astrophysics Data System (ADS)

    Um, Jeong-Gi; Noh, Young-Hwan; Choi, Yosoon

    2014-05-01

    A new software package for analyzing fracture networks in 3-D is presented. Recently, we completed the package based on information given at EGU2013. The software consists of several modules that handle management of borehole data, stochastic modelling of fracture networks, construction of the analysis domain, visualization of fracture geometry in 3-D, calculation of equivalent pipes and production of cross-section diagrams. Intel Parallel Studio XE 2013, Visual Studio.NET 2010 and the open-source VTK library were utilized as development tools to efficiently implement the modules and the graphical user interface of the software. A case study was performed to analyze the 3-D fracture network system of the Upper Devonian Grosmont Formation in Alberta, Canada. The results suggest that the developed software is effective in modelling and visualizing 3-D fracture network systems, and can provide useful information for tackling geomechanical problems related to the strength, deformability and hydraulic behaviour of fractured rock masses. This presentation describes the concept and details of the development and implementation of the software.

  1. Augmented reality three-dimensional object visualization and recognition with axially distributed sensing.

    PubMed

    Markman, Adam; Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-01-15

    An augmented reality (AR) smartglass display combines real-world scenes with digital information enabling the rapid growth of AR-based applications. We present an augmented reality-based approach for three-dimensional (3D) optical visualization and object recognition using axially distributed sensing (ADS). For object recognition, the 3D scene is reconstructed, and feature extraction is performed by calculating the histogram of oriented gradients (HOG) of a sliding window. A support vector machine (SVM) is then used for classification. Once an object has been identified, the 3D reconstructed scene with the detected object is optically displayed in the smartglasses allowing the user to see the object, remove partial occlusions of the object, and provide critical information about the object such as 3D coordinates, which are not possible with conventional AR devices. To the best of our knowledge, this is the first report on combining axially distributed sensing with 3D object visualization and recognition for applications to augmented reality. The proposed approach can have benefits for many applications, including medical, military, transportation, and manufacturing.
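
The recognition pipeline computes HOG features over a sliding window and classifies them with an SVM. The windowing and a deliberately simplified HOG stage (a single magnitude-weighted orientation histogram, without the cell/block normalization of the full descriptor) can be sketched as:

```python
import numpy as np

def hog_descriptor(patch, n_bins=9):
    """Simplified HOG for one grayscale patch: an L2-normalized,
    magnitude-weighted histogram of unsigned gradient orientations."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    hist, _ = np.histogram(ang, bins=n_bins, range=(0, 180), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def sliding_windows(image, size, step):
    """Yield (row, col, patch) for each window position over the image."""
    h, w = image.shape
    for r in range(0, h - size + 1, step):
        for c in range(0, w - size + 1, step):
            yield r, c, image[r:r + size, c:c + size]
```

Each window's descriptor would then be scored by a trained SVM, with positive windows marking detected objects.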

  2. Cognitive Aspects of Collaboration in 3d Virtual Environments

    NASA Astrophysics Data System (ADS)

    Juřík, V.; Herman, L.; Kubíček, P.; Stachoň, Z.; Šašinka, Č.

    2016-06-01

    Human-computer interaction has entered the 3D era. The most important models representing spatial information, maps, are being transferred into 3D versions with respect to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because content can be modified dynamically and multiple users can cooperate on tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is further emphasized by the possibility of measuring operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to phenomena such as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  3. Surgical planning for radical prostatectomies using three-dimensional visualization and a virtual reality display system

    NASA Astrophysics Data System (ADS)

    Kay, Paul A.; Robb, Richard A.; King, Bernard F.; Myers, R. P.; Camp, Jon J.

    1995-04-01

    Thousands of radical prostatectomies for prostate cancer are performed each year. Radical prostatectomy is a challenging procedure due to anatomical variability and the adjacency of critical structures, including the external urinary sphincter and neurovascular bundles that subserve erectile function. Because of this, there are significant risks of urinary incontinence and impotence following this procedure. Preoperative interaction with three-dimensional visualization of the important anatomical structures might allow the surgeon to understand important individual anatomical relationships of patients. Such understanding might decrease the rate of morbidities, especially for surgeons in training. Patient specific anatomic data can be obtained from preoperative 3D MRI diagnostic imaging examinations of the prostate gland utilizing endorectal coils and phased array multicoils. The volumes of the important structures can then be segmented using interactive image editing tools and then displayed using 3-D surface rendering algorithms on standard work stations. Anatomic relationships can be visualized using surface displays and 3-D colorwash and transparency to allow internal visualization of hidden structures. Preoperatively a surgeon and radiologist can interactively manipulate the 3-D visualizations. Important anatomical relationships can better be visualized and used to plan the surgery. Postoperatively the 3-D displays can be compared to actual surgical experience and pathologic data. Patients can then be followed to assess the incidence of morbidities. More advanced approaches to visualize these anatomical structures in support of surgical planning will be implemented on virtual reality (VR) display systems. Such realistic displays are `immersive,' and allow surgeons to simultaneously see and manipulate the anatomy, to plan the procedure and to rehearse it in a realistic way. 
Ultimately the VR systems will be implemented in the operating room (OR) to assist the surgeon in conducting the surgery. Such an implementation will bring to the OR all of the pre-surgical planning data and rehearsal experience in synchrony with the actual patient and operation to optimize the effectiveness and outcome of the procedure.

  4. Estimation of 3D shape from image orientations.

    PubMed

    Fleming, Roland W; Holtmann-Rice, Daniel; Bülthoff, Heinrich H

    2011-12-20

    One of the main functions of vision is to estimate the 3D shape of objects in our environment. Many different visual cues, such as stereopsis, motion parallax, and shading, are thought to be involved. One important cue that remains poorly understood comes from surface texture markings. When a textured surface is slanted in 3D relative to the observer, the surface patterns appear compressed in the retinal image, providing potentially important information about 3D shape. What is not known, however, is how the brain actually measures this information from the retinal image. Here, we explain how the key information could be extracted by populations of cells tuned to different orientations and spatial frequencies, like those found in the primary visual cortex. To test this theory, we created stimuli that selectively stimulate such cell populations, by "smearing" (filtering) images of 2D random noise into specific oriented patterns. We find that the resulting patterns appear vividly 3D, and that increasing the strength of the orientation signals progressively increases the sense of 3D shape, even though the filtering we apply is physically inconsistent with what would occur with a real object. This finding suggests we have isolated key mechanisms used by the brain to estimate shape from texture. Crucially, we also find that adapting the visual system's orientation detectors to orthogonal patterns causes unoriented random noise to look like a specific 3D shape. Together these findings demonstrate a crucial role of orientation detectors in the perception of 3D shape.
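
The stimulus construction, "smearing" 2D random noise into an oriented pattern, can be approximated by convolving each image line with a 1D Gaussian along the chosen axis. This is a crude stand-in for the paper's oriented filtering, which shapes the full 2D frequency spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal((128, 128))

def smear(img, sigma=4.0, axis=1):
    """Smear 2D noise along one axis by convolving every line with a
    1D Gaussian, producing an orientation-biased pattern."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.apply_along_axis(
        lambda line: np.convolve(line, kernel, mode="same"), axis, img)

oriented = smear(noise)   # horizontally oriented pattern from isotropic noise
```

The result varies slowly along the smeared axis and rapidly across it, which is the orientation signal the study manipulates.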

  5. Exploring the Synergies between the Object Oriented Paradigm and Mathematics: A Java Led Approach

    ERIC Educational Resources Information Center

    Conrad, Marc; French, Tim

    2004-01-01

    While the object oriented paradigm and its instantiation within programming languages such as Java has become a ubiquitous part of both the commercial and educational landscapes, its usage as a visualization technique within mathematics undergraduate programmes of study has perhaps been somewhat underestimated. By regarding the object oriented…

  6. Clinical evaluation of a new pupil independent diffractive multifocal intraocular lens with a +2.75 D near addition: a European multicentre study.

    PubMed

    Kretz, Florian T A; Gerl, Matthias; Gerl, Ralf; Müller, Matthias; Auffarth, Gerd U

    2015-12-01

    To evaluate the clinical outcomes after cataract surgery with implantation of a new diffractive multifocal intraocular lens (IOL) with a lower near addition (+2.75 D), 143 eyes of 85 patients aged between 40 and 83 years that underwent cataract surgery with implantation of the multifocal IOL (MIOL) Tecnis ZKB00 (Abbott Medical Optics, Santa Ana, California, USA) were evaluated. Changes in uncorrected (uncorrected distance visual acuity, uncorrected intermediate visual acuity, uncorrected near visual acuity) and corrected (corrected distance visual acuity, corrected near visual acuity) logMAR distance, intermediate and near visual acuity, as well as manifest refraction, were evaluated during a 3-month follow-up. Additionally, patients were asked about photic phenomena and spectacle dependence. Postoperative spherical equivalent was within ±0.50 D and ±1.00 D of emmetropia in 78.1% and 98.4% of eyes, respectively. Postoperative mean monocular uncorrected distance, near and intermediate visual acuity was 0.20 logMAR or better in 73.7%, 81.1% and 83.9% of eyes, respectively. All eyes achieved monocular corrected distance visual acuity of 0.30 logMAR or better. All patients reported being at least moderately happy with the outcomes of the surgery. Only 15.3% of patients required the use of spectacles for some daily activities postoperatively. The introduction of low-add MIOLs follows a trend toward increasing intermediate visual acuity. In this study, a near addition of +2.75 D still achieved satisfying near results and led to high patient satisfaction for intermediate visual acuity. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
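
The spherical equivalent used for the refractive outcome is the sphere plus half the cylinder, and the "within ±0.50 D of emmetropia" figure is then a simple tally. The refraction values below are invented for illustration:

```python
def spherical_equivalent(sphere_d, cylinder_d):
    """Spherical equivalent in dioptres: sphere + cylinder / 2."""
    return sphere_d + cylinder_d / 2.0

def within_emmetropia(se_values, tol):
    """Fraction of eyes whose spherical equivalent lies within +/- tol D."""
    return sum(abs(se) <= tol for se in se_values) / len(se_values)

# Hypothetical postoperative refractions as (sphere, cylinder) in dioptres.
refractions = [(0.25, -0.50), (0.0, -0.25), (0.75, -0.50), (-0.25, 0.0)]
ses = [spherical_equivalent(s, c) for s, c in refractions]
```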

  7. A novel three-dimensional tool for teaching human neuroanatomy.

    PubMed

    Estevez, Maureen E; Lindgren, Kristen A; Bergethon, Peter R

    2010-01-01

    Three-dimensional (3D) visualization of neuroanatomy can be challenging for medical students. This knowledge is essential in order for students to correlate cross-sectional neuroanatomy and whole brain specimens within neuroscience curricula and to interpret clinical and radiological information as clinicians or researchers. This study implemented and evaluated a new tool for teaching 3D neuroanatomy to first-year medical students at Boston University School of Medicine. Students were randomized into experimental and control classrooms. All students were taught neuroanatomy according to traditional 2D methods. Then, during laboratory review, the experimental group constructed 3D color-coded physical models of the periventricular structures, while the control group re-examined 2D brain cross-sections. At the end of the course, 2D and 3D spatial relationships of the brain and preferred learning styles were assessed in both groups. The overall quiz scores for the experimental group were significantly higher than the control group (t(85) = 2.02, P < 0.05). However, when the questions were divided into those requiring either 2D or 3D visualization, only the scores for the 3D questions were significantly higher in the experimental group (F₁(,)₈₅ = 5.48, P = 0.02). When surveyed, 84% of students recommended repeating the 3D activity for future laboratories, and this preference was equally distributed across preferred learning styles (χ² = 0.14, n.s.). Our results suggest that our 3D physical modeling activity is an effective method for teaching spatial relationships of brain anatomy and will better prepare students for visualization of 3D neuroanatomy, a skill essential for higher education in neuroscience, neurology, and neurosurgery. Copyright © 2010 American Association of Anatomists.

  8. A Novel Three-Dimensional Tool for Teaching Human Neuroanatomy

    PubMed Central

    Estevez, Maureen E.; Lindgren, Kristen A.; Bergethon, Peter R.

    2011-01-01

    Three-dimensional (3-D) visualization of neuroanatomy can be challenging for medical students. This knowledge is essential in order for students to correlate cross-sectional neuroanatomy and whole brain specimens within neuroscience curricula and to interpret clinical and radiological information as clinicians or researchers. This study implemented and evaluated a new tool for teaching 3-D neuroanatomy to first-year medical students at Boston University School of Medicine. Students were randomized into experimental and control classrooms. All students were taught neuroanatomy according to traditional 2-D methods. Then, during laboratory review, the experimental group constructed 3-D color-coded physical models of the periventricular structures, while the control group re-examined 2-D brain cross-sections. At the end of the course, 2-D and 3-D spatial relationships of the brain and preferred learning styles were assessed in both groups. The overall quiz scores for the experimental group were significantly higher than the control group (t(85) = 2.02, P < 0.05). However, when the questions were divided into those requiring either 2-D or 3-D visualization, only the scores for the 3-D questions were significantly higher in the experimental group (F1,85 = 5.48, P = 0.02). When surveyed, 84% of students recommended repeating the 3-D activity for future laboratories, and this preference was equally distributed across preferred learning styles (χ2 = 0.14, n.s.). Our results suggest that our 3-D physical modeling activity is an effective method for teaching spatial relationships of brain anatomy and will better prepare students for visualization of 3-D neuroanatomy, a skill essential for higher education in neuroscience, neurology, and neurosurgery. PMID:20939033

  9. Evaluation of low-dose limits in 3D-2D rigid registration for surgical guidance

    NASA Astrophysics Data System (ADS)

    Uneri, A.; Wang, A. S.; Otake, Y.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Gallia, G. L.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2014-09-01

    An algorithm for intensity-based 3D-2D registration of CT and C-arm fluoroscopy is evaluated for use in surgical guidance, specifically considering the low-dose limits of the fluoroscopic x-ray projections. The registration method is based on a framework using the covariance matrix adaptation evolution strategy (CMA-ES) to identify the 3D patient pose that maximizes the gradient information similarity metric. Registration performance was evaluated in an anthropomorphic head phantom emulating intracranial neurosurgery, using target registration error (TRE) to characterize accuracy and robustness in terms of 95% confidence upper bound in comparison to that of an infrared surgical tracking system. Three clinical scenarios were considered: (1) single-view image + guidance, wherein a single x-ray projection is used for visualization and 3D-2D guidance; (2) dual-view image + guidance, wherein one projection is acquired for visualization, combined with a second (lower-dose) projection acquired at a different C-arm angle for 3D-2D guidance; and (3) dual-view guidance, wherein both projections are acquired at low dose for the purpose of 3D-2D guidance alone (not visualization). In each case, registration accuracy was evaluated as a function of the entrance surface dose associated with the projection view(s). Results indicate that images acquired at a dose as low as 4 μGy (approximately one-tenth the dose of a typical fluoroscopic frame) were sufficient to provide TRE comparable or superior to that of conventional surgical tracking, allowing 3D-2D guidance at a level of dose that is at most 10% greater than conventional fluoroscopy (scenario #2) and potentially reducing the dose to approximately 20% of the level in a conventional fluoroscopically guided procedure (scenario #3).
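Target registration error is the accuracy measure used above. As a minimal sketch (the points and the RMS formulation here are illustrative assumptions, not the paper's phantom data), TRE can be computed as the root-mean-square distance between corresponding 3D target points under the estimated and reference registrations:

```python
import math

def target_registration_error(estimated_pts, reference_pts):
    """Root-mean-square Euclidean distance between corresponding
    3D target points under two registrations (a common TRE definition)."""
    assert len(estimated_pts) == len(reference_pts)
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(estimated_pts, reference_pts)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical targets: registration off by 1 mm along x at every point.
est = [(1.0, 0.0, 0.0), (1.0, 5.0, 0.0), (1.0, 0.0, 5.0)]
ref = [(0.0, 0.0, 0.0), (0.0, 5.0, 0.0), (0.0, 0.0, 5.0)]
print(target_registration_error(est, ref))  # → 1.0
```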

  10. Emotional and behavioural problems in children with visual impairment, intellectual and multiple disabilities.

    PubMed

    Alimovic, S

    2013-02-01

    Children with multiple impairments have more complex developmental problems than children with a single impairment. We compared children, aged 4 to 11 years, with intellectual disability (ID) and visual impairment to children with single ID, single visual impairment and typical development on 'Child Behavior Check List/4-18' (CBCL/4-18), Parent Report. Children with ID and visual impairment had more emotional and behavioural problems than the other groups of children, i.e. those with a single impairment and those with typical development (F = 23.81; d.f.1/d.f.2 = 3/156; P < 0.001). All children with special needs had more emotional and behavioural problems than children with typical development. The largest difference was found in the attention problems syndrome (F = 30.45; d.f.1/d.f.2 = 3/156; P < 0.001), where all groups of children with impairments had more problems. Children with visual impairment, with and without ID, had more somatic complaints than children with normal vision. Intellectual disability had a greater influence than visual impairment on the prevalence and kinds of emotional and behavioural problems in children. © 2012 The Author. Journal of Intellectual Disability Research © 2012 Blackwell Publishing Ltd.

  11. Gravity influences top-down signals in visual processing.

    PubMed

    Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph

    2014-01-01

    Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.

  12. Visual Working Memory Capacity and Proactive Interference

    PubMed Central

    Hartshorne, Joshua K.

    2008-01-01

    Background Visual working memory capacity is extremely limited and appears to be relatively immune to practice effects or the use of explicit strategies. The recent discovery that visual working memory tasks, like verbal working memory tasks, are subject to proactive interference, coupled with the fact that typical visual working memory tasks are particularly conducive to proactive interference, suggests that visual working memory capacity may be systematically under-estimated. Methodology/Principal Findings Working memory capacity was probed behaviorally in adult humans both in laboratory settings and via the Internet. Several experiments show that although the effect of proactive interference on visual working memory is significant and can last over several trials, it only changes the capacity estimate by about 15%. Conclusions/Significance This study further confirms the sharp limitations on visual working memory capacity, both in absolute terms and relative to verbal working memory. It is suggested that future research take these limitations into account in understanding differences across a variety of tasks between human adults, prelinguistic infants and nonlinguistic animals. PMID:18648493
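The abstract does not name the capacity estimator; for single-probe change-detection tasks the standard choice is Cowan's K, and a sketch under that assumption shows how a roughly 15% proactive-interference correction would shift the estimate (the numbers below are hypothetical):

```python
def cowan_k(set_size, hit_rate, false_alarm_rate):
    """Cowan's K for single-probe change detection: K = N * (H - FA).
    Note: this is the conventional estimator for such tasks, assumed
    here; the study's exact formula is not stated in the abstract."""
    return set_size * (hit_rate - false_alarm_rate)

k = cowan_k(set_size=8, hit_rate=0.75, false_alarm_rate=0.25)
print(k)         # → 4.0
print(k * 0.85)  # a ~15% interference correction still leaves ~3.4 items
```

Even after the correction, the estimate stays near the classic 3-4 item limit, consistent with the paper's conclusion.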

  14. How visual attention is modified by disparities and textures changes?

    NASA Astrophysics Data System (ADS)

    Khaustova, Dar'ya; Fournier, Jérome; Wyckens, Emmanuel; Le Meur, Olivier

    2013-03-01

    The 3D image/video quality of experience is a multidimensional concept that depends on 2D image quality, depth quantity and visual comfort. The relationship between these parameters is not yet clearly defined. From this perspective, we aim to understand how texture complexity, depth quantity and visual comfort influence the way people observe 3D content in comparison with 2D. Six scenes with different structural parameters were generated using Blender software. For these six scenes, the following parameters were modified: texture complexity and the amount of depth changing the camera baseline and the convergence distance at the shooting side. Our study was conducted using an eye-tracker and a 3DTV display. During the eye-tracking experiment, each observer freely examined images with different depth levels and texture complexities. To avoid memory bias, we ensured that each observer had only seen scene content once. Collected fixation data were used to build saliency maps and to analyze differences between 2D and 3D conditions. Our results show that the introduction of disparity shortened saccade length; however, fixation durations remained unaffected. An analysis of the saliency maps did not reveal any differences between 2D and 3D conditions for the viewing duration of 20 s. When the whole period was divided into smaller intervals, we found that for the first 4 s the introduced disparity was conducive to the selection of salient regions. However, this contribution is quite minimal if the correlation between saliency maps is analyzed. Nevertheless, we did not find that discomfort (comfort) had any influence on visual attention. We believe that existing metrics and methods are depth insensitive and do not reveal such differences. Based on the analysis of heat maps and paired t-tests of inter-observer visual congruency values we deduced that the selected areas of interest depend on texture complexities.
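Saliency (heat) maps of the kind analyzed here are commonly built by accumulating a duration-weighted Gaussian at each fixation. A minimal sketch, with grid size, kernel width, and fixation data all assumed for illustration:

```python
import math

def fixation_heatmap(fixations, width, height, sigma=2.0):
    """Accumulate a duration-weighted Gaussian kernel at each
    fixation (x, y, duration) on a width x height grid."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        for y in range(height):
            for x in range(width):
                d2 = (x - fx) ** 2 + (y - fy) ** 2
                grid[y][x] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return grid

# Two hypothetical fixations on a 20x10 grid; the longer fixation
# at (15, 5) dominates the resulting heat map.
heat = fixation_heatmap([(5, 5, 0.3), (15, 5, 0.6)], width=20, height=10)
peak = max(max(row) for row in heat)
```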

  15. Variation in polyp size estimation among endoscopists and impact on surveillance intervals.

    PubMed

    Chaptini, Louis; Chaaya, Adib; Depalma, Fedele; Hunter, Krystal; Peikin, Steven; Laine, Loren

    2014-10-01

    Accurate estimation of polyp size is important because it is used to determine the surveillance interval after polypectomy. To evaluate the variation and accuracy in polyp size estimation among endoscopists and the impact on surveillance intervals after polypectomy. Web-based survey. A total of 873 members of the American Society for Gastrointestinal Endoscopy. Participants watched video recordings of 4 polypectomies and were asked to estimate the polyp sizes. Proportion of participants with polyp size estimates within 20% of the correct measurement and the frequency of incorrect surveillance intervals based on inaccurate size estimates. Polyp size estimates were within 20% of the correct value for 1362 (48%) of 2812 estimates (range 39%-59% for the 4 polyps). Polyp size was overestimated by >20% in 889 estimates (32%, range 15%-49%) and underestimated by >20% in 561 (20%, range 4%-46%) estimates. Incorrect surveillance intervals because of overestimation or underestimation occurred in 272 (10%) of the 2812 estimates (range 5%-14%). Participants in a private practice setting overestimated the size of 3 or of all 4 polyps by >20% more often than participants in an academic setting (difference = 7%; 95% confidence interval, 1%-11%). Survey design with the use of video clips. Substantial overestimation and underestimation of polyp size occurs with visual estimation leading to incorrect surveillance intervals in 10% of cases. Our findings support routine use of measurement tools to improve polyp size estimates. Copyright © 2014 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
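The two quantities driving the results above, whether an estimate falls within 20% of the true size and whether the error moves the polyp across a size threshold that changes the surveillance interval, can be sketched as follows (the 5 mm and 10 mm cutoffs are common guideline thresholds used only for illustration, not taken from the article):

```python
def within_20_percent(estimate_mm, true_mm):
    """True if the size estimate is within 20% of the reference size."""
    return abs(estimate_mm - true_mm) <= 0.2 * true_mm

def size_category(size_mm):
    """Illustrative size buckets that drive surveillance intervals
    (actual guideline cutoffs vary; 10 mm is a common threshold)."""
    if size_mm >= 10:
        return "advanced"   # shorter surveillance interval
    if size_mm >= 5:
        return "small"
    return "diminutive"

# A 12 mm polyp estimated at 9 mm: >20% underestimation AND
# it crosses the 10 mm threshold, so the interval would change.
print(within_20_percent(9, 12))               # → False
print(size_category(9) != size_category(12))  # → True
```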

  16. 3D Visualization of Machine Learning Algorithms with Astronomical Data

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2016-01-01

    We present innovative machine learning (ML) methods using unsupervised clustering with minimum spanning trees (MSTs) to study 3D astronomical catalogs. Utilizing Python code to build trees based on galaxy catalogs, we can render the results with the visualization suite Blender to produce interactive 360 degree panoramic videos. The catalogs and their ML results can be explored in a 3D space using mobile devices, tablets or desktop browsers. We compare the statistics of the MST results to a number of machine learning methods relating to optimization and efficiency.
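A minimum spanning tree over a 3D catalog can be built with Prim's algorithm; this pure-Python sketch (the authors' actual pipeline uses their own Python code plus Blender, and a real catalog would warrant scipy.sparse.csgraph.minimum_spanning_tree) connects hypothetical galaxy positions:

```python
import math

def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph of 3D points.
    Returns (i, j) index pairs forming the minimum spanning tree,
    the structure used to trace filaments in a catalog."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Four hypothetical galaxy positions (Mpc): a chain along x.
galaxies = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (10, 0, 0)]
print(mst_edges(galaxies))  # → [(0, 1), (1, 2), (2, 3)]
```

The O(n³) loop is fine for a demonstration; production code would use a priority queue or the SciPy routine.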

  17. Performance of a semi-quantitative whole blood test for human heart-type fatty acid-binding protein (H-FABP).

    PubMed

    Hiura, Masahito; Nakajima, Osamu; Mori, Toshizumi; Kitano, Katsuya

    2005-10-01

    We evaluated the accuracy of visually reading the whole blood Rapicheck H-FABP panel test using the quantitative plasma H-FABP concentration as the reference. Consecutive patients with chest pain (n = 237) who were suspected of having acute myocardial infarction were recruited. The appearance of an evident test line within 5 min was given a grade of +3 (strongly positive), appearance within 15 min +2 (moderately positive) and the appearance of a weak test line within 15 min +1 (weakly positive). The concordance rates were 91.8% for positive, 70.1% for negative and 80.2% for overall. Plasma H-FABP concentrations were above the cut-off value for 9.2% of negative (0) results. Fifty percent of weakly positive (+1) and 25.0% of moderately positive (+2) results had H-FABP concentrations lower than the cut-off value. All of the strongly positive (+3) results were above the cut-off value. These results suggested that the false-positive and false-negative results of Rapicheck H-FABP were caused by over- or underestimation in visual reading when the plasma H-FABP concentration was near the cut-off concentration. Low accuracy of visual reading of Rapicheck H-FABP was due to poor estimation by manual reading around the cut-off value.
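The concordance rates reported above compare a dichotomized visual grade (positive if +1 or higher) against the quantitative cut-off. A sketch of that calculation with hypothetical readings (not the study's data):

```python
def concordance(pairs):
    """Positive, negative and overall concordance between a visual
    grade (0..3, positive if >= 1) and the quantitative reference.
    Pairs are (grade, reference_positive) tuples."""
    pos = [(g, r) for g, r in pairs if g >= 1]
    neg = [(g, r) for g, r in pairs if g == 0]
    pos_agree = sum(1 for g, r in pos if r) / len(pos)
    neg_agree = sum(1 for g, r in neg if not r) / len(neg)
    overall = (sum(1 for g, r in pos if r)
               + sum(1 for g, r in neg if not r)) / len(pairs)
    return pos_agree, neg_agree, overall

# Hypothetical readings: (visual grade, reference above cut-off?)
data = [(3, True), (2, True), (1, False), (0, False), (0, True), (0, False)]
print(concordance(data))
```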

  18. Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes

    PubMed Central

    Kwon, TaeKyu; Li, Yunfeng; Sawada, Tadamasa; Pizlo, Zygmunt

    2015-01-01

    This study, strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial, and were shown most likely to result from the presence of a response bias. There was little, if any, evidence of systematic distortions of the subjects’ perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. This shown, we proceeded to use Foley’s (Vision Research 12 (1972) 323–332) isosceles right triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, as well as with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished. Their perception of visual space became more compressed as their natural visual environment was degraded. Once this was shown, we developed a computational model that emulated the most salient features of our psychophysical results. We concluded that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments. PMID:26525845

  19. Precise photorealistic visualization for restoration of historic buildings based on tacheometry data

    NASA Astrophysics Data System (ADS)

    Ragia, Lemonia; Sarri, Froso; Mania, Katerina

    2018-03-01

    This paper puts forward a 3D reconstruction methodology applied to the restoration of historic buildings taking advantage of the speed, range and accuracy of a total geodetic station. The measurements representing geo-referenced points produced an interactive and photorealistic geometric mesh of a monument named 'Neoria'. 'Neoria' is a Venetian building located by the old harbor at Chania, Crete, Greece. The integration of tacheometry acquisition and computer graphics puts forward a novel integrated software framework for the accurate 3D reconstruction of a historical building. The main technical challenge of this work was the production of a precise 3D mesh based on a sufficient number of tacheometry measurements acquired fast and at low cost, employing a combination of surface reconstruction and processing methods. A fully interactive application based on game engine technologies was developed. The user can visualize and walk through the monument and the area around it as well as photorealistically view it at different times of day and night. Advanced interactive functionalities are offered to the user in relation to identifying restoration areas and visualizing the outcome of such works. The user could visualize the coordinates of the points measured, calculate distances and navigate through the complete 3D mesh of the monument. The geographical data are stored in a database connected with the application. Features referencing and associating the database with the monument are developed. The goal was to utilize a small number of acquired data points and present a fully interactive visualization of a geo-referenced 3D model.
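One of the interactive functions described, calculating distances between measured geo-referenced points, reduces to the 3D (slope) distance between total-station coordinates; the coordinates below are hypothetical local values, not survey data from the paper:

```python
import math

def point_distance(p, q):
    """Slope (3D) distance between two surveyed points given as
    (easting, northing, elevation) tuples in metres."""
    return math.dist(p, q)

# Hypothetical geo-referenced points on the monument (metres).
a = (100.0, 200.0, 4.0)
b = (103.0, 204.0, 4.0)
print(point_distance(a, b))  # → 5.0
```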

  1. Numerical simulation of runaway electrons: 3-D effects on synchrotron radiation and impurity-based runaway current dissipation

    NASA Astrophysics Data System (ADS)

    del-Castillo-Negrete, D.; Carbajal, L.; Spong, D.; Izzo, V.

    2018-05-01

    Numerical simulations of runaway electrons (REs) with a particular emphasis on orbit dependent effects in 3-D magnetic fields are presented. The simulations were performed using the recently developed Kinetic Orbit Runaway electron Code (KORC) that computes the full-orbit relativistic dynamics in prescribed electric and magnetic fields including radiation damping and collisions. The two main problems of interest are synchrotron radiation and impurity-based RE dissipation. Synchrotron radiation is studied in axisymmetric fields and in 3-D magnetic configurations exhibiting magnetic islands and stochasticity. For passing particles in axisymmetric fields, neglecting orbit effects might underestimate or overestimate the total radiation power depending on the direction of the radial shift of the drift orbits. For trapped particles, the spatial distribution of synchrotron radiation exhibits localized "hot" spots at the tips of the banana orbits. In general, the radiation power per particle for trapped particles is higher than the power emitted by passing particles. The spatial distribution of synchrotron radiation in stochastic magnetic fields, obtained using the MHD code NIMROD, is strongly influenced by the presence of magnetic islands. 3-D magnetic fields also introduce a toroidal dependence on the synchrotron spectra, and neglecting orbit effects underestimates the total radiation power. In the presence of magnetic islands, the radiation damping of trapped particles is larger than the radiation damping of passing particles. Results modeling synchrotron emission by RE in DIII-D quiescent plasmas are also presented. The computation uses EFIT reconstructed magnetic fields and RE energy distributions fitted to the experimental measurements. Qualitative agreement is observed between the numerical simulations and the experiments for simplified RE pitch angle distributions. However, it is noted that to achieve quantitative agreement, it is necessary to use pitch angle distributions that depart from simplified 2-D Fokker-Planck equilibria. Finally, using the guiding center orbit model (KORC-GC), a preliminary study of pellet mitigated discharges in DIII-D is presented. The dependence of RE energy decay and current dissipation on initial energy and ionization levels of neon impurities is studied. The computed decay rates are within the range of experimental observations.
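For orientation, the textbook single-electron synchrotron power in a uniform field, P = 2 sigma_T c U_B beta^2 gamma^2 sin^2(alpha) with U_B = B^2/(2 mu_0), already reproduces the qualitative finding that larger pitch angles (trapped-like orbits) radiate more per particle. This is only the uniform-field expression, not KORC's full-orbit computation, and the field and energy values below are illustrative:

```python
import math

SIGMA_T = 6.6524587e-29   # Thomson cross-section, m^2
C = 2.99792458e8          # speed of light, m/s
MU0 = 4e-7 * math.pi      # vacuum permeability, T*m/A

def synchrotron_power(gamma, b_field, pitch_angle):
    """Textbook single-electron synchrotron power in a uniform field:
    P = 2 sigma_T c U_B beta^2 gamma^2 sin^2(alpha), U_B = B^2/(2 mu0).
    A simplified stand-in for the full-orbit KORC calculation."""
    beta2 = 1.0 - 1.0 / gamma**2
    u_b = b_field**2 / (2 * MU0)
    return 2 * SIGMA_T * C * u_b * beta2 * gamma**2 * math.sin(pitch_angle)**2

# A ~20 MeV runaway electron (gamma ~ 40) in a 2 T field:
p_passing = synchrotron_power(40, 2.0, 0.1)  # small pitch angle
p_trapped = synchrotron_power(40, 2.0, 0.6)  # larger pitch angle
```

The gamma^2 scaling also shows why the radiated power grows rapidly with RE energy.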

  2. User experience while viewing stereoscopic 3D television

    PubMed Central

    Read, Jenny C.A.; Bohr, Iwo

    2014-01-01

    3D display technologies have been linked to visual discomfort and fatigue. In a lab-based study with a between-subjects design, 433 viewers aged from 4 to 82 years watched the same movie in either 2D or stereo 3D (S3D), and subjectively reported on a range of aspects of their viewing experience. Our results suggest that a minority of viewers, around 14%, experience adverse effects due to viewing S3D, mainly headache and eyestrain. A control experiment where participants viewed 2D content through 3D glasses suggests that around 8% may report adverse effects which are not due directly to viewing S3D, but instead are due to the glasses or to negative preconceptions about S3D (the ‘nocebo effect'). Women were slightly more likely than men to report adverse effects with S3D. We could not detect any link between pre-existing eye conditions or low stereoacuity and the likelihood of experiencing adverse effects with S3D. Practitioner Summary: Stereoscopic 3D (S3D) has been linked to visual discomfort and fatigue. Viewers watched the same movie in either 2D or stereo 3D (between-subjects design). Around 14% reported effects such as headache and eyestrain linked to S3D itself, while 8% report adverse effects attributable to 3D glasses or negative expectations. PMID:24874550

  3. Interactive Visualization of Complex Seismic Data and Models Using Bokeh

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Chengping; Ammon, Charles J.; Maceira, Monica

    Visualizing multidimensional data and models becomes more challenging as the volume and resolution of seismic data and models increase. But thanks to the development of powerful and accessible computer systems, a modern web browser can be used to visualize complex scientific data and models dynamically. In this paper, we present four examples of seismic model visualization using an open-source Python package Bokeh. One example is a visualization of a surface-wave dispersion data set, another presents a view of three-component seismograms, and two illustrate methods to explore a 3D seismic-velocity model. Unlike other 3D visualization packages, our visualization approach has a minimum requirement on users and is relatively easy to develop, provided you have reasonable programming skills. Finally, utilizing familiar web browsing interfaces, the dynamic tools provide us an effective and efficient approach to explore large data sets and models.

  4. Evaluating Alignment of Shapes by Ensemble Visualization

    PubMed Central

    Raj, Mukund; Mirzargar, Mahsa; Preston, J. Samuel; Kirby, Robert M.; Whitaker, Ross T.

    2016-01-01

    The visualization of variability in surfaces embedded in 3D, which is a type of ensemble uncertainty visualization, provides a means of understanding the underlying distribution of a collection or ensemble of surfaces. Although ensemble visualization for isosurfaces has been described in the literature, we conduct an expert-based evaluation of various ensemble visualization techniques in a particular medical imaging application: the construction of atlases or templates from a population of images. In this work, we extend contour boxplot to 3D, allowing us to evaluate it against an enumeration-style visualization of the ensemble members and other conventional visualizations used by atlas builders, namely examining the atlas image and the corresponding images/data provided as part of the construction process. We present feedback from domain experts on the efficacy of contour boxplot compared to other modalities when used as part of the atlas construction and analysis stages of their work. PMID:26186768

  5. View-Dependent Streamline Deformation and Exploration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tong, Xin; Edwards, John; Chen, Chun-Ming

    Occlusion presents a major challenge in visualizing 3D flow and tensor fields using streamlines. Displaying too many streamlines creates a dense visualization filled with occluded structures, but displaying too few streams risks losing important features. We propose a new streamline exploration approach by visually manipulating the cluttered streamlines by pulling visible layers apart and revealing the hidden structures underneath. This paper presents a customized view-dependent deformation algorithm and an interactive visualization tool to minimize visual cluttering for visualizing 3D vector and tensor fields. The algorithm is able to maintain the overall integrity of the fields and expose previously hidden structures. Our system supports both mouse and direct-touch interactions to manipulate the viewing perspectives and visualize the streamlines in depth. By using a lens metaphor of different shapes to select the transition zone of the targeted area interactively, the users can move their focus and examine the vector or tensor field freely.

  7. Realistic terrain visualization based on 3D virtual world technology

    NASA Astrophysics Data System (ADS)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2009-09-01

    The rapid advances in information technologies, e.g., network, graphics processing, and virtual world, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments to help to engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographical visualization, computer simulation, and virtual geographic environment applications. The paper introduces concepts and technologies of virtual worlds and virtual geographic environments, explores integration of realistic terrain and other geographic objects and phenomena of natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation of construction of a mirror world or a sand box model of the earth landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed based on the foundation work of realistic terrain visualization in virtual environments.

  9. Development of a 3-D Nuclear Event Visualization Program Using Unity

    NASA Astrophysics Data System (ADS)

    Kuhn, Victoria

    2017-09-01

    Simulations have become increasingly important for science, and there is a growing emphasis on the visualization of simulations within a Virtual Reality (VR) environment. Our group is exploring this capability as a visualization tool not just for those curious about science, but also for educational purposes for K-12 students. Using data collected in 3-D by a Time Projection Chamber (TPC), we are able to visualize nuclear and cosmic events. The Unity game engine was used to recreate the TPC to visualize these events and construct a VR application. The methods used to create these simulations will be presented along with an example of a simulation. I will also present the development and testing of this program, which I carried out this past summer at MSU as part of an REU program. We used data from the SπRIT TPC, but the software can be applied to other 3-D detectors. This work is supported by the U.S. Department of Energy under Grant Nos. DE-SC0014530, DE-NA0002923 and US NSF under Grant No. PHY-1565546.

  10. Web3DMol: interactive protein structure visualization based on WebGL.

    PubMed

    Shi, Maoxiang; Gao, Juntao; Zhang, Michael Q

    2017-07-03

    A growing number of web-based databases and tools for protein research are being developed, and there is now a widespread need for visualization tools that present the three-dimensional (3D) structure of proteins in web browsers. Here, we introduce Web3DMol, a web application focusing on protein structure visualization in modern web browsers. Users submit a PDB identification code or select a PDB archive from their local disk, and Web3DMol displays the 3D structure and allows interactive manipulation of it. Featured functions, such as a sequence plot, fragment segmentation, a measure tool, and meta-information display, help users gain a better understanding of protein structure. Easy-to-use APIs are available for developers to reuse and extend Web3DMol. Web3DMol can be freely accessed at http://web3dmol.duapp.com/, and the source code is distributed under the MIT license. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
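A viewer like this must first parse PDB archives, which store atom records in a fixed-column text format. The sketch below is a minimal illustration of that parsing step (it is not Web3DMol's actual code; the function name and sample record are ours):

```python
def parse_atoms(pdb_text):
    """Extract (name, x, y, z) tuples from ATOM/HETATM records.

    PDB is a fixed-column format: the atom name occupies columns 13-16
    (1-based) and the x, y, z coordinates occupy columns 31-54, eight
    characters per coordinate.
    """
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            name = line[12:16].strip()
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            atoms.append((name, x, y, z))
    return atoms

# A single, correctly column-aligned ATOM record as a worked example.
record = "ATOM      1  N   MET A   1      27.340  24.430   2.614  1.00  9.67           N"
print(parse_atoms(record))
```

The parsed coordinates are what a WebGL renderer would upload as vertex positions.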

  11. Space Partitioning for Privacy Enabled 3D City Models

    NASA Astrophysics Data System (ADS)

    Filippovska, Y.; Wichmann, A.; Kada, M.

    2016-10-01

    Due to recent technological progress, the capture and processing of highly detailed 3D data has become extensive. Despite all its potential uses, data that includes personal living spaces and public buildings can also be considered a serious intrusion into people's privacy and a threat to security, especially if the data is visible to the general public. Thus, a compromise is needed between open access to data and privacy requirements, which can differ greatly between applications. As privacy is a complex and versatile topic, the focus of this work lies particularly on the visualization of 3D urban data sets. For privacy-enabled visualizations of 3D city models, we propose to partition the (living) spaces into privacy regions, each featuring its own level of anonymity. Within each region, the depicted 2D and 3D geometry and imagery are anonymized with cartographic generalization techniques. The underlying spatial partitioning is realized as a 2D map generated as a straight skeleton of the open space between buildings. The resulting privacy cells are then merged according to the privacy requirements associated with each building to form larger regions, their borderlines are smoothed, and transition zones are established between privacy regions for a harmonious visual appearance. We demonstrate by example how the proposed method generates privacy-enabled 3D city models.
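The paper's partition is built from a straight skeleton of the open space between buildings. As a much simpler stand-in for illustration only, the sketch below assigns each grid cell to its nearest building centroid (a crude Voronoi-style partition, not the paper's method) and lets the cell inherit that building's privacy level; all names and values are hypothetical:

```python
def partition(grid_w, grid_h, buildings):
    """Label each grid cell with the privacy level of the nearest
    building centroid. `buildings` is a list of (x, y, privacy_level)
    tuples; distance is squared Euclidean over cell coordinates."""
    cells = {}
    for gx in range(grid_w):
        for gy in range(grid_h):
            nearest = min(
                range(len(buildings)),
                key=lambda i: (buildings[i][0] - gx) ** 2
                + (buildings[i][1] - gy) ** 2,
            )
            cells[(gx, gy)] = buildings[nearest][2]  # inherit privacy level
    return cells

# Two hypothetical buildings with different privacy requirements.
levels = partition(4, 4, [(0, 0, "public"), (3, 3, "private")])
print(levels[(0, 1)], levels[(3, 2)])
```

In the paper, the analogous cells are then merged per building and their borders smoothed; this sketch only shows the initial assignment step.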

  12. Quantitative visualization of synchronized insulin secretion from 3D-cultured cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Suzuki, Takahiro; Kanamori, Takao; Inouye, Satoshi

    Quantitative visualization of synchronized insulin secretion was performed in an isolated rat pancreatic islet and a spheroid of a rat pancreatic beta cell line using a method of video-rate bioluminescence imaging. Video-rate images of insulin secretion from 3D-cultured cells were obtained by expressing the fusion protein of insulin and Gaussia luciferase (Insulin-GLase). A subclonal rat INS-1E cell line stably expressing Insulin-GLase, named iGL, was established, and a cluster of iGL cells showed oscillatory insulin secretion that was completely synchronized in response to high glucose. Furthermore, we demonstrated the effect of an antidiabetic drug, glibenclamide, on synchronized insulin secretion from 2D- and 3D-cultured iGL cells. The amount of secreted Insulin-GLase from iGL cells was also determined by a luminometer. Thus, our bioluminescence imaging method could generally be used for investigating protein secretion from living 3D-cultured cells. In addition, the iGL cell line would be valuable for evaluating antidiabetic drugs. - Highlights: • An imaging method for protein secretion from 3D-cultured cells was established. • The fused protein of insulin to GLase, Insulin-GLase, was used as a reporter. • Synchronous insulin secretion was visualized in rat islets and spheroidal beta cells. • A rat beta cell line stably expressing Insulin-GLase, named iGL, was established. • Effect of an antidiabetic drug on insulin secretion was visualized in iGL cells.

  13. 3-D Surface Visualization of pH Titration "Topos": Equivalence Point Cliffs, Dilution Ramps, and Buffer Plateaus

    ERIC Educational Resources Information Center

    Smith, Garon C.; Hossain, Md Mainul; MacCarthy, Patrick

    2014-01-01

    3-D topographic surfaces ("topos") can be generated to visualize how pH behaves during titration and dilution procedures. The surfaces are constructed by plotting computed pH values above a composition grid with volume of base added in one direction and overall system dilution on the other. What emerge are surface features that…
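One slice of such a "topo" surface can be computed directly. The sketch below evaluates the pH of a strong acid titrated with a strong base (ignoring the water autoionization correction near equivalence), which reproduces the "equivalence point cliff" along the volume-of-base axis; extending the loop over a dilution axis would yield the full grid. The concentrations and volumes are hypothetical examples, not taken from the paper:

```python
import math

def ph_strong_acid_base(c_acid, v_acid, c_base, v_base, v_water):
    """pH after mixing a strong acid with a strong base titrant and extra
    diluent water (volumes in L, concentrations in mol/L). The
    autoionization correction near the equivalence point is ignored."""
    v_total = v_acid + v_base + v_water
    excess_h = c_acid * v_acid - c_base * v_base  # mol of H+ remaining
    if excess_h > 0:
        return -math.log10(excess_h / v_total)
    if excess_h < 0:
        return 14.0 + math.log10(-excess_h / v_total)
    return 7.0

# One row of the surface: 25 mL of 0.1 M HCl titrated with 0.1 M NaOH.
# The jump between 20 mL and 30 mL of base is the "equivalence point cliff".
for v_base in (0.0, 0.020, 0.025, 0.030):
    print(round(ph_strong_acid_base(0.1, 0.025, 0.1, v_base, 0.0), 2))
```

Adding `v_water` as a second grid axis produces the "dilution ramps" the abstract describes.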

  14. Examining the Conceptual Understandings of Geoscience Concepts of Students with Visual Impairments: Implications of 3-D Printing

    ERIC Educational Resources Information Center

    Koehler, Karen E.

    2017-01-01

    The purpose of this qualitative study was to explore the use of 3-D printed models as an instructional tool in a middle school science classroom for students with visual impairments and compare their use to traditional tactile graphics for aiding conceptual understanding of geoscience concepts. Specifically, this study examined if the students'…

  15. 40 CFR Table 7 to Subpart Ggg of... - Wastewater-Inspection and Monitoring Requirements for Waste Management Units

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... or monitoring Method TANKS: 63.1256(b)(3)(i) Inspect fixed roof and all openings for leaks Initially... openings for leaks Initially Semiannually Visual. 63.1256(c)(2) Inspect surface impoundment for control....1256(d)(1)(ii) Inspect cover and all openings for leaks Initially Semiannually Visual. 63.1256(d)(3)(i...

  16. Visualization of Stereoscopic Anatomic Models of the Paranasal Sinuses and Cervical Vertebrae from the Surgical and Procedural Perspective

    ERIC Educational Resources Information Center

    Chen, Jian; Smith, Andrew D.; Khan, Majid A.; Sinning, Allan R.; Conway, Marianne L.; Cui, Dongmei

    2017-01-01

    Recent improvements in three-dimensional (3D) virtual modeling software allows anatomists to generate high-resolution, visually appealing, colored, anatomical 3D models from computed tomography (CT) images. In this study, high-resolution CT images of a cadaver were used to develop clinically relevant anatomic models including facial skull, nasal…

  17. The Impact of Co-Presence and Visual Elements in 3D VLEs on Interpersonal Emotional Connection in Telecollaboration

    ERIC Educational Resources Information Center

    Matsui, Hisae

    2014-01-01

    The purpose of this study is to examine participant's perception of the usefulness of the visual elements in 3D Virtual Learning Environments, which represent co-presence, in developing interpersonal emotional connections with their partners in the initial stage of telecollaboration. To fulfill the purpose, two Japanese students and two American…

  18. High-resolution T1-weighted 3D real IR imaging of the temporal bone using triple-dose contrast material.

    PubMed

    Naganawa, Shinji; Koshikawa, Tokiko; Nakamura, Tatsuya; Fukatsu, Hiroshi; Ishigaki, Takeo; Aoki, Ikuo

    2003-12-01

    The small structures in the temporal bone are surrounded by bone and air. The objectives of this study were (a) to compare contrast-enhanced T1-weighted images acquired by fast spin-echo-based three-dimensional real inversion recovery (3D rIR) against those acquired by gradient echo-based 3D SPGR in the visualization of the enhancement of small structures in the temporal bone, and (b) to determine whether either 3D rIR or 3D SPGR is useful for visualizing enhancement of the cochlear lymph fluid. Seven healthy men (age range 27-46 years) volunteered to participate in this study. All MR imaging was performed using a dedicated bilateral quadrature surface phased-array coil for temporal bone imaging at 1.5 T (Visart EX, Toshiba, Tokyo, Japan). The 3D rIR images (TR/TE/TI: 1800 ms/10 ms/500 ms) and flow-compensated 3D SPGR images (TR/TE/FA: 23 ms/10 ms/25 degrees) were obtained with a reconstructed voxel size of 0.6 x 0.7 x 0.8 mm3. Images were acquired before and 1, 90, 180, and 270 min after the administration of triple-dose Gd-DTPA-BMA (0.3 mmol/kg). In post-contrast MR images, the degree of enhancement of the cochlear aqueduct, endolymphatic sac, subarcuate artery, geniculate ganglion of the facial nerve, and cochlear lymph fluid space was assessed by two radiologists. The degree of enhancement was scored as follows: 0 (no enhancement); 1 (slight enhancement); 2 (intermediate between 1 and 3); and 3 (enhancement similar to that of vessels). Enhancement scores for the endolymphatic sac, subarcuate artery, and geniculate ganglion were higher in 3D rIR than in 3D SPGR. Washout of enhancement in the endolymphatic sac appeared to be delayed compared with that in the subarcuate artery, suggesting that the enhancement in the endolymphatic sac may have been due in part to non-vascular tissue enhancement. Enhancement of the cochlear lymph space was not observed in any of the subjects in 3D rIR and 3D SPGR. 
The 3D rIR sequence may be more sensitive than the 3D SPGR sequence in visualizing the enhancement of small structures in the temporal bone; however, enhancement of the cochlear fluid space could not be visualized even with 3D rIR, triple-dose contrast, and dedicated coils at 1.5 T.

  19. 2D/3D Visual Tracker for Rover Mast

    NASA Technical Reports Server (NTRS)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match.
The program could be a core for building application programs for systems that require coordination of vision and robotic motion.
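The stereo triangulation step described above can be sketched for a rectified camera pair, where depth follows Z = fB/d for horizontal disparity d, and X and Y are back-projected through the pinhole model. The camera parameters and pixel coordinates below are hypothetical, not the rover's:

```python
def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Recover a 3D point (metres, camera frame) from a rectified stereo
    correspondence: depth Z = f * B / d, then back-project X and Y."""
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("target at infinity or mismatched correspondence")
    z = focal_px * baseline_m / disparity
    x = (u_left - cx) * z / focal_px
    y = (v - cy) * z / focal_px
    return x, y, z

# Hypothetical camera: 500 px focal length, 30 cm baseline, 640x480 sensor
# with principal point (320, 240); a 10 px disparity puts the target 15 m out.
print(triangulate(350, 340, 240, 500.0, 0.30, 320.0, 240.0))
```

The resulting 3D target position is exactly the quantity the mast controller needs, together with the pose change, to re-point the cameras.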

  20. Large-Scale Partial-Duplicate Image Retrieval and Its Applications

    DTIC Science & Technology

    2016-04-23

    The explosive growth of Internet media (partial-duplicate/similar images, 3D objects, 3D models, etc.) sheds bright light on many promising applications in forensics, surveillance, 3D animation, mobile visual search, and 3D model/object search. Compared with the ...and stable spatial configuration. Compared with the general 2D objects, 3D models/objects consist of 3D data information (typically a list of

  1. Time-resolved 3D contrast-enhanced MRA on 3.0T: a non-invasive follow-up technique after stent-assisted coil embolization of the intracranial aneurysm.

    PubMed

    Choi, Jin Woo; Roh, Hong Gee; Moon, Won-Jin; Kim, Na Ra; Moon, Sung Gyu; Kang, Chung Hwan; Chun, Young Il; Kang, Hyun-Seung

    2011-01-01

    To evaluate the usefulness of time-resolved contrast enhanced magnetic resonance angiography (4D MRA) after stent-assisted coil embolization by comparing it with time of flight (TOF)-MRA. TOF-MRA and 4D MRA were obtained by 3T MRI in 26 patients treated with stent-assisted coil embolization (Enterprise:Neuroform = 7:19). The qualities of the MRA were rated on a graded scale of 0 to 4. We classified completeness of endovascular treatment into three categories. The degree of quality of visualization of the stented artery was compared between TOF and 4D MRA by the Wilcoxon signed rank test. We used the Mann-Whitney U test for comparing the quality of the visualization of the stented artery according to the stent type in each MRA method. The quality in terms of the visualization of the stented arteries in 4D MRA was significantly superior to that in 3D TOF-MRA, regardless of type of the stent (p < 0.001). The quality of the arteries which were stented with Neuroform was superior to that of the arteries stented with Enterprise in 3D TOF (p < 0.001) and 4D MRA (p = 0.008), respectively. 4D MRA provides a higher quality view of the stented parent arteries when compared with TOF.
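The Mann-Whitney U test used here to compare graded 0-4 visualization scores between stent types can be computed directly from the two score samples. A pure-Python sketch with hypothetical scores (not the study's data; ties contribute 0.5 per the standard definition):

```python
def mann_whitney_u(a, b):
    """U statistic for sample `a` versus `b`: count the pairs (a_i, b_j)
    with a_i > b_j, counting ties as 0.5. U_a + U_b = len(a) * len(b)."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# Hypothetical 0-4 quality scores for arteries stented with each device.
neuroform = [4, 4, 3, 4, 3]
enterprise = [2, 1, 2, 3, 2]
print(mann_whitney_u(neuroform, enterprise))
```

A U near the maximum (here 25) indicates one group's scores dominate the other's; significance would then be read from the U distribution or a normal approximation.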

  2. Quantitative analysis of in situ optical diagnostics for inferring particle/aggregate parameters in flames: Implications for soot surface growth and total emissivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeylue, U.O.

    1997-05-01

    An in situ particulate diagnostic/analysis technique is outlined based on the Rayleigh-Debye-Gans polydisperse fractal aggregate (RDG/PFA) scattering interpretation of absolute angular light scattering and extinction measurements. Using the proper particle refractive index, the proposed data analysis method can quantitatively yield all aggregate parameters (particle volume fraction f_v, fractal dimension D_f, primary particle diameter d_p, particle number density n_p, and aggregate size distribution pdf(N)) without any prior knowledge about the particle-laden environment. The present optical diagnostic/interpretation technique was applied to two different soot-containing laminar and turbulent ethylene/air nonpremixed flames in order to assess its reliability. The aggregate interpretation of optical measurements yielded D_f, d_p, and pdf(N) in excellent agreement with ex situ thermophoretic sampling/transmission electron microscopy (TS/TEM) observations within experimental uncertainties. However, volume-equivalent single-particle models (Rayleigh/Mie) overestimated d_p by about a factor of 3, causing an order of magnitude underestimation in n_p. Consequently, soot surface areas and growth rates were in error by a factor of 3, emphasizing that aggregation effects need to be taken into account when using optical diagnostics for a reliable understanding of the soot formation/evolution mechanism in flames. The results also indicated that total soot emissivities were generally underestimated using Rayleigh analysis (up to 50%), mainly due to the uncertainties in soot refractive indices at infrared wavelengths. This suggests that aggregate considerations may not be essential for reasonable radiation heat transfer predictions from luminous flames because of fortuitous error cancellation, resulting in typically a 10 to 30% net effect.
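The reported order-of-magnitude error in n_p follows from the cubic dependence of number density on primary particle diameter: with n_p = f_v / ((π/6) d_p³), a factor-of-3 overestimate of d_p propagates as 3³ = 27. A small sketch of this arithmetic, with hypothetical values for f_v and d_p:

```python
import math

def number_density(f_v, d_p):
    """Primary-particle number density from soot volume fraction f_v and
    primary particle diameter d_p: n_p = f_v / ((pi/6) * d_p**3)."""
    return f_v / (math.pi / 6.0 * d_p ** 3)

f_v = 1e-6            # hypothetical soot volume fraction
d_true = 30e-9        # 30 nm primary particles (typical soot scale)
d_over = 3 * d_true   # Rayleigh/Mie-style factor-of-3 overestimate

# Cubic propagation: a 3x diameter error becomes a 27x density error.
ratio = number_density(f_v, d_true) / number_density(f_v, d_over)
print(round(ratio))
```

This is why the abstract describes the n_p underestimation as roughly an order of magnitude even though d_p itself is only off by a factor of 3.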

  3. Automated UAV-based video exploitation using service oriented architecture framework

    NASA Astrophysics Data System (ADS)

    Se, Stephen; Nadeau, Christian; Wood, Scott

    2011-05-01

    Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and others. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data but it is extremely labour-intensive for operators to analyze hours and hours of received data. At MDA, we have developed a suite of tools that can process the UAV video data automatically, including mosaicking, change detection and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have also been integrated in a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization capabilities are supported through a thick-client Graphical User Interface (GUI), which allows visualization of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework. The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionalities residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations. MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. On-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.

  4. Solid shape discrimination from vision and haptics: natural objects (Capsicum annuum) and Gibson's "feelies".

    PubMed

    Norman, J Farley; Phillips, Flip; Holmin, Jessica S; Norman, Hideko F; Beers, Amanda M; Boswell, Alexandria M; Cheeseman, Jacob R; Stethen, Angela G; Ronning, Cecilia

    2012-10-01

    A set of three experiments evaluated 96 participants' ability to visually and haptically discriminate solid object shape. In the past, some researchers have found haptic shape discrimination to be substantially inferior to visual shape discrimination, while other researchers have found haptics and vision to be essentially equivalent. A primary goal of the present study was to understand these discrepant past findings and to determine the true capabilities of the haptic system. All experiments used the same task (same vs. different shape discrimination) and stimulus objects (James Gibson's "feelies" and a set of naturally shaped objects--bell peppers). However, the methodology varied across experiments. Experiment 1 used random 3-dimensional (3-D) orientations of the stimulus objects, and the conditions were full-cue (active manipulation of objects and rotation of the visual objects in depth). Experiment 2 restricted the 3-D orientations of the stimulus objects and limited the haptic and visual information available to the participants. Experiment 3 compared restricted and full-cue conditions using random 3-D orientations. We replicated both previous findings in the current study. When we restricted visual and haptic information (and placed the stimulus objects in the same orientation on every trial), the participants' visual performance was superior to that obtained for haptics (replicating the earlier findings of Davidson et al. in Percept Psychophys 15(3):539-543, 1974). When the circumstances resembled those of ordinary life (e.g., participants able to actively manipulate objects and see them from a variety of perspectives), we found no significant difference between visual and haptic solid shape discrimination.

  5. Does 3D produce more symptoms of visually induced motion sickness?

    PubMed

    Naqvi, Syed Ali Arsalan; Badruddin, Nasreen; Malik, Aamir Saeed; Hazabbah, Wan; Abdullah, Baharudin

    2013-01-01

    3D stereoscopy technology, with high-quality images and depth perception, provides entertainment to its viewers. However, the technology is not yet mature and can have adverse effects: some viewers have reported discomfort when watching 3D videos. In this research we performed an experiment showing a movie to participants in both 2D and 3D conditions, and recorded and compared subjective and objective data in each. Subjective reports show that Visually Induced Motion Sickness (VIMS) is significantly higher in the 3D condition. For objective measurement, ECG data were recorded to derive Heart Rate Variability (HRV), and the LF/HF ratio, an index of sympathetic nerve activity, was analyzed to track changes in the participants' state over time. The average scores for nausea and disorientation and the total SSQ score show a significant difference between the 3D and 2D conditions. However, the LF/HF ratio did not show a significant difference throughout the experiment.

  6. Visualizing Astronomical Data with Blender

    NASA Astrophysics Data System (ADS)

    Kent, Brian R.

    2014-01-01

    We present methods for using the 3D graphics program Blender in the visualization of astronomical data. The software's forte for animating 3D data lends itself well to use in astronomy. The Blender graphical user interface and Python scripting capabilities can be utilized in the generation of models for data cubes, catalogs, simulations, and surface maps. We review methods for data import, 2D and 3D voxel texture applications, animations, camera movement, and composite renders. Rendering times can be improved by using graphic processing units (GPUs). A number of examples are shown using the software features most applicable to various kinds of data paradigms in astronomy.

  7. KEPLER-14b: A MASSIVE HOT JUPITER TRANSITING AN F STAR IN A CLOSE VISUAL BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchhave, Lars A.; Latham, David W.; Carter, Joshua A.

    We present the discovery of a hot Jupiter transiting an F star in a close visual (0.3" sky-projected angular separation) binary system. The dilution of the host star's light by the nearly equal magnitude stellar companion (~0.5 mag fainter) significantly affects the derived planetary parameters, and if left uncorrected, leads to an underestimate of the radius and mass of the planet by 10% and 60%, respectively. Other published exoplanets, which have not been observed with high-resolution imaging, could similarly have unresolved stellar companions and thus have incorrectly derived planetary parameters. Kepler-14b (KOI-98) has a period of P = 6.790 days and, correcting for the dilution, has a mass of M_p = 8.40 (+0.35/-0.34) M_J and a radius of R_p = 1.136 (+0.073/-0.054) R_J, yielding a mean density of rho_p = 7.1 ± 1.1 g cm^-3.
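The dilution correction can be sketched from the magnitude difference alone: a companion Δm fainter contributes a flux fraction 10^(-0.4 Δm), so the true transit depth is the observed depth times (1 + 10^(-0.4 Δm)), and the radius ratio Rp/Rs scales as the square root of the depth. Note that the 10% and 60% corrections quoted in the abstract come from full re-modeling of the system, not from this naive scaling:

```python
def dilution_factor(delta_mag):
    """Depth correction factor F_total / F_host when an unresolved
    companion delta_mag fainter contaminates the light curve."""
    f_companion = 10 ** (-0.4 * delta_mag)
    return 1.0 + f_companion

def corrected_radius_ratio(observed_depth, delta_mag):
    """Transit depth scales as (Rp/Rs)**2, so the undiluted radius ratio
    is sqrt(observed_depth * dilution_factor)."""
    return (observed_depth * dilution_factor(delta_mag)) ** 0.5

# A 0.5 mag fainter companion contributes ~63% of the host's flux,
# diluting the observed depth by a factor of ~1.63.
print(round(dilution_factor(0.5), 3))
```

Any radius ratio derived without this correction is biased low, which is the effect the abstract warns about for other unimaged systems.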

  8. Research on Visualization of Ground Laser Radar Data Based on Osg

    NASA Astrophysics Data System (ADS)

    Huang, H.; Hu, C.; Zhang, F.; Xue, H.

    2018-04-01

    Three-dimensional (3D) laser scanning is a new advanced technology integrating light, machine, electricity, and computer technologies. It can conduct 3D scanning to the whole shape and form of space objects with high precision. With this technology, you can directly collect the point cloud data of a ground object and create the structure of it for rendering. People use excellent 3D rendering engine to optimize and display the 3D model in order to meet the higher requirements of real time realism rendering and the complexity of the scene. OpenSceneGraph (OSG) is an open source 3D graphics engine. Compared with the current mainstream 3D rendering engine, OSG is practical, economical, and easy to expand. Therefore, OSG is widely used in the fields of virtual simulation, virtual reality, science and engineering visualization. In this paper, a dynamic and interactive ground LiDAR data visualization platform is constructed based on the OSG and the cross-platform C++ application development framework Qt. In view of the point cloud data of .txt format and the triangulation network data file of .obj format, the functions of 3D laser point cloud and triangulation network data display are realized. It is proved by experiments that the platform is of strong practical value as it is easy to operate and provides good interaction.

  9. Visual Semantic Based 3D Video Retrieval System Using HDFS.

    PubMed

    Kumar, C Ranjith; Suguna, S

    2016-08-01

    This paper presents a framework for visual-semantic-based 3D video search and retrieval. Existing 3D retrieval applications focus on shape analysis tasks such as object matching, classification, and retrieval, rather than on video retrieval as such. Here, we investigate the 3D content-based video retrieval (3D-CBVR) concept for the first time, combining bag-of-visual-words (BOVW) and MapReduce in a 3D framework. Instead of conventional shape-based local descriptors, we combine shape, color, and texture for feature extraction, using geometric and topological features for shape and a 3D co-occurrence matrix for color and texture. After the local descriptors are extracted, a Threshold-Based Predictive Clustering Tree (TB-PCT) algorithm generates the visual codebook and a histogram is produced. Matching is then performed using a soft weighting scheme with the L2 distance function. Finally, retrieved results are ranked according to their index value and returned to the user. To handle the prodigious amount of data and enable efficient retrieval, we incorporate HDFS into the design. Using a 3D video dataset, we evaluate the performance of the proposed system and show that it produces accurate results while also reducing time complexity.
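The core ranking step, matching codebook histograms with the L2 distance, can be sketched as follows. The 4-word histograms and clip names are hypothetical, and the paper's TB-PCT codebook construction and soft weighting are not reproduced here:

```python
import math

def l2_distance(h1, h2):
    """Euclidean distance between two visual-word histograms."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def rank(query, database):
    """Return database keys sorted by ascending L2 distance to the
    query histogram (closest match first)."""
    return sorted(database, key=lambda k: l2_distance(query, database[k]))

# Hypothetical bag-of-visual-words histograms over a 4-word codebook.
db = {
    "clip_a": [3, 0, 1, 2],
    "clip_b": [0, 4, 0, 1],
    "clip_c": [2, 1, 1, 2],
}
print(rank([3, 0, 1, 2], db))
```

In the paper this ranking runs over HDFS-resident data via MapReduce; the distance computation itself is unchanged.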

  10. A methodological evaluation of volumetric measurement techniques including three-dimensional imaging in breast surgery.

    PubMed

    Hoeffelin, H; Jacquemin, D; Defaweux, V; Nizet, J L

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery.

  11. A Methodological Evaluation of Volumetric Measurement Techniques including Three-Dimensional Imaging in Breast Surgery

    PubMed Central

    Hoeffelin, H.; Jacquemin, D.; Defaweux, V.; Nizet, J L.

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery. PMID:24511536

  12. Influence of speed and step frequency during walking and running on motion sensor output.

    PubMed

    Rowlands, Ann V; Stone, Michelle R; Eston, Roger G

    2007-04-01

    Studies have reported strong linear relationships between accelerometer output and walking/running speeds up to 10 km x h(-1). However, ActiGraph uniaxial accelerometer counts plateau at higher speeds. The aim of this study was to determine the relationships of triaxial accelerometry, uniaxial accelerometry, and pedometry with speed and step frequency (SF) across a range of walking and running speeds. Nine male runners wore two ActiGraph uniaxial accelerometers, two RT3 triaxial accelerometers (all set at a 1-s epoch), and two Yamax pedometers. Each participant walked for 60 s at 4 and 6 km x h(-1), ran for 60 s at 10, 12, 14, 16, and 18 km x h(-1), and ran for 30 s at 20, 22, 24, and 26 km x h(-1). Step frequency was recorded by a visual count. ActiGraph counts peaked at 10 km x h(-1) (2.5-3.0 Hz SF) and declined thereafter (r=0.02, P>0.05). After correction for frequency-dependent filtering, output plateaued at 10 km x h(-1) but did not decline (r=0.77, P<0.05). Similarly, RT3 vertical counts plateaued at speeds > 10 km x h(-1) (r=0.86, P<0.01). RT3 vector magnitude and anteroposterior and mediolateral counts maintained a linear relationship with speed (r>0.96, P<0.001). Step frequency assessed by pedometry compared well with actual step frequency up to 20 km x h(-1) (approximately 3.5 Hz) but then underestimated actual steps (Yamax r=0.97; ActiGraph pedometer r=0.88, both P<0.001). Increasing underestimation of activity by the ActiGraph as speed increases is related to frequency-dependent filtering and assessment of acceleration in the vertical plane only. RT3 vector magnitude was strongly related to speed, reflecting the predominance of horizontal acceleration at higher speeds. These results indicate that high-intensity activity is underestimated by the ActiGraph, even after correction for frequency-dependent filtering, but not by the RT3. Pedometer output is highly correlated with step frequency.
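The weakening of the count-speed correlation when device output plateaus can be illustrated with a plain Pearson r computation. The speed grid and the two synthetic outputs below are hypothetical, one rising linearly with speed and one capped at 10 km/h in the way the abstract describes for uniaxial counts:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient computed from mean deviations."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

speeds = [4, 6, 10, 14, 18, 22, 26]           # km/h
linear = [s * 100 for s in speeds]            # output rising with speed
plateau = [min(s, 10) * 100 for s in speeds]  # output capped at 10 km/h

print(round(pearson_r(speeds, linear), 2))
print(round(pearson_r(speeds, plateau), 2))
```

The capped series still correlates with speed, but well below r = 1, mirroring how a plateau degrades (without destroying) the count-speed relationship.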

  13. Amazing Space: Explanations, Investigations, & 3D Visualizations

    NASA Astrophysics Data System (ADS)

    Summers, Frank

    2011-05-01

    The Amazing Space website is STScI's online resource for communicating Hubble discoveries and other astronomical wonders to students and teachers everywhere. Our team has developed a broad suite of materials, readings, activities, and visuals that are not only engaging and exciting, but also standards-based and fully supported so that they can be easily used within state and national curricula. These products include stunning imagery, grade-level readings, trading card games, online interactives, and scientific visualizations. We are currently exploring the potential use of stereo 3D in astronomy education.

  14. Ideal Positions: 3D Sonography, Medical Visuality, Popular Culture.

    PubMed

    Seiber, Tim

    2016-03-01

    As digital technologies are integrated into medical environments, they continue to transform the experience of contemporary health care. Importantly, medicine is increasingly visual. In the history of sonography, visibility has played an important role in accessing fetal bodies for diagnostic and entertainment purposes. With the advent of three-dimensional (3D) rendering, sonography presents the fetus visually as already a child. The aesthetics of this process and the resulting imagery, made possible in digital networks, discloses important changes in the relationship between technology and biology, reproductive health and political debates, and biotechnology and culture.

  15. Vision and Action

    DTIC Science & Technology

    1994-06-01

    Recent results from Cognitive Neurophysiology-the discipline which is concerned, among other topics, with the study of visual agnosia (a condition...and A. Newell. GPS: A Case Study in Generality and Problem Solving. Academic Press, New York, 1969. [13] M. Farah. Visual Agnosia: Disorders of Object...A Case Study of Visual Agnosia. Lawrence Erlbaum, Hillsdale, New Jersey, 1992. [31] D. Jacobs. Space efficient 3d model indexing. In Proc. IEEE

  16. Visualizing Terrestrial and Aquatic Systems in 3D - in IEEE VisWeek 2014

    EPA Science Inventory

    The need for better visualization tools for environmental science is well documented, and the Visualization for Terrestrial and Aquatic Systems project (VISTAS) aims to both help scientists produce effective environmental science visualizations and to determine which visualizatio...

  17. Applying Open Source Game Engine for Building Visual Simulation Training System of Fire Fighting

    NASA Astrophysics Data System (ADS)

    Yuan, Diping; Jin, Xuesheng; Zhang, Jin; Han, Dong

    There's a growing need for fire departments to adopt a safe and fair method of training to ensure that the firefighting commander is in a position to manage a fire incident. Visual simulation training systems, with their ability to replicate and interact with virtual fire scenarios through the use of computer graphics or VR, have become an effective and efficient method for fire-ground education. This paper describes the system architecture and functions of a visual simulation training system for fighting oil-storage fires, which adopts Delta3D, an open source game and simulation engine, to provide realistic 3D views. It shows that using open source technology provides not only commercial-level 3D effects but also a great reduction in cost.

  18. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    NASA Astrophysics Data System (ADS)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain vast amounts of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To avoid this cost, many people crop the input image to a small region of interest (ROI). Although this reduces cost and time, there are drawbacks at the image processing level: the selected ROI strongly depends on the user, and original image information is lost. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides several efficient automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the tool provides visualization of the segmented volume data, with scaling, translation, etc. controlled using a keyboard and mouse. Even with fast visualization, however, the 3D objects still need to be analyzed to yield information useful to biologists, which requires quantitative data. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object; this information can be used as classification features. A user can select an object to be analyzed, and our tool displays the selected object in a new window so that more of its details can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specifications and configurations.
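The paper does not specify which automatic thresholding methods the tool offers; Otsu's method is one of the classic intensity-based choices for this task. As a generic sketch (plain NumPy on the CPU, not the authors' GPU implementation), applied to a synthetic 3D volume with one bright "cell":

```python
import numpy as np

def otsu_threshold(volume, nbins=256):
    """Otsu's automatic threshold for a 3D intensity volume.

    Picks the intensity split that maximizes between-class variance.
    Generic sketch; the paper's actual algorithms are not specified.
    """
    hist, edges = np.histogram(volume.ravel(), bins=nbins)
    hist = hist.astype(np.float64)
    centers = (edges[:-1] + edges[1:]) / 2.0

    weight1 = np.cumsum(hist)              # voxel count below each split
    weight2 = np.cumsum(hist[::-1])[::-1]  # voxel count above each split
    with np.errstate(divide="ignore", invalid="ignore"):
        mean1 = np.cumsum(hist * centers) / weight1
        mean2 = (np.cumsum((hist * centers)[::-1]) / np.cumsum(hist[::-1]))[::-1]
        # Between-class variance at every candidate split point.
        between = weight1[:-1] * weight2[1:] * (mean1[:-1] - mean2[1:]) ** 2
    between = np.nan_to_num(between)       # empty leading/trailing bins -> 0
    return centers[np.argmax(between)]

# Synthetic volume: dim background plus a bright 16x16x16 cuboid "cell".
rng = np.random.default_rng(0)
vol = rng.normal(30.0, 5.0, size=(32, 32, 32))
vol[8:24, 8:24, 8:24] = rng.normal(200.0, 10.0, size=(16, 16, 16))
vol = np.clip(vol, 0.0, 255.0)

t = otsu_threshold(vol)
mask = vol > t
print(round(float(t), 1), int(mask.sum()))  # threshold in the valley; 4096 voxels
```

On the GPU the histogram and cumulative sums parallelize well, which is the gain the paper reports; labeling the resulting mask into individual objects (e.g. connected-component labeling) would then produce the per-object quantitative features described above.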

  19. [Visualization of Anterolateral Ligament of the Knee Using 3D Reconstructed Variable Refocus Flip Angle-Turbo Spin Echo T2 Weighted Image].

    PubMed

    Yokosawa, Kenta; Sasaki, Kana; Muramatsu, Koichi; Ono, Tomoya; Izawa, Hiroyuki; Hachiya, Yudo

    2016-05-01

    The anterolateral ligament (ALL) is one of the lateral structures of the knee that contributes to the internal rotational stability of the tibia, and several recent reports have re-emphasized its importance. We visualized the ALL on 3D-MRI in 32 knees of 27 healthy volunteers (23 male knees, 4 female knees; mean age: 37 years). 3D-MRIs were performed using a 1.5-T scanner [T(2) weighted image (WI), SPACE: Sampling Perfection with Application optimized Contrast using different flip angle Evolutions] in the knee-extended position. The visualization rate of the ALL, the mean angle to the lateral collateral ligament (LCL), and the width and the thickness of the ALL at the joint level were investigated. The visualization rate was 100%. The mean angle to the LCL was 10.6 degrees. The mean width and the mean thickness of the ALL were 6.4 mm and 1.0 mm, respectively. The ALL is a very thin ligament with a somewhat oblique course between the lateral femoral epicondyle and the mid-third area of the lateral tibial condyle. Therefore, the slice thickness and the slice angle can easily affect ALL visualization. 3D-MRI enables acquiring thin-slice imaging data over a relatively short time, and arbitrary sections aligned with the course of the ALL can later be selected.

  20. Tutorial on Biostatistics: Linear Regression Analysis of Continuous Correlated Eye Data

    PubMed Central

    Ying, Gui-shuang; Maguire, Maureen G; Glynn, Robert; Rosner, Bernard

    2017-01-01

    Purpose To describe and demonstrate appropriate linear regression methods for analyzing correlated continuous eye data. Methods We describe several approaches to regression analysis involving both eyes, including mixed effects and marginal models under various covariance structures to account for inter-eye correlation. We demonstrate, with SAS statistical software, applications in a study comparing baseline refractive error between one eye with choroidal neovascularization (CNV) and the unaffected fellow eye, and in a study determining factors associated with visual field data in the elderly. Results When refractive error from both eyes was analyzed with standard linear regression without accounting for inter-eye correlation (adjusting for demographic and ocular covariates), the difference between eyes with CNV and fellow eyes was 0.15 diopters (D; 95% confidence interval, CI −0.03 to 0.32D, P=0.10). Using a mixed effects model or a marginal model, the estimated difference was the same but with a narrower 95% CI (0.01 to 0.28D, P=0.03). Standard regression for visual field data from both eyes provided biased estimates of standard error (generally underestimated) and smaller P-values, while analysis of the worse eye provided larger P-values than mixed effects models and marginal models. Conclusion In research involving both eyes, ignoring inter-eye correlation can lead to invalid inferences. Analysis using only right or left eyes is valid, but decreases power. Worse-eye analysis can provide less power and biased estimates of effect. Mixed effects or marginal models using the eye as the unit of analysis should be used to appropriately account for inter-eye correlation and maximize power and precision. PMID:28102741
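The underestimation of standard errors that the tutorial warns about can be demonstrated without SAS. The sketch below (simulated paired-eye data, not the study's data) fits ordinary least squares on both eyes of each subject and compares the naive standard error against a cluster-robust (sandwich) one, which, like the marginal models above, accounts for inter-eye correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
n_subjects = 200

# Person-level covariate (e.g. age), identical for both eyes of a subject.
x_subj = rng.normal(60.0, 8.0, n_subjects)
# Shared person effect induces strong inter-eye correlation in the outcome.
u = rng.normal(0.0, 1.0, n_subjects)

# Two eyes per subject: outcome = 0.05 * x + person effect + eye-level noise.
x = np.repeat(x_subj, 2)
cluster = np.repeat(np.arange(n_subjects), 2)
y = 0.05 * x + np.repeat(u, 2) + rng.normal(0.0, 0.3, 2 * n_subjects)

X = np.column_stack([np.ones_like(x), x])   # intercept + slope
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta

# Naive OLS standard error: pretends all 2n eyes are independent.
sigma2 = resid @ resid / (len(y) - X.shape[1])
se_naive = np.sqrt(sigma2 * XtX_inv[1, 1])

# Cluster-robust (sandwich) standard error: score contributions per subject.
meat = np.zeros((2, 2))
for g in range(n_subjects):
    idx = cluster == g
    s = X[idx].T @ resid[idx]
    meat += np.outer(s, s)
se_robust = np.sqrt((XtX_inv @ meat @ XtX_inv)[1, 1])

print(se_robust > se_naive)  # True for this seeded simulation
```

With both eyes sharing the covariate and a strong person effect, the naive standard error is too small, exactly the pattern the tutorial reports for the visual field analysis; a mixed effects model (e.g. a person-level random intercept) would reach the same conclusion.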
