Sample records for virtual viewpoint images

  1. Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering

    NASA Astrophysics Data System (ADS)

    Jiang, Lu; Piao, Yan

    2018-04-01

    The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. Firstly, the depth information of the reference viewpoint image is quickly obtained, with SAD chosen as the similarity measure function. The reference image is then layered and the parallax is calculated from the depth information. Based on the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and shifted. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as its requirement for a high-precision depth map and its complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range, and the rendering speed is also very high. The average results show that this method achieves satisfactory image quality: relative to real viewpoint images, the average SSIM value reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
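
    As a hedged illustration of the back-to-front layered rendering step described above, the sketch below quantizes a depth map into layers and shifts each layer by a parallax weighted by the virtual-viewpoint offset; the function name, parameters, and the simple inverse-depth parallax model are illustrative assumptions, not taken from the paper (the SAD-based matching that would supply the depth map is omitted).

    ```python
    import numpy as np

    def render_virtual_view(ref_img, depth, n_layers=8, viewpoint_offset=1.0):
        """Back-to-front layered rendering sketch (illustrative, not the paper's code).

        ref_img          : (H, W, 3) reference viewpoint image.
        depth            : (H, W) depth map (larger values = farther away).
        viewpoint_offset : relative distance between the virtual and reference
                           viewpoints; parallax is assumed inversely proportional
                           to layer depth.
        """
        h, w, _ = ref_img.shape
        out = np.zeros_like(ref_img)
        edges = np.linspace(depth.min(), depth.max() + 1e-6, n_layers + 1)
        # Render far layers first so that nearer layers overwrite them.
        for i in range(n_layers - 1, -1, -1):
            mask = (depth >= edges[i]) & (depth < edges[i + 1])
            if not mask.any():
                continue
            layer_depth = 0.5 * (edges[i] + edges[i + 1])
            shift = int(round(viewpoint_offset / max(layer_depth, 1e-6)))
            ys, xs = np.nonzero(mask)
            xs_new = np.clip(xs + shift, 0, w - 1)
            out[ys, xs_new] = ref_img[ys, xs]
        return out
    ```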

  2. Virtual viewpoint generation for three-dimensional display based on the compressive light field

    NASA Astrophysics Data System (ADS)

    Meng, Qiao; Sang, Xinzhu; Chen, Duo; Guo, Nan; Yan, Binbin; Yu, Chongxiu; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    Virtual viewpoint generation is one of the key technologies for three-dimensional (3D) display, rendering new scene perspectives from existing viewpoints. The 3D scene information can be effectively recovered at different viewing angles, allowing users to switch between different views. However, in traditional multi-viewpoint matching, when N free viewpoints are received, every pair of viewpoints must be matched, namely C(N,2) = N(N-1)/2 matchings, and errors can also occur when matching across different baselines. To address the great complexity of the traditional virtual viewpoint generation process, a novel and rapid virtual viewpoint generation algorithm is presented in this paper, in which actual light field information is used rather than geometric information. Moreover, to preserve the physical meaning of the data, nonnegative tensor factorization (NTF) is mainly used. A tensor representation is introduced for virtual multilayer displays. The light field emitted by an N-layer, M-frame display is represented by a sparse set of non-zero elements restricted to a plane within an Nth-order, rank-M tensor. The tensor representation allows for optimal decomposition of a light field into time-multiplexed, light-attenuating layers using NTF. Finally, the compressive light field information of the multilayer display is synthesized to obtain the virtual viewpoint by repeated multiplication. Experimental results show that the approach not only restores the original light field with high image quality (PSNR of 25.6 dB), but also makes up for the deficiency of traditional matching, so that any viewpoint can be obtained from the N free viewpoints.
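
    As a rough, hedged illustration of two points above, the snippet below first computes the C(N,2) pairwise-matching count of the traditional approach and then runs a minimal nonnegative matrix factorization with multiplicative updates; this 2-D factorization is only a simplified stand-in for the NTF decomposition used in the paper, and all names and parameters are illustrative.

    ```python
    import numpy as np
    from math import comb

    # Pairwise matching cost of the traditional approach: C(N,2) = N(N-1)/2.
    N = 8
    print(comb(N, 2))  # 28 pairwise matchings for 8 free viewpoints

    def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9):
        """Minimal nonnegative factorization V ~= W @ H by multiplicative updates.
        NTF for multilayer displays generalizes this idea to higher-order tensors."""
        m, n = V.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, rank)) + eps
        H = rng.random((rank, n)) + eps
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H
    ```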

  3. Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation

    NASA Astrophysics Data System (ADS)

    Inamoto, Naho; Saito, Hideo

    2003-06-01

    This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images from arbitrary viewpoints are synthesized by view interpolation between the two real camera images nearest the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; the epipolar geometry between the cameras is sufficient for the view interpolation. It can therefore easily be applied to a dynamic event even in a large space, because the effort required for camera calibration is reduced. A soccer scene is classified into several regions, and virtual view images are generated based on the epipolar geometry in each region. Superimposition of these images completes the virtual view of the whole soccer scene. In addition to the view-synthesis algorithm and experimental results, an application for fly-through observation of a soccer match is introduced.
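
    The following hedged sketch shows the weak-calibration idea in miniature: the fundamental matrix is estimated from point correspondences with OpenCV, and matched points are linearly interpolated toward an intermediate virtual viewpoint. The full method transfers whole classified scene regions under epipolar constraints; the function name and parameters here are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def interpolate_view_points(pts1, pts2, alpha=0.5):
        """Estimate epipolar geometry between two uncalibrated views and linearly
        interpolate corresponding points toward a virtual in-between viewpoint.

        pts1, pts2 : (N, 2) float arrays of matched points in the two real cameras.
        alpha      : position of the virtual view between camera 1 (0) and camera 2 (1).
        """
        F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
        inliers = inlier_mask.ravel().astype(bool)
        p1, p2 = pts1[inliers], pts2[inliers]
        # Simple linear interpolation of matched positions; only a toy version of
        # region-wise view interpolation along epipolar lines.
        virtual_pts = (1.0 - alpha) * p1 + alpha * p2
        return F, virtual_pts
    ```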

  4. Virtual view image synthesis for eye-contact in TV conversation system

    NASA Astrophysics Data System (ADS)

    Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae

    2010-02-01

    Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of the camera configuration. Conventional methods to overcome this difficulty mainly resort to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach that achieves eye-contact by means of arbitrary view image synthesis. In our method, multiple images captured by real cameras are projected to the virtual viewpoint (the center of the display) by homography, and evaluation of the matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler, single-camera version of this method to save computational cost, in which the single real image is transformed to the virtual viewpoint under the assumption that the subject is located at a predetermined distance. In this simple implementation, the eye regions are generated separately by comparison with pre-captured frontal face images. Experimental results for both methods show that the synthesized virtual images achieve favorable eye-contact.
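
    A minimal plane-sweep-style sketch of the idea described above, assuming the per-camera, per-depth homographies to the virtual (display-center) viewpoint are already known: each candidate depth warps all real images to the virtual view, and the depth with the smallest per-pixel matching error supplies the virtual pixel. Function and variable names are illustrative, not the authors' implementation.

    ```python
    import cv2
    import numpy as np

    def virtual_view_by_plane_sweep(images, homographies_per_depth):
        """Homography-based virtual view synthesis sketch.

        images : list of real camera images (H, W, 3), uint8.
        homographies_per_depth : list over candidate depths; each entry is a list
            of 3x3 homographies (one per camera) mapping that camera to the
            virtual viewpoint, assuming the subject lies on that depth plane.
        """
        h, w = images[0].shape[:2]
        best_err = np.full((h, w), np.inf)
        virtual = np.zeros((h, w, 3), np.uint8)
        for homs in homographies_per_depth:
            warped = [cv2.warpPerspective(img, H, (w, h)) for img, H in zip(images, homs)]
            stack = np.stack(warped).astype(np.float32)
            mean = stack.mean(axis=0)
            err = np.abs(stack - mean).mean(axis=(0, 3))  # matching error per pixel
            better = err < best_err
            best_err[better] = err[better]
            virtual[better] = mean[better].astype(np.uint8)
        return virtual
    ```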

  5. Viewpoint Dependent Imaging: An Interactive Stereoscopic Display

    NASA Astrophysics Data System (ADS)

    Fisher, Scott

    1983-04-01

    The design and implementation of a viewpoint-dependent imaging system is described. The resultant display is an interactive, lifesize, stereoscopic image that becomes a window into a three-dimensional visual environment. As the user physically changes his viewpoint of the represented data in relation to the display surface, the image is continuously updated. The changing viewpoints are retrieved from a comprehensive stereoscopic image array stored on a computer-controlled optical videodisc and are fluidly presented in coordination with the viewer's movements as detected by a body-tracking device. This imaging system is an attempt to more closely represent an observer's interactive perceptual experience of the visual world by presenting sensory information cues not offered by traditional media technologies: binocular parallax, motion parallax, and motion perspective. Unlike holographic imaging, this display requires relatively low bandwidth.

  6. Reference frames in virtual spatial navigation are viewpoint dependent

    PubMed Central

    Török, Ágoston; Nguyen, T. Peter; Kolozsvári, Orsolya; Buchanan, Robert J.; Nadasdy, Zoltan

    2014-01-01

    Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects' performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (1) ground perspective was associated with egocentric frame of reference, (2) aerial perspective was associated with allocentric frame of reference, (3) there was no appreciable performance difference between first and third person egocentric viewing positions and (4) while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications about the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory. PMID:25249956

  7. Reference frames in virtual spatial navigation are viewpoint dependent.

    PubMed

    Török, Agoston; Nguyen, T Peter; Kolozsvári, Orsolya; Buchanan, Robert J; Nadasdy, Zoltan

    2014-01-01

    Spatial navigation in the mammalian brain relies on a cognitive map of the environment. Such cognitive maps enable us, for example, to take the optimal route from a given location to a known target. The formation of these maps is naturally influenced by our perception of the environment, meaning it is dependent on factors such as our viewpoint and choice of reference frame. Yet, it is unknown how these factors influence the construction of cognitive maps. Here, we evaluated how various combinations of viewpoints and reference frames affect subjects' performance when they navigated in a bounded virtual environment without landmarks. We measured both their path length and time efficiency and found that (1) ground perspective was associated with egocentric frame of reference, (2) aerial perspective was associated with allocentric frame of reference, (3) there was no appreciable performance difference between first and third person egocentric viewing positions and (4) while none of these effects were dependent on gender, males tended to perform better in general. Our study provides evidence that there are inherent associations between visual perspectives and cognitive reference frames. This result has implications about the mechanisms of path integration in the human brain and may also inspire designs of virtual reality applications. Lastly, we demonstrated the effective use of a tablet PC and spatial navigation tasks for studying spatial and cognitive aspects of human memory.

  8. The effects of viewpoint on the virtual space of pictures

    NASA Technical Reports Server (NTRS)

    Sedgwick, H. A.

    1989-01-01

    Pictorial displays whose primary purpose is to convey accurate information about the 3-D spatial layout of an environment are discussed. How, and how well, pictures can convey such information is considered. It is suggested that picture perception is not best approached as a unitary, indivisible process. Rather, it is a complex process depending on multiple, partially redundant, interacting sources of visual information for both the real surface of the picture and the virtual space beyond. Each picture must be assessed for the particular information that it makes available. This will determine how accurately the virtual space represented by the picture is seen, as well as how it is distorted when seen from the wrong viewpoint.

  9. Development of Allocentric Spatial Recall from New Viewpoints in Virtual Reality

    ERIC Educational Resources Information Center

    Negen, James; Heywood-Everett, Edward; Roome, Hannah E.; Nardini, Marko

    2018-01-01

    Using landmarks and other scene features to recall locations from new viewpoints is a critical skill in spatial cognition. In an immersive virtual reality task, we asked children 3.5-4.5 years old to remember the location of a target using various cues. On some trials they could use information from their own self-motion. On some trials they could…

  10. High precision analysis of an embryonic extensional fault-related fold using 3D orthorectified virtual outcrops: The viewpoint importance in structural geology

    NASA Astrophysics Data System (ADS)

    Tavani, Stefano; Corradetti, Amerigo; Billi, Andrea

    2016-05-01

    Image-based 3D modeling has recently opened the way to the use of virtual outcrop models in geology. An intriguing application of this method involves the production of orthorectified images of outcrops from almost any user-defined point of view, so that photorealistic cross-sections suitable for numerous geological purposes and measurements can be easily generated. These purposes include the accurate quantitative analysis of fault-fold relationships starting from imperfectly oriented and partly inaccessible real outcrops. We applied the method of image-based 3D modeling and orthorectification to a case study from the northern Apennines, Italy, where an incipient extensional fault affecting well-layered limestones is exposed on a 10-m-high, barely accessible cliff. Through a few simple steps, we constructed a high-quality image-based 3D model of the outcrop. In the model, we made a series of measurements, including fault and bedding attitudes, which allowed us to derive the bedding-fault intersection direction. We then used this direction as the viewpoint to obtain a distortion-free photorealistic cross-section, on which we measured bed dips and thicknesses as well as fault stratigraphic separations. These measurements allowed us to identify a slight difference (i.e. only 0.5°) between the hangingwall and footwall cutoff angles. We show that the hangingwall strain required to compensate the upward-decreasing displacement of the fault was accommodated by this 0.5° rotation (i.e. folding) and coeval 0.8% thickening of strata in the hangingwall relative to footwall strata. This evidence is consistent with trishear fault-propagation folding. Our results emphasize the importance of the viewpoint in structural geology and therefore the potential of using orthorectified virtual outcrops.
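
    As a worked example of one step in this workflow, the sketch below computes the bedding-fault intersection direction (the direction later used as the orthorectification viewpoint) as the cross product of the two plane normals derived from dip direction and dip angle. The attitude values are hypothetical, not the measurements reported in the paper.

    ```python
    import numpy as np

    def plane_normal(dip_direction_deg, dip_deg):
        """Upward unit normal of a plane from its dip direction (azimuth, degrees
        clockwise from north) and dip angle (degrees below horizontal), in an
        east-north-up frame."""
        dd, dip = np.radians(dip_direction_deg), np.radians(dip_deg)
        return np.array([
            np.sin(dip) * np.sin(dd),   # east
            np.sin(dip) * np.cos(dd),   # north
            np.cos(dip),                # up
        ])

    # Hypothetical attitudes measured on a virtual outcrop (illustrative only).
    bedding_n = plane_normal(dip_direction_deg=120.0, dip_deg=25.0)
    fault_n = plane_normal(dip_direction_deg=95.0, dip_deg=60.0)

    # The bedding-fault intersection line is perpendicular to both normals.
    line = np.cross(bedding_n, fault_n)
    line /= np.linalg.norm(line)
    if line[2] > 0:                     # point the line downward (geological convention)
        line = -line
    trend = np.degrees(np.arctan2(line[0], line[1])) % 360
    plunge = np.degrees(np.arcsin(-line[2]))
    print(f"intersection trend ~{trend:.1f} deg, plunge ~{plunge:.1f} deg")
    ```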

  11. Repetition Blindness for Natural Images of Objects with Viewpoint Changes

    PubMed Central

    Buffat, Stéphane; Plantier, Justin; Roumes, Corinne; Lorenceau, Jean

    2013-01-01

    When stimuli are repeated in a rapid serial visual presentation (RSVP), observers sometimes fail to report the second occurrence of a target. This phenomenon is referred to as "repetition blindness" (RB). We report an RSVP experiment with photographs in which we manipulated object viewpoints between the first and second occurrences of a target (0°, 45°, or 90° changes) and spatial frequency (SF) content. Natural images were spatially filtered to produce low, medium, or high SF stimuli. RB was observed for all filtering conditions. Surprisingly, for full-spectrum (FS) images, RB increased significantly as the viewpoint change reached 90°. For filtered images, a similar pattern of results was found for all conditions except the medium SF stimuli. These findings suggest that object recognition in RSVP is subtended by viewpoint-specific representations for all spatial frequencies except medium ones. PMID:23346069

  12. Virtual viewpoint synthesis in multi-view video system

    NASA Astrophysics Data System (ADS)

    Li, Fang; Yang, Shiqiang

    2005-07-01

    In this paper, we present a virtual viewpoint video synthesis algorithm designed to satisfy three aims: low computational cost, real-time interpolation, and acceptable video quality. In contrast with previous technologies, this method obtains partial 3D structure from neighboring video sources instead of recovering complete 3D information from all video sources, so that computation is greatly reduced. We therefore demonstrate our interactive multi-view video synthesis algorithm on a personal computer. Furthermore, by choosing feature points to build correspondences between frames captured by neighboring cameras, camera calibration is not required. Finally, our method can be used when the angle between neighboring cameras is 25-30 degrees, which is much larger than in common computer vision experiments. In this way, our method can be applied to many applications such as live sports broadcasting, video conferencing, etc.
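
    A hedged sketch of the calibration-free correspondence step described above, using ORB features and brute-force matching in OpenCV; the function name and parameter values are illustrative assumptions rather than the authors' implementation.

    ```python
    import cv2

    def match_neighbor_frames(frame_a, frame_b, max_matches=200):
        """Build point correspondences between frames from two neighboring cameras
        using ORB features, avoiding explicit camera calibration."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
        pts_a = [kp_a[m.queryIdx].pt for m in matches[:max_matches]]
        pts_b = [kp_b[m.trainIdx].pt for m in matches[:max_matches]]
        return pts_a, pts_b
    ```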

  13. Virtual digital library

    NASA Astrophysics Data System (ADS)

    Thoma, George R.

    1996-03-01

    The virtual digital library, a concept that is quickly becoming a reality, offers rapid and geography-independent access to stores of text, images, graphics, motion video and other datatypes. Furthermore, a user may move from one information source to another through hypertext linkages. The projects described here further the notion of such an information paradigm from an end user viewpoint.

  14. Development of MPEG standards for 3D and free viewpoint video

    NASA Astrophysics Data System (ADS)

    Smolic, Aljoscha; Kimata, Hideaki; Vetro, Anthony

    2005-11-01

    An overview of 3D and free viewpoint video is given in this paper, with special focus on related standardization activities in MPEG. Free viewpoint video allows the user to navigate freely within real-world visual scenes, as known from virtual worlds in computer graphics. Suitable 3D scene representation formats are classified and the processing chain is explained. Examples are shown for image-based and model-based free viewpoint video systems, highlighting standards-conformant realization using MPEG-4. Then the principles of 3D video, which provides the user with a 3D depth impression of the observed scene, are introduced. Example systems are described, again focusing on their realization based on MPEG-4. Finally, multi-view video coding is described as a key component for 3D and free viewpoint video systems. MPEG is currently working on a new standard for multi-view video coding. The conclusion is that the necessary technology, including standard media formats for 3D and free viewpoint video, is available or will be available in the near future, and that there is clear demand from industry and users for such applications. 3DTV at home and free viewpoint video on DVD will be available soon and will create huge new markets.

  15. Virtual imaging in sports broadcasting: an overview

    NASA Astrophysics Data System (ADS)

    Tan, Yi

    2003-04-01

    Virtual imaging technology is being used to augment television broadcasts -- virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the contents and the entertainment aspect of these programs. State-of-the-art technologies, such as image recognition, motion tracking and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.

  16. Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging

    NASA Technical Reports Server (NTRS)

    Kushner, Laura Kathryn; Schairer, Edward T.

    2011-01-01

    Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.

  17. Building Virtual Mars

    NASA Astrophysics Data System (ADS)

    Abercrombie, S. P.; Menzies, A.; Goddard, C.

    2017-12-01

    Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.

  18. Nursing Education Trial Using a Virtual Nightingale Ward.

    PubMed

    Tsuji, Keiko; Iwata, Naomi; Kodama, Hiromi; Hagiwara, Tomoko; Takai, Kiyako; Sasaki, Yoko; Nagata, Yoshie; Matsumoto, Maki

    2017-01-01

    Nursing department students are expected to correctly grasp the entire concept of nursing through their education. The authors created a movie of a Nightingale ward (hereafter, virtual ward) with architectural computer design software for education. The students' reactions to the virtual ward were categorized into three viewpoints: that of nurses, that of patients, and that of nurses and patients in common. The most frequent reactions for each viewpoint were "easy to observe patients" for the nurses' viewpoint, "no privacy" for the patients' viewpoint, and "wide room" for the common viewpoint. These reactions demonstrate the effectiveness of using a virtual ward in nursing education, because they are characteristic of a Nightingale ward and even students, who generally have little experience, recognized these characteristics from both the nurses' and the patients' viewpoints.

  19. Virtual Images: Going Through the Looking Glass

    NASA Astrophysics Data System (ADS)

    Mota, Ana Rita; dos Santos, João Lopes

    2017-01-01

    Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is supported by the reflect-view, a useful device in geometrical optics classes because it allows a visual confrontation between virtual images and real objects that seemingly occupy the same region of space.

  20. Learning viewpoint invariant object representations using a temporal coherence principle.

    PubMed

    Einhäuser, Wolfgang; Hipp, Jörg; Eggert, Julian; Körner, Edgar; König, Peter

    2005-07-01

    Invariant object recognition is arguably one of the major challenges for contemporary machine vision systems. In contrast, the mammalian visual system performs this task virtually effortlessly. How can we exploit our knowledge of the biological system to improve artificial systems? Our understanding of the mammalian early visual system has been augmented by the discovery that general coding principles can explain many aspects of neuronal response properties. How can such schemes be transferred to system-level performance? In the present study we train cells on a particular variant of the general principle of temporal coherence, the "stability" objective. These cells are trained on unlabeled real-world images without a teaching signal. We show that after training, the cells form a representation that is largely independent of the viewpoint from which the stimulus is viewed. This finding includes generalization to previously unseen viewpoints. The achieved representation is better suited for viewpoint-invariant object classification than the cells' input patterns. This ability to facilitate viewpoint-invariant classification is maintained even if training and classification take place in the presence of a distractor object, which is also unlabeled. In summary, we show that unsupervised learning using a general coding principle facilitates the classification of real-world objects that are not segmented from the background and that undergo complex, non-isomorphic transformations.
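
    The snippet below gives one common way to write a temporal-coherence ("stability") objective: the mean squared frame-to-frame change of a unit's output, normalized by its variance so that the trivial constant solution is not rewarded. This is a generic slowness-style formulation offered for illustration; it is not necessarily the exact objective used by the authors.

    ```python
    import numpy as np

    def stability_objective(y):
        """Temporal-coherence ("stability") score for a unit's response sequence.

        y : (T,) array of a unit's responses to consecutive video frames.
        Returns the mean squared temporal difference normalized by the response
        variance; minimizing it favors responses that change slowly over time
        without collapsing to a constant.
        """
        temporal_change = np.mean(np.diff(y) ** 2)
        variance = np.var(y) + 1e-12
        return temporal_change / variance

    # Example: a slowly varying response scores lower (better) than a fast one.
    t = np.linspace(0, 1, 200)
    print(stability_objective(np.sin(2 * np.pi * t)))    # slow response
    print(stability_objective(np.sin(40 * np.pi * t)))   # fast response
    ```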

  1. Viewpoints on Medical Image Processing: From Science to Application

    PubMed Central

    Deserno (né Lehmann), Thomas M.; Handels, Heinz; Maier-Hein (né Fritzsche), Klaus H.; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-01-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development, with clear trends toward integrated applications in diagnostics, treatment planning, and treatment. PMID:24078804

  2. Viewpoints on Medical Image Processing: From Science to Application.

    PubMed

    Deserno Né Lehmann, Thomas M; Handels, Heinz; Maier-Hein Né Fritzsche, Klaus H; Mersmann, Sven; Palm, Christoph; Tolxdorff, Thomas; Wagenknecht, Gudrun; Wittenberg, Thomas

    2013-05-01

    Medical image processing provides core innovation for medical imaging. This paper focuses on recent developments from science to applications, analyzing the past fifteen years of the proceedings of the German annual meeting on medical image processing (BVM). Furthermore, some members of the program committee present their personal points of view: (i) multi-modality for imaging and diagnosis, (ii) analysis of diffusion-weighted imaging, (iii) model-based image analysis, (iv) registration of section images, (v) from images to information in digital endoscopy, and (vi) virtual reality and robotics. Medical imaging and medical image computing are seen as a field of rapid development, with clear trends toward integrated applications in diagnostics, treatment planning, and treatment.

  3. A workout for virtual bodybuilders (design issues for embodiment in multi-actor virtual environments)

    NASA Technical Reports Server (NTRS)

    Benford, Steve; Bowers, John; Fahlen, Lennart E.; Greenhalgh, Chris; Snowdon, Dave

    1994-01-01

    This paper explores the issue of user embodiment within collaborative virtual environments. By user embodiment we mean the provision of users with appropriate body images so as to represent them to others and also to themselves. By collaborative virtual environments we mean multi-user virtual reality systems which support cooperative work (although we argue that the results of our exploration may also be applied to other kinds of collaborative systems). The main part of the paper identifies a list of embodiment design issues including: presence, location, identity, activity, availability, history of activity, viewpoint, action point, gesture, facial expression, voluntary versus involuntary expression, degree of presence, reflecting capabilities, manipulating the user's view of others, representation across multiple media, autonomous and distributed body parts, truthfulness and efficiency. Following this, we show how these issues are reflected in our own DIVE and MASSIVE prototype collaborative virtual environments.

  4. Those are Your Legs: The Effect of Visuo-Spatial Viewpoint on Visuo-Tactile Integration and Body Ownership

    PubMed Central

    Pozeg, Polona; Galli, Giulia; Blanke, Olaf

    2015-01-01

    Experiencing a body part as one's own, i.e., body ownership, depends on the integration of multisensory bodily signals (including visual, tactile, and proprioceptive information) with visual top-down signals from peripersonal space. Although it has been shown that the visuo-spatial viewpoint from which the body is seen is an important visual top-down factor for body ownership, different studies have reported diverging results. Furthermore, the role of visuo-spatial viewpoint (sometimes also called first-person perspective) has only been studied for the hands or the whole body, but not for the lower limbs. We thus investigated whether and how leg visuo-tactile integration and leg ownership depend on the visuo-spatial viewpoint from which the legs are seen and on the anatomical similarity of the visual leg stimuli. Using a virtual leg illusion, we tested the strength of visuo-tactile integration of leg stimuli using the crossmodal congruency effect (CCE), as well as the subjective sense of leg ownership (assessed by a questionnaire). Fifteen participants viewed virtual legs or non-corporeal control objects, presented either from their habitual first-person viewpoint or from a viewpoint rotated by 90° (third-person viewpoint), while visuo-tactile stroking was applied between the participants' legs and the virtual legs shown on a head-mounted display. The data show that the first-person visuo-spatial viewpoint significantly boosts visuo-tactile integration as well as the sense of leg ownership. Moreover, the viewpoint-dependent increase in visuo-tactile integration was found only in the conditions in which participants viewed the virtual legs (it was absent for control objects). These results confirm the importance of the first-person visuo-spatial viewpoint for the integration of visuo-tactile stimuli and extend findings from the upper extremity and the trunk to visuo-tactile integration and ownership for the legs. PMID:26635663

  5. Viewpoint and pose in body-form adaptation.

    PubMed

    Sekunova, Alla; Black, Michael; Parkinson, Laura; Barton, Jason J S

    2013-01-01

    Faces and bodies are complex structures, perception of which can play important roles in person identification and inference of emotional state. Face representations have been explored using behavioural adaptation: in particular, studies have shown that face aftereffects show relatively broad tuning for viewpoint, consistent with origin in a high-level structural descriptor far removed from the retinal image. Our goals were to determine first, if body aftereffects also showed a degree of viewpoint invariance, and second if they also showed pose invariance, given that changes in pose create even more dramatic changes in the 2-D retinal image. We used a 3-D model of the human body to generate headless body images, whose parameters could be varied to generate different body forms, viewpoints, and poses. In the first experiment, subjects adapted to varying viewpoints of either slim or heavy bodies in a neutral stance, followed by test stimuli that were all front-facing. In the second experiment, we used the same front-facing bodies in neutral stance as test stimuli, but compared adaptation from bodies in the same neutral stance to adaptation with the same bodies in different poses. We found that body aftereffects were obtained over substantial viewpoint changes, with no significant decline in aftereffect magnitude with increasing viewpoint difference between adapting and test images. Aftereffects also showed transfer across one change in pose but not across another. We conclude that body representations may have more viewpoint invariance than faces, and demonstrate at least some transfer across pose, consistent with a high-level structural description.

  6. Virtual Images: Going through the Looking Glass

    ERIC Educational Resources Information Center

    Mota, Ana Rita; Lopes dos Santos, João

    2017-01-01

    Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is…

  7. Learning viewpoint invariant perceptual representations from cluttered images.

    PubMed

    Spratling, Michael W

    2005-05-01

    In order to perform object recognition, it is necessary to form perceptual representations that are sufficiently specific to distinguish between objects, but that are also sufficiently flexible to generalize across changes in location, rotation, and scale. A standard method for learning perceptual representations that are invariant to viewpoint is to form temporal associations across image sequences showing object transformations. However, this method requires that individual stimuli be presented in isolation and is therefore unlikely to succeed in real-world applications where multiple objects can co-occur in the visual input. This paper proposes a simple modification to the learning method that can overcome this limitation and results in more robust learning of invariant representations.

  8. ViRPET--combination of virtual reality and PET brain imaging

    DOEpatents

    Majewski, Stanislaw; Brefczynski-Lewis, Julie

    2017-05-23

    Various methods, systems and apparatus are provided for brain imaging during virtual reality stimulation. In one example, among others, a system for virtual ambulatory environment brain imaging includes a mobile brain imager configured to obtain positron emission tomography (PET) scans of a subject in motion, and a virtual reality (VR) system configured to provide one or more stimuli to the subject during the PET scans. In another example, a method for virtual ambulatory environment brain imaging includes providing stimulation to a subject through a virtual reality (VR) system; and obtaining a positron emission tomography (PET) scan of the subject while moving in response to the stimulation from the VR system. The mobile brain imager can be positioned on the subject with an array of imaging photodetector modules distributed about the head of the subject.

  9. Implicit Learning of Viewpoint-Independent Spatial Layouts

    PubMed Central

    Tsuchiai, Taiga; Matsumiya, Kazumichi; Kuriki, Ichiro; Shioiri, Satoshi

    2012-01-01

    We usually perceive things in our surroundings as unchanged despite viewpoint changes caused by self-motion. The visual system therefore must have a function to process objects independently of viewpoint. In this study, we examined whether viewpoint-independent spatial layout can be obtained implicitly. For this purpose, we used a contextual cueing effect, a learning effect of spatial layout in visual search displays known to be an implicit effect. We investigated the transfer of the contextual cueing effect to images from a different viewpoint by using visual search displays of 3D objects. For images from a different viewpoint, the contextual cueing effect was maintained with self-motion but disappeared when the display changed without self-motion. This indicates that there is an implicit learning effect in environment-centered coordinates and suggests that the spatial representation of object layouts can be obtained and updated implicitly. We also showed that binocular disparity plays an important role in the layout representations. PMID:22740837

  10. What makes viewpoint-invariant properties perceptually salient?

    PubMed

    Jacobs, David W

    2003-07-01

    It has been noted that many of the perceptually salient image properties identified by the Gestalt psychologists, such as collinearity, parallelism, and good continuation, are invariant to changes in viewpoint. However, I show that viewpoint invariance is not sufficient to distinguish these Gestalt properties; one can define an infinite number of viewpoint-invariant properties that are not perceptually salient. I then show that, in general, the perceptually salient viewpoint-invariant properties are minimal, in the sense that they can be derived using less image information than nonsalient properties require. This finding provides support for the hypothesis that the biological relevance of an image property is determined both by the extent to which it provides information about the world and by the ease with which this property can be computed. [An abbreviated version of this work, including technical details that are avoided in this paper, is contained in K. Boyer and S. Sarker, eds., Perceptual Organization for Artificial Vision Systems (Kluwer Academic, Dordrecht, The Netherlands, 2000), pp. 121-138.]

  11. Designing 3 Dimensional Virtual Reality Using Panoramic Image

    NASA Astrophysics Data System (ADS)

    Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna

    There is high demand to improve the quality of presentation in the knowledge-sharing field in order to keep pace with rapidly growing technology. The need for technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE) for the benefit of our future. A panoramic virtual reality learning environment is essential to help educators overcome the limitations of traditional technical writing lessons. Virtual reality helps users understand better by providing simulations of real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama shows the OGPVE and helps users generate ideas about what they have learned. This paper discusses part of the development of panoramic virtual reality. The important phases in developing a successful panoramic image are image acquisition and image stitching (mosaicing). The combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed.
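
    For the image-stitching phase, a minimal OpenCV sketch is shown below; the file paths are placeholders, and the high-level Stitcher API (available in OpenCV 3.4.2 and later) is used as one possible way to mosaic overlapping photos into a panorama.

    ```python
    import cv2

    # Hypothetical overlapping photos of the plant scene (placeholder paths).
    paths = ["pano_01.jpg", "pano_02.jpg", "pano_03.jpg"]
    images = [cv2.imread(p) for p in paths]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)  # OpenCV >= 3.4.2
    status, panorama = stitcher.stitch(images)
    if status == cv2.Stitcher_OK:
        cv2.imwrite("ogpve_panorama.jpg", panorama)
    else:
        print("stitching failed with status", status)
    ```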

  12. Image quality characteristics for virtual monoenergetic images using dual-layer spectral detector CT: Comparison with conventional tube-voltage images.

    PubMed

    Sakabe, Daisuke; Funama, Yoshinori; Taguchi, Katsuyuki; Nakaura, Takeshi; Utsunomiya, Daisuke; Oda, Seitaro; Kidoh, Masafumi; Nagayama, Yasunori; Yamashita, Yasuyuki

    2018-05-01

    To investigate the image quality characteristics of virtual monoenergetic images compared with conventional tube-voltage images on dual-layer spectral CT (DLCT). Helical scans were performed using a first-generation DLCT scanner, two different sizes of acrylic cylindrical phantoms, and a Catphan phantom. Three different iodine concentrations were inserted into the phantom center. The tube voltage for obtaining virtual monoenergetic images was set to 120 or 140 kVp. Conventional 120- and 140-kVp images and virtual monoenergetic images (40-200 keV) were reconstructed at a slice thickness of 1.0 mm. The CT number and image noise were measured for each iodine concentration and for water on the 120-kVp images and the virtual monoenergetic images. The noise power spectrum (NPS) was also calculated. The iodine CT numbers of the iodinated enhancing materials were similar regardless of phantom size and acquisition method. Compared with the iodine CT numbers of the conventional 120-kVp images, those of the monoenergetic 40-, 50-, and 60-keV images increased by approximately 3.0-, 1.9-, and 1.3-fold, respectively. The image noise values of the virtual monoenergetic images were similar across energies (for example, 24.6 HU at 40 keV and 23.3 HU at 200 keV, obtained at 120 kVp with the 30-cm phantom). The NPS curves of the 70-keV and 120-kVp images for a 1.0-mm slice thickness were similar over the entire frequency range. Virtual monoenergetic images exhibit stable image noise over the entire energy spectrum and an improved contrast-to-noise ratio compared with conventional tube-voltage images on the dual-layer spectral detector CT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  13. Virtual endoscopic imaging of the spine.

    PubMed

    Kotani, Toshiaki; Nagaya, Shigeyuki; Sonoda, Masaru; Akazawa, Tsutomu; Lumawig, Jose Miguel T; Nemoto, Tetsuharu; Koshi, Takana; Kamiya, Koshiro; Hirosawa, Naoya; Minami, Shohei

    2012-05-20

    Prospective trial of virtual endoscopy in spinal surgery. To investigate the utility of virtual endoscopy of the spine in conjunction with spinal surgery. Several studies have described clinical applications of virtual endoscopy to visualize the inside of the bronchi, paranasal sinus, stomach, small intestine, pancreatic duct, and bile duct, but, to date, no study has described the use of virtual endoscopy in the spine. Virtual endoscopy is a realistic 3-dimensional intraluminal simulation of tubular structures that is generated by postprocessing of computed tomographic data sets. Five patients with spinal disease were selected: 2 patients with degenerative disease, 2 patients with spinal deformity, and 1 patient with spinal injury. Virtual endoscopy software allows an observer to explore the spinal canal with a mouse, using multislice computed tomographic data. Our study found that virtual endoscopy of the spine has advantages compared with standard imaging methods because surgeons can noninvasively explore the spinal canal in all directions. Virtual endoscopy of the spine may be useful to surgeons for diagnosis, preoperative planning, and postoperative assessment by obviating the need to mentally construct a 3-dimensional picture of the spinal canal from 2-dimensional computed tomographic scans.

  14. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual reality systems can construct virtual environments that provide an interactive walkthrough experience. Traditionally, the walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach that uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and looking around through 360 degrees. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.

  15. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    PubMed

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth, near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within approximately 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
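
    As a hedged sketch of how a rough "skin" with a prescribed power spectral density might be synthesized, the code below assigns random phases to a target PSD and inverse-FFTs to a 1-D height profile; the PSD model, normalization, and parameter values are illustrative assumptions, not the authors' construction.

    ```python
    import numpy as np

    def rough_profile_from_psd(n_points, spacing_nm, psd_func, seed=0):
        """Generate a 1-D random rough profile whose power spectral density roughly
        follows psd_func (a function of spatial frequency in 1/nm). Returns heights
        in nm; the amplitude normalization here is approximate (sketch only)."""
        rng = np.random.default_rng(seed)
        freqs = np.fft.rfftfreq(n_points, d=spacing_nm)
        amplitude = np.sqrt(psd_func(freqs) / (n_points * spacing_nm))
        phases = rng.uniform(0, 2 * np.pi, freqs.size)
        spectrum = amplitude * np.exp(1j * phases) * n_points
        spectrum[0] = 0.0                      # zero-mean profile
        return np.fft.irfft(spectrum, n=n_points)

    # Example: Lorentzian-like PSD on a 2048-point line with 1 nm spacing (hypothetical).
    profile = rough_profile_from_psd(
        2048, 1.0, lambda f: 1.0 / (1.0 + (f / 0.05) ** 2), seed=42)
    print(profile.std())  # RMS roughness of the synthetic skin, in nm
    ```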

  16. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images.

    PubMed

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S

    2016-01-01

    To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking, nonframe-averaged OCT device and an active eye-tracking, frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images, with voxel resampling and added amplitude deviation repeated 15 times. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtually averaged nonframe-averaged images showed notable improvement and a clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. The virtual averaging method successfully improved non-tracking, nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired with a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects.
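
    The toy sketch below illustrates the virtual-averaging idea in its simplest form, assuming that each of the 15 repetitions consists of a small sub-pixel resampling plus a small amplitude deviation before averaging; it is a loose illustration of the concept, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    def virtual_average(image, n_rep=15, jitter_px=0.5, amp_sigma=0.02, seed=0):
        """Toy virtual averaging for a single OCT B-scan (illustrative only).

        Each repetition resamples the image with a small random sub-pixel shift
        (voxel resampling) and adds a small amplitude deviation; the repetitions
        are then averaged, mimicking frame averaging."""
        rng = np.random.default_rng(seed)
        img = image.astype(np.float64)
        reps = []
        for _ in range(n_rep):
            dy, dx = rng.uniform(-jitter_px, jitter_px, size=2)
            resampled = subpixel_shift(img, (dy, dx), order=1, mode="nearest")
            resampled += rng.normal(0.0, amp_sigma * img.std(), size=img.shape)
            reps.append(resampled)
        return np.mean(reps, axis=0)
    ```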

  17. Virtual Averaging Making Nonframe-Averaged Optical Coherence Tomography Images Comparable to Frame-Averaged Images

    PubMed Central

    Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.

    2016-01-01

    Purpose To develop a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a non-eye-tracking, nonframe-averaged OCT device and an active eye-tracking, frame-averaged OCT device. Virtual averaging was applied to the nonframe-averaged images, with voxel resampling and added amplitude deviation repeated 15 times. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtually averaged nonframe-averaged images showed notable improvement and a clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. Conclusion The virtual averaging method successfully improved non-tracking, nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired with a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in either qualitative or quantitative aspects. PMID:26835180

  18. Development of a virtual speaking simulator using Image Based Rendering.

    PubMed

    Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I

    2002-01-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness of being unrealistic and not individually controllable. To address these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and chroma-keying simultaneously. IBR enables the creation of realistic virtual environments in which photos taken with a digital camera are stitched into panoramic images. The use of chroma-keying places virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
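
    A minimal chroma-key compositing sketch is given below: a green-screen capture of an audience member is combined with a panoramic IBR background of the same size. The HSV thresholds and function name are illustrative assumptions.

    ```python
    import cv2
    import numpy as np

    def chroma_key_composite(foreground_bgr, background_bgr,
                             lower_hsv=(35, 60, 60), upper_hsv=(85, 255, 255)):
        """Replace the green-screen background of a captured audience member with a
        panoramic virtual-environment image (both images must be the same size)."""
        hsv = cv2.cvtColor(foreground_bgr, cv2.COLOR_BGR2HSV)
        green_mask = cv2.inRange(hsv, np.array(lower_hsv, np.uint8),
                                 np.array(upper_hsv, np.uint8))
        person_mask = cv2.bitwise_not(green_mask)
        person = cv2.bitwise_and(foreground_bgr, foreground_bgr, mask=person_mask)
        backdrop = cv2.bitwise_and(background_bgr, background_bgr, mask=green_mask)
        return cv2.add(person, backdrop)
    ```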

  19. A practical implementation of free viewpoint video system for soccer games

    NASA Astrophysics Data System (ADS)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand; however, a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. An envisioned scenario is that a soccer game played during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing; however, we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium using 20 full-HD professional cameras. Second, we have implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (crossing points of the soccer field lines). We extract each player region from the captured images manually. The background region is estimated automatically by observing the chrominance changes of each pixel in the temporal domain. Additionally, we have developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercial TV sets but also for devices such as smartphones. However, a practical system has not yet been completed and our study is still ongoing.
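
    As a hedged stand-in for the temporal background estimation described above, the sketch below takes the per-pixel temporal median over a stack of frames from a fixed camera (the paper observes chrominance changes; the median is used here only as a simple illustration) and derives a rough player mask by thresholding.

    ```python
    import numpy as np

    def estimate_background(frames):
        """Estimate the static background seen by a fixed camera.

        frames : (T, H, W, 3) uint8 array. The per-pixel temporal median is robust
        to players passing through each pixel for only a fraction of the frames."""
        return np.median(frames, axis=0).astype(np.uint8)

    def player_mask(frame, background, threshold=30):
        """Rough foreground (player) mask by thresholding the difference from the
        estimated background; the threshold value is illustrative."""
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16)).max(axis=2)
        return (diff > threshold).astype(np.uint8) * 255
    ```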

  20. Real and Virtual Images Using a Classroom Hologram.

    ERIC Educational Resources Information Center

    Olson, Dale W.

    1992-01-01

    Describes the design and fabrication of a classroom hologram and activities utilizing the hologram to teach the concepts of real and virtual images to high school and introductory college students. Contrasts this method with three other approaches to teach about images. (MDH)

  1. Ultrafast Synthetic Transmit Aperture Imaging Using Hadamard-Encoded Virtual Sources With Overlapping Sub-Apertures.

    PubMed

    Ping Gong; Pengfei Song; Shigao Chen

    2017-06-01

    The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts to implement synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. This method includes three steps: 1) create virtual sources using sub-apertures; 2) encode the virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. The USTA technique was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) were achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
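
    The toy sketch below shows the Hadamard encode/decode step in isolation: each transmit event fires all virtual sources with +/-1 polarities taken from a row of the Hadamard matrix, and the per-source responses are recovered by multiplying with the scaled transpose. Real channel data, sub-aperture formation, and the inter-transmit delays are omitted; names and sizes are illustrative.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def hadamard_encode_decode(source_signals):
        """Toy Hadamard encoding/decoding over virtual-source transmit events.

        source_signals : (N, L) array, one length-L signal per virtual source
                         (N must be a power of two for scipy's Hadamard matrix)."""
        n = source_signals.shape[0]
        H = hadamard(n)                       # entries are +1 / -1
        encoded = H @ source_signals          # what is "received" per transmit event
        decoded = (H.T @ encoded) / n         # recover individual source responses
        return encoded, decoded

    signals = np.random.default_rng(1).normal(size=(4, 16))
    _, recovered = hadamard_encode_decode(signals)
    print(np.allclose(recovered, signals))    # True: lossless decoding in this toy case
    ```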

  2. The virtual microscopy database-sharing digital microscope images for research and education.

    PubMed

    Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael

    2018-02-14

    Over the last 20 years, virtual microscopy has become the predominant mode of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole-slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that gives researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collections of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. Anat Sci Educ. © 2018 American Association of Anatomists.

  3. Two-photon calcium imaging in mice navigating a virtual reality environment.

    PubMed

    Leinweber, Marcus; Zmarz, Pawel; Buchmann, Peter; Argast, Paul; Hübener, Mark; Bonhoeffer, Tobias; Keller, Georg B

    2014-02-20

    In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows chronic measurement of the activity of genetically identified cells during behavior (1-6). Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems that arise in this experimental setup, which we address here, are minimizing brain-motion-related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to perform pupil tracking. With these procedures and resources, it should be possible to convert a conventional two-photon microscope for use in behaving mice.

  4. A Magnifying Glass for Virtual Imaging of Subwavelength Resolution by Transformation Optics.

    PubMed

    Sun, Fei; Guo, Shuwei; Liu, Yichao; He, Sailing

    2018-06-14

    Traditional magnifying glasses can give magnified virtual images only with diffraction-limited resolution, that is, detailed information is lost. Here, a novel magnifying glass based on transformation optics, referred to as a "superresolution magnifying glass" (SMG), is designed, which can produce magnified virtual images with a predetermined magnification factor and resolve subwavelength details (i.e., light sources separated by subwavelength distances can be resolved). Based on theoretical calculations and reductions, a metallic plate structure that realizes the reduced SMG at microwave frequencies is proposed and fabricated, and its good performance is verified by both numerical simulations and experimental results. The function of the SMG is to create a superresolution virtual image, unlike traditional superresolution imaging devices that create real images. The proposed SMG will create a new branch of superresolution imaging technology. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Image-based 3D reconstruction and virtual environmental walk-through

    NASA Astrophysics Data System (ADS)

    Sun, Jifeng; Fang, Lixiong; Luo, Ying

    2001-09-01

    We present a 3D reconstruction method that combines geometry-based modeling, image-based modeling, and rendering techniques. The first component is an interactive geometry modeling method that recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through virtual space, and then design and implement a high-performance multi-threaded wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.

  6. The Multimission Image Processing Laboratory's virtual frame buffer interface

    NASA Technical Reports Server (NTRS)

    Wolfe, T.

    1984-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. Several machine-dependent graphics standards such as ANSI Core and GKS are available, but none of them is adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, and sample programs. It is intended to be used by application programmers and by system programmers who are adding new frame buffers to a system.
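    The virtual frame buffer idea, a single generic interface that applications call with device-specific code hidden behind it, can be sketched in a few lines. The following is a hypothetical Python analogue for illustration only (the method names and the in-memory "driver" are invented), not the FORTRAN interface the document defines:

```python
from abc import ABC, abstractmethod
import numpy as np

class VirtualFrameBuffer(ABC):
    """Generic frame buffer: applications program against this interface,
    device-specific drivers implement it."""

    @abstractmethod
    def write_image(self, plane: int, image: np.ndarray) -> None: ...

    @abstractmethod
    def read_image(self, plane: int) -> np.ndarray: ...

    @abstractmethod
    def set_lut(self, plane: int, lut: np.ndarray) -> None: ...

class InMemoryFrameBuffer(VirtualFrameBuffer):
    """Trivial driver backed by numpy arrays, standing in for real hardware."""
    def __init__(self, planes: int, height: int, width: int):
        self._planes = np.zeros((planes, height, width), dtype=np.uint8)
        self._luts = [np.arange(256, dtype=np.uint8) for _ in range(planes)]

    def write_image(self, plane, image):
        self._planes[plane] = image

    def read_image(self, plane):
        # Return the displayed image, i.e. pixel values passed through the LUT.
        return self._luts[plane][self._planes[plane]]

    def set_lut(self, plane, lut):
        self._luts[plane] = np.asarray(lut, dtype=np.uint8)

fb = InMemoryFrameBuffer(planes=2, height=4, width=4)
fb.write_image(0, np.full((4, 4), 10, dtype=np.uint8))
fb.set_lut(0, 255 - np.arange(256, dtype=np.uint8))   # inverting LUT
print(fb.read_image(0))                               # all 245
```

    The point of the abstraction is exactly the one the document makes: application code touches only the generic interface, so adding a new frame buffer means writing one new driver class rather than changing every application.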

  7. Image-based path planning for automated virtual colonoscopy navigation

    NASA Astrophysics Data System (ADS)

    Hong, Wei

    2008-03-01

    Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening that reconstructs three-dimensional models of the colon using computed tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract the colon centerline, time-consuming pre-processing algorithms such as colon segmentation, distance transformation, or topological thinning must be performed before the fly-through navigation. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation that requires no pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position on 2D axial CT images. A wide-angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions, are extracted from the depth images. The camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase user comfort during fly-through navigation. Moreover, because of the efficiency of our path planning and rendering algorithms, our VC fly-through navigation system can still guarantee 30 FPS.
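    The navigation step can be illustrated with a minimal sketch, assuming a depth image and per-pixel ray directions are already available from the fisheye rendering. The heuristic below (steer a fraction of the free depth toward the deepest visible region) is a simplified stand-in for the paper's safe-region/target-region logic; all names and constants are assumptions.

```python
import numpy as np

def next_camera_step(depth, directions, step_fraction=0.1):
    """Pick the next camera displacement from a depth image.

    depth      : (H, W) distances along each viewing ray
    directions : (H, W, 3) unit ray directions for each pixel
    Returns a displacement vector pointing toward the farthest visible
    region of the lumen (a crude stand-in for the target region).
    """
    # "Target region": pixels whose depth is close to the maximum, i.e. the
    # direction in which the lumen extends farthest.
    target_mask = depth > 0.9 * depth.max()

    # Steer toward the mean direction of the target region, scaled by a
    # fraction of the available free depth so we do not hit the colon wall.
    mean_dir = directions[target_mask].mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)
    return step_fraction * depth.max() * mean_dir

# Toy example: a synthetic depth image that is deepest toward the upper left.
h, w = 64, 64
yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
depth = 50.0 - 20.0 * np.hypot(yy + 0.5, xx + 0.5)
dirs = np.dstack([xx, yy, np.ones_like(xx)])
dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)

print(next_camera_step(depth, dirs))
```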

  8. An integrated orthognathic surgery system for virtual planning and image-guided transfer without intermediate splint.

    PubMed

    Kim, Dae-Seung; Woo, Sang-Yoon; Yang, Hoon Joo; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon Jung; Yi, Won-Jin

    2014-12-01

    Accurate surgical planning, and accurate transfer of that plan, are very important in orthognathic surgery for achieving a successful outcome. Conventionally, paper surgery is performed based on a 2D cephalometric radiograph, and the results are expressed using cast models and an articulator. We developed an integrated orthognathic surgery system with 3D virtual planning and image-guided transfer. The maxillary surgery of orthognathic patients was planned virtually, and the planning results were transferred to the cast model by image guidance. During virtual planning, the displacement of the reference points was checked at each step against the displacement obtained from conventional paper surgery. The results of virtual surgery were transferred directly to the physical cast models through image guidance. The root mean square (RMS) difference between virtual surgery and conventional model surgery was 0.75 ± 0.51 mm for 12 patients. The RMS difference between virtual surgery and the image-guidance results was 0.78 ± 0.52 mm, which was not significantly different from that of conventional model surgery. The image-guided orthognathic surgery system integrated with virtual planning can replace physical model surgical planning and enables direct transfer of the virtual plan without the need for an intermediate splint. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  9. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    PubMed

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), with its flexible selection of viewing direction and viewpoint and its applications in remote surveillance, remote education, and elsewhere, has been perceived as the direction of next-generation video technologies and has drawn the attention of a wide range of researchers. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in a "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. Existing assessment metrics, however, do not reflect human judgments faithfully, mainly because of the geometric distortions generated by DIBR. To this end, this paper proposes a novel referenceless quality metric for DIBR-synthesized images using an autoregression (AR)-based local image description. It was found that, after AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method compared with prevailing full-, reduced- and no-reference models.
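    The core idea, predicting each pixel from its neighbourhood with an autoregressive model and treating the prediction residual as a geometric-distortion map, can be sketched as follows. This toy version fits a single global AR model and omits the local windowing and saliency weighting of the paper; the array sizes and the synthetic "warp" are assumptions.

```python
import numpy as np

def ar_prediction_error(img):
    """Predict each pixel from its 8 neighbours with one least-squares AR model
    and return the absolute prediction-error map (a global toy version of the
    paper's locally fitted AR description)."""
    img = img.astype(np.float64)
    h, w = img.shape
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    core = img[1:h-1, 1:w-1].ravel()
    cols = [img[1+dy:h-1+dy, 1+dx:w-1+dx].ravel() for dy, dx in offsets]
    A = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(A, core, rcond=None)
    err = np.zeros_like(img)
    err[1:h-1, 1:w-1] = np.abs(core - A @ coeffs).reshape(h - 2, w - 2)
    return err

# Toy check: a smooth ramp is predicted almost perfectly, while a ramp with a
# geometric "warp" (a shifted block, mimicking DIBR distortion) is not.
ramp = np.tile(np.arange(64.0), (64, 1))
warped = ramp.copy()
warped[20:40, 20:40] = np.roll(ramp[20:40, 20:40], 5, axis=1)
print("smooth image error:", ar_prediction_error(ramp).mean())
print("warped image error:", ar_prediction_error(warped).mean())
```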

  10. Screening of a virtual mirror-image library of natural products.

    PubMed

    Noguchi, Taro; Oishi, Shinya; Honda, Kaori; Kondoh, Yasumitsu; Saito, Tamio; Ohno, Hiroaki; Osada, Hiroyuki; Fujii, Nobutaka

    2016-06-08

    We established facile access to an unexplored mirror-image library of chiral natural product derivatives using d-protein technology. In this process, two chemical syntheses of mirror-image substances, including a target protein and hit compound(s), allow lead discovery from a virtual mirror-image library without the synthesis of numerous mirror-image compounds.

  11. Fat ViP MRI: Virtual Phantom Magnetic Resonance Imaging of water-fat systems.

    PubMed

    Salvati, Roberto; Hitti, Eric; Bellanger, Jean-Jacques; Saint-Jalmes, Hervé; Gambarota, Giulio

    2016-06-01

    Virtual Phantom Magnetic Resonance Imaging (ViP MRI) is a method for generating reference signals on MR images using external radiofrequency (RF) signals. The aim of this study was to assess the feasibility of using ViP MRI to generate complex-data images of phantoms mimicking water-fat systems. Various numerical phantoms with given fat fraction, T2* and field-map values were designed. The k-space of the numerical phantoms was converted into RF signals to generate the virtual phantoms. MRI experiments were performed at 4.7 T using a multi-gradient-echo sequence on virtual and physical phantoms, with simultaneous data acquisition for both. Decomposition of the water and fat signals was performed using a complex-based water-fat separation algorithm. Overall, good agreement was observed between the fat fraction, T2* and phase-map values of the virtual and numerical phantoms. In particular, fat fractions of 10.5 ± 0.1% (vs 10% in the numerical phantom), 20.3 ± 0.1% (vs 20%) and 30.4 ± 0.1% (vs 30%) were obtained in the virtual phantoms. The ViP MRI method thus allows imaging phantoms to be generated that i) mimic water-fat systems and ii) can be analyzed with water-fat separation algorithms based on complex data. Copyright © 2016 Elsevier Inc. All rights reserved.
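    Once the water and fat signals have been separated, the reported fat fraction follows directly from them. A minimal sketch of that final computation, with made-up voxel values standing in for a 20% phantom region (the separation algorithm itself is not reproduced here):

```python
import numpy as np

def fat_fraction(water, fat):
    """Signal fat fraction (percent) from water/fat separated complex images."""
    w, f = np.abs(water), np.abs(fat)
    return 100.0 * f / (w + f + 1e-12)

# Toy voxel values mimicking a 20% fat-fraction phantom region.
water = np.full((8, 8), 0.8) * np.exp(1j * 0.3)
fat = np.full((8, 8), 0.2) * np.exp(1j * 0.3)
print(fat_fraction(water, fat).mean())   # ~20.0
```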

  12. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as a likely vehicle for 3D television because it avoids adverse psychological viewing effects. To create truly engaging three-dimensional television programmes, a virtual studio is required that performs the tasks of generating, editing and integrating 3D content involving both virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The method for calculating depth from disparity, and the multiple-baseline method used to improve the precision of the depth estimation, are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
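    Depth-from-disparity with a colour SSD cost can be sketched in a few lines. The brute-force block matcher below, and the pinhole relation depth = f * B / d, are generic illustrations under assumed parameters rather than the paper's multiple-baseline implementation:

```python
import numpy as np

def ssd_disparity(left, right, block=5, max_disp=16):
    """Brute-force colour-SSD block matching along horizontal scanlines.
    left, right: (H, W, 3) images; returns an integer disparity map."""
    h, w, _ = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y-r:y+r+1, x-r:x+r+1].astype(np.float64)
            costs = [np.sum((patch - right[y-r:y+r+1, x-d-r:x-d+r+1]) ** 2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

def disparity_to_depth(disp, focal_px, baseline_mm):
    """Pinhole relation: depth = f * B / d (undefined where d == 0)."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_mm / disp, np.inf)

# Toy stereo pair: a textured scene shifted by 4 pixels between the two views.
rng = np.random.default_rng(1)
right = rng.random((40, 60, 3))
left = np.roll(right, 4, axis=1)          # every pixel shifted by 4 px => disparity 4
d = ssd_disparity(left, right, max_disp=8)
print("median disparity:", np.median(d[2:-2, 10:-2]))                   # expect 4
print("depth for d=4, f=800 px, B=2 mm:", disparity_to_depth(np.array([4]), 800, 2.0)[0])
```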

  13. 2D-3D registration using gradient-based MI for image guided surgery systems

    NASA Astrophysics Data System (ADS)

    Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James

    2011-03-01

    Registration of preoperative CT data to intra-operative video images is necessary not only to compare the postoperative outcome of the vocal fold with the preplanned shape but also to provide image guidance for the fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to the 2D endoscopic images by finding the corresponding viewpoint between the real camera of the endoscope and the virtual camera of the CT rendering. Even though mutual information has been used successfully to register different imaging modalities, it is difficult to robustly register the CT-rendered image to the endoscopic image because of the varying light patterns and shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as the original images, assigning more weight to high-gradient regions. This emphasizes the vocal fold and allows robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme, which yields a result that is less sensitive to local maxima. To validate the registration accuracy, we evaluated the sensitivity to the initial viewpoint of the preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that the conditional multi-resolution scheme led to more accurate registration than a single-resolution scheme.
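    A gradient-weighted mutual-information cost driven by a downhill simplex (Nelder-Mead) search can be sketched as follows. This is a simplified, translation-only 2D analogue of the viewpoint search described above; the particular weighting scheme, test pattern and optimizer settings are assumptions made for illustration:

```python
import numpy as np
from scipy import ndimage, optimize

def gradient_weighted_mi(a, b, bins=32):
    """Mutual information of two images, with each pixel pair weighted by the
    product of gradient magnitudes (a rough stand-in for gradient-based MI)."""
    ga = np.hypot(*np.gradient(a.astype(float)))
    gb = np.hypot(*np.gradient(b.astype(float)))
    w = (ga * gb).ravel() + 1e-6
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins, weights=w)
    pxy = hist / hist.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def register_translation(fixed, moving):
    """Find the (dy, dx) translation maximising gradient-weighted MI with a
    downhill-simplex (Nelder-Mead) search."""
    cost = lambda t: -gradient_weighted_mi(fixed, ndimage.shift(moving, t, order=1))
    res = optimize.minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead",
                            options={"initial_simplex": [[0, 0], [3, 0], [0, 3]],
                                     "xatol": 0.05, "fatol": 1e-4})
    return res.x

# Toy example: recover a known shift of a smooth test pattern.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.sin(xx / 5.0) + np.cos(yy / 7.0)
moving = ndimage.shift(fixed, (2.0, -1.5), order=1)
print(register_translation(fixed, moving))   # should be close to (-2.0, 1.5)
```

    Weighting each intensity pair by the product of gradient magnitudes makes the joint histogram, and hence the cost, dominated by edge regions, which is the intuition behind emphasizing the vocal fold boundary rather than the slowly varying surface illumination.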

  14. Virtual plane-wave imaging via Marchenko redatuming

    NASA Astrophysics Data System (ADS)

    Meles, Giovanni Angelo; Wapenaar, Kees; Thorbecke, Jan

    2018-04-01

    Marchenko redatuming is a novel scheme for retrieving up- and down-going Green's functions in an unknown medium. The Marchenko equations are based on reciprocity theorems and are derived under the assumption that functions exist which exhibit space-time focusing properties once injected into the subsurface. In contrast to interferometry, but similarly to standard migration methods, Marchenko redatuming only requires an estimate of the direct wave from the virtual source (or to the virtual receiver), illumination from only one side of the medium, and no physical sources (or receivers) inside the medium. In this contribution we consider a different time-focusing condition within the framework of Marchenko redatuming that leads to the retrieval of virtual plane-wave responses. As a result, it allows multiple-free imaging using only one-dimensional sampling of the targeted model, at a fraction of the computational cost of standard Marchenko schemes. The potential of the new method is demonstrated on 2D synthetic models.

  15. Virtually distortion-free imaging system for large field, high resolution lithography

    DOEpatents

    Hawryluk, A.M.; Ceglio, N.M.

    1993-01-05

    Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position.

  16. Virtually distortion-free imaging system for large field, high resolution lithography

    DOEpatents

    Hawryluk, Andrew M.; Ceglio, Natale M.

    1993-01-01

    Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position.

  17. Comparison of virtual monoenergetic and polyenergetic images reconstructed from dual-layer detector CT angiography of the head and neck.

    PubMed

    Neuhaus, Victor; Große Hokamp, Nils; Abdullayev, Nuran; Maus, Volker; Kabbasch, Christoph; Mpotsaris, Anastasios; Maintz, David; Borggrefe, Jan

    2018-03-01

    To compare the image quality of virtual monoenergetic images and polyenergetic images reconstructed from dual-layer detector CT angiography (DLCTA). Thirty patients who underwent DLCTA of the head and neck were retrospectively identified, and polyenergetic as well as virtual monoenergetic images (40 to 120 keV) were reconstructed. Signals (± SD) of the cervical and cerebral vessels, the lateral pterygoid muscle and the air surrounding the head were measured to calculate CNR and SNR. In addition, subjective image quality was assessed using a 5-point Likert scale. Student's t-test and the Wilcoxon test were used to determine statistical significance. Although noise increased at lower keV, CNR (p < 0.02) and SNR (p > 0.05) of the cervical, petrous and intracranial vessels were improved in virtual monoenergetic images at 40 keV compared to polyenergetic images, and virtual monoenergetic images at 45 keV were also rated superior regarding vascular contrast, assessment of arteries close to the skull base and small arterial branches (p < 0.0001 each). Compared to polyenergetic images, virtual monoenergetic images reconstructed from DLCTA at low energies of 40 to 45 keV improve the objective and subjective image quality of extra- and intracranial vessels and facilitate assessment of vessels close to the skull base and of small arterial branches. • Virtual monoenergetic images greatly improve attenuation, while noise only slightly increases. • Virtual monoenergetic images show superior contrast-to-noise ratios compared to polyenergetic images. • Virtual monoenergetic images significantly improve image quality at low keV.
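    The objective part of such a comparison reduces to ROI statistics. A small sketch of the usual definitions (SNR = vessel signal / air noise, CNR = (vessel - muscle) / air noise), with hypothetical HU values chosen only to illustrate why low-keV images can win on CNR despite their higher noise:

```python
import numpy as np

def roi_stats(vessel_hu, muscle_hu, air_hu):
    """SNR and CNR from ROI measurements, as commonly defined for CTA:
    SNR = S_vessel / sigma_air,  CNR = (S_vessel - S_muscle) / sigma_air."""
    noise = np.std(air_hu)
    snr = np.mean(vessel_hu) / noise
    cnr = (np.mean(vessel_hu) - np.mean(muscle_hu)) / noise
    return snr, cnr

rng = np.random.default_rng(0)
# Hypothetical ROI samples (HU): attenuation rises faster than noise at 40 keV.
for label, vessel, noise_sd in [("polyenergetic", 350, 12), ("40 keV mono", 900, 30)]:
    v = rng.normal(vessel, noise_sd, 200)      # vessel ROI
    m = rng.normal(60, noise_sd, 200)          # muscle ROI
    a = rng.normal(-1000, noise_sd, 200)       # air ROI (noise estimate)
    snr, cnr = roi_stats(v, m, a)
    print(f"{label:14s} SNR={snr:6.1f}  CNR={cnr:6.1f}")
```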

  18. Solar Resource Assessment with Sky Imagery and a Virtual Testbed for Sky Imager Solar Forecasting

    NASA Astrophysics Data System (ADS)

    Kurtz, Benjamin Bernard

    In recent years, ground-based sky imagers have emerged as a promising tool for forecasting solar energy on short time scales (0 to 30 minutes ahead). Following the development of sky imager hardware and algorithms at UC San Diego, we present three new or improved algorithms for sky imager forecasting and forecast evaluation. First, we present an algorithm for measuring irradiance with a sky imager. Sky imager forecasts are often used in conjunction with other instruments for measuring irradiance, so this has the potential to decrease instrumentation costs and logistical complexity. In particular, the forecast algorithm itself often relies on knowledge of the current irradiance which can now be provided directly from the sky images. Irradiance measurements are accurate to within about 10%. Second, we demonstrate a virtual sky imager testbed that can be used for validating and enhancing the forecast algorithm. The testbed uses high-quality (but slow) simulations to produce virtual clouds and sky images. Because virtual cloud locations are known, much more advanced validation procedures are possible with the virtual testbed than with measured data. In this way, we are able to determine that camera geometry and non-uniform evolution of the cloud field are the two largest sources of forecast error. Finally, with the assistance of the virtual sky imager testbed, we develop improvements to the cloud advection model used for forecasting. The new advection schemes are 10-20% better at short time horizons.

  19. Plot of virtual surgery based on CT medical images

    NASA Astrophysics Data System (ADS)

    Song, Limei; Zhang, Chunbo

    2009-10-01

    Although a CT device provides the doctors with a series of 2D medical images, it is difficult to give them a vivid view from which to understand the diseased part. In order to help doctors plan surgery, a virtual surgery system has been developed based on three-dimensional visualization techniques. After the diseased part of the patient is scanned by the CT device, a full 3D view is built using the system's 3D reconstruction module. Cutting away a part is the function doctors use most often in real surgery. A curve is created in 3D space, and points can be added to it automatically or manually; adjusting the positions of these points changes the shape of the cut curve. If the result of the cut is unsatisfactory, all operations can be cancelled and restarted. This flexible virtual surgery brings greater convenience to the real surgery. In contrast to existing medical image processing systems, a virtual surgery module is added, and the virtual surgery can be rehearsed as many times as needed until the doctors are confident enough to start the real surgery. Because the virtual surgery system provides more 3D information about the diseased part, difficult surgeries can also be discussed by expert doctors in different cities via the Internet. This is a useful way to understand the character of the diseased part and thus reduce surgical risk.

  20. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    PubMed

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD Patient software. All 3D CT models were exported in Stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and the bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors exceed 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allow the 3D virtual head to be an accurate, realistic, and widely applicable tool, and are of great benefit to virtual face modeling.

  1. Virtual Libraries: Meeting the Corporate Challenge.

    ERIC Educational Resources Information Center

    DiMattia, Susan S.; Blumenstein, Lynn C.

    1999-01-01

    Discusses virtual libraries in corporate settings from the viewpoint of five special librarians. Highlights include competitive advantage, space and related collection issues, the use of technology, corporate culture, information overload, library vulnerability and downsizing, and the importance of service over format. (LRW)

  2. Effective Replays and Summarization of Virtual Experiences

    PubMed Central

    Ponto, Kevin; Kohlmann, Joe; Gleicher, Michael

    2012-01-01

    Direct replays of a user's experience in a virtual environment are difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods. PMID:22402688

  3. Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.

    PubMed

    Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun

    2017-11-01

    Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  4. Functional imaging of hippocampal place cells at cellular resolution during virtual navigation

    PubMed Central

    Dombeck, Daniel A.; Harvey, Christopher D.; Tian, Lin; Looger, Loren L.; Tank, David W.

    2010-01-01

    Spatial navigation is a widely employed behavior in rodent studies of neuronal circuits underlying cognition, learning and memory. In vivo microscopy combined with genetically-encoded indicators provides important new tools to study neuronal circuits, but has been technically difficult to apply during navigation. We describe methods to image the activity of hippocampal CA1 neurons with sub-cellular resolution in behaving mice. Neurons expressing the genetically encoded calcium indicator GCaMP3 were imaged through a chronic hippocampal window. Head-fixed mice performed spatial behaviors within a setup combining a virtual reality system and a custom built two-photon microscope. Populations of place cells were optically identified, and the correlation between the location of their place fields in the virtual environment and their anatomical location in the local circuit was measured. The combination of virtual reality and high-resolution functional imaging should allow for a new generation of studies to probe neuronal circuit dynamics during behavior. PMID:20890294

  5. Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.

    2014-01-01

    Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an

  6. Embodied collaboration support system for 3D shape evaluation in virtual space

    NASA Astrophysics Data System (ADS)

    Okubo, Masashi; Watanabe, Tomio

    2005-12-01

    Collaboration mainly consists of two tasks: one is each partner's individual task, and the other is communication between the partners. Both are important objectives for any collaboration support system. In this paper, a collaboration support system for 3D shape evaluation in virtual space is proposed on the basis of studies of both 3D shape evaluation and communication support in virtual space. The proposed system provides two viewpoints, one for each task. One is a viewpoint behind the user's own avatar, for smooth communication; the other is the avatar's-eye viewpoint, for 3D shape evaluation. Switching between the viewpoints satisfies the task conditions for both 3D shape evaluation and communication. The system basically consists of a PC, an HMD and magnetic sensors, and users can share embodied interaction by observing the interaction between their avatars in virtual space. However, the HMD and magnetic sensors worn by the users can restrict nonverbal communication. We therefore compensate for the loss of the partner's avatar's nodding by introducing the speech-driven embodied interactive actor InterActor. Sensory evaluation by paired comparison of 3D shapes in collaborative situations in virtual and real space, together with a questionnaire, was performed. The results demonstrate the effectiveness of InterActor's nodding in the collaborative situation.

  7. Virtual environments from panoramic images

    NASA Astrophysics Data System (ADS)

    Chapman, David P.; Deacon, Andrew

    1998-12-01

    A number of recent projects have demonstrated the utility of Internet-enabled image databases for the documentation of complex, inaccessible and potentially hazardous environments typically encountered in the petrochemical and nuclear industries. Unfortunately, machine vision and image processing techniques have not, to date, enabled the automatic extraction of geometrical data from such images, and thus 3D CAD modeling remains an expensive and laborious manual activity. Recent developments in panoramic image capture and presentation offer an alternative intermediate deliverable which, in turn, offers some of the benefits of a 3D model at a fraction of the cost. Panoramic image display tools such as Apple's QuickTime VR (QTVR) and Live Spaces RealVR provide compelling and accessible digital representations of the real world and justifiably claim to 'put the reality in Virtual Reality.' This paper will demonstrate how such technologies can be customized, extended and linked to facility management systems delivered over a corporate intranet to enable end users to become familiar with remote sites and extract simple dimensional data. In addition, strategies for the integration of such images with documents gathered from 2D or 3D CAD and Process and Instrumentation Diagrams (P&IDs) will be described, as will techniques for precise 'As-Built' modeling using the calibrated images from which the panoramas have been derived, and the use of textures from these images to increase the realism of rendered scenes. A number of case studies relating to both nuclear and process engineering will demonstrate the extent to which such solutions are scalable to the very large volumes of image data required to fully document the large, complex facilities typical of these industry sectors.

  8. Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)

    2003-01-01

    A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.

  9. Two-photon calcium imaging during fictive navigation in virtual environments.

    PubMed

    Ahrens, Misha B; Huang, Kuo Hua; Narayan, Sujatha; Mensh, Brett D; Engert, Florian

    2013-01-01

    A full understanding of nervous system function requires recording from large populations of neurons during naturalistic behaviors. Here we enable paralyzed larval zebrafish to fictively navigate two-dimensional virtual environments while we record optically from many neurons with two-photon imaging. Electrical recordings from motor nerves in the tail are decoded into intended forward swims and turns, which are used to update a virtual environment displayed underneath the fish. Several behavioral features-such as turning responses to whole-field motion and dark avoidance-are well-replicated in this virtual setting. We readily observed neuronal populations in the hindbrain with laterally selective responses that correlated with right or left optomotor behavior. We also observed neurons in the habenula, pallium, and midbrain with response properties specific to environmental features. Beyond single-cell correlations, the classification of network activity in such virtual settings promises to reveal principles of brainwide neural dynamics during behavior.

  10. Face recognition based on symmetrical virtual image and original training image

    NASA Astrophysics Data System (ADS)

    Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao

    2018-02-01

    In face representation-based classification methods, a high recognition rate can be obtained if a face has enough available training samples. However, in practical applications we only have limited training samples. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and the corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the combination of the original training and mirror samples might not represent the test sample well. To tackle this problem, in this paper we propose a novel method that generates virtual samples by averaging each original training sample with its corresponding mirror sample. The original training samples and the virtual samples are then combined to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges of varying poses, facial expressions and illumination in the original face images.
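    The virtual-sample generation step described above is simple enough to sketch directly: mirror each training image left-right, average it with the original, and append the result to the training set. The array shapes and values below are toy assumptions.

```python
import numpy as np

def augment_with_symmetric_virtual_samples(train_images):
    """Given original training face images (n, h, w), append the virtual
    samples obtained by averaging each image with its horizontal mirror."""
    mirrored = train_images[:, :, ::-1]
    virtual = 0.5 * (train_images + mirrored)
    return np.concatenate([train_images, virtual], axis=0)

# Toy 2x4x4 "face" set: the virtual samples are exactly left-right symmetric.
faces = np.arange(32, dtype=float).reshape(2, 4, 4)
augmented = augment_with_symmetric_virtual_samples(faces)
print(augmented.shape)                                     # (4, 4, 4)
print(np.allclose(augmented[2], augmented[2][:, ::-1]))    # True: symmetric
```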

  11. Virtual monochromatic imaging in dual-source and dual-energy CT for visualization of acute ischemic stroke

    NASA Astrophysics Data System (ADS)

    Hara, Hidetake; Muraishi, Hiroshi; Matsuzawa, Hiroki; Inoue, Toshiyuki; Nakajima, Yasuo; Satoh, Hitoshi; Abe, Shinji

    2015-07-01

    We have recently developed a phantom that simulates acute ischemic stroke. We attempted to visualize an acute-stage cerebral infarction by using dual-energy computed tomography (DECT) to obtain virtual monochromatic images of this phantom. Virtual monochromatic images were created at energies from 40 to 100 keV in steps of 10 keV and from 60 to 80 keV in steps of 1 keV, under three tube-voltage conditions with tin (Sn) filters. Calculation of CNR values allowed us to evaluate the visualization of the acute-stage cerebral infarction. The CNR value of the virtual monochromatic image was highest at 68 keV under 80 kV / Sn 140 kV, at 72 keV under 100 kV / Sn 140 kV, and at 67 keV under 140 kV / 80 kV. The CNR values of virtual monochromatic images at energies between 65 and 75 keV were significantly higher than those of all other created images. The optimal conditions for visualizing acute ischemic stroke could therefore be determined.

  12. Photorealistic scene presentation: virtual video camera

    NASA Astrophysics Data System (ADS)

    Johnson, Michael J.; Rogers, Joel Clark W.

    1994-07-01

    This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
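    The snapshot-warping step, aligning a stored out-the-window snapshot with the current viewpoint, amounts to resampling the image under a pose-dependent homography. A minimal sketch with an invented test image and made-up homography coefficients (not the paper's warping model):

```python
import numpy as np
from skimage import transform

# Reference snapshot (toy image): a bright "runway" patch on a dark background.
snapshot = np.zeros((240, 320))
snapshot[100:180, 120:200] = 1.0

# Homography approximating the small viewpoint change between the stored
# snapshot pose and the current aircraft pose (coefficients are made up).
H = np.array([[1.05, 0.02, -8.0],
              [0.00, 1.05, -5.0],
              [0.00, 0.00, 1.0]])
tform = transform.ProjectiveTransform(matrix=H)
warped = transform.warp(snapshot, tform.inverse, output_shape=snapshot.shape)
print(warped.shape, warped.max() > 0.5)   # runway patch survives the warp
```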

  13. Two-photon calcium imaging during fictive navigation in virtual environments

    PubMed Central

    Ahrens, Misha B.; Huang, Kuo Hua; Narayan, Sujatha; Mensh, Brett D.; Engert, Florian

    2013-01-01

    A full understanding of nervous system function requires recording from large populations of neurons during naturalistic behaviors. Here we enable paralyzed larval zebrafish to fictively navigate two-dimensional virtual environments while we record optically from many neurons with two-photon imaging. Electrical recordings from motor nerves in the tail are decoded into intended forward swims and turns, which are used to update a virtual environment displayed underneath the fish. Several behavioral features—such as turning responses to whole-field motion and dark avoidance—are well-replicated in this virtual setting. We readily observed neuronal populations in the hindbrain with laterally selective responses that correlated with right or left optomotor behavior. We also observed neurons in the habenula, pallium, and midbrain with response properties specific to environmental features. Beyond single-cell correlations, the classification of network activity in such virtual settings promises to reveal principles of brainwide neural dynamics during behavior. PMID:23761738

  14. V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis

    NASA Astrophysics Data System (ADS)

    Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.

    2011-09-01

    In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL), being developed at the Indian Institute of Technology Bombay, is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by the Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure for performing each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, a bibliography, links to useful internet resources, and user feedback. Users can upload their own images for the experiments and can also reuse outputs of one experiment in another where applicable. Experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be opened to selected educational institutions in India for evaluation.

  15. Using a visual discrimination model for the detection of compression artifacts in virtual pathology images.

    PubMed

    Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S

    2011-02-01

    A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
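    For context, the two conventional distortion metrics the VDM is compared against, PSNR and SSIM, are straightforward to compute. The sketch below uses grey-level quantisation as a stand-in for JPEG 2000 compression of a tile (the actual codec and the VDM itself are not reproduced here):

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)   # toy tile

# Stand-in for a lossy-compressed tile: quantise the grey levels.
for step in (2, 8, 32):
    compressed = (original // step) * step
    psnr = peak_signal_noise_ratio(original, compressed, data_range=255)
    ssim = structural_similarity(original, compressed, data_range=255)
    print(f"quantisation step {step:2d}: PSNR={psnr:5.1f} dB  SSIM={ssim:.3f}")
```

    The study's point is that, at the visually lossless threshold, such pixel-fidelity scores vary much more across tissue content than the VDM's JND metric does, which is why the JND metric is the better rate-control target.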

  16. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.

  17. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    PubMed

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  18. PROGRESS AND PROBLEMS FROM THE SOCIAL SCIENTIST'S VIEWPOINT.

    ERIC Educational Resources Information Center

    CLARK, KENNETH

    REVIEWED FROM A SOCIAL SCIENTIST'S VIEWPOINT IS THE EFFECT OF THE SUPREME COURT'S 1954 BROWN DECISION ON PATTERNS OF DEFACTO SEGREGATION IN NORTHERN COMMUNITIES. THE DECISION HAD PROFOUND EFFECTS ON DE FACTO SEGREGATION, PARTICULARLY IN RELATION TO THE DEMOCRATIC IDEALS OF EQUALITY AND TO THE DAMAGED SELF-IMAGE CREATED BY SEGREGATED SCHOOLS. IT…

  19. C-arm positioning using virtual fluoroscopy for image-guided surgery

    NASA Astrophysics Data System (ADS)

    de Silva, T.; Punnoose, J.; Uneri, A.; Goerres, J.; Jacobson, M.; Ketcha, M. D.; Manbachi, A.; Vogt, S.; Kleinszig, G.; Khanna, A. J.; Wolinsky, J.-P.; Osgood, G.; Siewerdsen, J. H.

    2017-03-01

    Introduction: Fluoroscopically guided procedures often involve repeated acquisitions for C-arm positioning at the cost of radiation exposure and time in the operating room. A virtual fluoroscopy system is reported with the potential of reducing dose and time spent in C-arm positioning, utilizing three key advances: robust 3D-2D registration to a preoperative CT; real-time forward projection on GPU; and a motorized mobile C-arm with encoder feedback on C-arm orientation. Method: Geometric calibration of the C-arm was performed offline in two rotational directions (orbit α, orbit β). Patient registration was performed using image-based 3D-2D registration with an initially acquired radiograph of the patient. This approach for patient registration eliminated the requirement for external tracking devices inside the operating room, allowing virtual fluoroscopy using commonly available systems in fluoroscopically guided procedures within standard surgical workflow. Geometric accuracy was evaluated in terms of projection distance error (PDE) in anatomical fiducials. A pilot study was conducted to evaluate the utility of virtual fluoroscopy to aid C-arm positioning in image guided surgery, assessing potential improvements in time, dose, and agreement between the virtual and desired view. Results: The overall geometric accuracy of DRRs in comparison to the actual radiographs at various C-arm positions was PDE (mean ± std) = 1.6 ± 1.1 mm. The conventional approach required on average 8.0 ± 4.5 radiographs spent "fluoro hunting" to obtain the desired view. Positioning accuracy improved from 2.6° ± 2.3° (in α) and 4.1° ± 5.1° (in β) in the conventional approach to 1.5° ± 1.3° and 1.8° ± 1.7°, respectively, with the virtual fluoroscopy approach. Conclusion: Virtual fluoroscopy could improve accuracy of C-arm positioning and save time and radiation dose in the operating room. Such a system could be valuable to training of fluoroscopy technicians as well as
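    The geometric-accuracy figure of merit, projection distance error, is simply the mean 2D distance between projected 3D fiducials and their measured detector locations. A minimal sketch under an assumed pinhole projection matrix (not the system's calibrated geometry):

```python
import numpy as np

def projection_distance_error(P, points_3d, points_2d):
    """Mean distance between projected 3D fiducials and their measured 2D
    locations. P is a 3x4 projection matrix; distances are in detector units."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    proj = (P @ homog.T).T
    proj = proj[:, :2] / proj[:, 2:3]          # perspective divide
    return float(np.mean(np.linalg.norm(proj - points_2d, axis=1)))

# Toy pinhole geometry: focal length 1000, principal point (256, 256).
P = np.array([[1000.0, 0.0, 256.0, 0.0],
              [0.0, 1000.0, 256.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
fiducials = np.array([[10.0, -5.0, 500.0], [0.0, 20.0, 600.0], [-15.0, 0.0, 550.0]])
measured = (P @ np.hstack([fiducials, np.ones((3, 1))]).T).T
measured = measured[:, :2] / measured[:, 2:3] + 1.2    # 1.2-px systematic offset
print(projection_distance_error(P, fiducials, measured))   # ~1.7 (= 1.2 * sqrt(2))
```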

  20. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated at lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite System (GNSS) positioning data. We therefore focus on image-based VR generation using a panoramic camera in indoor environments, and propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, an elevator hall, a room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  1. Periprosthetic Artifact Reduction Using Virtual Monochromatic Imaging Derived From Gemstone Dual-Energy Computed Tomography and Dedicated Software.

    PubMed

    Reynoso, Exequiel; Capunay, Carlos; Rasumoff, Alejandro; Vallejos, Javier; Carpio, Jimena; Lago, Karen; Carrascosa, Patricia

    2016-01-01

    The aim of this study was to explore the usefulness of combined virtual monochromatic imaging and metal artifact reduction software (MARS) for the evaluation of musculoskeletal periprosthetic tissue. Measurements were performed in periprosthetic and remote regions in 80 patients using a high-definition scanner. Polychromatic images with and without MARS and virtual monochromatic images were obtained. Periprosthetic polychromatic imaging (PI) showed significant differences compared with remote areas among the 3 tissues explored (P < 0.0001). No significant differences were observed between periprosthetic and remote tissues using monochromatic imaging with MARS (P = 0.053 bone, P = 0.32 soft tissue, and P = 0.13 fat). However, such differences were significant using PI with MARS among bone (P = 0.005) and fat (P = 0.02) tissues. All periprosthetic areas were noninterpretable using PI, compared with 11 (9%) using monochromatic imaging. The combined use of virtual monochromatic imaging and MARS reduced periprosthetic artifacts, achieving attenuation levels comparable to implant-free tissue.

  2. A Virtual Out-of-Body Experience Reduces Fear of Death

    PubMed Central

    2017-01-01

    Immersive virtual reality can be used to visually substitute a person's real body with a life-sized virtual body (VB) that is seen from a first-person perspective. Using real-time motion capture, the VB can be programmed to move synchronously with the real body (visuomotor synchrony), and virtual objects seen to strike the VB can also be felt through corresponding vibrotactile stimulation on the actual body (visuotactile synchrony). This setup typically gives rise to a strong perceptual illusion of ownership over the VB. When the viewpoint is lifted up and out of the VB, so that the VB is seen below, this may result in an out-of-body experience (OBE). In a two-factor between-groups experiment with 16 female participants per group, we tested how fear of death might be influenced by two different methods for producing an OBE. In an initial embodiment phase, where both groups experienced the same multisensory stimuli, there was a strong feeling of body ownership. Then the viewpoint was lifted up and behind the VB. In the experimental group, once the viewpoint was out of the VB there was no further connection with it (no visuomotor or visuotactile synchrony). In a control condition, although the viewpoint was in the same place as in the experimental group, visuomotor and visuotactile synchrony continued. While both groups reported high scores on a question about their OBE illusion, the experimental group had a greater feeling of disownership towards the VB below compared to the control group, in line with previous findings. Fear of death in the experimental group was found to be lower than in the control group. This is in line with previous reports that naturally occurring OBEs are often associated with enhanced belief in life after death. PMID:28068368

  3. Designing Virtual Museum Using Web3D Technology

    NASA Astrophysics Data System (ADS)

    Zhao, Jianghai

    Virtual reality technology (VRT) has the inherent potential to construct effective learning environments thanks to its 3I characteristics: Interaction, Immersion and Imagination. As VRT develops, it is being applied to education in increasingly profound ways, and the Virtual Museum is one such application. The Virtual Museum is based on Web3D technology, and extensibility is the most important design factor. Considering the advantages and disadvantages of each Web3D technology, the VRML, Cult3D and Viewpoint technologies were chosen. A web chatroom based on Flash and ASP technology has also been created in order to make the Virtual Museum an interactive learning environment.

  4. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences

    PubMed Central

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099

  5. BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.

    PubMed

    Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola

    2015-01-01

    Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.

  6. Combining color and shape information for illumination-viewpoint invariant object recognition.

    PubMed

    Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis

    2006-01-01

    In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with the variation in illumination, object pose, and camera viewpoint. These color invariant derivatives are then used to obtain similarity invariant shape descriptors. Shape invariance is equally important as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. Then, the color and shape invariants are combined in a multidimensional color-shape context which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a high-discriminative information cue robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and cluttering. From the experimental results, it is shown that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.
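
    The color-invariant derivative step can be illustrated with a small sketch. The snippet below is not the authors' implementation; it computes edge strength on normalized-rgb chromaticity channels, one common color-invariant choice that suppresses intensity changes caused by illumination, and all names and parameters are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import sobel

    def color_invariant_edges(rgb):
        """Sketch: edge strength on chromaticity channels (r, g), which are
        largely invariant to changes in illumination intensity."""
        rgb = rgb.astype(np.float64)
        intensity = rgb.sum(axis=2) + 1e-8
        r = rgb[..., 0] / intensity          # normalized red
        g = rgb[..., 1] / intensity          # normalized green
        mag = np.zeros(r.shape)
        for chan in (r, g):
            gx = sobel(chan, axis=1)
            gy = sobel(chan, axis=0)
            mag += np.hypot(gx, gy)          # accumulate gradient magnitude
        return mag
    ```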

  7. Relationships between Sensory Stimuli and Autonomic Regulation During Real and Virtual Exercises.

    PubMed

    Kiryu, Tohru; Iijima, Atsuhiko; Bando, Takehiko

    2005-01-01

    To expand applications of virtual reality, such as rehabilitation engineering, concerns about cybersickness must be addressed. We investigated changes in autonomic regulation during real cycling and while viewing a virtual mountain-biking video from the first-person viewpoint. The results showed that the dominant sensory stimuli affected autonomic regulation through different processes, and these differences offer hints for preventing cybersickness.

  8. [Image fusion, virtual reality, robotics and navigation. Effects on surgical practice].

    PubMed

    Maresceaux, J; Soler, L; Ceulemans, R; Garcia, A; Henri, M; Dutson, E

    2002-05-01

    In the new minimally invasive surgical era, virtual reality, robotics, and image merging have become topics on their own, offering the potential to revolutionize current surgical treatment and assessment. Improved patient care in the digital age seems to be the primary impetus for continued efforts in the field of telesurgery. The progress in endoscopic surgery with regard to telesurgery is manifested by digitization of the pre-, intra-, and postoperative interaction with the patients' surgical disease via computer system integration: so-called Computer Assisted Surgery (CAS). The preoperative assessment can be improved by 3D organ reconstruction, as in virtual colonoscopy or cholangiography, and by planning and practicing surgery using virtual or simulated organs. When integrating all of the data recorded during this preoperative stage, an enhanced reality can be made possible to improve intra-operative patient interactions. CAS allows for increased three-dimensional accuracy, improved precision and the reproducibility of procedures. The ability to store the actions of the surgeon as digitized information also allows for universal, rapid distribution: i.e., the surgeon's activity can be transmitted to the other side of the operating room or to a remote site via high-speed communications links, as was recently demonstrated by our own team during the Lindbergh operation. Furthermore, the surgeon will be able to share his expertise and skill through teleconsultation and telemanipulation, bringing the patient closer to the expert surgical team through electronic means and opening the way to advanced and continuous surgical learning. Finally, for postoperative interaction, virtual reality and simulation can provide us with 4 dimensional images, time being the fourth dimension. This should allow physicians to have a better idea of the disease process in evolution, and treatment modifications based on this view can be anticipated. We are presently determining the

  9. Development and application of virtual reality for man/systems integration

    NASA Technical Reports Server (NTRS)

    Brown, Marcus

    1991-01-01

    While the graphical presentation of computer models signified a quantum leap over presentations limited to text and numbers, it still has the problem of presenting an interface barrier between the human user and the computer model. The user must learn a command language in order to orient themselves in the model. For example, to move left from the current viewpoint of the model, they might be required to type 'LEFT' at a keyboard. This command is fairly intuitive, but if the viewpoint moves far enough that there are no visual cues overlapping with the first view, the user does not know if the viewpoint has moved inches, feet, or miles to the left, or perhaps remained in the same position, but rotated to the left. Until the user becomes quite familiar with the interface language of the computer model presentation, they will be prone to losing their bearings frequently. Even a highly skilled user will occasionally get lost in the model. A new approach to presenting this type of information is to directly interpret the user's body motions as the input language for determining what view to present. When the user's head turns 45 degrees to the left, the viewpoint should be rotated 45 degrees to the left. Since the head moves through several intermediate angles between the original view and the final one, several intermediate views should be presented, providing the user with a sense of continuity between the original view and the final one. Since the primary way a human physically interacts with their environment is with their hands, the system should monitor the movements of the user's hands and alter objects in the virtual model in a way consistent with the way an actual object would move when manipulated using the same hand movements. Since this approach to the man-computer interface closely models the same type of interface that humans have with the physical world, this type of interface is often called virtual reality, and the model is referred to as a virtual world. The task of this summer

  10. Full-color high-definition CGH reconstructing hybrid scenes of physical and virtual objects

    NASA Astrophysics Data System (ADS)

    Tsuchiyama, Yasuhiro; Matsushima, Kyoji; Nakahara, Sumio; Yamaguchi, Masahiro; Sakamoto, Yuji

    2017-03-01

    High-definition CGHs can reconstruct high-quality 3D images that are comparable to those of conventional optical holography. However, it was difficult to exhibit full-color images reconstructed by these high-definition CGHs, because three CGHs for RGB colors and a bulky image combiner were needed to produce full-color images. Recently, we reported a novel technique for full-color reconstruction using RGB color filters, similar to those used in liquid-crystal panels. This technique allows us to produce full-color high-definition CGHs composed of a single plate and place them on exhibition. In this paper, using this technique, we demonstrate full-color CGHs that reconstruct hybrid scenes composed of real physical objects and CG-modeled virtual objects. Here, the wave field of the physical object is obtained from dense multi-viewpoint images by employing the ray-sampling (RS) plane technique. In addition to the technique for full-color capturing and reconstruction of real object fields, the principle and simulation technique for full-color CGHs using RGB color filters are presented.

  11. SU-F-P-18: Development of the Technical Training System for Patient Set-Up Considering Rotational Correction in the Virtual Environment Using Three-Dimensional Computer Graphic Engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Imura, K; Fujibuchi, T; Hirata, H

    Purpose: Patient set-up skills in the radiotherapy treatment room have a great influence on the treatment effect of image-guided radiotherapy. In this study, we developed a training system for improving practical set-up skills, including rotational correction, in a virtual environment away from the pressure of the actual treatment room, using a three-dimensional computer graphics (3DCG) engine. Methods: The treatment room for external beam radiotherapy was reproduced in the virtual environment using the 3DCG engine (Unity). The viewpoints for performing patient set-up in the virtual treatment room were arranged on both sides of the virtual operable treatment couch to model the actual performance by two clinical staff members. The position errors relative to the mechanical isocenter, based on the alignment between the skin marker and the laser on the virtual patient model, were displayed as numerical values in SI units together with directional arrow marks. The rotational errors, calculated with a point on the virtual body axis as the center of each rotation axis in the virtual environment, were corrected by adjusting the rotational position of a body phantom wrapped with a gyroscope belt prepared on a table in real space. These rotational errors were evaluated by describing vector outer product operations and trigonometric functions in the script for the patient set-up technique. Results: The viewpoints in the virtual environment allowed individual users to visually recognize the position discrepancy relative to the mechanical isocenter until positional errors of several millimeters were eliminated. The rotational errors between the two points calculated with the center point could be efficiently corrected, and the script displayed the mathematically minimal correction technique. Conclusion: By utilizing the script to correct the rotational errors as well as accurate positional recognition for the patient set-up technique, the training system developed for improving patient set-up skills enabled individual
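
    The rotational-error evaluation mentioned above (vector outer products plus trigonometric functions) can be illustrated with a short, generic sketch. This is a hypothetical formulation, not the Unity script described in the abstract: it computes the angle between a reference body-axis vector and a measured one.

    ```python
    import numpy as np

    def rotation_error_deg(v_ref, v_meas):
        """Angle (degrees) between a reference body-axis vector and the measured
        one, using the cross product for the sine and the dot product for the
        cosine (numerically robust for small angles)."""
        v_ref = v_ref / np.linalg.norm(v_ref)
        v_meas = v_meas / np.linalg.norm(v_meas)
        sin = np.linalg.norm(np.cross(v_ref, v_meas))
        cos = np.dot(v_ref, v_meas)
        return np.degrees(np.arctan2(sin, cos))
    ```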

  12. Virtual reality system for treatment of the fear of public speaking using image-based rendering and moving pictures.

    PubMed

    Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I

    2002-06-01

    The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. There have been two techniques used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made by model-based technique are unrealistic and unnatural. The movie-based technique has a disadvantage in that each virtual audience cannot be controlled respectively, because all virtual audiences are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically with the photos taken from a digital camera. And the use of chroma keying allows a virtual audience to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
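
    The individual control of virtual audiences rests on compositing each audience video over the panoramic background with chroma keying. The snippet below is a minimal numpy sketch of that idea, not the authors' implementation; the key colour and tolerance are illustrative placeholders.

    ```python
    import numpy as np

    def chroma_key_composite(fg_rgb, bg_rgb, key=(0, 255, 0), tol=80.0):
        """Composite a foreground frame over a background wherever the
        foreground pixel is far enough from the key colour (green screen)."""
        fg = fg_rgb.astype(np.float64)
        dist = np.linalg.norm(fg - np.array(key, dtype=np.float64), axis=2)
        mask = (dist > tol)[..., None]   # True where the audience is visible
        return np.where(mask, fg_rgb, bg_rgb)
    ```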

  13. 3D shape recovery of smooth surfaces: dropping the fixed-viewpoint assumption.

    PubMed

    Moses, Yael; Shimshoni, Ilan

    2009-07-01

    We present a new method for recovering the 3D shape of a featureless smooth surface from three or more calibrated images illuminated by different light sources (three of them are independent). This method is unique in its ability to handle images taken from unconstrained perspective viewpoints and unconstrained illumination directions. The correspondence between such images is hard to compute and no other known method can handle this problem locally from a small number of images. Our method combines geometric and photometric information in order to recover dense correspondence between the images and accurately computes the 3D shape. Only a single pass starting at one point and local computation are used. This is in contrast to methods that use the occluding contours recovered from many images to initialize and constrain an optimization process. The output of our method can be used to initialize such processes. In the special case of fixed viewpoint, the proposed method becomes a new perspective photometric stereo algorithm. Nevertheless, the introduction of the multiview setup, self-occlusions, and regions close to the occluding boundaries are better handled, and the method is more robust to noise than photometric stereo. Experimental results are presented for simulated and real images.
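
    For orientation, the fixed-viewpoint special case the authors mention reduces to classical Lambertian photometric stereo, sketched below. This is not the paper's multiview algorithm: given k ≥ 3 images under known light directions, the scaled surface normal at each pixel is recovered by least squares.

    ```python
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """Classic Lambertian photometric stereo (fixed viewpoint).
        images     -- (k, H, W) grayscale images, k >= 3
        light_dirs -- (k, 3) unit light directions
        Returns per-pixel albedo (H, W) and unit normals (H, W, 3)."""
        k, H, W = images.shape
        I = images.reshape(k, -1)                           # (k, H*W)
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)  # solve L @ G = I
        albedo = np.linalg.norm(G, axis=0)
        normals = (G / (albedo + 1e-8)).T.reshape(H, W, 3)
        return albedo.reshape(H, W), normals
    ```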

  14. A virtual image chain for perceived image quality of medical display

    NASA Astrophysics Data System (ADS)

    Marchessoux, Cédric; Jung, Jürgen

    2006-03-01

    This paper describes a virtual image chain for medical display (project VICTOR: granted in the 5th framework program by the European commission). The chain starts from raw data of an image digitizer (CR, DR) or synthetic patterns and covers image enhancement (MUSICA by Agfa) and both display possibilities, hardcopy (film on viewing box) and softcopy (monitor). A key feature of the chain is a complete image-wise approach. A first prototype is implemented in an object-oriented software platform. The display chain consists of several modules. Raw images are either taken from scanners (CR-DR) or from a pattern generator, in which characteristics of DR-CR systems are introduced by their MTF and their dose-dependent Poisson noise. The image undergoes enhancement and is then displayed. For softcopy display, color and monochrome monitors are used in the simulation. The image is down-sampled. The non-linear response of a color monitor is taken into account by the GOG or S-curve model, whereas the Standard Gray-Scale-Display-Function (DICOM) is used for monochrome display. The MTF of the monitor is applied on the image in intensity levels. For hardcopy display, the combination of film, printer, lightbox and viewing condition is modeled. The image is up-sampled and the DICOM-GSDF or a Kanamori Look-Up-Table is applied. An anisotropic model for the MTF of the printer is applied on the image in intensity levels. The density-dependent color (XYZ) of the hardcopy film is introduced by Look-Up-Tables. Finally a Human Visual System Model is applied to the intensity images (XYZ in terms of cd/m2) in order to eliminate nonvisible differences. Comparison leads to visible differences, which are quantified by higher order image quality metrics. A specific image viewer is used for the visualization of the intensity image and the visual difference maps.
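
    As an aside, the GOG (gain-offset-gamma) response mentioned for the color monitor has a simple closed form. The sketch below uses illustrative parameter values, not values measured in the VICTOR chain.

    ```python
    def gog_luminance(v, gain=1.0, offset=0.0, gamma=2.2, l_max=400.0):
        """Gain-offset-gamma (GOG) response of a CRT-like display: maps a
        normalized drive level v in [0, 1] to luminance in cd/m^2.
        Parameter values here are placeholders, not measured ones."""
        x = max(gain * v + offset, 0.0)
        return l_max * x ** gamma
    ```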

  15. A Virtual Reality System for PTCD Simulation Using Direct Visuo-Haptic Rendering of Partially Segmented Image Data.

    PubMed

    Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz

    2016-01-01

    This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.

  16. fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media

    NASA Astrophysics Data System (ADS)

    Yoshida, Shunsuke

    2012-06-01

    A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat, tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device shapes a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each projected ray represents a ray that passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire apparatus is installed beneath the table, so the tabletop area remains clear. No ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.

  17. Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality.

    PubMed

    Galvan Debarba, Henrique; Bovet, Sidney; Salomon, Roy; Blanke, Olaf; Herbelin, Bruno; Boulic, Ronan

    2017-01-01

    Empirical research on the bodily self has shown that the body representation is malleable, and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR) we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when the possibility of alternating between first and third person perspective at will was presented. Our results support that illusory ownership of a virtual body can be achieved in both first and third person perspectives under congruent visuo-motor-tactile condition. However, subjective body ownership and reaction to threat were generally stronger for first person perspective and alternating condition than for third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences.

  18. Characterizing first and third person viewpoints and their alternation for embodied interaction in virtual reality

    PubMed Central

    Bovet, Sidney; Salomon, Roy; Blanke, Olaf; Herbelin, Bruno; Boulic, Ronan

    2017-01-01

    Empirical research on the bodily self has shown that the body representation is malleable, and prone to manipulation when conflicting sensory stimuli are presented. Using Virtual Reality (VR) we assessed the effects of manipulating multisensory feedback (full body control and visuo-tactile congruence) and visual perspective (first and third person perspective) on the sense of embodying a virtual body that was exposed to a virtual threat. We also investigated how subjects behave when the possibility of alternating between first and third person perspective at will was presented. Our results support that illusory ownership of a virtual body can be achieved in both first and third person perspectives under congruent visuo-motor-tactile condition. However, subjective body ownership and reaction to threat were generally stronger for first person perspective and alternating condition than for third person perspective. This suggests that the possibility of alternating perspective is compatible with a strong sense of embodiment, which is meaningful for the design of new embodied VR experiences. PMID:29281736

  19. Virtual medicine: Utilization of the advanced cardiac imaging patient avatar for procedural planning and facilitation.

    PubMed

    Shinbane, Jerold S; Saxon, Leslie A

    Advances in imaging technology have led to a paradigm shift from planning of cardiovascular procedures and surgeries requiring the actual patient in a "brick and mortar" hospital to utilization of the digitalized patient in the virtual hospital. Cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) digitalized 3-D patient representation of individual patient anatomy and physiology serves as an avatar allowing for virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology previously only accessible during the actual procedure could potentially limit the intrinsic risks related to time in the operating room, cardiac procedural laboratory and overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of technologies. The virtual patient avatar computer can also be used for procedural planning, computational modeling of anatomy, simulation of predicted therapeutic result, printing of 3-D models, and augmentation of real time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality

  20. Motion facilitates face perception across changes in viewpoint and expression in older adults.

    PubMed

    Maguinness, Corrina; Newell, Fiona N

    2014-12-01

    Faces are inherently dynamic stimuli. However, face perception in younger adults appears to be mediated by the ability to extract structural cues from static images and a benefit of motion is inconsistent. In contrast, static face processing is poorer and more image-dependent in older adults. We therefore compared the role of facial motion in younger and older adults to assess whether motion can enhance perception when static cues are insufficient. In our studies, older and younger adults learned faces presented in motion or in a sequence of static images, containing rigid (viewpoint) or nonrigid (expression) changes. Immediately following learning, participants matched a static test image to the learned face which varied by viewpoint (Experiment 1) or expression (Experiment 2) and was either learned or novel. First, we found an age effect with better face matching performance in younger than in older adults. However, we observed face matching performance improved in the older adult group, across changes in viewpoint and expression, when faces were learned in motion relative to static presentation. There was no benefit for facial (nonrigid) motion when the task involved matching inverted faces (Experiment 3), suggesting that the ability to use dynamic face information for the purpose of recognition reflects motion encoding which is specific to upright faces. Our results suggest that ageing may offer a unique insight into how dynamic cues support face processing, which may not be readily observed in younger adults' performance. (PsycINFO Database Record (c) 2014 APA, all rights reserved).

  1. Virtual performer: single camera 3D measuring system for interaction in virtual space

    NASA Astrophysics Data System (ADS)

    Sakamoto, Kunio; Taneji, Shoto

    2006-10-01

    The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument such as a theremin in the virtual space, or a performer puts on a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, motion detection and recognition, and video image synthesis on a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method for measuring the positions of the performer and of his/her head and both eyes using a single camera.

  2. Virtual reality for spherical images

    NASA Astrophysics Data System (ADS)

    Pilarczyk, Rafal; Skarbek, Władysław

    2017-08-01

    This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows a virtual reality 360° video player to be created using standard OpenGL ES rendering methods. It provides network methods for connecting to a web server acting as an application resource provider; resources are delivered as JSON responses to HTTP requests, and the web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods for an event-driven process that renders additional content based on the video timestamp and the virtual reality head point of view.

  3. Analysis and design of a refractive virtual image system

    NASA Technical Reports Server (NTRS)

    Kahlbaum, W. M.

    1977-01-01

    The optical performance of a virtual image display system is evaluated. Observation of a two-element (unachromatized doublet) refractive system led to the conclusion that the major source of image degradation was lateral chromatic aberration. This conclusion was verified by computer analysis of the system. The lateral chromatic aberration is given in terms of the resolution of the phosphor dots on a standard shadow mask color cathode ray tube. Single wavelength considerations include: astigmatism, apparent image distance from the observer, binocular disparities and differences of angular magnification of the images presented to each of the observer's eyes. Where practical, these results are related to the performance of the human eye. All these techniques are applied to the previously mentioned doublet and a triplet refractive system. The triplet provides a 50-percent reduction in lateral chromatic aberration which was the design goal. Distortion was also reduced to a minimum over the field of view. The methods used in the design of the triplet are presented along with a method of relating classical aberration curves to image distance and binocular disparity.

  4. Three-dimensional thermographic imaging using a virtual wave concept

    NASA Astrophysics Data System (ADS)

    Burgholzer, Peter; Thor, Michael; Gruber, Jürgen; Mayr, Günther

    2017-03-01

    In this work, it is shown that image reconstruction methods from ultrasonic imaging can be employed for thermographic signals. Before using these imaging methods, a virtual signal is calculated by applying a local transformation to the temperature evolution measured on a sample surface. The introduced transformation describes all the irreversibility of the heat diffusion process and can be used for every sample shape. To date, one-dimensional methods have been primarily used in thermographic imaging. The proposed two-stage algorithm enables reconstruction in two and three dimensions. The feasibility of this approach is demonstrated through simulations and experiments. For the latter, small steel beads embedded in an epoxy resin are imaged. The resolution limit is found to be proportional to the depth of the structures and to be inversely proportional to the logarithm of the signal-to-noise ratio. Limited-view artifacts can arise if the measurement is performed on a single planar detection surface. These artifacts can be reduced by measuring the thermographic signals from multiple planes, which is demonstrated by numerical simulations and by experiments performed on an epoxy cube.

  5. Effects of magnification and visual accommodation on aimpoint estimation in simulated landings with real and virtual image displays

    NASA Technical Reports Server (NTRS)

    Randle, R. J.; Roscoe, S. N.; Petitt, J. C.

    1980-01-01

    Twenty professional pilots observed a computer-generated airport scene during simulated autopilot-coupled night landing approaches and at two points (20 sec and 10 sec before touchdown) judged whether the airplane would undershoot or overshoot the aimpoint. Visual accommodation was continuously measured using an automatic infrared optometer. Experimental variables included approach slope angle, display magnification, visual focus demand (using ophthalmic lenses), and presentation of the display as either a real (direct view) or a virtual (collimated) image. Aimpoint judgments shifted predictably with actual approach slope and display magnification. Both pilot judgments and measured accommodation interacted with focus demand with real-image displays but not with virtual-image displays. With either type of display, measured accommodation lagged far behind focus demand and was reliably less responsive to the virtual images. Pilot judgments shifted dramatically from an overwhelming perceived-overshoot bias 20 sec before touchdown to a reliable undershoot bias 10 sec later.

  6. Virtual phantom magnetic resonance imaging (ViP MRI) on a clinical MRI platform.

    PubMed

    Saint-Jalmes, Hervé; Bordelois, Alejandro; Gambarota, Giulio

    2018-01-01

    The purpose of this study was to implement Virtual Phantom Magnetic Resonance Imaging (ViP MRI), a technique that allows for generating reference signals in MR images using radiofrequency (RF) signals, on a clinical MR system and to test newly designed virtual phantoms. MRI experiments were conducted on a 1.5 T MRI scanner. Electromagnetic modelling of the ViP system was done using the principle of reciprocity. The ViP RF signals were generated using a compact waveform generator (dimensions of 26 cm × 18 cm × 16 cm), connected to a homebuilt 25 mm-diameter RF coil. The ViP RF signals were transmitted to the MRI scanner bore, simultaneously with the acquisition of the signal from the object of interest. Different types of MRI data acquisition (2D and 3D gradient-echo) as well as different phantoms, including the Shepp-Logan phantom, were tested. Furthermore, a uniquely designed virtual phantom - in the shape of a grid - was generated; this newly proposed phantom allows for the investigations of the vendor distortion correction field. High quality MR images of virtual phantoms were obtained. An excellent agreement was found between the experimental data and the inverse cube law, which was the expected functional dependence obtained from the electromagnetic modelling of the ViP system. Short-term time stability measurements yielded a coefficient of variation in the signal intensity over time equal to 0.23% and 0.13% for virtual and physical phantom, respectively. MR images of the virtual grid-shaped phantom were reconstructed with the vendor distortion correction; this allowed for a direct visualization of the vendor distortion correction field. Furthermore, as expected from the electromagnetic modelling of the ViP system, a very compact coil (diameter ~ cm) and very small currents (intensity ~ mA) were sufficient to generate a signal comparable to that of physical phantoms in MRI experiments. The ViP MRI technique was successfully implemented on a clinical MR

  7. Developing Students' Ideas about Lens Imaging: Teaching Experiments with an Image-Based Approach

    ERIC Educational Resources Information Center

    Grusche, Sascha

    2017-01-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists' analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students' ideas, teaching experiments are performed and evaluated using…

  8. Quality improving techniques for free-viewpoint DIBR

    NASA Astrophysics Data System (ADS)

    Do, Luat; Zinger, Sveta; de With, Peter H. N.

    2010-02-01

    Interactive free-viewpoint selection applied to a 3D multi-view signal is a possible attractive feature of the rapidly developing 3D TV media. This paper explores a new rendering algorithm that computes a free-viewpoint image based on depth-image warping between two reference views from existing cameras. We have developed three quality-enhancing techniques that specifically aim at removing the major artifacts. First, resampling artifacts are filled in by a combination of median filtering and inverse warping. Second, contour artifacts are processed while omitting warping of edges at high discontinuities. Third, we employ a depth signal for more accurate disocclusion inpainting. We obtain an average PSNR gain of 3 dB and 4.5 dB for the 'Breakdancers' and 'Ballet' sequences, respectively, compared to recently published results. While experimenting with synthetic data, we observe that the rendering quality is highly dependent on the complexity of the scene. Moreover, experiments are performed using compressed video from surrounding cameras. The overall system quality is dominated by the rendering quality and not by coding.
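
    The first technique (filling resampling holes left by forward warping) can be sketched in a few lines. This is not the paper's implementation; the warped image and its validity mask are assumed to come from a prior depth-image warping step, and the window size is illustrative.

    ```python
    import numpy as np

    def fill_resampling_holes(warped, valid, ksize=3):
        """Fill small holes left by forward warping: for each invalid pixel,
        take the median of the valid pixels inside a ksize x ksize window."""
        H, W = warped.shape[:2]
        out = warped.copy()
        r = ksize // 2
        for y, x in np.argwhere(~valid):
            y0, y1 = max(0, y - r), min(H, y + r + 1)
            x0, x1 = max(0, x - r), min(W, x + r + 1)
            patch = warped[y0:y1, x0:x1]
            m = valid[y0:y1, x0:x1]
            if m.any():
                out[y, x] = np.median(patch[m], axis=0)
        return out
    ```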

  9. Enabling Histopathological Annotations on Immunofluorescent Images through Virtualization of Hematoxylin and Eosin

    PubMed Central

    Lahiani, Amal; Klaiman, Eldad; Grimm, Oliver

    2018-01-01

    Context: Medical diagnosis and clinical decisions rely heavily on the histopathological evaluation of tissue samples, especially in oncology. Historically, classical histopathology has been the gold standard for tissue evaluation and assessment by pathologists. The most widely and commonly used dyes in histopathology are hematoxylin and eosin (H&E) as most malignancies diagnosis is largely based on this protocol. H&E staining has been used for more than a century to identify tissue characteristics and structures morphologies that are needed for tumor diagnosis. In many cases, as tissue is scarce in clinical studies, fluorescence imaging is necessary to allow staining of the same specimen with multiple biomarkers simultaneously. Since fluorescence imaging is a relatively new technology in the pathology landscape, histopathologists are not used to or trained in annotating or interpreting these images. Aims, Settings and Design: To allow pathologists to annotate these images without the need for additional training, we designed an algorithm for the conversion of fluorescence images to brightfield H&E images. Subjects and Methods: In this algorithm, we use fluorescent nuclei staining to reproduce the hematoxylin information and natural tissue autofluorescence to reproduce the eosin information avoiding the necessity to specifically stain the proteins or intracellular structures with an additional fluorescence stain. Statistical Analysis Used: Our method is based on optimizing a transform function from fluorescence to H&E images using least mean square optimization. Results: It results in high quality virtual H&E digital images that can easily and efficiently be analyzed by pathologists. We validated our results with pathologists by making them annotate tumor in real and virtual H&E whole slide images and we obtained promising results. Conclusions: Hence, we provide a solution that enables pathologists to assess tissue and annotate specific structures based on

  10. Enabling Histopathological Annotations on Immunofluorescent Images through Virtualization of Hematoxylin and Eosin.

    PubMed

    Lahiani, Amal; Klaiman, Eldad; Grimm, Oliver

    2018-01-01

    Medical diagnosis and clinical decisions rely heavily on the histopathological evaluation of tissue samples, especially in oncology. Historically, classical histopathology has been the gold standard for tissue evaluation and assessment by pathologists. The most widely and commonly used dyes in histopathology are hematoxylin and eosin (H&E) as most malignancies diagnosis is largely based on this protocol. H&E staining has been used for more than a century to identify tissue characteristics and structures morphologies that are needed for tumor diagnosis. In many cases, as tissue is scarce in clinical studies, fluorescence imaging is necessary to allow staining of the same specimen with multiple biomarkers simultaneously. Since fluorescence imaging is a relatively new technology in the pathology landscape, histopathologists are not used to or trained in annotating or interpreting these images. To allow pathologists to annotate these images without the need for additional training, we designed an algorithm for the conversion of fluorescence images to brightfield H&E images. In this algorithm, we use fluorescent nuclei staining to reproduce the hematoxylin information and natural tissue autofluorescence to reproduce the eosin information avoiding the necessity to specifically stain the proteins or intracellular structures with an additional fluorescence stain. Our method is based on optimizing a transform function from fluorescence to H&E images using least mean square optimization. It results in high quality virtual H&E digital images that can easily and efficiently be analyzed by pathologists. We validated our results with pathologists by making them annotate tumor in real and virtual H&E whole slide images and we obtained promising results. Hence, we provide a solution that enables pathologists to assess tissue and annotate specific structures based on multiplexed fluorescence images.
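
    The least-mean-square transform described above can be sketched compactly. The affine least-squares fit below is one plausible instantiation, not the authors' actual transform function; the channel layout (nuclear stain plus autofluorescence) and value range are assumptions.

    ```python
    import numpy as np

    def fit_fluorescence_to_he(fluo_pixels, he_pixels):
        """Least-squares fit of an affine map from fluorescence channels
        (e.g. nuclear stain + autofluorescence) to H&E RGB values.
        fluo_pixels: (N, C) training pixels; he_pixels: (N, 3) targets."""
        X = np.hstack([fluo_pixels, np.ones((len(fluo_pixels), 1))])  # affine term
        W, *_ = np.linalg.lstsq(X, he_pixels, rcond=None)             # (C+1, 3)
        return W

    def apply_transform(fluo_image, W):
        """Apply the fitted map to an (H, W, C) fluorescence image."""
        h, w, c = fluo_image.shape
        X = np.hstack([fluo_image.reshape(-1, c), np.ones((h * w, 1))])
        return np.clip(X @ W, 0, 255).reshape(h, w, 3)
    ```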

  11. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    NASA Astrophysics Data System (ADS)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-09-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and

  12. Techniques for virtual lung nodule insertion: volumetric and morphometric comparison of projection-based and image-based methods for quantitative CT

    PubMed Central

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan

    2017-01-01

    Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules
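
    The deformation metric can be illustrated with a short sketch. The regional Hausdorff distance used by the authors is not reproduced here; the snippet computes the plain symmetric Hausdorff distance per nodule pair with SciPy and the summary statistics (mean, standard deviation, coefficient of variation), with surface point sets assumed as input.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    def hausdorff_stats(surfaces_a, surfaces_b):
        """Symmetric Hausdorff distance per nodule pair, plus mean, std and
        coefficient of variation over the set. Each surface is an (N, 3) array
        of points sampled on the nodule boundary."""
        d = np.array([max(directed_hausdorff(a, b)[0],
                          directed_hausdorff(b, a)[0])
                      for a, b in zip(surfaces_a, surfaces_b)])
        return d.mean(), d.std(), d.std() / d.mean()
    ```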

  13. Psychophysical estimation of 3D virtual depth of united, synthesized and mixed type stereograms by means of simultaneous observation

    NASA Astrophysics Data System (ADS)

    Iizuka, Masayuki; Ookuma, Yoshio; Nakashima, Yoshio; Takamatsu, Mamoru

    2007-02-01

    Recently, many types of computer-generated stereograms (CGSs), i.e., various works of art produced using a computer, have been published for hobby and entertainment. Brain activation, improved eyesight, reduced mental stress, healing effects, and so on are said to be expected when a CGS is properly appreciated as a stereoscopic view. There is a lot of information on internet web sites concerning all aspects of stereograms: their history, science, social organization, the various types of stereograms, and free software for generating CGSs. Generally, CGSs are classified into nine types: (1) stereo pair type, (2) anaglyph type, (3) repeated pattern type, (4) embedded type, (5) random dot stereogram (RDS), (6) single image stereogram (SIS), (7) united stereogram, (8) synthesized stereogram, and (9) mixed or multiple type stereogram. Each stereogram type has advantages and disadvantages when viewed directly with the two eyes, which requires training and a little patience. In this study, the characteristics of united, synthesized and mixed type stereograms, the role and composition of the depth map image (DMI), also called the hidden image or picture, and the effect of an irregular shift of the texture pattern image, called wall paper, are discussed from the viewpoint of psychophysical estimation of 3D virtual depth and the visual quality of the virtual image by means of simultaneous observation with the parallel viewing method.
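
    As a concrete illustration of type (5), a classic Julesz random-dot stereogram pair can be generated in a few lines. The sketch below is a generic construction with illustrative patch size and disparity, not material from the study.

    ```python
    import numpy as np

    def random_dot_stereo_pair(size=256, patch=80, disparity=6, seed=0):
        """Classic Julesz random-dot stereo pair: the right image is a copy of
        the left with a central square patch shifted horizontally, so the patch
        appears to float above the background when the pair is fused."""
        rng = np.random.default_rng(seed)
        left = (rng.integers(0, 2, (size, size)) * 255).astype(np.uint8)
        right = left.copy()
        c = (size - patch) // 2
        right[c:c + patch, c - disparity:c - disparity + patch] = \
            left[c:c + patch, c:c + patch]
        # refill the strip uncovered by the shift with fresh random dots
        right[c:c + patch, c + patch - disparity:c + patch] = \
            (rng.integers(0, 2, (patch, disparity)) * 255).astype(np.uint8)
        return left, right
    ```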

  14. Virtual reality for automotive design evaluation

    NASA Technical Reports Server (NTRS)

    Dodd, George G.

    1995-01-01

    A general description of Virtual Reality technology and possible applications was given from publicly available material. A video tape was shown demonstrating the use of multiple large-screen stereoscopic displays, configured in a 10' x 10' x 10' room, to allow a person to evaluate and interact with a vehicle which exists only as mathematical data, and is made only of light. The correct viewpoint of the vehicle is maintained by tracking special glasses worn by the subject. Interior illumination was changed by moving a virtual light around by hand; interior colors are changed by pointing at a color on a color palette, then pointing at the desired surface to change. We concluded by discussing research needed to move this technology forward.

  15. Image processing, geometric modeling and data management for development of a virtual bone surgery system.

    PubMed

    Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge

    2008-01-01

    This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
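
    The region-growing step can be sketched generically. The snippet below is a 6-connected region grower on a CT volume with an assumed intensity window; it is not the authors' implementation, and the HU thresholds would need to be chosen for cortical or trabecular bone.

    ```python
    import numpy as np
    from collections import deque

    def region_grow(volume, seed, low, high):
        """Simple 6-connected region growing: starting from a seed voxel,
        collect connected voxels whose value lies in [low, high]."""
        mask = np.zeros(volume.shape, dtype=bool)
        queue = deque([seed])
        while queue:
            z, y, x = queue.popleft()
            if mask[z, y, x] or not (low <= volume[z, y, x] <= high):
                continue
            mask[z, y, x] = True
            for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nz, ny, nx = z + dz, y + dy, x + dx
                if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                        and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                    queue.append((nz, ny, nx))
        return mask
    ```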

  16. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio has been proposed, in which an orientation tracker and a telemeter are used to augment a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named Camera & Post-camera Coupling Pair has been put forward; it differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. Formulas have been derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth information of the target point along the real BETACAM pickup camera's projective ray. Real-time consistency has been achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results have shown that the technological scheme for constructing a virtual studio submitted in this paper is feasible, and more practical and effective than the existing technology for building a virtual studio based on color-key and image synthesis with the background using non-linear video editing techniques.

  17. Low loss jammed-array wideband sawtooth filter based on a finite reflection virtually imaged array

    NASA Astrophysics Data System (ADS)

    Tan, Zhongwei; Cao, Dandan; Ding, Zhichao

    2018-03-01

    An edge filter is a potential technology in fiber Bragg grating interrogation that has the advantages of fast response speed and suitability for dynamic measurement. To build a low-loss, wideband jammed-array wideband sawtooth (JAWS) filter, a finite reflection virtually imaged array (FRVIA) is proposed and demonstrated. The FRVIA differs from the virtually imaged phased array in that it has a low-reflectivity front end. This change leads to many differences in the device's performance in terms of output optical intensity distribution, spectral resolution, output aperture, and tolerance of manufacturing errors. A low-loss, wideband JAWS filter based on an FRVIA can provide an edge filter for each channel.

  18. Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images

    PubMed Central

    Izquierdo, Alberto; Suárez, Luis; Suárez, David

    2017-01-01

    Using arrays with digital MEMS (Micro-Electro-Mechanical System) microphones and FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. This virtual array is implemented by changing the position of a physical array of 64 (8 × 8) microphones in a grid with 10 × 10 positions, using a 2D positioning system. This virtual array obtains an array spatial aperture of 1 × 1 m². Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed, since the beamforming algorithms must assume spherical waves, due to the large dimensions of the array in comparison with the distance between the target (a mannequin) and the array. Finally, the acoustic images of the mannequin, obtained for different frequency and range values, have been obtained, showing high angular resolutions and the possibility to identify different parts of the body of the mannequin. PMID:29295485
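
    The near-field focusing idea can be illustrated with a generic delay-and-sum sketch, which aligns the channels on the spherical-wave propagation delays from a focus point. This is not the SODAR processing chain used in the study; sampling rate, speed of sound and array geometry are placeholders.

    ```python
    import numpy as np

    def delay_and_sum(signals, mic_positions, focus_point, fs, c=343.0):
        """Near-field delay-and-sum beamforming.
        signals: (M, n_samples) microphone signals; mic_positions: (M, 3);
        focus_point: (3,) point to focus on; fs: sampling rate in Hz."""
        dists = np.linalg.norm(mic_positions - focus_point, axis=1)
        delays = (dists - dists.min()) / c            # relative delays in s
        shifts = np.round(delays * fs).astype(int)    # relative delays in samples
        n = signals.shape[1] - shifts.max()
        out = np.zeros(n)
        for sig, s in zip(signals, shifts):
            out += sig[s:s + n]                       # advance later channels
        return out / len(signals)
    ```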

  19. Free viewpoint TV and its international standardization

    NASA Astrophysics Data System (ADS)

    Tanimoto, Masayuki

    2009-05-01

    We have developed a new type of television named FTV (Free-viewpoint TV). FTV is an innovative visual medium that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. We also realized FTV on a single PC and FTV with free listening-point audio. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have also developed new types of ray capture and display technologies, such as a 360-degree mirror-scan ray capturing system and a 360-degree ray-reproducing display. MPEG regarded FTV as the most challenging 3D media and started the international standardization activities of FTV. The first phase of FTV is MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in March 2009. 3DV is a standard that targets serving a variety of 3D displays. It will be completed within the next two years.

  20. Virtual angioscopic visualization and analysis of coronary aneurysms using intravascular ultrasound images

    NASA Astrophysics Data System (ADS)

    Ayeni, Tina A.; Holmes, David R., III; Robb, Richard A.

    2001-05-01

    Kawasaki Disease is an inflammatory illness of young children that can seriously affect the cardiovascular system. The disease may cause coronary artery aneurysms, a thinning and dilation of the arterial wall when the wall is weakened by disease. Such aneurysms significantly increase the risk of rupture of the arterial wall, an event from which few patients survive. Due to the largely asymptomatic nature of coronary aneurysms, diagnosis must be timely and accurate in order for treatment to be effective. Currently, aneurysms are detected primarily using X-ray angiography, MRI, and CT images. Increased insight into the disease and its effects on the arterial wall can be gained by multi-dimensional computerized visualization and quantitative analysis of diagnostic images made possible by the techniques of intravascular imaging and virtual endoscopy. Intravascular ultrasound images (IVUS) of a coronary artery exhibiting aneurysms were acquired from a patient with Kawasaki Disease. The disease is characterized by low luminescence in the IVUS images. Image segmentation of the abnormal, prominent anechoic regions branching from the lumen and originating within other layers of the arterial wall was performed and each region defined as a separate object. An object segmentation map was generated and used in perspective rendering of the original image volume set at successive locations along the length of the arterial segment, producing a 'fly-through' of the interior of the artery. The diseased region (aneurysm) of the wall was well defined by the differences in luminal size and by differences in appearance of the arterial wall shape observed during virtual angioscopic fly-throughs. Erosions of the endovascular surface caused pronounced horizontal and vertical ballooning of the lumen. Minute cracks within the unaffected luminal areas revealed possible early development of an aneurysm on the contralateral wall, originating in the medial section of the artery and spreading

  1. Three-dimensional virtual navigation versus conventional image guidance: A randomized controlled trial.

    PubMed

    Dixon, Benjamin J; Chan, Harley; Daly, Michael J; Qiu, Jimmy; Vescan, Allan; Witterick, Ian J; Irish, Jonathan C

    2016-07-01

    Providing image guidance in a 3-dimensional (3D) format, visually more in keeping with the operative field, could potentially reduce workload and lead to faster and more accurate navigation. We wished to assess a 3D virtual-view surgical navigation prototype in comparison to a traditional 2D system. Thirty-seven otolaryngology surgeons and trainees completed a randomized crossover navigation exercise on a cadaver model. Each subject identified three sinonasal landmarks with 3D virtual (3DV) image guidance and three landmarks with conventional cross-sectional computed tomography (CT) image guidance. Subjects were randomized with regard to which side and display type was tested initially. Accuracy, task completion time, and task workload were recorded. Display type did not influence accuracy (P > 0.2) or efficiency (P > 0.3) for any of the six landmarks investigated. Pooled landmark data revealed a trend of improved accuracy in the 3DV group by 0.44 millimeters (95% confidence interval [0.00-0.88]). High-volume surgeons were significantly faster (P < 0.01) and had reduced workload scores in all domains (P < 0.01), but they were no more accurate (P > 0.28). Real-time 3D image guidance did not influence accuracy, efficiency, or task workload when compared to conventional triplanar image guidance. The subtle pooled accuracy advantage for the 3DV view is unlikely to be of clinical significance. Experience level was strongly correlated to task completion time and workload but did not influence accuracy. N/A. Laryngoscope, 126:1510-1515, 2016. © 2016 The American Laryngological, Rhinological and Otological Society, Inc.

  2. A functional magnetic resonance imaging assessment of small animals' phobia using virtual reality as a stimulus.

    PubMed

    Clemente, Miriam; Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar

    2014-06-27

    To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small animals' phobia. The objective of our study was to evaluate the brain activations associated with small animals' phobia through the use of virtual environments. This context will have the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. We have analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. We have found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), which is an area that has been previously related to the feeling of self-awareness. In our opinion, these results demonstrate that virtual stimuli can enhance brain activations consistent with previous studies using still images, but in an environment closer to the real situation the subject would face in their daily lives.

  3. Virtual Reality Model of the Three-Dimensional Anatomy of the Cavernous Sinus Based on a Cadaveric Image and Dissection.

    PubMed

    Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke

    2018-01-01

    Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver head. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time and specimen saving.

  4. Maximizing Iodine Contrast-to-Noise Ratios in Abdominal CT Imaging through Use of Energy Domain Noise Reduction and Virtual Monoenergetic Dual-Energy CT.

    PubMed

    Leng, Shuai; Yu, Lifeng; Fletcher, Joel G; McCollough, Cynthia H

    2015-08-01

    To determine the iodine contrast-to-noise ratio (CNR) for abdominal computed tomography (CT) when using energy domain noise reduction and virtual monoenergetic dual-energy (DE) CT images and to compare the CNR to that attained with single-energy CT at 80, 100, 120, and 140 kV. This HIPAA-compliant study was approved by the institutional review board with waiver of informed consent. A syringe filled with diluted iodine contrast material was placed into 30-, 35-, and 45-cm-wide water phantoms and scanned with a dual-source CT scanner in both DE and single-energy modes with matched scanner output. Virtual monoenergetic images were generated, with energies ranging from 40 to 110 keV in 10-keV steps. A previously developed energy domain noise reduction algorithm was applied to reduce image noise by exploiting information redundancies in the energy domain. Image noise and iodine CNR were calculated. To show the potential clinical benefit of this technique, it was retrospectively applied to a clinical DE CT study of the liver in a 59-year-old male patient by using conventional and iterative reconstruction techniques. Image noise and CNR were compared for virtual monoenergetic images with and without energy domain noise reduction at each virtual monoenergetic energy (in kiloelectron volts) and phantom size by using a paired t test. CNR of virtual monoenergetic images was also compared with that of single-energy images acquired with 80, 100, 120, and 140 kV. Noise reduction of up to 59% (28.7 of 65.7) was achieved for DE virtual monoenergetic images by using an energy domain noise reduction technique. For the commercial virtual monoenergetic images, the maximum iodine CNR was achieved at 70 keV and was 18.6, 16.6, and 10.8 for the 30-, 35-, and 45-cm phantoms. After energy domain noise reduction, maximum iodine CNR was achieved at 40 keV and increased to 30.6, 25.4, and 16.5. These CNRs represented improvement of up to 64% (12.0 of 18.6) with the energy domain noise
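
    For reference, the contrast-to-noise ratio compared throughout this record can be computed from two regions of interest, one in the iodine insert and one in the water background. A minimal sketch (ROI handling and names are assumptions, not the study's code):

        import numpy as np

        def iodine_cnr(iodine_roi, background_roi):
            """CNR = (mean iodine CT number - mean background CT number)
            divided by the background noise (standard deviation)."""
            contrast = np.mean(iodine_roi) - np.mean(background_roi)
            noise = np.std(background_roi)
            return contrast / noise

    At fixed contrast, CNR scales inversely with image noise, which is why the noise reduction reported above translates directly into a CNR gain.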

  5. Virtual finger boosts three-dimensional imaging and microsurgery as well as terabyte volume image visualization and analysis.

    PubMed

    Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui

    2014-07-11

    Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.

  6. Recreation of three-dimensional objects in a real-time simulated environment by means of a panoramic single lens stereoscopic image-capturing device

    NASA Astrophysics Data System (ADS)

    Wong, Erwin

    2000-03-01

    Traditional linear-based imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. Generation of virtual interpolated and constructed data is also discussed.

  7. Designing a Virtual Item Bank Based on the Techniques of Image Processing

    ERIC Educational Resources Information Center

    Liao, Wen-Wei; Ho, Rong-Guey

    2011-01-01

    One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in their inaccuracies. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combines Automatic Item Generation theory and image processing theory with the concepts of…

  8. Teens at Risk: Opposing Viewpoints. Opposing Viewpoints Series.

    ERIC Educational Resources Information Center

    Egendorf, Laura K., Ed.; Hurley, Jennifer A., Ed.

    Contributions in this collection present opposing viewpoints about factors that put teens at risk; illustrate how society can deal with teenage crime and violence; show how to prevent teen pregnancy; and present the roles of the media and government in teen substance abuse. The following essays are presented: (1) "A Variety of Factors Put Teens at…

  9. Overview of FTV (free-viewpoint television)

    NASA Astrophysics Data System (ADS)

    Tanimoto, Masayuki

    2010-07-01

    We have developed a new type of television named FTV (Free-viewpoint TV). FTV is the ultimate 3DTV that enables us to view a 3D scene by freely changing our viewpoints. We proposed the concept of FTV and constructed the world's first real-time system including the complete chain of operation from image capture to display. FTV is based on the ray-space method that represents one ray in real space with one point in the ray-space. We have developed ray capture, processing and display technologies for FTV. FTV can be carried out today in real time on a single PC or on a mobile player. We also realized FTV with free listening-point audio. The international standardization of FTV has been conducted in MPEG. The first phase of FTV was MVC (Multi-view Video Coding) and the second phase is 3DV (3D Video). MVC was completed in May 2009. The Blu-ray 3D specification has adopted MVC for compression. 3DV is a standard that targets serving a variety of 3D displays. The view generation function of FTV is used to decouple capture and display in 3DV. FDU (FTV Data Unit) is proposed as a data format for 3DV. FDU can compensate for errors in the synthesized views caused by depth errors.

  10. Assessing the mental frame syncing in the elderly: a virtual reality protocol.

    PubMed

    Serino, Silvia; Cipresso, Pietro; Gaggioli, Andrea; Riva, Giuseppe

    2014-01-01

    Decline in spatial memory in the elderly is often underestimated, and it is crucial to fully investigate the cognitive underpinnings of early spatial impairment. A virtual reality-based procedure was developed to assess deficits in "mental frame syncing", namely the cognitive ability that allows effective orientation by synchronizing the allocentric viewpoint-independent representation with the allocentric viewpoint-dependent representation. A pilot study was carried out to evaluate mental frame syncing abilities in a sample of 16 elderly participants. Preliminary results indicated that general cognitive functioning was associated with the ability to synchronize these two allocentric reference frames.

  11. WE-AB-BRA-12: Virtual Endoscope Tracking for Endoscopy-CT Image Registration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, W; Rao, A; Wendt, R

    Purpose: The use of endoscopy in radiotherapy will remain limited until we can register endoscopic video to CT using standard clinical equipment. In this phantom study we tested a registration method using virtual endoscopy to measure CT-space positions from endoscopic video. Methods: Our phantom is a contorted clay cylinder with 2-mm-diameter markers in the luminal surface. These markers are visible on both CT and endoscopic video. Virtual endoscope images were rendered from a polygonal mesh created by segmenting the phantom’s luminal surface on CT. We tested registration accuracy by tracking the endoscope’s 6-degree-of-freedom coordinates frame-to-frame in a video recorded as it moved through the phantom, and using these coordinates to measure CT-space positions of markers visible in the final frame. To track the endoscope we used the Nelder-Mead method to search for coordinates that render the virtual frame most similar to the next recorded frame. We measured the endoscope’s initial-frame coordinates using a set of visible markers, and for image similarity we used a combination of mutual information and gradient alignment. CT-space marker positions were measured by projecting their final-frame pixel addresses through the virtual endoscope to intersect with the mesh. Registration error was quantified as the distance between this intersection and the marker’s manually-selected CT-space position. Results: Tracking succeeded for 6 of 8 videos, for which the mean registration error was 4.8±3.5mm (24 measurements total). The mean error in the axial direction (3.1±3.3mm) was larger than in the sagittal or coronal directions (2.0±2.3mm, 1.7±1.6mm). In the other 2 videos, the virtual endoscope got stuck in a false minimum. Conclusion: Our method can successfully track the position and orientation of an endoscope, and it provides accurate spatial mapping from endoscopic video to CT. This method will serve as a foundation for an endoscopy
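
    The frame-to-frame search described here can be viewed as a derivative-free optimization over the endoscope's six pose parameters. The sketch below is a hypothetical outline, not the study's code: render_virtual stands in for the mesh renderer and similarity for the mutual-information/gradient-alignment metric, neither of which is shown.

        import numpy as np
        from scipy.optimize import minimize

        def track_frame(pose6, recorded_frame, render_virtual, similarity):
            """Find the 6-DOF pose (x, y, z, yaw, pitch, roll) whose rendered
            virtual view is most similar to the next recorded video frame."""
            def cost(p):
                return -similarity(render_virtual(p), recorded_frame)  # maximise similarity
            res = minimize(cost, np.asarray(pose6, dtype=float), method="Nelder-Mead",
                           options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 500})
            return res.x

    Repeating this search for every frame yields the pose trajectory; as the record notes, the simplex search can still fall into a false minimum, which is what happened in 2 of the 8 videos.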

  12. TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Myers, K; Bakic, P; Abbey, C

    2014-06-15

    This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM task group TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state of the science, as well as the advantages and limitations of various approaches to testing realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state of the science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.

  13. Anesthesiology training using 3D imaging and virtual reality

    NASA Astrophysics Data System (ADS)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  14. A Functional Magnetic Resonance Imaging Assessment of Small Animals’ Phobia Using Virtual Reality as a Stimulus

    PubMed Central

    Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar

    2014-01-01

    Background To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small animals’ phobia. Objective The objective of our study was to evaluate the brain activations associated with small animals’ phobia through the use of virtual environments. This context will have the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. Methods We have analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. Results We have found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), which is an area that has been previously related to the feeling of self-awareness. Conclusions In our opinion, these results demonstrate that virtual stimuli can enhance brain activations consistent with previous studies using still images, but in an environment closer to the real situation the subject would face in their daily lives. PMID:25654753

  15. A virtual phantom library for the quantification of deformable image registration uncertainties in patients with cancers of the head and neck.

    PubMed

    Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M

    2013-11-01

    Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual
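
    One of the checks mentioned above, the determinant of the Jacobian of the deformation, flags physically implausible (folding) regions where it is zero or negative. A rough sketch for a displacement field sampled on a regular grid follows; the array layout, spacing handling, and finite-difference choice are assumptions rather than the study's implementation.

        import numpy as np

        def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
            """Determinant of the Jacobian of x -> x + u(x) for a displacement
            field dvf of shape (3, nz, ny, nx); values <= 0 indicate folding."""
            grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # du_i / dx_j
            J = np.empty(dvf.shape[1:] + (3, 3))
            for i in range(3):
                for j in range(3):
                    J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
            return np.linalg.det(J)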

  16. Free-viewpoint video of human actors using multiple handheld Kinects.

    PubMed

    Ye, Genzhi; Liu, Yebin; Deng, Yue; Hasler, Nils; Ji, Xiangyang; Dai, Qionghai; Theobalt, Christian

    2013-10-01

    We present an algorithm for creating free-viewpoint video of interacting humans using three handheld Kinect cameras. Our method reconstructs the deforming surface geometry and temporally varying texture of humans through estimation of human poses and camera poses for every time step of the RGBZ video. Skeletal configurations and camera poses are found by solving a joint energy minimization problem, which optimizes the alignment of RGBZ data from all cameras, as well as the alignment of human shape templates to the Kinect data. The energy function is based on a combination of geometric correspondence finding, implicit scene segmentation, and correspondence finding using image features. Finally, texture recovery is achieved through joint optimization over spatio-temporal RGB data using matrix completion. As opposed to previous methods, our algorithm succeeds on free-viewpoint video of human actors under general uncontrolled indoor scenes with potentially dynamic background, and it succeeds even if the cameras are moving.

  17. Virtual Reality simulator for dental anesthesia training in the inferior alveolar nerve block.

    PubMed

    Corrêa, Cléber Gimenez; Machado, Maria Aparecida de Andrade Moreira; Ranzini, Edith; Tori, Romero; Nunes, Fátima de Lourdes Santos

    2017-01-01

    This study shows the development and validation of a dental anesthesia-training simulator, specifically for the inferior alveolar nerve block (IANB). The system developed provides the tactile sensation of inserting a real needle in a human patient, using Virtual Reality (VR) techniques and a haptic device that can provide perceived force feedback in the needle insertion task during the anesthesia procedure. To simulate a realistic anesthesia procedure, a Carpule syringe was coupled to a haptic device. The Volere method was used to elicit requirements from users in the dentistry area; repeated-measures two-way ANOVA (analysis of variance), Tukey post-hoc tests, and averages were used for the analysis of results. A questionnaire-based subjective evaluation method was applied to collect information about the simulator, and 26 people participated in the experiments (12 beginners, 12 at intermediate level, and 2 experts). The questionnaire included profile, preferences (number of viewpoints, texture of the objects, and haptic device handler), as well as visual (appearance, scale, and position of objects) and haptic aspects (motion space, tactile sensation, and motion reproduction). The visual aspect was considered appropriate, while the haptic feedback must be improved, which users can do by calibrating the virtual tissues' resistance. The evaluation of visual aspects was influenced by the participants' experience, according to the ANOVA test (F=15.6, p=0.0002, with p<0.01). The user preferences were the simulator with two viewpoints, objects with texture based on images, and the device with a syringe coupled to it. The simulation was considered thoroughly satisfactory for anesthesia training, considering the needle insertion task, which includes the correct insertion point and depth, as well as the perception of tissue resistance during insertion.

  18. Application of two segmentation protocols during the processing of virtual images in rapid prototyping: ex vivo study with human dry mandibles.

    PubMed

    Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida

    2013-12-01

    The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at an error probability of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). During the design of a virtual 3D reconstruction, both the "outline only" and "all-boundary lines" segmentation protocols can be used. Virtual processing of CT images is the most complex stage in the manufacture of the biomodel. Establishing a better protocol during this phase allows the construction of a biomodel whose characteristics are closer to the original anatomical structures. This is essential to ensure correct preoperative planning and suitable treatment.

  19. Multi-viewpoint Coronal Mass Ejection Catalog Based on STEREO COR2 Observations

    NASA Astrophysics Data System (ADS)

    Vourlidas, Angelos; Balmaceda, Laura A.; Stenborg, Guillermo; Dal Lago, Alisson

    2017-04-01

    We present the first multi-viewpoint coronal mass ejection (CME) catalog. The events are identified visually in simultaneous total brightness observations from the twin SECCHI/COR2 coronagraphs on board the Solar Terrestrial Relations Observatory mission. The Multi-View CME Catalog differs from past catalogs in three key aspects: (1) all events between the two viewpoints are cross-linked, (2) each event is assigned a physics-motivated morphological classification (e.g., jet, wave, and flux rope), and (3) kinematic and geometric information is extracted semi-automatically via a supervised image segmentation algorithm. The database extends from the beginning of the COR2 synoptic program (2007 March) to the end of dual-viewpoint observations (2014 September). It contains 4473 unique events with 3358 events identified in both COR2s. Kinematic properties exist currently for 1747 events (26% of COR2-A events and 17% of COR2-B events). We examine several issues, made possible by this cross-linked CME database, including the role of projection on the perceived morphology of events, the missing CME rate, the existence of cool material in CMEs, the solar-cycle dependence of CME rate, speed, and width, and the existence of flux ropes within CMEs. We discuss the implications for past single-viewpoint studies and for Space Weather research. The database is publicly available on the web, including all available measurements. We hope that it will become a useful resource for the community.

  20. Characterization of the biliary tract by virtual ultrasonography constructed by gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid-enhanced magnetic resonance imaging.

    PubMed

    Koizumi, Yohei; Hirooka, Masashi; Ochi, Hironori; Tokumoto, Yoshio; Takechi, Megumi; Hiraoka, Atsushi; Ikeda, Yoshio; Kumagi, Teru; Matsuura, Bunzo; Abe, Masanori; Hiasa, Yoichi

    2015-04-01

    This study aimed at prospectively evaluating bile duct anatomy on ultrasonography and evaluating the safety and utility of radiofrequency ablation (RFA) assisted by virtual ultrasonography from gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI). The institutional review board approved this study, and patients provided written informed consent prior to entry into the study. Bile duct anatomy was assessed in 201 patients who underwent Gd-EOB-DTPA-enhanced MRI for the evaluation of hepatic tumor. Eighty-one of these patients subsequently underwent RFA assisted by ultrasound imaging. In 23 patients, the tumor was located within 5 mm of the central bile duct, as demonstrated by MRI. Virtual ultrasonography constructed by Gd-EOB-enhanced MRI was able to visualize the common bile duct, left hepatic duct, and right hepatic duct in 96.5, 94.0, and 89.6 % of cases, respectively. The target hepatic tumor nodule and biliary duct could be detected with virtual ultrasonography in all patients, and no severe complications occurred. The running pattern of the bile ducts could be recognized on conventional ultrasound by referencing virtual ultrasonography constructed by Gd-EOB-DTPA-enhanced MRI. RFA assisted by this imaging strategy did not result in bile duct injury.

  1. The Virtual Dollhouse: Body Image and Weight Stigma in Second Life

    NASA Astrophysics Data System (ADS)

    Linares, R.; Bailenson, J.; Bailey, J.; Stevenson Won, A.

    2012-12-01

    Second Life is a virtual world where fantasy and reality collide as users can customize their digital representation or avatar. The act of wanting to ignore or avoid the real world's physical limitations can be called "avatar escapism" (Ducheneaut, Wen, Yee, Wadley, 2009). In the media, the increasingly thin standard of beauty (Berel, Irving, 1998) has augmented negative stereotypes of overweight people to the point of making it acceptable for people to ridicule others' body image (Wang, Brownell, Wadden, 2004). In the real world, these concepts hurt people who are unable or unwilling to achieve an "acceptable" body size, often leading them to be ridiculed. In the virtual world, a person may portray their desired body, potentially escaping judgment from others. Can this more liberated form of bodily expression lead people to expect and need that perfection to a point where they abandon the real world in order to live in that perfection? With this knowledge, we looked at the implications of the real-world idolization of the perfect body and how this is transferred into the virtual space. In addition, we investigated how the reactions and behaviors that people have when others rebel against the "Barbie doll" appearance (Ducheneaut, Wen, Yee, Wadley, 2009) affect us in the real world.

  2. Incorporating virtual reality graphics with brain imaging for assessment of sport-related concussions.

    PubMed

    Slobounov, Semyon; Sebastianelli, Wayne; Newell, Karl M

    2011-01-01

    There is a growing concern that traditional neuropsychological (NP) testing tools are not sensitive to detecting residual brain dysfunctions in subjects suffering from mild traumatic brain injuries (MTBI). Moreover, most MTBI patients are asymptomatic based on anatomical brain imaging (CT, MRI), neurological examinations and patients' subjective reports within 10 days post-injury. Our ongoing research has documented that residual balance and visual-kinesthetic dysfunctions, along with their underlying alterations of neural substrates, may be detected in "asymptomatic subjects" by means of Virtual Reality (VR) graphics incorporated with brain imaging (EEG) techniques.

  3. Beyond seismic interferometry: imaging the earth's interior with virtual sources and receivers inside the earth

    NASA Astrophysics Data System (ADS)

    Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.

    2015-12-01

    Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers, anywhere inside the earth, which record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.

  4. ACStor: Optimizing Access Performance of Virtual Disk Images in Clouds

    DOE PAGES

    Wu, Song; Wang, Yihong; Luo, Wei; ...

    2017-03-02

    In virtualized data centers, virtual disk images (VDIs) serve as the containers in the virtual environment, so their access performance is critical for overall system performance. Some distributed VDI chunk storage systems have been proposed in order to alleviate the I/O bottleneck for VM management. As the system scales up to a large number of running VMs, however, the overall network traffic inevitably becomes unbalanced, with hot spots on some VMs, leading to I/O performance degradation when accessing the VMs. Here, we propose an adaptive and collaborative VDI storage system (ACStor) to resolve this performance issue. In comparison with existing research, our solution is able to dynamically balance the traffic workloads in accessing VDI chunks, based on the run-time network state. Specifically, compute nodes with lightly loaded traffic are adaptively assigned more chunk access requests from remote VMs and vice versa, which effectively eliminates the above problem and thus improves the I/O performance of VMs. We also implement a prototype based on our ACStor design, and evaluate it by various benchmarks on a real cluster with 32 nodes and a simulated platform with 256 nodes. Experiments show that under different network traffic patterns of data centers, our solution achieves up to a 2-8× performance gain in VM booting time and VM I/O throughput, in comparison with other state-of-the-art approaches.
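
    The adaptive balancing idea can be illustrated with a toy greedy assignment that always routes the next remote chunk request to the currently least-loaded compute node. This is only a schematic reading of the approach, not ACStor's actual scheduler; the data structures and names are hypothetical.

        import heapq

        def assign_chunk_requests(requests, node_load):
            """Greedy traffic-aware placement: requests is a list of
            (request_id, size_bytes); node_load maps node id -> current load.
            Each request goes to the node with the lightest load so far."""
            heap = [(load, node) for node, load in node_load.items()]
            heapq.heapify(heap)
            assignment = {}
            for req_id, size in requests:
                load, node = heapq.heappop(heap)
                assignment[req_id] = node
                heapq.heappush(heap, (load + size, node))
            return assignment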

  5. From experimental imaging techniques to virtual embryology.

    PubMed

    Weninger, Wolfgang J; Tassy, Olivier; Darras, Sébastien; Geyer, Stefan H; Thieffry, Denis

    2004-01-01

    Modern embryology increasingly relies on descriptive and functional three-dimensional (3D) and four-dimensional (4D) analysis of physically, optically, or virtually sectioned specimens. To cope with the technical requirements, new methods for highly detailed in vivo imaging, as well as for the generation of high-resolution digital volume data sets for the accurate visualisation of transgene activity and gene product presence in the context of embryo morphology, were recently developed and are under construction. These methods profoundly change the scientific applicability, appearance and style of modern embryo representations. In this paper, we present an overview of the emerging techniques to create, visualise and administrate embryo representations (databases, digital data sets, 3-4D embryo reconstructions, models, etc.), and discuss the implications of these new methods for the work of modern embryologists, including research, teaching, the selection of specific model organisms, and potential collaborators.

  6. Virtually distortion-free imaging system for large field, high resolution lithography using electrons, ions or other particle beams

    DOEpatents

    Hawryluk, A.M.; Ceglio, N.M.

    1993-01-12

    Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position. Particle beams, including electrons, ions and neutral particles, may be used as well as electromagnetic radiation.

  7. Virtually distortion-free imaging system for large field, high resolution lithography using electrons, ions or other particle beams

    DOEpatents

    Hawryluk, Andrew M.; Ceglio, Natale M.

    1993-01-01

    Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position. Particle beams, including electrons, ions and neutral particles, may be used as well as electromagnetic radiation.

  8. A system for the registration of arthroscopic images to magnetic resonance images of the knee: for improved virtual knee arthroscopy

    NASA Astrophysics Data System (ADS)

    Hu, Chengliang; Amati, Giancarlo; Gullick, Nicola; Oakley, Stephen; Hurmusiadis, Vassilios; Schaeffter, Tobias; Penney, Graeme; Rhode, Kawal

    2009-02-01

    Knee arthroscopy is a minimally invasive procedure that is routinely carried out for the diagnosis and treatment of pathologies of the knee joint. A high level of expertise is required to carry out this procedure and therefore the clinical training is extensive. There are several reasons for this that include the small field of view seen by the arthroscope and the high degree of distortion in the video images. Several virtual arthroscopy simulators have been proposed to augment the learning process. One of the limitations of these simulators is the generic models that are used. We propose to develop a new virtual arthroscopy simulator that will allow the use of pathology-specific models with an increased level of photo-realism. In order to generate these models we propose to use registered magnetic resonance images (MRI) and arthroscopic video images collected from patients with a variety of knee pathologies. We present a method to perform this registration based on the use of a combined X-ray and MR imaging system (XMR). In order to validate our technique we carried out MR imaging and arthroscopy of a custom-made acrylic phantom in the XMR environment. The registration between the two modalities was computed using a combination of XMR and camera calibration, and optical tracking. Both two-dimensional (2D) and three-dimensional (3D) registration errors were computed and shown to be approximately 0.8 and 3 mm, respectively. Further to this, we qualitatively tested our approach using a more realistic plastic knee model that is used for the arthroscopy training.

  9. Signal enhancement in optical projection tomography via virtual high dynamic range imaging of single exposure

    NASA Astrophysics Data System (ADS)

    Yang, Yujie; Dong, Di; Shi, Liangliang; Wang, Jun; Yang, Xin; Tian, Jie

    2015-03-01

    Optical projection tomography (OPT) is a mesoscopic-scale optical imaging technique for specimens between 1 mm and 10 mm. OPT has been proven to be immensely useful in a wide variety of biological applications, such as developmental biology and pathology, but its shortcomings in imaging specimens containing widely differing contrast elements are obvious. The longer exposure for high-intensity tissues may lead to oversaturation of other areas, whereas a relatively short exposure may cause similarity with the surrounding background. In this paper, we propose an approach to make a trade-off between capturing weak signals and revealing more details in OPT imaging. This approach consists of three steps. First, the specimens are scanned once over 360 degrees at an exposure above normal but without overexposure to acquire the projection data. This reduces the photobleaching and pre-registration computation compared with the multiple exposures of the conventional high dynamic range (HDR) imaging method. Second, three virtual channels are produced for each projection image based on the histogram distribution to simulate the low, normal and high exposure images used in traditional HDR photography. Finally, each virtual channel is normalized to the full gray-scale range and the three channels are recombined into one image using weighting coefficients optimized by a standard eigen-decomposition method. After applying our approach to the projection data, the filtered back-projection (FBP) algorithm is used for 3-dimensional reconstruction. A neonatal wild-type mouse paw was scanned to verify this approach. Results demonstrated the effectiveness of the proposed approach.
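
    A rough sketch of the three-virtual-channel idea follows. The percentile thresholds and the fixed recombination weights are assumptions for illustration; the paper optimizes the weights with an eigen-decomposition, which is not reproduced here.

        import numpy as np

        def virtual_hdr(projection, weights=(0.25, 0.5, 0.25)):
            """Split one exposure into three virtual channels using histogram
            percentiles, stretch each to full range, and recombine them."""
            lo, mid, hi = np.percentile(projection, (5, 50, 95))
            channels = []
            for upper in (mid, hi, projection.max()):    # emulate long/normal/short exposure
                ch = np.clip(projection, lo, upper)
                ch = (ch - lo) / max(upper - lo, 1e-12)  # normalise to [0, 1]
                channels.append(ch)
            return sum(w * c for w, c in zip(weights, channels))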

  10. Virtual landmarks

    NASA Astrophysics Data System (ADS)

    Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.

    2017-03-01

    Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must be on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
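
    The recursive subdivision can be sketched as follows. This is a simplified reading of the approach (binary splits along the first principal axis only, with hypothetical names and stopping rules), not the authors' implementation.

        import numpy as np

        def virtual_landmarks(points, depth=2):
            """Recursive PCA subdivision: the centroid of each region is kept as a
            virtual landmark, and the region is split by the plane through the
            centroid normal to its first principal axis.  points: (N, 3) voxels."""
            centroid = points.mean(axis=0)
            landmarks = [centroid]
            if depth == 0 or len(points) < 10:
                return landmarks
            _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
            side = (points - centroid) @ vt[0] > 0     # split along the 1st principal axis
            for half in (points[side], points[~side]):
                if len(half):
                    landmarks += virtual_landmarks(half, depth - 1)
            return landmarks

    Because the split planes follow the data's own principal axes, the resulting landmarks move with the object under translation, rotation, and scaling, in line with the invariance noted in the abstract.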

  11. Innovative application of virtual display technique in virtual museum

    NASA Astrophysics Data System (ADS)

    Zhang, Jiankang

    2017-09-01

    A virtual museum displays and simulates the functions of a real museum on the Internet in the form of 3-dimensional virtual reality through interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and achieving effective interaction with the offline museum rely on making full use of 3-dimensional panorama, virtual reality, and augmented reality techniques, and on innovatively applying dynamic environment modeling, real-time 3-dimensional graphics generation, system integration, and other key virtual reality techniques in the overall design of the virtual museum. The 3-dimensional panorama technique, also known as panoramic photography or virtual reality, is based on static images of reality. Virtual reality is a computer simulation technique that creates an interactive 3-dimensional dynamic visual world that users can experience. Augmented reality, also known as mixed reality, simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring better experience and convenience to the public, but also help improve the influence and cultural functions of the real museum.

  12. Time multiplexing for increased FOV and resolution in virtual reality

    NASA Astrophysics Data System (ADS)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, of the field of view (FOV), or of both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole. Each partial real image uses the full set of physical pixels available in the display. The partial real images are formed in succession and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
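
    The underlying budget is straightforward: n time-multiplexed channels require the display panel to refresh n times faster than the desired per-view rate, in exchange for an n-fold increase in the virtual image's pixel count. A small sanity-check sketch (the function, its arguments, and the 60 fps per-view floor are assumptions):

        def time_multiplex_budget(panel_pixels, panel_fps, n_channels, min_view_fps=60):
            """Return whether n_channels partial images can be multiplexed on a
            panel, the refresh rate this requires, and the resulting virtual
            pixel count (n_channels times the physical pixel count)."""
            required_fps = n_channels * min_view_fps
            feasible = panel_fps >= required_fps
            return feasible, required_fps, panel_pixels * n_channels

    For example, two channels at 60 fps per view need a 120 fps panel, consistent with the >120 fps requirement mentioned above.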

  13. Technical Note: Relation between dual-energy subtraction of CT images for electron density calibration and virtual monochromatic imaging.

    PubMed

    Saito, Masatoshi

    2015-07-01

    For accurate tissue inhomogeneity correction in radiotherapy treatment planning, the author previously proposed a simple conversion of the energy-subtracted computed tomography (CT) number to an electron density (ΔHU-ρe conversion), which provides a single linear relationship between ΔHU and ρe over a wide ρe range. The purpose of the present study was to reveal the relation between the ΔHU image for ρe calibration and a virtually monochromatic CT image by performing numerical analyses based on basis material decomposition in dual-energy CT. The author determined the weighting factor, α0, of the ΔHU-ρe conversion through numerical analyses of the International Commission on Radiation Units and Measurements Report-46 human body tissues using their attenuation coefficients and given ρe values. Another weighting factor, α(E), for synthesizing a virtual monochromatic CT image from high- and low-kV CT images, was also calculated in the energy range of 0.03 < E < 5 MeV, assuming that cortical bone and water were the basis materials. The mass attenuation coefficients for these materials were obtained using the XCOM photon cross-section database. The effective x-ray energies used to calculate the attenuation were chosen to imitate a dual-source CT scanner operated at 80-140 and 100-140 kV/Sn. The determined α0 values were 0.455 for 80-140 kV/Sn and 0.743 for 100-140 kV/Sn. These values coincided almost perfectly with the respective maximal points of the calculated α(E) curves, located at approximately 1 MeV, at which the photon-matter interaction in human body tissues is exclusively incoherent (Compton) scattering. The ΔHU image could be regarded substantially as a CT image acquired with monoenergetic 1-MeV photons, which provides a linear relationship between CT numbers and electron densities.
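
    For illustration, the ΔHU-ρe conversion can be written as a weighted difference of the two kV images followed by a single linear calibration. The code below is only a sketch of that relationship; the exact sign/normalization conventions and the calibration constants are assumptions, not values from the paper.

        def delta_hu(hu_high, hu_low, alpha):
            # Energy-subtracted CT number as a weighted difference of the
            # high- and low-kV CT numbers (illustrative form; the paper's
            # exact convention may differ).
            return (1.0 + alpha) * hu_high - alpha * hu_low

        def relative_electron_density(dhu, a=1.0, b=1.0):
            # Single linear DeltaHU -> relative electron density calibration;
            # a and b would be fitted from materials of known density
            # (placeholder values here, not the paper's constants).
            return a * dhu / 1000.0 + b

    With the reported α0 of 0.455 (80-140 kV/Sn) or 0.743 (100-140 kV/Sn), the same weighted difference would be applied voxel-wise to the paired CT volumes before the linear calibration.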

  14. Acute effects of alcohol on intrusive memory development and viewpoint dependence in spatial memory support a dual representation model.

    PubMed

    Bisby, James A; King, John A; Brewin, Chris R; Burgess, Neil; Curran, H Valerie

    2010-08-01

    A dual representation model of intrusive memory proposes that personally experienced events give rise to two types of representation: an image-based, egocentric representation based on sensory-perceptual features; and a more abstract, allocentric representation that incorporates spatiotemporal context. The model proposes that intrusions reflect involuntary reactivation of egocentric representations in the absence of a corresponding allocentric representation. We tested the model by investigating the effect of alcohol on intrusive memories and, concurrently, on egocentric and allocentric spatial memory. With a double-blind independent group design participants were administered alcohol (.4 or .8 g/kg) or placebo. A virtual environment was used to present objects and test recognition memory from the same viewpoint as presentation (tapping egocentric memory) or a shifted viewpoint (tapping allocentric memory). Participants were also exposed to a trauma video and required to detail intrusive memories for 7 days, after which explicit memory was assessed. There was a selective impairment of shifted-view recognition after the low dose of alcohol, whereas the high dose induced a global impairment in same-view and shifted-view conditions. Alcohol showed a dose-dependent inverted "U"-shaped effect on intrusions, with only the low dose increasing the number of intrusions, replicating previous work. When same-view recognition was intact, decrements in shifted-view recognition were associated with increases in intrusions. The differential effect of alcohol on intrusive memories and on same/shifted-view recognition support a dual representation model in which intrusions might reflect an imbalance between two types of memory representation. These findings highlight important clinical implications, given alcohol's involvement in real-life trauma. Copyright 2010 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  15. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects in an urban area, such as buildings, trees, vegetation, and some man-made features. The demand for 3D city modeling is increasing daily for various engineering and non-engineering applications. Generally, three main image-based approaches are used to generate virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling, and close-range-photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach to image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging other pieces of the larger area; scaling and alignment of the 3D model were performed, and after texturing and rendering, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries

  16. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    NASA Technical Reports Server (NTRS)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single objective lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch in the two viewpoints could lead to color rivalry, where the human vision system fails to resolve two different colors. The mismatch becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  17. Fully Three-Dimensional Virtual-Reality System

    NASA Technical Reports Server (NTRS)

    Beckman, Brian C.

    1994-01-01

    Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.

  18. A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image

    PubMed Central

    Guo, Chengyu; Ruan, Songsong; Liang, Xiaohui; Zhao, Qinping

    2016-01-01

    Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. Using an unrestricted capture scheme, which produces occlusions or breezing, the information describing each part of a human body and the relationship between each part or even different pedestrians must be present in a still image. Using this framework, a multi-layered, spatial, virtual, human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts by using the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experiment results of this study are used to describe the effectiveness and usability of the proposed approach. PMID:26907289

  19. A phantom study of the immobilization and the indications for using virtual isocenter in stereoscopic X‐ray image guidance system referring to position localizer in frameless radiosurgery

    PubMed Central

    Chang, Hsiao‐Han; Lee, Hsiao‐Fei; Sung, Chien‐Cheng; Liao, Tsung‐I

    2013-01-01

    A frameless radiosurgery system uses a set of thermoplastic masks for fixation and stereoscopic X‐ray imaging for alignment. The accuracy depends on mask fixation and imaging. Under certain circumstances, the guidance images may contain insufficient bony structures, resulting in lower accuracy. A virtual isocenter function is designed for such scenarios. In this study, we investigated the immobilization and the indications for using the virtual isocenter. Twenty‐four arbitrary imaginary treatment targets (ITTs) in a phantom were evaluated. The external localizer with positioner films was used as reference. The alignments obtained using the actual and the virtual isocenter in image guidance were compared. The deviation of the alignment after removing and then resetting the mask was also checked. The results showed that the mean deviation between the alignment by image guidance using the actual isocenter (Isoimg) and the localizer (Isoloc) was 2.26 mm ± 1.16 mm (standard deviation, SD), and 1.66 mm ± 0.83 mm when using the virtual isocenter. The deviation of the alignment by image guidance using the actual isocenter relative to the localizer before and after mask resetting was 7.02 mm ± 5.8 mm. The deviations before and after mask resetting were insignificant when the distance from the target center to the skull edge was larger than 80 mm in the craniocaudal direction. The deviations between the alignments using the actual and virtual isocenter in image guidance were not significant if the minimum distance from the target center to the skull edge was greater than or equal to 30 mm. Because of the unacceptable deviation after mask resetting, image guidance is necessary to improve the accuracy of frameless immobilization. A treatment isocenter less than 30 mm from the skull bone should be an indication for using the virtual isocenter for alignment in image guidance. The virtual isocenter should be set as caudally as possible, and the sella of the skull is the ideal point. PACS numbers: 87.55.kh, 87.55.ne, 87.55.tm PMID:23835379

  20. Spectral Reconstruction for Obtaining Virtual Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Castro, E. C.

    2016-12-01

    Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data are from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in integrated circuits, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectance bands of Landsat 8, three visible (499 nm, 585 nm, 670 nm) and one near-infrared (872 nm), and a spectral library of ground elements acquired from the United States Geological Survey (USGS) are used. The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. This technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
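
    A compact sketch of the SVD-based reconstruction idea, under stated assumptions: the spectral library here is random placeholder data rather than the USGS library, and sampling the basis at the four Landsat 8 band centers stands in for integrating over the sensor's spectral response functions.

```python
import numpy as np

# Hypothetical spectral library: rows are library spectra sampled at 1 nm from
# 420-1020 nm (random placeholders; a real library, e.g. USGS, would be loaded).
wavelengths = np.arange(420, 1021, 1)
library = np.random.rand(50, wavelengths.size)

# Basis spectra from the SVD of the library (keep as many as measured bands).
_, _, vt = np.linalg.svd(library, full_matrices=False)
n_bands = 4
basis = vt[:n_bands]                         # shape (n_bands, n_wavelengths)

# Landsat 8 band centers used in the study (nm) and one measured pixel.
band_centers = [499, 585, 670, 872]
band_idx = [np.argmin(np.abs(wavelengths - c)) for c in band_centers]
measured = np.array([0.05, 0.07, 0.06, 0.35])   # example surface reflectances

# Solve for coefficients so the basis, sampled at the band centers, reproduces
# the measurements; then expand to the full "virtual hyperspectral" spectrum.
coeffs, *_ = np.linalg.lstsq(basis[:, band_idx].T, measured, rcond=None)
reconstructed = coeffs @ basis
```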

  1. Developing students’ ideas about lens imaging: teaching experiments with an image-based approach

    NASA Astrophysics Data System (ADS)

    Grusche, Sascha

    2017-07-01

    Lens imaging is a classic topic in physics education. To guide students from their holistic viewpoint to the scientists’ analytic viewpoint, an image-based approach to lens imaging has recently been proposed. To study the effect of the image-based approach on undergraduate students’ ideas, teaching experiments are performed and evaluated using qualitative content analysis. Some of the students’ ideas have not been reported before, namely those related to blurry lens images, and those developed by the proposed teaching approach. To describe learning pathways systematically, a conception-versus-time coordinate system is introduced, specifying how teaching actions help students advance toward a scientific understanding.

  2. Viewpoint invariance in the discrimination of upright and inverted faces

    PubMed Central

    Wright, Alissa; Barton, Jason JS

    2008-01-01

    Current models of face processing support an orientation-dependent expert face processing mechanism. However, even when upright, faces are encountered from different viewpoints, across which a face processing system must be able to generalize. Different computational models have generated competing predictions of how viewpoint variation might affect the perception of upright versus inverted faces. Our goal was to examine the interaction between viewpoint variation and orientation on face discrimination. Sixteen normal subjects performed an oddity paradigm requiring them to discriminate changes in three simultaneously viewed morphed faces presented either upright or inverted. In one type of trial all the faces were seen in frontal view; in the other, all faces varied in viewpoint, rotated 45° from each other. After the effects of orientation were adjusted for perceptual difficulty, there were only main effects of orientation and viewpoint, with no interaction between orientation and viewpoint. We conclude that the effects of viewpoint variation on the perceptual discrimination of faces are not different for upright versus inverted faces, indicating that these effects are independent of the expertise that exists for upright faces. PMID:18804486

  3. Factors to keep in mind when introducing virtual microscopy.

    PubMed

    Glatz-Krieger, Katharina; Spornitz, Udo; Spatz, Alain; Mihatsch, Michael J; Glatz, Dieter

    2006-03-01

    Digitization of glass slides and delivery of so-called virtual slides (VS) emulating a real microscope over the Internet have become reality due to recent improvements in technology. We have implemented a virtual microscope for instruction of medical students and for continuing medical education. Up to 30,000 images per slide are captured using a microscope with an automated stage. The images are post-processed and then served by a plain hypertext transfer protocol (http)-server. A virtual slide client (vMic) based on Macromedia's Flash MX, a highly accepted technology available on every modern Web browser, has been developed. All necessary virtual slide parameters are stored in an XML file together with the image. Evaluation of the courses by questionnaire indicated that most students and many but not all pathologists regard virtual slides as an adequate replacement for traditional slides. All our virtual slides are publicly accessible over the World Wide Web (WWW) at http://vmic.unibas.ch . Recently, several commercially available virtual slide acquisition systems (VSAS) have been developed that use various technologies to acquire and distribute virtual slides. These systems differ in speed, image quality, compatibility, viewer functionalities and price. This paper gives an overview of the factors to keep in mind when introducing virtual microscopy.

  4. 3-D Imaging In Virtual Environment: A Scientific Clinical and Teaching Tool

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.; DeVincenzi, Donald L. (Technical Monitor)

    1996-01-01

    The advent of powerful graphics workstations and computers has led to the advancement of scientific knowledge through three-dimensional (3-D) reconstruction and imaging of biological cells and tissues. The Biocomputation Center at NASA Ames Research Center pioneered the effort to produce an entirely computerized method for reconstruction of objects from serial sections studied in a transmission electron microscope (TEM). The software developed, ROSS (Reconstruction of Serial Sections), is now being distributed to users across the United States through Space Act Agreements. The software is used in widely disparate fields such as geology, botany, biology and medicine. In the Biocomputation Center, ROSS serves as the basis for development of virtual environment technologies for scientific and medical use. This report will describe the Virtual Surgery Workstation Project that is ongoing with clinicians at Stanford University Medical Center, and the role of the Visible Human data in the project.

  5. Convolutional virtual electric field for image segmentation using active contours.

    PubMed

    Wang, Yuanquan; Zhu, Ce; Zhang, Jiawan; Jian, Yuden

    2014-01-01

    Gradient vector flow (GVF) is an effective external force for active contours; however, it suffers from a heavy computational load. The virtual electric field (VEF) model, which can be implemented in real time using the fast Fourier transform (FFT), was later proposed as a remedy for the GVF model. In this work, we present an extension of the VEF model, which is referred to as the CONvolutional Virtual Electric Field, CONVEF for short. The proposed CONVEF model treats the VEF model as a convolution operation and employs a modified distance in the convolution kernel. The CONVEF model is also closely related to the vector field convolution (VFC) model. Compared with the GVF, VEF and VFC models, the CONVEF model possesses not only desirable properties of these models, such as enlarged capture range, u-shape concavity convergence, subject contour convergence and initialization insensitivity, but also other interesting properties such as G-shape concavity convergence, separation of neighboring objects, and noise suppression while simultaneously preserving weak edges. Meanwhile, the CONVEF model can also be implemented in real time using the FFT. Experimental results illustrate these advantages of the CONVEF model on both synthetic and natural images.
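
    To make the convolution idea concrete, the sketch below computes an external force field by FFT-convolving an edge map with an inverse-distance (electric-field-like) kernel. This follows the general VEF/CONVEF recipe but uses a plain Euclidean-distance kernel, not the paper's modified distance, so it is only an illustration of the technique.

```python
import numpy as np
from numpy.fft import fft2, ifft2

def convolution_external_force(edge_map, eps=1e-3):
    """External force field from FFT convolution of the edge map with an
    electric-field-like kernel (in the spirit of VEF/CONVEF)."""
    h, w = edge_map.shape
    y, x = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2].astype(float)
    r = np.sqrt(x ** 2 + y ** 2) + eps
    kx, ky = -x / r ** 3, -y / r ** 3          # force points toward edge "charges"
    # Convolve the edge map with each kernel component in the Fourier domain.
    E = fft2(edge_map)
    fx = np.real(ifft2(E * fft2(np.fft.ifftshift(kx))))
    fy = np.real(ifft2(E * fft2(np.fft.ifftshift(ky))))
    mag = np.hypot(fx, fy) + 1e-12
    return fx / mag, fy / mag                  # unit force vectors

# Example with a synthetic edge map:
# edges = np.zeros((128, 128)); edges[40:90, 40] = edges[40:90, 90] = 1.0
# fx, fy = convolution_external_force(edges)
```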

  6. Comparison of different methods for gender estimation from face image of various poses

    NASA Astrophysics Data System (ADS)

    Ishii, Yohei; Hongo, Hitoshi; Niwa, Yoshinori; Yamamoto, Kazuhiko

    2003-04-01

    Recently, gender estimation from face images has been studied mainly for frontal facial images. However, it is difficult to obtain such facial images consistently in application systems for security, surveillance and marketing research. In order to build such systems, a method is required to estimate gender from images of various facial poses. In this paper, three different classifiers that use four directional features (FDF) are compared for appearance-based gender estimation. The classifiers are linear discriminant analysis (LDA), Support Vector Machines (SVMs) and Sparse Network of Winnows (SNoW). Face images used for the experiments were obtained from 35 viewpoints, whose directions varied by +/-45 degrees horizontally and +/-30 degrees vertically at 15-degree intervals. Although LDA showed the best performance for frontal facial images, SVM with a Gaussian kernel showed the best performance (86.0%) for the facial images from all 35 viewpoints. These results suggest that SVM with a Gaussian kernel is robust to changes in viewpoint when estimating gender. Furthermore, the estimation rate at each of the 35 viewpoints was quite close to the average estimation rate. This suggests that the method can reasonably estimate gender within the tested range of viewpoints by learning face images from multiple directions within a single class.
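
    A small sketch of the best-performing configuration (an SVM with a Gaussian/RBF kernel), using scikit-learn and random placeholder vectors in place of the four-directional features; it only illustrates how such a classifier would be trained and evaluated, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder data: rows are feature vectors (e.g. four-directional features
# extracted from face images at various poses), labels are 0/1 for gender.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))
y = rng.integers(0, 2, size=200)

# SVM with a Gaussian (RBF) kernel, the classifier reported as most robust
# to viewpoint changes in the study.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```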

  7. Two methods of Haustral fold detection from computed tomographic virtual colonoscopy images

    NASA Astrophysics Data System (ADS)

    Chowdhury, Ananda S.; Tan, Sovira; Yao, Jianhua; Linguraru, Marius G.; Summers, Ronald M.

    2009-02-01

    Virtual colonoscopy (VC) has gained popularity as a new colon diagnostic method over the last decade. VC is a new, less invasive alternative to the usually practiced optical colonoscopy for screening of colorectal polyps and cancer, the second major cause of cancer-related deaths in industrialized nations. Haustral (colonic) folds serve as important landmarks for virtual endoscopic navigation in the existing computer-aided diagnosis (CAD) system. In this paper, we propose and compare two different methods of haustral fold detection from volumetric computed tomographic virtual colonoscopy images. The colon lumen is segmented from the input using modified region growing and fuzzy connectedness. The first method for fold detection uses a level set that evolves on a mesh representation of the colon surface. The colon surface is obtained from the segmented colon lumen using the Marching Cubes algorithm. The second method for fold detection, based on a combination of heat diffusion and the fuzzy c-means algorithm, is employed on the segmented colon volume. Folds obtained on the colon volume using this method are then transferred to the corresponding colon surface. After experimentation with different datasets, the results are found to be promising. The results also demonstrate that the first method has a tendency toward slight under-segmentation, while the second method tends to slightly over-segment the folds.
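
    The sketch below covers only the shared preprocessing step, turning a segmented colon lumen into a surface mesh with Marching Cubes (via scikit-image), on which a mesh-based level set could then evolve; the synthetic tube volume is a placeholder for a real segmentation.

```python
import numpy as np
from skimage import measure

# Placeholder binary segmentation of the colon lumen (a synthetic tube here;
# in practice this comes from region growing / fuzzy connectedness on CT).
vol = np.zeros((64, 64, 64), dtype=float)
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
vol[(yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2] = 1.0

# Marching Cubes turns the segmented volume into a triangle mesh, the input
# for the first (mesh level set) fold-detection method.
verts, faces, normals, values = measure.marching_cubes(vol, level=0.5)
print(verts.shape, faces.shape)
```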

  8. Virtual and super-virtual refraction method: Application to synthetic data and 2012 of Karangsambung survey data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nugraha, Andri Dian; Adisatrio, Philipus Ronnie

    2013-09-09

    The seismic refraction survey is one of the geophysical methods useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude due to attenuation at far offsets. This makes it difficult to pick the first refraction arrival, and hence challenging to produce the near-surface image. Seismic interferometry is a technique for manipulating seismic traces to obtain the Green's function for a pair of receivers. One of its uses is to improve the quality of the first refraction arrival at far offsets. This research shows that physical properties such as seismic velocity and layer thickness can be estimated from virtual refraction processing. In addition, virtual refraction can enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first arrival picks is also increased.
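
    A schematic of the interferometric step that builds a virtual refraction: cross-correlate the recordings of two receivers for every source and stack the results, which is where the signal-to-noise gain comes from. The array shapes and data are placeholders, and real super-virtual processing adds further convolution/stacking steps not shown here.

```python
import numpy as np
from numpy.fft import rfft, irfft

def virtual_refraction_trace(traces_a, traces_b):
    """Cross-correlate receiver A with receiver B for every source and stack.

    traces_a, traces_b: arrays of shape (n_sources, n_samples).
    Returns the stacked correlation, whose arrival corresponds to the
    virtual refraction between the two receivers.
    """
    n = traces_a.shape[1]
    A = rfft(traces_a, 2 * n, axis=1)
    B = rfft(traces_b, 2 * n, axis=1)
    xcorr = irfft(A * np.conj(B), axis=1)   # per-source cross-correlation
    return xcorr.sum(axis=0)                 # stacking raises the S/N

# Example with synthetic data:
# traces_a = np.random.randn(50, 1000); traces_b = np.random.randn(50, 1000)
# virt = virtual_refraction_trace(traces_a, traces_b)
```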

  9. Hundred metre virtual telescope captures unique detailed colour image

    NASA Astrophysics Data System (ADS)

    2009-02-01

    A team of French astronomers has captured one of the sharpest colour images ever made. They observed the star T Leporis, which appears, on the sky, as small as a two-storey house on the Moon [1]. The image was taken with ESO's Very Large Telescope Interferometer (VLTI), emulating a virtual telescope about 100 metres across and reveals a spherical molecular shell around an aged star. [Accompanying ESO press-release images: the star T Leporis as seen with the VLTI; T Leporis to scale; a virtual 100-metre telescope; the orbit of Theta1 Orionis C; video zoom onto T Leporis.] "This is one of the first images made using near-infrared interferometry," says lead author Jean-Baptiste Le Bouquin. Interferometry is a technique that combines the light from several telescopes, resulting in a vision as sharp as that of a giant telescope with a diameter equal to the largest separation between the telescopes used. Achieving this requires the VLTI system components to be positioned to an accuracy of a fraction of a micrometre over about 100 metres and maintained so throughout the observations -- a formidable technical challenge. When doing interferometry, astronomers must often content themselves with fringes, the characteristic pattern of dark and bright lines produced when two beams of light combine, from which they can model the physical properties of the object studied. But, if an object is observed on several runs with different combinations and configurations of telescopes, it is possible to put these results together to reconstruct an image of the object. This is what has now been done with ESO's VLTI, using the 1.8-metre Auxiliary Telescopes. "We were able to construct an amazing image, and reveal the onion-like structure of the atmosphere of a giant star at a late stage of its life for the first time," says Antoine Mérand, member of the team. "Numerical models and indirect data have allowed us to imagine the

  10. The use of virtual fiducials in image-guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Glisson, Courtenay; Ong, Rowena; Simpson, Amber; Clark, Peter; Herrell, S. D.; Galloway, Robert

    2011-03-01

    The alignment of image-space to physical-space lies at the heart of all image-guided procedures. In intracranial surgery, point-based registrations can be used with either skin-affixed or bone-implanted extrinsic objects called fiducial markers. The advantages of point-based registration techniques are that they are robust, fast, and have a well-developed mathematical foundation for the assessment of registration quality. In abdominal image-guided procedures such techniques have not been successful. It is difficult to accurately locate sufficient homologous intrinsic points in image-space and physical-space, and the implantation of extrinsic fiducial markers would constitute "surgery before the surgery." Image-space to physical-space registration for abdominal organs has therefore been dominated by surface-based registration techniques which are iterative, prone to local minima, sensitive to initial pose, and sensitive to percentage coverage of the physical surface. In our work in image-guided kidney surgery we have developed a composite approach using "virtual fiducials." In an open kidney surgery, the perirenal fat is removed and the surface of the kidney is dotted using a surgical marker. A laser range scanner (LRS) is used to obtain a surface representation and a matching high-definition photograph. A surface-to-surface registration is performed using a modified iterative closest point (ICP) algorithm. The dots are extracted from the high-definition image and assigned the three-dimensional values from the LRS pixels over which they lie. As the surgery proceeds, we can then use point-based registrations to re-register the spaces and track deformations due to vascular clamping and surgical tractions.
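
    The point-based re-registration that the virtual fiducials enable can be written in closed form; the sketch below uses the standard SVD (Arun/Kabsch) solution for a least-squares rigid transform and reports a mean fiducial registration error. This is generic registration math, not the authors' complete pipeline.

```python
import numpy as np

def point_based_rigid_registration(image_pts, physical_pts):
    """Least-squares rigid transform (R, t) mapping image_pts onto
    physical_pts via the standard SVD (Arun/Kabsch) solution.
    Both arrays have shape (n_points, 3)."""
    mu_i, mu_p = image_pts.mean(axis=0), physical_pts.mean(axis=0)
    H = (image_pts - mu_i).T @ (physical_pts - mu_p)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ D @ U.T
    t = mu_p - R @ mu_i
    mapped = image_pts @ R.T + t
    fre = np.sqrt(((physical_pts - mapped) ** 2).sum(axis=1)).mean()
    return R, t, fre   # fre: mean fiducial registration error

# Example (hypothetical inputs): R, t, fre = point_based_rigid_registration(dots_in_image, dots_from_lrs)
```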

  11. Using Technology to Improve Student Learning. NCREL Viewpoints, Volume 12

    ERIC Educational Resources Information Center

    Gahala, Jan, Ed.

    2004-01-01

    "Viewpoints" is a multimedia package containing two audio CDs and a short, informative booklet. This volume of "Viewpoints" focuses on how technology can help improve student learning. The audio CDs provide the voices, or viewpoints, of various leaders from the education field who work closely with technology issues. Their…

  12. Design and Implementation of a Self-Directed Stereochemistry Lesson Using Embedded Virtual Three-Dimensional Images in a Portable Document Format

    ERIC Educational Resources Information Center

    Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.

    2012-01-01

    A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…

  13. Real-time functional magnetic imaging-brain-computer interface and virtual reality promising tools for the treatment of pedophilia.

    PubMed

    Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels

    2011-01-01

    This chapter proposes a prospective view on using a real-time functional magnetic imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Information security system based on virtual-optics imaging methodology and public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong

    In this paper, we present a virtual-optics-based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key and vice versa. The whole information security model is run under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach has additional features such as confidentiality, authentication, and integrity for the purpose of data encryption in a networked environment.
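
    A sketch of the hybrid architecture under an explicit substitution: because the VOIM cipher is specific to the paper, a standard symmetric cipher (Fernet) stands in for it here, while RSA-OAEP plays the role of the asymmetric algorithm that wraps the session key, as in the described PKI setting.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Receiver's asymmetric key pair (in the paper's model, managed under a PKI).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the payload with a fresh session key (standing in for the
# VOIM image-based cipher), then wrap the session key with RSA-OAEP.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"image data to protect")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Receiver: unwrap the session key with the private key, then decrypt the data.
recovered_key = private_key.decrypt(wrapped_key, oaep)
plaintext = Fernet(recovered_key).decrypt(ciphertext)
assert plaintext == b"image data to protect"
```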

  15. Integrated VR platform for 3D and image-based models: a step toward interactive image-based virtual environments

    NASA Astrophysics Data System (ADS)

    Yoon, Jayoung; Kim, Gerard J.

    2003-04-01

    Traditionally, three-dimensional models have been used for building virtual worlds, and a data structure called the "scene graph" is often employed to organize these 3D objects in the virtual space. On the other hand, image-based rendering has recently been suggested as a probable alternative VR platform for its photo-realism; however, due to limited interactivity, it has only been used for simple navigation systems. To combine the merits of these two approaches to object/scene representation, this paper proposes a scene graph structure in which both 3D models and various image-based scenes/objects can be defined, traversed, and rendered together. In fact, as suggested by Shade et al., these different representations can be used as different LODs for a given object. For instance, an object might be rendered using a 3D model at close range, a billboard at an intermediate range, and as part of an environment map at far range. The ultimate objective of this mixed platform is to breathe more interactivity into image-based rendered VEs by employing 3D models as well. There are several technical challenges in devising such a platform: designing scene graph nodes for various types of image-based techniques, establishing criteria for LOD/representation selection, handling their transitions, implementing appropriate interaction schemes, and correctly rendering the overall scene. Currently, we have extended the scene graph structure of Sense8's WorldToolKit to accommodate new node types for environment maps, billboards, moving textures and sprites, a "Tour-into-the-Picture" structure, and view-interpolated objects. As for choosing the right LOD level, the usual viewing distance and image-space criteria are used; however, the switching between the image and the 3D model occurs at the distance from the user at which the user starts to perceive the object's internal depth. Also, during interaction, regardless of the viewing distance, a 3D representation would be used, if it
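
    A toy selector for the representation-switching rule described above; the distance thresholds are placeholders, and the real system also keys the switch to the distance at which the object's internal depth becomes perceptible rather than to fixed ranges.

```python
def choose_representation(distance, interacting,
                          model_range=5.0, billboard_range=50.0):
    """Pick a representation for a scene-graph node by viewing distance.

    During interaction the full 3D model is always used, as described in the
    abstract; the numeric ranges here are illustrative placeholders.
    """
    if interacting or distance < model_range:
        return "3d_model"
    if distance < billboard_range:
        return "billboard"
    return "environment_map"

# Example: choose_representation(12.0, interacting=False) -> "billboard"
```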

  16. Two dimensional imaging of the virtual source of a supersonic beam: helium at 125 K.

    PubMed

    Eder, S D; Bracco, G; Kaltenbacher, T; Holst, B

    2014-01-09

    Here we present the first two-dimensional images of the virtual source of a supersonic helium expansion. The images were obtained using a free-standing Fresnel zone plate with an outermost zone width of 50 nm as imaging lens and a beam cooled to around 125 K. The nozzle diameter was 10 μm. The virtual source diameter was found to increase with stagnation pressure from 140 ± 30 μm at p₀ = 21 bar up to 270 ± 25 μm at p₀ = 101 bar. The experimental results are compared to a theoretical model based on the solution of the Boltzmann equation by the method of moments. The quantum mechanical cross sections used in the model have been calculated for the Lennard-Jones (LJ) and the Hurly-Moldover (HM) potentials. By using a scaling of the perpendicular temperature that parametrizes the perpendicular velocity distribution based on a continuum expansion approach, the LJ potential shows a good overall agreement with the experiment. However, at higher pressures the data points lie in between the two theoretical curves and the slope of the trend is more similar to the HM curve. Real gas corrections to enthalpy are considered but they affect the results less than the experimental errors.

  17. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception

    PubMed Central

    Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-01-01

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. SIGNIFICANCE STATEMENT Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. We did this by temporarily disrupting neural processing in the occipital face area (OFA) using

  18. Applications of virtual reality technology in pathology.

    PubMed

    Grimes, G J; McClellan, S A; Goldman, J; Vaughn, G L; Conner, D A; Kujawski, E; McDonald, J; Winokur, T; Fleming, W

    1997-01-01

    TelePath(SM) is a telerobotic system utilizing virtual microscope concepts, based on high-quality still digital imaging and aimed at real-time support for surgery by remote diagnosis of frozen sections. Many hospitals and clinics have an application for the remote practice of pathology, particularly in the area of reading frozen sections in support of surgery, commonly called anatomic pathology. The goal is to project the expertise of the pathologist into the remote setting by giving the pathologist access to the microscope slides with an image quality and human interface comparable to what the pathologist would experience at a real rather than a virtual microscope. A working prototype of a virtual microscope has been defined and constructed which has the needed performance in both the image quality and human interface areas for a pathologist to work remotely. This is accomplished through the use of telerobotics and an image quality which gives the virtual microscope the same diagnostic capabilities as a real microscope. The examination of frozen sections is performed in a two-dimensional world. The remote pathologist is in a virtual world with the same capabilities as a "real" microscope, but response times may be slower depending on the specific computing and telecommunication environments. The TelePath system has capabilities far beyond a normal biological microscope, such as the ability to create a low-power image of the entire sample using multiple images digitally matched together; the ability to digitally retrace a viewing trajectory; and the ability to archive images using CD-ROM and other mass storage devices.

  19. Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective

    PubMed Central

    Pyers, Jennie E.; Perniss, Pamela; Emmorey, Karen

    2015-01-01

    Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality. PMID:26981027

  20. Viewpoint in the Visual-Spatial Modality: The Coordination of Spatial Perspective.

    PubMed

    Pyers, Jennie E; Perniss, Pamela; Emmorey, Karen

    2015-06-01

    Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.

  1. The Use of Virtual Reality Simulation to Improve Technical Skill in the Undergraduate Medical Imaging Student

    ERIC Educational Resources Information Center

    Gunn, Therese; Jones, Lee; Bridge, Pete; Rowntree, Pam; Nissen, Lisa

    2018-01-01

    In recent years, simulation has increasingly underpinned the acquisition of pre-clinical skills by undergraduate medical imaging (diagnostic radiography) students. This project aimed to evaluate the impact of an innovative virtual reality (VR) learning environment on the development of technical proficiency by students. The study assessed the…

  2. Generating virtual training samples for sparse representation of face images and face recognition

    NASA Astrophysics Data System (ADS)

    Du, Yong; Wang, Yu

    2016-03-01

    There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions on improving face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more of the face's essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and the newly generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
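
    A minimal sketch of the two ideas in the abstract, with placeholder arrays: virtual samples formed by element-wise multiplication of two images of the same subject, and a representation-based classifier that reconstructs the test sample from its K nearest training samples and assigns the class with the smallest reconstruction residual.

```python
import numpy as np

def make_virtual_samples(samples):
    """Element-wise products of pairs of images from the same subject,
    rescaled back to [0, 1]; samples has shape (n_images, n_pixels)."""
    virtual = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            v = samples[i] * samples[j]
            virtual.append(v / (v.max() + 1e-12))
    return np.array(virtual)

def representation_based_label(test, train, labels, k=10):
    """Pick the K nearest training samples, represent the test sample as
    their linear combination, and assign the class whose selected samples
    carry the smallest reconstruction residual.

    test: (n_pixels,); train: (n_train, n_pixels); labels: (n_train,)."""
    idx = np.argsort(np.linalg.norm(train - test, axis=1))[:k]
    A, y = train[idx].T, labels[idx]
    coef, *_ = np.linalg.lstsq(A, test, rcond=None)
    residuals = {}
    for c in np.unique(y):
        mask = (y == c)
        residuals[c] = np.linalg.norm(test - A[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)
```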

  3. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in the situation where the real world to be seen is far from an observation site, because the time delay from the change of user's viewing direction to the change of displayed image is small and does not depend on the actual distance between both sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.
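
    The view-dependent image generation step can be illustrated as follows, assuming the omnidirectional frame has already been unwarped to an equirectangular panorama (the actual system uses an omnidirectional mirror camera, and that unwarping is not shown); nearest-neighbour sampling keeps the sketch short.

```python
import numpy as np

def perspective_view(pano, yaw, pitch, fov=60.0, out_size=(480, 640)):
    """Render a pinhole view in direction (yaw, pitch), in degrees, from an
    equirectangular panorama of shape (H, W, 3)."""
    H, W = pano.shape[:2]
    h, w = out_size
    f = 0.5 * w / np.tan(np.radians(fov) / 2)
    # Rays through each output pixel, in camera coordinates.
    xs, ys = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    dirs = np.stack([xs, ys, np.full_like(xs, f, dtype=float)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate rays by pitch (about x) then yaw (about y).
    cp, sp = np.cos(np.radians(pitch)), np.sin(np.radians(pitch))
    cy, sy = np.cos(np.radians(yaw)), np.sin(np.radians(yaw))
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    # Convert ray directions to panorama coordinates (longitude, latitude).
    lon = np.arctan2(d[..., 0], d[..., 2])
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))
    u = ((lon / (2 * np.pi) + 0.5) * (W - 1)).astype(int)
    v = ((lat / np.pi + 0.5) * (H - 1)).astype(int)
    return pano[v, u]   # nearest-neighbour lookup into the panorama
```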

  4. Virtual MR arthroscopy of the shoulder: image gallery with arthroscopic correlation of major pathologies in shoulder instability.

    PubMed

    Stecco, A; Volpe, D; Volpe, N; Fornara, P; Castagna, A; Carriero, A

    2008-12-01

    The purpose of this study was to compare virtual MR arthroscopic reconstructions with arthroscopic images in patients affected by shoulder joint instability. MR arthrography (MR-AR) of the shoulder is now a well-assessed technique, based on the injection of a contrast medium solution, which fills the articular space and finds its way between the rotator cuff (RC) and the glenohumeral ligaments. In patients with glenolabral pathology, we used an additional sequence that provided virtual arthroscopy (VA) post-processed views, which completed the MR evaluation of shoulder pathology. We enrolled 36 patients, from whom MR arthrographic sequence data (SE T1w and GRE T1 FAT SAT) were obtained using a GE 0.5 T Signa--before any surgical or arthroscopic planned treatment; the protocol included a supplemental 3D, spoiled GE T1w positioned in the coronal plane. Dedicated software loaded on a work-station was used to elaborate VAs. Two radiologists evaluated, on a semiquantitative scale, the visibility of the principal anatomic structures, and then, in consensus, the pathology emerging from the VA images. These images were reconstructed in all patients, except one. The visualization of all anatomical structures was acceptable. VA and MR arthrographic images were fairly concordant with intraoperative findings. Although in our pilot study the VA findings did not change the surgical planning, the results showed concordance with the surgical or arthroscopic images.

  5. Should laptops be allowed in the classroom? Two viewpoints: viewpoint 1: laptops in classrooms facilitate curricular advancement and promote student learning and viewpoint 2: deconstructing and rethinking the use of laptops in the classroom.

    PubMed

    Spallek, Heiko; von Bergmann, HsingChi

    2014-12-01

    This Point/Counterpoint article discusses the pros and cons of deploying one aspect of instructional technology in dental education: the use of laptops in the classroom. Two opposing viewpoints, written by different authors, evaluate the arguments. Viewpoint 1 argues that laptops in classrooms can be a catalyst for rapid curricular advancement and prepare dental graduates for the digital age of dentistry. As dental education is not limited to textual information, but includes skill development in spatial relationships and hands-on training, technology can play a transformative role in students' learning. Carefully implemented instructional technology can enhance student motivation when it transforms students from being the objects of teaching to the subjects of learning. Ubiquitous access to educational material allows for just-in-time learning and can overcome organizational barriers when, for instance, introducing interprofessional education. Viewpoint 2 argues that, in spite of widespread agreement that instructional technology leads to curricular innovation, the notion of the use of laptops in classrooms needs to be deconstructed and rethought when effective learning outcomes are sought. Analyzing the purpose, pedagogy, and learning product while applying lessons learned from K-12 implementation leads to a more complex picture of laptop integration in dental classrooms and forms the basis for questioning the value of such usage. For laptop use to contribute to student learning, rather than simply providing opportunity for students to take notes and access the Internet during class, this viewpoint emphasizes that dental educators need to think carefully about the purpose of this technology and to develop appropriate pedagogical strategies to achieve their objectives. The two viewpoints agree that significant faculty development efforts should precede any introduction of technology into the educational process and that technology alone cannot change education

  6. An interactive, stereoscopic virtual environment for medical imaging visualization, simulation and training

    NASA Astrophysics Data System (ADS)

    Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel

    2017-03-01

    Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential frame stereo projection and viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game, which allows users to explore the anatomy of various organs and systems. In this study we describe the design, implementation, and evaluation of an interactive and stereoscopic

  7. Intraoperative virtual brain counseling

    NASA Astrophysics Data System (ADS)

    Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando

    1997-06-01

    Our objective is to offer online, real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further: it can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory would pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are employed for intra-operative use. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, a line segment of interest, and a volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures such as the HHVS, and algorithms such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that the HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for the optimization of treatment plans and online intelligent surgical guidance.
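
    One of the described spatial queries, the line-segment-of-interest check used to warn about eloquent areas along a planned trajectory, can be sketched as below; the labeled volume, coordinates, and critical-label set are placeholders, and the real HHVS is hierarchical rather than a flat voxel array.

```python
import numpy as np

def structures_along_trajectory(label_volume, entry, target, critical_labels,
                                n_samples=200):
    """Sample a straight surgical trajectory through a labeled brain volume
    and report which critical structures it would pass through.

    label_volume: integer array (Z, Y, X) of structure labels (0 = background).
    entry, target: voxel coordinates (z, y, x) of the planned trajectory.
    """
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.round(entry[None] + ts[:, None] * (target - entry)[None]).astype(int)
    pts = np.clip(pts, 0, np.array(label_volume.shape) - 1)
    hit = set(label_volume[pts[:, 0], pts[:, 1], pts[:, 2]].tolist())
    return sorted(hit & set(critical_labels))

# Example (hypothetical labels): warn = structures_along_trajectory(labels, (10, 50, 50), (60, 40, 45), {3, 7})
```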

  8. Accuracy of Dual-Energy Virtual Monochromatic CT Numbers: Comparison between the Single-Source Projection-Based and Dual-Source Image-Based Methods.

    PubMed

    Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko

    2018-03-21

    To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
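
    For orientation, the dual-source image-based method synthesizes a virtual monochromatic image as a weighted combination of the low- and high-kV images; the sketch below shows that blend, with the energy-dependent weight treated as an input that would come from a vendor or phantom calibration rather than computed here.

```python
import numpy as np

def virtual_monochromatic(img_low_kv, img_high_kv, weight):
    """Image-based synthesis of a virtual monochromatic CT image as a
    per-pixel weighted blend of the low- and high-kV images (in HU).

    The weight for a given keV level would normally come from a material /
    energy calibration supplied with the scanner; here it is simply an input.
    """
    return weight * np.asarray(img_low_kv) + (1.0 - weight) * np.asarray(img_high_kv)

# Example (hypothetical weight): mono_100kev = virtual_monochromatic(img_80kv, img_140kv, weight=0.4)
```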

  9. Energy-Specific Optimization of Attenuation Thresholds for Low-Energy Virtual Monoenergetic Images in Renal Lesion Evaluation.

    PubMed

    Patel, Bhavik N; Farjat, Alfredo; Schabel, Christoph; Duvnjak, Petar; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Marin, Daniele

    2018-05-01

    The purpose of this study was to determine in vitro and in vivo the optimal threshold for renal lesion vascularity at low-energy (40-60 keV) virtual monoenergetic imaging. A rod simulating unenhanced renal parenchymal attenuation (35 HU) was fitted with a syringe containing water. Three iodinated solutions (0.38, 0.57, and 0.76 mg I/mL) were inserted into another rod that simulated enhanced renal parenchyma (180 HU). Rods were inserted into cylindric phantoms of three different body sizes and scanned with single- and dual-energy MDCT. In addition, 102 patients (32 men, 70 women; mean age, 66.8 ± 12.9 [SD] years) with 112 renal lesions (67 nonvascular, 45 vascular) measuring 1.1-8.9 cm underwent single-energy unenhanced and contrast-enhanced dual-energy CT. Optimal threshold attenuation values that differentiated vascular from nonvascular lesions at 40-60 keV were determined. Mean optimal threshold values were 30.2 ± 3.6 (standard error), 20.9 ± 1.3, and 16.1 ± 1.0 HU in the phantom, and 35.9 ± 3.6, 25.4 ± 1.8, and 17.8 ± 1.8 HU in the patients at 40, 50, and 60 keV. Sensitivity and specificity for the thresholds did not change significantly between low-energy and 70-keV virtual monoenergetic imaging (sensitivity, 87-98%; specificity, 90-91%). The AUC from 40 to 70 keV was 0.96 (95% CI, 0.93-0.99) to 0.98 (95% CI, 0.95-1.00). Low-energy virtual monoenergetic imaging at energy-specific optimized attenuation thresholds can be used for reliable characterization of renal lesions.
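
    How such an optimal attenuation threshold can be derived from labeled lesion measurements is sketched below using an ROC curve and Youden's J statistic (scikit-learn); the Hounsfield-unit data are simulated placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: lesion attenuation (HU) at one virtual monoenergetic level,
# with labels 1 = vascular (enhancing), 0 = nonvascular.
rng = np.random.default_rng(1)
hu = np.concatenate([rng.normal(12, 8, 67), rng.normal(55, 20, 45)])
labels = np.concatenate([np.zeros(67), np.ones(45)])

fpr, tpr, thresholds = roc_curve(labels, hu)
best = np.argmax(tpr - fpr)                  # Youden's J statistic
print("optimal HU threshold:", thresholds[best])
print("sensitivity:", tpr[best], "specificity:", 1 - fpr[best])
print("AUC:", roc_auc_score(labels, hu))
```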

  10. The Occipital Face Area Is Causally Involved in Facial Viewpoint Perception.

    PubMed

    Kietzmann, Tim C; Poltoratski, Sonia; König, Peter; Blake, Randolph; Tong, Frank; Ling, Sam

    2015-12-16

    Humans reliably recognize faces across a range of viewpoints, but the neural substrates supporting this ability remain unclear. Recent work suggests that neural selectivity to mirror-symmetric viewpoints of faces, found across a large network of visual areas, may constitute a key computational step in achieving full viewpoint invariance. In this study, we used repetitive transcranial magnetic stimulation (rTMS) to test the hypothesis that the occipital face area (OFA), putatively a key node in the face network, plays a causal role in face viewpoint symmetry perception. Each participant underwent both offline rTMS to the right OFA and sham stimulation, preceding blocks of behavioral trials. After each stimulation period, the participant performed one of two behavioral tasks involving presentation of faces in the peripheral visual field: (1) judging the viewpoint symmetry; or (2) judging the angular rotation. rTMS applied to the right OFA significantly impaired performance in both tasks when stimuli were presented in the contralateral, left visual field. Interestingly, however, rTMS had a differential effect on the two tasks performed ipsilaterally. Although viewpoint symmetry judgments were significantly disrupted, we observed no effect on the angle judgment task. This interaction, caused by ipsilateral rTMS, provides support for models emphasizing the role of interhemispheric crosstalk in the formation of viewpoint-invariant face perception. Faces are among the most salient objects we encounter during our everyday activities. Moreover, we are remarkably adept at identifying people at a glance, despite the diversity of viewpoints during our social encounters. Here, we investigate the cortical mechanisms underlying this ability by focusing on effects of viewpoint symmetry, i.e., the invariance of neural responses to mirror-symmetric facial viewpoints. We did this by temporarily disrupting neural processing in the occipital face area (OFA) using transcranial magnetic

  11. Overview of telepathology, virtual microscopy, and whole slide imaging: prospects for the future.

    PubMed

    Weinstein, Ronald S; Graham, Anna R; Richter, Lynne C; Barker, Gail P; Krupinski, Elizabeth A; Lopez, Ana Maria; Erps, Kristine A; Bhattacharyya, Achyut K; Yagi, Yukako; Gilbertson, John R

    2009-08-01

    Telepathology, the practice of pathology at a long distance, has advanced continuously since 1986. Today, fourth-generation telepathology systems, so-called virtual slide telepathology systems, are being used for education applications. Both conventional and innovative surgical pathology diagnostic services are being designed and implemented as well. The technology has been commercialized by more than 30 companies in Asia, the United States, and Europe. Early adopters of telepathology have been laboratories with special challenges in providing anatomic pathology services, ranging from the need to provide anatomic pathology services at great distances to the use of the technology to increase efficiency of services between hospitals less than a mile apart. As often happens in medicine, early adopters of new technologies are professionals who create model programs that are successful and then stimulate the creation of infrastructure (ie, reimbursement, telecommunications, information technologies, and so on) that forms the platforms for entry of later, mainstream, adopters. The trend at medical schools, in the United States, is to go entirely digital for their pathology courses, discarding their student light microscopes, and building virtual slide laboratories. This may create a generation of pathology trainees who prefer digital pathology imaging over the traditional hands-on light microscopy. The creation of standards for virtual slide telepathology is early in its development but accelerating. The field of telepathology has now reached a tipping point at which major corporations now investing in the technology will insist that standards be created for pathology digital imaging as a value-added business proposition. A key to success in teleradiology, already a growth industry, has been the implementation of standards for digital radiology imaging. Telepathology is already the enabling technology for new, innovative laboratory services. Examples include STAT

  12. Gestational surrogacy: Viewpoint of Iranian infertile women.

    PubMed

    Rahmani, Azad; Sattarzadeh, Nilofar; Gholizadeh, Leila; Sheikhalipour, Zahra; Allahbakhshian, Atefeh; Hassankhani, Hadi

    2011-09-01

    Surrogacy is a popular form of assisted reproductive technology of which only gestational form is approved by most of the religious scholars in Iran. Little evidence exists about the Iranian infertile women's viewpoint regarding gestational surrogacy. To assess the viewpoint of Iranian infertile women toward gestational surrogacy. This descriptive study was conducted at the infertility clinic of Tabriz University of Medical Sciences, Iran. The study sample consisted of 238 infertile women who were selected using the eligible sampling method. Data were collected by using a researcher developed questionnaire that included 25 items based on a five-point Likert scale. Data analysis was conducted by SPSS statistical software using descriptive statistics. Viewpoint of 214 women (89.9%) was positive. 36 (15.1%) women considered gestational surrogacy against their religious beliefs; 170 women (71.4%) did not assume the commissioning couple as owners of the baby; 160 women (67.2%) said that children who were born through surrogacy would better not know about it; and 174 women (73.1%) believed that children born through surrogacy will face mental problems. Iranian infertile women have positive viewpoint regarding the surrogacy. However, to increase the acceptability of surrogacy among infertile women, further efforts are needed.

  13. Gestational surrogacy: Viewpoint of Iranian infertile women

    PubMed Central

    Rahmani, Azad; Sattarzadeh, Nilofar; Gholizadeh, Leila; Sheikhalipour, Zahra; Allahbakhshian, Atefeh; Hassankhani, Hadi

    2011-01-01

    BACKGROUND: Surrogacy is a popular form of assisted reproductive technology of which only gestational form is approved by most of the religious scholars in Iran. Little evidence exists about the Iranian infertile women's viewpoint regarding gestational surrogacy. AIM: To assess the viewpoint of Iranian infertile women toward gestational surrogacy. SETTING AND DESIGN: This descriptive study was conducted at the infertility clinic of Tabriz University of Medical Sciences, Iran. MATERIALS AND METHODS: The study sample consisted of 238 infertile women who were selected using the eligible sampling method. Data were collected by using a researcher developed questionnaire that included 25 items based on a five-point Likert scale. STATISTICAL ANALYSIS: Data analysis was conducted by SPSS statistical software using descriptive statistics. RESULTS: Viewpoint of 214 women (89.9%) was positive. 36 (15.1%) women considered gestational surrogacy against their religious beliefs; 170 women (71.4%) did not assume the commissioning couple as owners of the baby; 160 women (67.2%) said that children who were born through surrogacy would better not know about it; and 174 women (73.1%) believed that children born through surrogacy will face mental problems. CONCLUSION: Iranian infertile women have positive viewpoint regarding the surrogacy. However, to increase the acceptability of surrogacy among infertile women, further efforts are needed. PMID:22346081

  14. Multimodal Image-Based Virtual Reality Presurgical Simulation and Evaluation for Trigeminal Neuralgia and Hemifacial Spasm.

    PubMed

    Yao, Shujing; Zhang, Jiashu; Zhao, Yining; Hou, Yuanzheng; Xu, Xinghua; Zhang, Zhizhong; Kikinis, Ron; Chen, Xiaolei

    2018-05-01

    To address the feasibility and predictive value of multimodal image-based virtual reality in detecting and assessing features of neurovascular conflict (NVC), particularly the detection of offending vessels and the degree of compression exerted on the nerve root, in patients who underwent microvascular decompression for nonlesional trigeminal neuralgia and hemifacial spasm (HFS). This prospective study includes 42 consecutive patients who underwent microvascular decompression for classic primary trigeminal neuralgia or HFS. All patients underwent preoperative 1.5-T magnetic resonance imaging (MRI) with T2-weighted three-dimensional (3D) sampling perfection with application-optimized contrasts using different flip angle evolutions, 3D time-of-flight magnetic resonance angiography, and 3D T1-weighted gadolinium-enhanced sequences in combination, whereas 2 patients underwent additional experimental preoperative 7.0-T MRI scans with the same imaging protocol. The multimodal MRI data were then coregistered with the open-source software 3D Slicer, followed by 3D image reconstruction to generate virtual reality (VR) images for detection of possible NVC in the cerebellopontine angle. Evaluations were performed by 2 reviewers and compared with the intraoperative findings. For detection of NVC, multimodal image-based VR sensitivity was 97.6% (40/41) and specificity was 100% (1/1). Compared with the intraoperative findings, the κ coefficients for predicting the offending vessel and the degree of compression were >0.75 (P < 0.001). The 7.0-T scans provided a clearer view of vessels in the cerebellopontine angle, which may have a significant impact on detection of small-caliber offending vessels with relatively slow flow in cases of HFS. Multimodal image-based VR using 3D sampling perfection with application-optimized contrasts using different flip angle evolutions in combination with 3D time-of-flight magnetic resonance angiography sequences proved to be reliable in detecting NVC

  15. VIRTUAL FRAME BUFFER INTERFACE

    NASA Technical Reports Server (NTRS)

    Wolfe, T. L.

    1994-01-01

    Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
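
    The record describes a generic frame-buffer abstraction: application programs call generic routines, and the library translates them into device-specific commands for each supported frame buffer. The original program was FORTRAN 77 on VAX/VMS; purely as an illustration of the same abstraction pattern (not the NASA code, and with hypothetical command strings), a minimal Python sketch:

        from abc import ABC, abstractmethod

        class FrameBuffer(ABC):
            """Generic frame buffer: application code targets this interface only."""

            @abstractmethod
            def write_pixel(self, x: int, y: int, value: int) -> None: ...

            @abstractmethod
            def draw_text(self, x: int, y: int, text: str) -> None: ...

        class Ramtek9460(FrameBuffer):
            """Device-specific backend; translates generic calls to device commands."""

            def write_pixel(self, x, y, value):
                # Hypothetical device command format, for illustration only.
                self._send(f"WPIX {x} {y} {value}")

            def draw_text(self, x, y, text):
                self._send(f"TEXT {x} {y} {text}")

            def _send(self, command: str) -> None:
                print(f"[RAMTEK 9460] {command}")   # stand-in for real device I/O

        def render_crosshair(fb: FrameBuffer, cx: int, cy: int, size: int) -> None:
            """Application code is written once against the generic interface."""
            for d in range(-size, size + 1):
                fb.write_pixel(cx + d, cy, 255)
                fb.write_pixel(cx, cy + d, 255)

        if __name__ == "__main__":
            render_crosshair(Ramtek9460(), cx=256, cy=256, size=5)

    Swapping in a different backend class then requires no change to the application routine, which is the portability benefit the record describes.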

  16. Using turbulence scintillation to assist object ranging from a single camera viewpoint.

    PubMed

    Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C

    2018-03-20

    Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded in a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence-induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence causes random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems that would otherwise be difficult.
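
    The abstract's key idea is that edge and texture pixels of a more distant object scintillate more strongly across video frames, so a temporal-variation statistic on edge pixels can serve as a range cue. Below is a minimal sketch of such a scintillation index, assuming a grayscale video already loaded as a NumPy array of frames; the edge threshold and normalisation are illustrative choices, not the authors' calibration.

        import numpy as np

        def scintillation_index(frames: np.ndarray, edge_thresh: float = 10.0) -> float:
            """Mean temporal variance of intensity over edge pixels.

            frames: array of shape (T, H, W), grayscale video of the target region.
            Higher values suggest stronger turbulence-induced scintillation,
            which the paper associates with larger object distance.
            """
            frames = frames.astype(np.float64)
            mean_img = frames.mean(axis=0)

            # Crude edge mask from the temporal-mean image (gradient magnitude).
            gy, gx = np.gradient(mean_img)
            edge_mask = np.hypot(gx, gy) > edge_thresh

            # Temporal variance per pixel, normalised by the squared mean intensity
            # to reduce dependence on brightness; averaged over edge pixels only.
            var_t = frames.var(axis=0)
            norm_var = var_t / np.maximum(mean_img, 1e-6) ** 2
            return float(norm_var[edge_mask].mean())

    Usage sketch: compare two objects imaged by the same camera over the same clip; per the paper's premise, the object with the larger index is expected to be farther away.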

  17. Optimization of window settings for standard and advanced virtual monoenergetic imaging in abdominal dual-energy CT angiography.

    PubMed

    Caruso, Damiano; Parinella, Ashley H; Schoepf, U Joseph; Stroebel, Maxwell H; Mangold, Stefanie; Wichmann, Julian L; Varga-Szemes, Akos; Ball, B Devon; De Santis, Domenico; Laghi, Andrea; De Cecco, Carlo N

    2017-03-01

    To determine the optimal window setting for displaying virtual monoenergetic reconstructions of third-generation dual-source, dual-energy CT (DECT) angiography of the abdomen. Forty-five patients were evaluated with DECT angiography (90/150 kV, 180/90 ref. mAs). Three datasets were reconstructed: standard linear blending (M_0.6), 70 keV traditional virtual monoenergetic (M70), and 40 keV advanced noise-optimized virtual monoenergetic (M40+). The best window setting (width and level, W/L) was assessed by two blinded observers and was correlated with aortic attenuation to obtain the optimized W/L setting (O-W/L). Subjective image quality was assessed, and vessel diameters were measured to determine any possible influence of different W/L settings. Repeated-measures analysis of variance was used to compare W/L values, image quality, and vessel sizing between M_0.6, M70, and M40+. The best W/L (B-W/L) for M70 and M40+ was 880/280 and 1410/450, respectively. Regression analysis yielded an O-W/L of 850/270 for M70 and 1350/430 for M40+. Significant differences in W and L were found between the best and the optimized W/L for M40+, and between M70 and M40+ for both the best and optimized W/L. No significant differences in vessel measurements were found using the O-W/L for M40+ compared to the standard M_0.6 (p ≥ 0.16), whereas significant differences were observed when using the B-W/L with M40+ compared to M_0.6 (p ≤ 0.04). In order to optimize virtual monoenergetic imaging with both traditional M70 and advanced M40+, adjusting the W/L settings is necessary. Our results suggest a W/L setting of 850/270 for M70 and 1350/430 for M40+.
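
    The optimized window setting is obtained by regressing the observers' best width/level choices against aortic attenuation. A minimal sketch of that step, assuming per-patient arrays of best window width, best level, and mean aortic HU are available (the numbers below are invented, not study data):

        import numpy as np

        def optimized_window(best_width, best_level, aortic_hu, target_hu):
            """Fit W and L linearly against aortic attenuation and evaluate the fit
            at a representative attenuation value (e.g., the cohort mean)."""
            w_slope, w_icept = np.polyfit(aortic_hu, best_width, 1)
            l_slope, l_icept = np.polyfit(aortic_hu, best_level, 1)
            return (w_slope * target_hu + w_icept,
                    l_slope * target_hu + l_icept)

        # Illustrative values only:
        hu    = np.array([310.0, 355.0, 402.0, 450.0, 388.0])
        width = np.array([1300., 1380., 1420., 1480., 1400.])
        level = np.array([410.,  425.,  440.,  465.,  435.])
        print(optimized_window(width, level, hu, target_hu=hu.mean()))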

  18. Computer Vision Assisted Virtual Reality Calibration

    NASA Technical Reports Server (NTRS)

    Kim, W.

    1999-01-01

    A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.

  19. The high throughput virtual slit enables compact, inexpensive Raman spectral imagers

    NASA Astrophysics Data System (ADS)

    Gooding, Edward; Deutsch, Erik R.; Huehnerhoff, Joseph; Hajian, Arsen R.

    2018-02-01

    Raman spectral imaging is increasingly becoming the tool of choice for field-based applications such as threat, narcotics, and hazmat detection; air, soil, and water quality monitoring; and material identification. Conventional fiber-coupled point-source Raman spectrometers effectively interrogate a small sample area and identify bulk samples via spectral library matching. However, these devices are very slow at mapping over macroscopic areas. In addition, the spatial averaging performed by instruments that collect binned spectra, particularly when used in combination with orbital raster scanning, tends to dilute the spectra of trace particles in a mixture. Our design, employing free-space line illumination combined with area imaging, reveals both the spectral and spatial content of heterogeneous mixtures. This approach is well suited to applications such as detecting trace particles of explosives and narcotics in fingerprints. The patented High Throughput Virtual Slit (HTVS) is an innovative optical design that enables compact, inexpensive handheld Raman spectral imagers. HTVS-based instruments achieve significantly higher spectral resolution than can be obtained with conventional designs of the same size. Alternatively, they can be used to build instruments with resolution comparable to large spectrometers but substantially smaller size, weight, and unit cost, all while maintaining high sensitivity. When used in combination with laser line imaging, this design eliminates sample photobleaching and unwanted photochemistry while greatly enhancing mapping speed, all with high selectivity and sensitivity. We will present spectral image data and discuss applications that are made possible by low-cost HTVS-enabled instruments.

  20. Virtual First Impressions

    ERIC Educational Resources Information Center

    Bergren, Martha Dewey

    2005-01-01

    Frequently, a nurse's first and only contact with a graduate school, legislator, public health official, professional organization, or school nursing colleague is made through e-mail. The format, the content, and the appearance of the e-mail create a virtual first impression. Nurses can manage their image and the image of the profession by…

  1. Information Viewpoints and Geoscience Service Architectures

    NASA Astrophysics Data System (ADS)

    Cox, S. J.

    2007-12-01

    When dealing with earth science data, different use-cases may require different views of the underlying information. At a basic level, data generation and initial assimilation generally involve dealing with event-based types and a different granularity than data organized for processing, often on a grid, while further downstream the desired result of most scientific exercises is an interpretation view characterized by high semantic content and small size. The stages often map reasonably well onto the basic meta-models of Observation, Coverage and Feature provided by the OGC/ISO 19100 framework, and the matching service interfaces (SOS, WCS, WFS). However, on closer inspection of common use-cases, the vision of the Observation viewpoint as most primitive and the Feature viewpoint as most evolved does not consistently stand up. Furthermore, common discovery and access routes may involve traversing associations between instances using different viewpoints. These considerations lead to information (and thus service) composition arrangements with a variety of data flows. For example, an observation service may obtain its result data from a coverage service, while another coverage may be composed from multiple atomic observations; observations are often discovered through their association with a sampling-feature such as a cruise or borehole, or with a sensor platform such as a specific satellite whose description is available from a strongly governed register. The relationship of service instances to data stores (or other sources) is also not one-to-one, as multiple views of the same data are frequently involved. Useful service profiles may thus imply specific service architectures, and the requirement to transform between viewpoints becomes almost ubiquitous. Adherence to a sound underlying meta-model for both data and services is a key enabler.

  2. Integration of the Shuttle RMS/CBM Positioning Virtual Environment Simulation

    NASA Technical Reports Server (NTRS)

    Dumas, Joseph D.

    1996-01-01

    Constructing the International Space Station, or other structures, in space presents a number of problems. In particular, payload restrictions for the Space Shuttle and other launch vehicles prohibit assembly of large space-based structures on Earth. Instead, a number of smaller modules must be boosted into orbit separately and then assembled to form the final structure. The assembly process is difficult, as docking interfaces such as Common Berthing Mechanisms (CBMs) must be precisely positioned relative to each other to be within the "capture envelope" (approximately +/- 1 inch and +/- 0.3 degrees from the nominal position) and attach properly. In the case of the Space Station, the docking mechanisms are to be positioned robotically by an astronaut using the 55-foot-long Remote Manipulator System (RMS) robot arm. Unfortunately, direct visual or video observation of the placement process is difficult or impossible in many scenarios. One method that has been tested for aligning the CBMs uses a boresighted camera mounted on one CBM to view a standard target on the opposing CBM. While this method might be sufficient to achieve proper positioning with considerable effort, it does not provide a high level of confidence that the mechanisms have been placed within capture range of each other. It also does nothing to address the risk of inadvertent contact between the CBMs, which could result in RMS control software errors. In general, constraining the operator to a single viewpoint with few, if any, depth cues makes the task much more difficult than it would be if the target could be viewed in three-dimensional space from various viewpoints. The actual work area could be viewed by an astronaut during EVA; however, it would be extremely impractical to have an astronaut control the RMS while spacewalking. On the other hand, a view of the RMS and CBMs to be positioned in a virtual environment aboard the Space Shuttle orbiter or Space Station could provide similar benefits

  3. Discrimination. Opposing Viewpoints Series.

    ERIC Educational Resources Information Center

    Williams, Mary E., Ed.

    Books in the Opposing Viewpoints series challenge readers to question their own opinions and assumptions. By reading carefully balanced views, readers confront new ideas on the topic of interest. The Civil Rights Act of 1964, which prohibited job discrimination based on age, race, religion, gender, or national origin, provided the groundwork for…

  4. Wide-angle imaging system with fiberoptic components providing angle-dependent virtual material stops

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H. (Inventor)

    1993-01-01

    A strip imaging wide angle optical system is provided. The optical system is provided with a 'virtual' material stop to avoid aberrational effects inherent in wide angle optical systems. The optical system includes a spherical mirror section for receiving light from a 180 deg strip or arc of a target image. Light received by the spherical mirror section is reflected to a frustoconical mirror section for subsequent rereflection to a row of optical fibers. Each optical fiber transmits a portion of the received light to a detector. The optical system exploits the narrow cone of acceptance associated with optical fibers to substantially eliminate vignetting effects inherent in wide angle systems. Further, the optical system exploits the narrow cone of acceptance of the optical fibers to substantially limit spherical aberration. The optical system is ideally suited for any application wherein a 180 deg strip image need be detected, and is particularly well adapted for use in hostile environments such as in planetary exploration.

  5. Ray Tracing with Virtual Objects.

    ERIC Educational Resources Information Center

    Leinoff, Stuart

    1991-01-01

    Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)

  6. Stereoscopic virtual reality models for planning tumor resection in the sellar region.

    PubMed

    Wang, Shou-sen; Zhang, Shang-ming; Jing, Jun-jie

    2012-11-28

    It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region. To investigate the value of using a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography, MRI-T1W1, and contrast enhanced MRI-T1W1 image sequence scanning. The CT and MRI scanning data were collected and then imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery. All sellar tumor models clearly displayed bone, the internal carotid artery, circle of Willis and its branches, the optic nerve and chiasm, ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the transmononasal sphenoid sinus approach, transpterional approach, and other approaches. Eleven surgeons who used virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images but that other images needed to be used in combination with the virtual reality images. The three-dimensional virtual reality models were helpful for individualized planning of surgery in the sellar region. Virtual reality appears to be promising as a valuable tool for sellar region surgery in the future.

  7. Perspectives on High School Reform. NCREL Viewpoints, Volume 13

    ERIC Educational Resources Information Center

    Learning Point Associates / North Central Regional Educational Laboratory (NCREL), 2005

    2005-01-01

    Viewpoints is a multimedia package containing two audio CDs and a brief, informative booklet. This volume of Viewpoints focuses on issues related to high school reform. This booklet offers background information explaining the issues surrounding high school reform with perspectives from research, policy, and practice. It also provides a list of…

  8. Repeatability and Reproducibility of Virtual Subjective Refraction.

    PubMed

    Perches, Sara; Collados, M Victoria; Ares, Jorge

    2016-10-01

    To establish the repeatability and reproducibility of a virtual refraction process using simulated retinal images. With simulation software, aberrated images corresponding to each step of the refraction process were calculated following the typical protocol of conventional subjective refraction. Fifty external examiners judged simulated retinal images until the best sphero-cylindrical refraction and the best visual acuity were achieved, starting from the aberrometry data of three patients. Data analyses were performed to assess repeatability and reproducibility of the virtual refraction as a function of pupil size and the aberrometric profile of different patients. SD values achieved in the three components of refraction (M, J0, and J45) are lower than 0.25 D in the repeatability analysis. Regarding reproducibility, we found SD values lower than 0.25 D in most cases. When the results of virtual refraction with different pupil diameters (4 and 6 mm) were compared, the mean of differences (MoD) obtained was not clinically significant (less than 0.25 D). Only one of the aberrometry profiles, with high uncorrected astigmatism, showed poor results for the M component in the reproducibility and pupil-size dependence analyses. In all cases, the vision achieved was better than 0 logMAR. A comparison between the compensation obtained with virtual and conventional subjective refraction was made as an example of this application, showing good-quality retinal images in both processes. The present study shows that virtual refraction has similar levels of precision as conventional subjective refraction. Moreover, virtual refraction has also shown that when a large amount of low-order astigmatism is present, the refraction result is less precise and highly dependent on pupil size.
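
    The repeatability analysis is reported on the power-vector components M, J0, and J45. A short sketch of the conversion from a sphero-cylindrical prescription (sphere S, cylinder C, axis α) to power vectors, and of the SD across repeated refractions, assuming the standard Thibos power-vector definitions (M = S + C/2, J0 = −(C/2)cos 2α, J45 = −(C/2)sin 2α); the example refraction values are invented.

        import numpy as np

        def power_vector(sphere: float, cyl: float, axis_deg: float):
            """Convert sphero-cylinder (S, C, axis) to power-vector components,
            using the standard Thibos convention assumed here."""
            a = np.deg2rad(axis_deg)
            m = sphere + cyl / 2.0
            j0 = -(cyl / 2.0) * np.cos(2 * a)
            j45 = -(cyl / 2.0) * np.sin(2 * a)
            return m, j0, j45

        # Repeatability sketch: SD of each component over repeated virtual
        # refractions of the same eye (values invented for illustration).
        refractions = [(-2.25, -0.75, 10), (-2.00, -1.00, 5), (-2.25, -0.75, 15)]
        components = np.array([power_vector(*r) for r in refractions])
        print("SD of (M, J0, J45):", components.std(axis=0, ddof=1))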

  9. Quantifying metal artefact reduction using virtual monochromatic dual-layer detector spectral CT imaging in unilateral and bilateral total hip prostheses.

    PubMed

    Wellenberg, R H H; Boomsma, M F; van Osch, J A C; Vlassenbroek, A; Milles, J; Edens, M A; Streekstra, G J; Slump, C H; Maas, M

    2017-03-01

    To quantify the impact of prosthesis material and design on the reduction of metal artefacts in total hip arthroplasties using virtual monochromatic dual-layer detector spectral CT imaging. A water-filled total hip arthroplasty phantom was scanned on a novel 128-slice Philips IQon dual-layer detector spectral CT scanner at 120 kVp and 140 kVp at a standard computed tomography dose index of 20.0 mGy. Several unilateral and bilateral hip prostheses consisting of different metal alloys were inserted and combined, surrounded by 18 hydroxyapatite calcium carbonate pellets representing bone. Images were reconstructed with iterative reconstruction and analysed at monochromatic energies ranging from 40 to 200 keV. CT numbers in Hounsfield units (HU), noise measured as the standard deviation in HU, signal-to-noise ratios (SNRs), and contrast-to-noise ratios (CNRs) were analysed within fixed regions of interest placed in and around the pellets. In 70 and 74 keV virtual monochromatic images the CT numbers of the pellets were similar to the 120-kVp and 140-kVp polychromatic results, which therefore served as the reference. Metal artefacts were separated into three categories (none, mild/moderate, and severe), and pellets were categorized based on HU deviations. At high keV values overall image contrast was reduced. For mild/moderate artefacts, the highest average CNRs were attained with virtual monochromatic 130 keV images acquired at 140 kVp. Severe metal artefacts were not reduced. In 130 keV images, only mild/moderate metal artefacts were significantly reduced compared to 70 and 74 keV images: deviations in CT numbers, noise, SNRs, and CNRs due to metal artefacts were decreased by 64%, 57%, 62%, and 63%, respectively (p<0.001), compared to unaffected pellets. Optimal keV values, based on CNRs, for different unilateral and bilateral metal hip prostheses consisting of different metal alloys varied from 74 to 150 keV. The titanium alloy resulted in less severe artefacts and were
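
    The artefact metrics in this record are ROI-based: the CT-number deviation from unaffected pellets, noise as the SD within a region of interest, and SNR/CNR against a background region. A minimal sketch of those measurements on one virtual monochromatic image, assuming the ROIs are available as boolean masks; this is a simplified reading of the methods, not the authors' exact definitions.

        import numpy as np

        def roi_stats(image: np.ndarray, pellet_mask: np.ndarray,
                      background_mask: np.ndarray):
            """CT number, noise, SNR, and CNR for one pellet ROI (values in HU)."""
            pellet = image[pellet_mask]
            bg = image[background_mask]
            ct_number = pellet.mean()
            noise = pellet.std(ddof=1)            # noise = SD within the ROI
            snr = ct_number / noise if noise else np.inf
            cnr = (ct_number - bg.mean()) / bg.std(ddof=1)
            return ct_number, noise, snr, cnr

        def artefact_deviation(ct_affected: float, ct_reference: float) -> float:
            """HU deviation of an artefact-affected pellet from unaffected pellets,
            used here to sort pellets into no / mild-moderate / severe artefacts."""
            return abs(ct_affected - ct_reference)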

  10. Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image

    PubMed Central

    Wen, Wei; Khatibi, Siamak

    2017-01-01

    Achieving a high fill factor is a bottleneck problem for capturing high-quality images. There are hardware and software solutions to overcome this problem, but in these solutions the fill factor must be known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers due to its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from the generation of virtual images. Then the global intensity values of the virtual images are obtained, which are the result of fusing the virtual images into a single, high dynamic range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images. The fill factor is estimated by the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from one single arbitrary image, according to the low standard deviation of the estimated fill factors across the images from each camera. PMID:28335459

  11. Virtual Whipple: preoperative surgical planning with volume-rendered MDCT images to identify arterial variants relevant to the Whipple procedure.

    PubMed

    Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B

    2007-05-01

    The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.

  12. Virtual Ed. Faces Sharp Criticism

    ERIC Educational Resources Information Center

    Quillen, Ian

    2011-01-01

    It's been a rough time for the image of K-12 virtual education. Studies in Colorado and Minnesota have suggested that full-time online students are struggling to match the achievement levels of their peers in brick-and-mortar schools. Articles in "The New York Times" questioned not only the academic results for students in virtual schools, but…

  13. Virtuous and vicious virtual water trade with application to Italy.

    PubMed

    Winter, Julia Anna; Allamano, Paola; Claps, Pierluigi

    2014-01-01

    The current trade of agricultural goods, with connections involving all continents, entails global exchanges of "virtual" water, i.e. water used in the production process of alimentary products but not contained within them. Each trade link translates into a corresponding virtual water trade, allowing quantification of import and export fluxes of virtual water. The assessment of the virtual water import for a given nation, compared to the national consumption, can give an approximate idea of the country's reliance on external resources from the food and water resources point of view. A descriptive approach to understanding a nation's degree of dependency on overseas food and water resources is first proposed, and indices of water trade virtuosity, as opposed to inefficiency, are devised. These indices are based on the concepts of self-sufficiency and relative export, computed systematically for all products in the FAOSTAT database, taking Italy as the first case study. Analysis of time series of self-sufficiency and relative export can demonstrate effects of market tendencies and influence water-related policies at the international level. The goal of this approach is to highlight incongruent terms in the virtual water balance from the viewpoint of single products. Specific products, here referred to as "swap products", are in fact identified as those that lead to inefficiencies in the virtual water balance due to their contemporaneously high import and export. The inefficiencies due to the exchanges of the same products between two nations are calculated in terms of virtual water volumes. Furthermore, the cases of swap products are investigated by computing two further indices denoting the ratio of virtual water exchanged in the swap and the ratio of the economic values of the swapped products. The analysis of these figures can help examine the reasons behind the swap phenomenon in trade.
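
    The indices are built from per-product production, import, and export volumes expressed as virtual water. Since the exact formulas are not given in the abstract, the sketch below uses plausible stand-in definitions (self-sufficiency as production over apparent consumption, relative export as export over production) and a simple "swap product" flag for contemporaneously high import and export; all of these definitions and the example numbers are assumptions for illustration only.

        def water_trade_indices(production, imports, exports, swap_fraction=0.25):
            """Illustrative per-product indices (all values in virtual-water volume).

            Assumed definitions, not the paper's:
              self_sufficiency = production / (production + imports - exports)
              relative_export  = exports / production
              swap product     = both imports and exports exceed swap_fraction
                                 of production.
            """
            consumption = production + imports - exports
            self_sufficiency = production / consumption if consumption > 0 else float("inf")
            relative_export = exports / production if production > 0 else float("inf")
            is_swap = (imports > swap_fraction * production and
                       exports > swap_fraction * production)
            return self_sufficiency, relative_export, is_swap

        # Example with invented numbers (million m^3 of virtual water):
        print(water_trade_indices(production=1200.0, imports=450.0, exports=400.0))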

  14. Virtual monochromatic spectral imaging with fast kilovoltage switching: reduction of metal artifacts at CT.

    PubMed

    Pessis, Eric; Campagna, Raphaël; Sverzut, Jean-Michel; Bach, Fabienne; Rodallec, Mathieu; Guerini, Henri; Feydy, Antoine; Drapé, Jean-Luc

    2013-01-01

    With arthroplasty being increasingly used to relieve joint pain, imaging of patients with metal implants can represent a significant part of the clinical work load in the radiologist's daily practice. Computed tomography (CT) plays an important role in the postoperative evaluation of patients who are suspected of having metal prosthesis-related problems such as aseptic loosening, bone resorption or osteolysis, infection, dislocation, metal hardware failure, or periprosthetic bone fracture. Despite advances in detector technology and computer software, artifacts from metal implants can seriously degrade the quality of CT images, sometimes to the point of making them diagnostically unusable. Several factors may help reduce the number and severity of artifacts at multidetector CT, including decreasing the detector collimation and pitch, increasing the kilovolt peak and tube charge, and using appropriate reconstruction algorithms and section thickness. More recently, dual-energy CT has been proposed as a means of reducing beam-hardening artifacts. The use of dual-energy CT scanners allows the synthesis of virtual monochromatic spectral (VMS) images. Monochromatic images depict how the imaged object would look if the x-ray source produced x-ray photons at only a single energy level. For this reason, VMS imaging is expected to provide improved image quality by reducing beam-hardening artifacts.

  15. Virtual and augmented medical imaging environments: enabling technology for minimally invasive cardiac interventional guidance.

    PubMed

    Linte, Cristian A; White, James; Eagleson, Roy; Guiraudon, Gérard M; Peters, Terry M

    2010-01-01

    Virtual and augmented reality environments have been adopted in medicine as a means to enhance the clinician's view of the anatomy and facilitate the performance of minimally invasive procedures. Their value is truly appreciated during interventions where the surgeon cannot directly visualize the targets to be treated, such as during cardiac procedures performed on the beating heart. These environments must accurately represent the real surgical field and require seamless integration of pre- and intra-operative imaging, surgical tracking, and visualization technology in a common framework centered around the patient. This review begins with an overview of minimally invasive cardiac interventions, describes the architecture of a typical surgical guidance platform including imaging, tracking, registration and visualization, highlights both clinical and engineering accuracy limitations in cardiac image guidance, and discusses the translation of the work from the laboratory into the operating room together with typically encountered challenges.

  16. The use of virtual reality in the study, assessment, and treatment of body image in eating disorders and nonclinical samples: a review of the literature.

    PubMed

    Ferrer-García, Marta; Gutiérrez-Maldonado, José

    2012-01-01

    This article reviews research into the use of virtual reality in the study, assessment, and treatment of body image disturbances in eating disorders and nonclinical samples. During the last decade, virtual reality has emerged as a technology that is especially suitable not only for the assessment of body image disturbances but also for their treatment. Indeed, several virtual environment-based software systems have been developed for this purpose. Furthermore, virtual reality seems to be a good alternative to guided imagery and in vivo exposure, and is therefore very useful for studies that require exposure to life-like situations but which are difficult to conduct in the real world. Nevertheless, the review highlights the lack of published controlled studies and the presence of methodological drawbacks that should be considered in future studies. This article also discusses the implications of the results obtained and proposes directions for future research.

  17. Virtual-optical information security system based on public key infrastructure

    NASA Astrophysics Data System (ADS)

    Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben

    2005-01-01

    A virtual-optical based encryption model with the aid of a public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on the virtual-optics scheme (VOS) is used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach has additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining some possible attacks from the viewpoint of cryptanalysis.
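
    The architecture is a hybrid scheme: bulk data are enciphered with the symmetric-style virtual-optics scheme, while the session key is protected with an asymmetric algorithm such as RSA inside an X.509 PKI. As a structural illustration only, the same hybrid pattern can be sketched with off-the-shelf primitives from the Python cryptography package, with Fernet standing in for the VOS cipher (which is not publicly available) and an in-memory key pair standing in for a certificate-backed key.

        from cryptography.fernet import Fernet
        from cryptography.hazmat.primitives import hashes
        from cryptography.hazmat.primitives.asymmetric import padding, rsa

        # Recipient's key pair (in a real PKI the public key comes from an X.509 cert).
        private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        public_key = private_key.public_key()

        # 1) Encipher the data with a fast symmetric cipher (stand-in for VOS).
        session_key = Fernet.generate_key()
        ciphertext = Fernet(session_key).encrypt(b"virtual-optics payload")

        # 2) Encipher the session key with the recipient's RSA public key.
        oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                            algorithm=hashes.SHA256(), label=None)
        wrapped_key = public_key.encrypt(session_key, oaep)

        # Receiver: unwrap the session key with the private key, then decrypt the data.
        recovered_key = private_key.decrypt(wrapped_key, oaep)
        plaintext = Fernet(recovered_key).decrypt(ciphertext)
        assert plaintext == b"virtual-optics payload"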

  18. Body Image and Anti-Fat Attitudes: An Experimental Study Using a Haptic Virtual Reality Environment to Replicate Human Touch.

    PubMed

    Tremblay, Line; Roy-Vaillancourt, Mélina; Chebbi, Brahim; Bouchard, Stéphane; Daoust, Michael; Dénommée, Jessica; Thorpe, Moriah

    2016-02-01

    It is well documented that anti-fat attitudes influence the interactions individuals have with overweight people. However, testing attitudes through self-report measures is challenging. In the present study, we explore the use of a haptic virtual reality environment to physically interact with an overweight virtual human (VH). We verify the hypothesis that the duration and strength of virtual touch vary according to the characteristics of the VH in ways similar to those encountered in interactions with real people in anti-fat attitude studies. A group of 61 participants were randomly assigned to one of the experimental conditions, involving giving a virtual hug to a female or male VH who was either of normal weight or overweight. We found significant associations between body image satisfaction and anti-fat attitudes, and sex differences on these measures. We also found a significant interaction effect of the sex of the participants, the sex of the VH, and the body size of the VH. Female participants hugged the overweight female VH longer than the overweight male VH. Male participants hugged the normal-weight VH longer than the overweight VH. We conclude that virtual touch is a promising method of measuring attitudes, emotions, and social interactions.

  19. Method for Correcting Control Surface Angle Measurements in Single Viewpoint Photogrammetry

    NASA Technical Reports Server (NTRS)

    Burner, Alpheus W. (Inventor); Barrows, Danny A. (Inventor)

    2006-01-01

    A method of determining a corrected control surface angle for use in single viewpoint photogrammetry to correct control surface angle measurements affected by wing bending. First and second visual targets are spaced apart from one another on a control surface of an aircraft wing. The targets are positioned at a semispan distance along the aircraft wing. A reference target separation distance is determined using single viewpoint photogrammetry for a "wind-off" condition. An apparent target separation distance is then computed for "wind-on" conditions. The difference between the reference and apparent target separation distances is minimized by recomputing the single viewpoint photogrammetric solution for incrementally changed values of target semispan distances. A final single viewpoint photogrammetric solution is then generated that uses the corrected semispan distance that produced the minimized difference between the reference and apparent target separation distances. The final single viewpoint photogrammetric solution set is used to determine the corrected control surface angle.
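
    The correction works by incrementally adjusting the assumed semispan distance until the photogrammetric "wind-on" target separation matches the "wind-off" reference. A toy sketch of that minimisation loop follows, with the photogrammetric solution abstracted into a user-supplied callable; the real single-viewpoint solver and the final angle recovery are not reproduced here.

        def corrected_semispan(reference_separation, apparent_separation_at,
                               semispan_guess, step=0.001, max_iter=10000):
            """Incrementally adjust the semispan distance until the apparent target
            separation matches the wind-off reference separation.

            apparent_separation_at(semispan) must recompute the single-viewpoint
            photogrammetric solution for that semispan and return the target
            separation it implies; it is an assumed callable, not NASA's solver.
            """
            best = semispan_guess
            best_err = abs(apparent_separation_at(best) - reference_separation)
            for direction in (+1.0, -1.0):
                s = semispan_guess
                for _ in range(max_iter):
                    s += direction * step
                    err = abs(apparent_separation_at(s) - reference_separation)
                    if err >= best_err:
                        break
                    best, best_err = s, err
            return best  # use this semispan in the final photogrammetric solution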

  20. Head Mounted Displays for Virtual Reality

    DTIC Science & Technology

    1993-02-01

    [Report front-matter residue; recoverable figure captions: "Produce an Image of Infinity"; "The Naval Ocean Systems Center HMD with Front-Mounted CRTs"; "The VR Group HMD with Side-Mounted CRTs"; "Convergence Angles".] One of the goals in the development of Virtual Reality (VR) is to achieve "total immersion," in which one becomes transported out of the real world and into the virtual world. The developers of VR have utilized the head-mounted display (HMD) as a means of

  1. A novel augmented reality system of image projection for image-guided neurosurgery.

    PubMed

    Mahvash, Mehran; Besharati Tabrizi, Leila

    2013-05-01

    Augmented reality systems combine virtual images with a real environment. To design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented, with promising results in a first evaluation. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was achieved using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.

  2. Real-time interactive virtual tour on the World Wide Web (WWW)

    NASA Astrophysics Data System (ADS)

    Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi

    2003-12-01

    Web-based virtual tours have become a desirable and in-demand application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth, and computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos; the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in and out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space using several snapshots from conventional photos.

  3. A Discussion of Virtual Reality As a New Tool for Training Healthcare Professionals.

    PubMed

    Fertleman, Caroline; Aubugeau-Williams, Phoebe; Sher, Carmel; Lim, Ai-Nee; Lumley, Sophie; Delacroix, Sylvie; Pan, Xueni

    2018-01-01

    Virtual reality technology is an exciting and emerging field with vast applications. Our study sets out the viewpoint that virtual reality software could be a new focus of direction in the development of training tools in medical education. We carried out a panel discussion at the Center for Behavior Change 3rd Annual Conference, prompted by the study, "The Responses of Medical General Practitioners to Unreasonable Patient Demand for Antibiotics--A Study of Medical Ethics Using Immersive Virtual Reality" (1). In Pan et al.'s study, 21 general practitioners (GPs) and GP trainees took part in a videoed, 15-min virtual reality scenario involving unnecessary patient demands for antibiotics. This paper was discussed in-depth at the Center for Behavior Change 3rd Annual Conference; the content of this paper is a culmination of findings and feedback from the panel discussion. The experts involved have backgrounds in virtual reality, general practice, medicines management, medical education and training, ethics, and philosophy. Virtual reality is an unexplored methodology to instigate positive behavioral change among clinicians where other methods have been unsuccessful, such as antimicrobial stewardship. There are several arguments in favor of use of virtual reality in medical education: it can be used for "difficult to simulate" scenarios and to standardize a scenario, for example, for use in exams. However, there are limitations to its usefulness because of the cost implications and the lack of evidence that it results in demonstrable behavior change.

  4. Clinical applications of virtual navigation bronchial intervention.

    PubMed

    Kajiwara, Naohiro; Maehara, Sachio; Maeda, Junichi; Hagiwara, Masaru; Okano, Tetsuya; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko

    2018-01-01

    In patients with bronchial tumors, we frequently consider endoscopic treatment as the first treatment of choice. All computed tomography (CT) data must satisfy several conditions necessary for image analysis by Synapse Vincent. To select safer and more precise approaches for patients with bronchial tumors, we determined the indications and efficacy of virtual navigation intervention for the treatment of bronchial tumors. We examined the efficacy of virtual navigation bronchial intervention for the treatment of bronchial tumors located at a variety of sites in the tracheobronchial tree using a high-speed three-dimensional (3D) image analysis system, Synapse Vincent. Constructed images can be utilized to decide on the simulation and interventional strategy as well as for navigation during interventional manipulation, as shown in two cases. Synapse Vincent was used to determine the optimal planning of virtual navigation bronchial intervention. Moreover, this system can detect tumor location and also depict surrounding tissues quickly, accurately, and safely. The feasibility and safety of Synapse Vincent in performing useful preoperative simulation and navigation of surgical procedures can lead to safer, more precise, and less invasive procedures for the patient, and it is easy to construct an image, depending on the purpose, in 5-10 minutes using Synapse Vincent. Moreover, if the lesion is in the parenchyma or sub-bronchial lumen, it helps to perform simulation with virtual skeletal subtraction to estimate potential lesion movement. By using the virtual navigation system for simulation, bronchial intervention was performed safely and precisely, with no complications. Preoperative simulation using virtual navigation bronchial intervention reduces the surgeon's stress levels, particularly when highly skilled techniques are needed to operate on lesions. This task, including both preoperative simulation and intraoperative navigation, leads to greater safety and precision. These technological instruments

  5. Comparison of binary mask defect printability analysis using virtual stepper system and aerial image microscope system

    NASA Astrophysics Data System (ADS)

    Phan, Khoi A.; Spence, Chris A.; Dakshina-Murthy, S.; Bala, Vidya; Williams, Alvina M.; Strener, Steve; Eandi, Richard D.; Li, Junling; Karklin, Linard

    1999-12-01

    As advanced process technologies in the wafer fabs push patterning processes toward lower k1 factors for sub-wavelength resolution printing, reticles are required to use optical proximity correction (OPC) and phase-shifted masks (PSM) for resolution enhancement. For OPC/PSM mask technology, defect printability is one of the major concerns. Current reticle inspection tools available on the market sometimes are not capable of consistently differentiating between an OPC feature and a true random defect. Due to the process complexity and high cost associated with making OPC/PSM reticles, it is important for both mask shops and lithography engineers to understand the impact of different defect types and sizes on printability. The Aerial Image Measurement System (AIMS) has been used in mask shops for a number of years for reticle applications such as aerial image simulation and transmission measurement of repaired defects. The Virtual Stepper System (VSS) provides an alternative method to do defect printability simulation and analysis using reticle images captured by an optical inspection or review system. In this paper, pre-programmed defects and repairs from a Defect Sensitivity Monitor (DSM) reticle with 200 nm minimum features (at 1x) are studied for printability. The resist lines simulated by AIMS and VSS are both compared to SEM images of resist wafers qualitatively and quantitatively using CD verification. Process window comparisons between unrepaired and repaired defects for both good and bad repair cases are shown. The effect of mask repairs on resist pattern images for the binary mask case is discussed. AIMS simulation was done at International Sematech, virtual stepper simulation at Zygo, and resist wafers were processed at the AMD Submicron Development Center using a DUV lithographic process for 0.18 micrometer logic process technology.

  6. Virtual reality in surgery and medicine.

    PubMed

    Chinnock, C

    1994-01-01

    This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time

  7. A method for fast automated microscope image stitching.

    PubMed

    Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong

    2013-05-01

    Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapped areas. In many biomedical research settings, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm based on scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast so that more features could be extracted. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were matched and the transformation parameters were estimated; the images were then blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation, and zoom of the images, and our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for the purposes of observing, exchanging, and saving images and establishing a database of microscope images.
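
    The pipeline described here is: contrast enhancement by histogram equalisation, a coarse overlap estimate by phase correlation, feature extraction, matching with transform estimation, and blending. A compressed sketch of that pipeline with OpenCV follows; since the paper's improved SURF variant is not publicly available, ORB is used as a stand-in detector, the phase-correlation shift is computed but not used to restrict the search as the paper does, and the blending is a plain overwrite rather than seamless blending.

        import cv2
        import numpy as np

        def stitch_pair(img1: np.ndarray, img2: np.ndarray) -> np.ndarray:
            """Stitch two overlapping grayscale microscope images (uint8 arrays)."""
            # 1) Histogram equalisation to boost contrast before feature extraction.
            eq1, eq2 = cv2.equalizeHist(img1), cv2.equalizeHist(img2)

            # 2) Coarse translation between the images via phase correlation
            #    (used in the paper to limit feature extraction to the overlap;
            #    computed but not used further in this simplified sketch).
            shift, _ = cv2.phaseCorrelate(np.float32(eq1), np.float32(eq2))

            # 3) Features and descriptors (ORB here; the paper uses improved SURF).
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(eq1, None)
            k2, d2 = orb.detectAndCompute(eq2, None)

            # 4) Match descriptors and estimate a homography with RANSAC.
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

            # 5) Warp img2 into img1's frame and paste img1 on top (crude blending).
            h, w = img1.shape
            canvas = cv2.warpPerspective(img2, H, (w * 2, h * 2))
            canvas[:h, :w] = img1
            return canvas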

  8. Socially Important Faces Are Processed Preferentially to Other Familiar and Unfamiliar Faces in a Priming Task across a Range of Viewpoints

    PubMed Central

    Keyes, Helen; Zalicks, Catherine

    2016-01-01

    Using a priming paradigm, we investigate whether socially important faces are processed preferentially compared to other familiar and unfamiliar faces, and whether any such effects are affected by changes in viewpoint. Participants were primed with frontal images of personally familiar, famous or unfamiliar faces, and responded to target images of congruent or incongruent identity, presented in frontal, three quarter or profile views. We report that participants responded significantly faster to socially important faces (a friend’s face) compared to other highly familiar (famous) faces or unfamiliar faces. Crucially, responses to famous and unfamiliar faces did not differ. This suggests that, when presented in the context of a socially important stimulus, socially unimportant familiar faces (famous faces) are treated in a similar manner to unfamiliar faces. This effect was not tied to viewpoint, and priming did not affect socially important face processing differently to other faces. PMID:27219101

  9. Hierarchical image-based rendering using texture mapping hardware

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, N

    1999-01-15

    Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create "reprojected" output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.
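
    The key trick is that each precomputed layered depth image stores several z-ranges, and at display time only the fragments whose stored depth falls inside the wanted range are kept before compositing. A CPU-side NumPy sketch of that layer-selection-and-composite idea is given below; the original work does this on the GPU with texture mapping, the alpha test, and the color matrix, none of which is reproduced here.

        import numpy as np

        def composite_layers(colors, depths, z_ranges, view_order):
            """Composite selected z-ranges of a multi-layered depth image.

            colors:     (L, H, W, 3) per-layer RGB
            depths:     (L, H, W)    per-layer depth (np.inf where a layer is empty)
            z_ranges:   list of (z_near, z_far) to keep, mimicking the alpha test
            view_order: layer indices sorted back-to-front for the new viewpoint
            """
            h, w = depths.shape[1:]
            out = np.zeros((h, w, 3), dtype=np.float32)
            for layer in view_order:                      # back-to-front
                keep = np.zeros((h, w), dtype=bool)
                for z_near, z_far in z_ranges:            # "alpha test" on stored depth
                    keep |= (depths[layer] >= z_near) & (depths[layer] <= z_far)
                out[keep] = colors[layer][keep]           # opaque overwrite
            return out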

  10. Virtual microscopy in virtual tumor banking.

    PubMed

    Isabelle, M; Teodorovic, I; Oosterhuis, J W; Riegman, P H J; Passioukov, A; Lejeune, S; Therasse, P; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; Van Damme, B; Van de Vijver, M; Van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; López-Guerrero, J A; Llombart-Bosch, A; Carbone, A; Gloghini, A; Van Veen, E B

    2006-01-01

    Many systems have already been designed and successfully used for sharing histology images over large distances without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated virtual microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and to compress and store the large images on disc, which subsequently can be consulted through the Internet. The images are stored on an image server, which can deliver simple, easy-to-transfer pictures to a user specifying a certain magnification at any position in the scan. This offers new opportunities in histology review, overcoming the requirement of dynamic telepathology systems for compatible software systems and microscopes and, in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high-quality monitor. A system of complete pathology review supporting biorepositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).

  11. Towards a Normalised 3D Geovisualisation: The Viewpoint Management

    NASA Astrophysics Data System (ADS)

    Neuville, R.; Poux, F.; Hallot, P.; Billen, R.

    2016-10-01

    This paper deals with viewpoint management in 3D environments, considering an allocentric environment. Recent advances in computer science and the growing number of affordable remote sensors have led to impressive improvements in 3D visualisation. Despite some research relating to the analysis of visual variables used in 3D environments, we note that a real standardisation of 3D representation rules is lacking. In this paper we study the "viewpoint" as the first parameter considered for a normalised visualisation of 3D data. Unlike in a 2D environment, the viewing direction in 3D is not fixed in a top-down direction. A non-optimal camera location means a poor 3D representation in terms of relayed information. Based on this statement, we propose a model, based on the analysis of the display pixels, that determines a viewpoint maximising the relayed information for a given kind of query. We developed an OpenGL prototype working on screen pixels that determines the optimal camera location based on a screen-pixel colour algorithm. Viewpoint management constitutes a first step towards a normalised 3D geovisualisation.
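
    The prototype scores candidate camera locations by analysing the rendered screen pixels: the more pixels whose colour identifies objects relevant to the query, the more information the viewpoint relays. A small sketch of that scoring step is given below, assuming each candidate viewpoint has already been rendered with flat, per-object ID colours; the OpenGL rendering pass itself is outside this sketch.

        import numpy as np

        def viewpoint_score(id_image: np.ndarray, relevant_ids: set[int]) -> float:
            """Fraction of screen pixels showing objects relevant to the query.

            id_image: (H, W) array where each pixel holds the ID of the object
                      rendered there (0 = background), produced by a flat-colour
                      rendering pass for one candidate viewpoint.
            """
            relevant = np.isin(id_image, list(relevant_ids))
            return float(relevant.mean())

        def best_viewpoint(rendered_views: dict, relevant_ids: set[int]):
            """Return the camera location whose rendering relays the most information.

            rendered_views maps a hashable camera location (e.g., an (x, y, z) tuple)
            to its ID image."""
            return max(rendered_views,
                       key=lambda cam: viewpoint_score(rendered_views[cam], relevant_ids))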

  12. Use of the Remote Access Virtual Environment Network (RAVEN) for coordinated IVA-EVA astronaut training and evaluation.

    PubMed

    Cater, J P; Huffman, S D

    1995-01-01

    This paper presents a unique virtual reality training and assessment tool developed under a NASA grant, "Research in Human Factors Aspects of Enhanced Virtual Environments for Extravehicular Activity (EVA) Training and Simulation." The Remote Access Virtual Environment Network (RAVEN) was created to train and evaluate the verbal, mental, and physical coordination required between the intravehicular (IVA) astronaut operating the Remote Manipulator System (RMS) arm and the EVA astronaut standing in foot restraints on the end of the RMS. The RAVEN system currently allows the EVA astronaut to approach the Hubble Space Telescope (HST) under control of the IVA astronaut and grasp, remove, and replace the Wide Field Planetary Camera drawer from its location in the HST. Two viewpoints, one stereoscopic and one monoscopic, were created and linked by Ethernet, providing the two trainees with the appropriate training environments.

  13. Education in America. Opposing Viewpoints.

    ERIC Educational Resources Information Center

    Cozic, Charles P., Ed.

    This book, part of a series about differing viewpoints on education in America, examines how education can be improved for this and future generations of America's youth. The following papers and their authors are included: "Public Education Needs Extensive Reform" (John Taylor Gatto); "Public Education Does Not Need Extensive…

  14. Dual-energy computed tomography in patients with cutaneous malignant melanoma: Comparison of noise-optimized and traditional virtual monoenergetic imaging.

    PubMed

    Martin, Simon S; Wichmann, Julian L; Weyer, Hendrik; Albrecht, Moritz H; D'Angelo, Tommaso; Leithner, Doris; Lenga, Lukas; Booz, Christian; Scholtz, Jan-Erik; Bodelle, Boris; Vogl, Thomas J; Hammerstingl, Renate

    2017-10-01

    The aim of this study was to investigate the impact of noise-optimized virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with cutaneous malignant melanoma at thoracoabdominal dual-energy computed tomography (DECT). Seventy-six patients (48 men; 66.6±13.8 years) with metastatic cutaneous malignant melanoma underwent DECT of the thorax and abdomen. Images were post-processed with standard linear blending (M_0.6), traditional virtual monoenergetic (VMI), and VMI+ techniques. VMI and VMI+ images were reconstructed in 10-keV intervals from 40 to 100 keV. Attenuation measurements were performed in cutaneous melanoma lesions, as well as in regional lymph node, subcutaneous and in-transit metastases, to calculate objective signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. Five-point scales were used to evaluate overall image quality and lesion delineation by three radiologists with different levels of experience. Objective SNR and CNR indices were highest in the 40-keV VMI+ series (5.6±2.6 and 12.4±3.4), significantly superior to all other reconstructions (all P<0.001). Qualitative image parameters showed the highest values for 50-keV and 60-keV VMI+ reconstructions (median 5, respectively; P≤0.019) regarding overall image quality. Moreover, qualitative assessment of lesion delineation peaked in 40-keV VMI+ (median 5) and 50-keV VMI+ (median 4; P=0.055), significantly superior to all other reconstructions (all P<0.001). Low-keV noise-optimized VMI+ reconstructions substantially increase quantitative and qualitative image parameters, as well as subjective lesion delineation, compared to standard image reconstruction and traditional VMI in patients with cutaneous malignant melanoma at thoracoabdominal DECT. Copyright © 2017 Elsevier B.V. All rights reserved.
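
    As a reading aid only, the objective indices reported above can be computed with the usual definitions (SNR = mean lesion attenuation / image noise; CNR = (lesion - background) / noise). The ROI values below are illustrative, not data from the study.

    ```python
    import numpy as np

    def snr(lesion_hu, noise_sd):
        """Signal-to-noise ratio: mean lesion attenuation (HU) over image noise (SD of HU)."""
        return np.mean(lesion_hu) / noise_sd

    def cnr(lesion_hu, background_hu, noise_sd):
        """Contrast-to-noise ratio: lesion-to-background attenuation difference over noise."""
        return (np.mean(lesion_hu) - np.mean(background_hu)) / noise_sd

    # Illustrative ROI values (HU), not taken from the study.
    lesion = [95.0, 102.0, 98.0]
    muscle = [55.0, 58.0, 60.0]
    noise = 9.0
    print(f"SNR = {snr(lesion, noise):.1f}, CNR = {cnr(lesion, muscle, noise):.1f}")
    ```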

  15. The perception of spatial layout in real and virtual worlds.

    PubMed

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VE) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed, nor how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects after having experienced them previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and in a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subject variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant gender by viewing condition interaction, in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.

  16. Observational learning from a radical-behavioristic viewpoint

    PubMed Central

    Deguchi, Hikaru

    1984-01-01

    Bandura (1972, 1977b) has argued that observational learning has some distinctive features that set it apart from the operant paradigm: (1) acquisition simply through observation, (2) delayed performance through cognitive mediation, and (3) vicarious reinforcement. The present paper first redefines those three features at the descriptive level, and then adopts a radical-behavioristic viewpoint to show how those redefined distinctive features can be explained and tested experimentally. Finally, the origin of observational learning is discussed in terms of recent data on neonatal imitation. The present analysis offers a consistent theoretical and practical understanding of observational learning from a radical-behavioristic viewpoint. PMID:22478602

  17. Development of virtual patient models for permanent implant brachytherapy Monte Carlo dose calculations: interdependence of CT image artifact mitigation and tissue assignment.

    PubMed

    Miksys, N; Xu, C; Beaulieu, L; Thomson, R M

    2015-08-07

    This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS, ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform-density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide mitigation of artifacts comparable to the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image, which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, occur when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than for prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose
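
    A minimal sketch of the two purely image-based MAR approaches named above (simple threshold replacement and a 3D median filter) applied to a CT volume held as a NumPy array; the threshold, replacement value, and kernel size are illustrative assumptions, not the parameters used in the study.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    def mar_str(volume_hu, seed_threshold=3000, replacement_hu=40):
        """Simple threshold replacement: voxels above a bright-seed threshold are
        replaced by a soft-tissue-like value. Cannot fix low-HU streak artifacts."""
        corrected = volume_hu.copy()
        corrected[corrected > seed_threshold] = replacement_hu
        return corrected

    def mar_median(volume_hu, size=3):
        """3D median filter: suppresses bright/dark streaks at the cost of blurring
        genuine tissue heterogeneities."""
        return median_filter(volume_hu, size=size)

    # Toy phantom: uniform 0 HU background with one bright 'seed' voxel.
    vol = np.zeros((32, 32, 32))
    vol[16, 16, 16] = 8000
    print(mar_str(vol).max(), mar_median(vol).max())
    ```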

  18. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    NASA Astrophysics Data System (ADS)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications when taken within the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time into the virtual environment. Notice the preservation of the object

  19. The creation of virtual teeth with and without tooth pathology for a virtual learning environment in dental education.

    PubMed

    de Boer, I R; Wesselink, P R; Vervoorn, J M

    2013-11-01

    To describe the development and opportunities for implementation of virtual teeth with and without pathology for use in a virtual learning environment in dental education. The creation of virtual teeth begins by scanning a tooth with a cone beam CT. The resulting scan consists of multiple two-dimensional grey-scale images. The specially designed software program ColorMapEditor connects these two-dimensional images to create a three-dimensional tooth. With this software, any aspect of the tooth can be modified, including its colour, volume, shape and density, resulting in the creation of virtual teeth of any type. This article provides examples of realistic virtual teeth with and without pathology that can be used for dental education. ColorMapEditor offers infinite possibilities to adjust and add options for the optimisation of virtual teeth. Virtual teeth have unlimited availability for dental students, allowing them to practise as often as required. Virtual teeth can be made and adjusted to any shape with any type of pathology. Further developments in software and hardware technology are necessary to refine the ability to colour and shape the interior of the pulp chamber and surface of the tooth to enable not only treatment but also diagnostics and thus create a greater degree of realism. The creation and use of virtual teeth in dental education appears to be feasible but is still in development; it offers many opportunities for the creation of teeth with various pathologies, although an evaluation of its use in dental education is still required. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. IDP camp evolvement analysis in Darfur using VHSR optical satellite image time series and scientific visualization on virtual globes

    NASA Astrophysics Data System (ADS)

    Tiede, Dirk; Lang, Stefan

    2010-11-01

    In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction in a camp for internally displaced people (IDP) in Darfur, Sudan, along with innovative means for scientific visualisation of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1) extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer), revealing the analysis results (analytical 3D views) transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (Keyhole Markup Language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated extraction of dwelling structures using grid computing techniques are discussed using examples from a similar study.
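
    A minimal sketch of the kind of z-value visualisation described above: writing extruded KML polygons whose height encodes an added-value attribute such as dwelling density, for viewing on a virtual globe. Coordinates, cell size, and density values are invented for illustration.

    ```python
    # Write a tiny KML file in which each camp cell is an extruded polygon whose
    # height (z-value) encodes dwelling density, viewable in Google Earth.
    cells = [  # (lon, lat, dwellings per hectare) -- illustrative values only
        (24.88, 13.45, 12.0),
        (24.89, 13.45, 35.0),
    ]

    def cell_polygon(lon, lat, density, size=0.001, scale=50.0):
        h = density * scale  # metres of extrusion per density unit
        ring = " ".join(f"{lon+dx},{lat+dy},{h}"
                        for dx, dy in [(0, 0), (size, 0), (size, size), (0, size), (0, 0)])
        return (f"<Placemark><Polygon><extrude>1</extrude>"
                f"<altitudeMode>relativeToGround</altitudeMode>"
                f"<outerBoundaryIs><LinearRing><coordinates>{ring}</coordinates>"
                f"</LinearRing></outerBoundaryIs></Polygon></Placemark>")

    kml = ('<?xml version="1.0" encoding="UTF-8"?>'
           '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
           + "".join(cell_polygon(*c) for c in cells) + "</Document></kml>")

    with open("dwelling_density.kml", "w") as f:
        f.write(kml)
    ```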

  1. Evaluation of three-dimensional virtual perception of garments

    NASA Astrophysics Data System (ADS)

    Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.

    2017-10-01

    In recent years, three-dimensional design, dressing and simulation programs have come into prominence in the textile industry. With these programs, the need to produce clothing samples for every design during the design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details and fabric drape features can be evaluated easily. Also, the body size of the virtual mannequin can be adjusted, so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used when presenting the product to the end-user instead of two-dimensional photographic images. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted separately for three different garment types. Participants were asked questions about gender, profession, etc., and were expected to compare real samples with artworks or three-dimensional virtual images of the garments. When the survey results were analyzed statistically, it was seen that the demographic situation of participants does not affect visual perception and that three-dimensional virtual garment images reflect the real sample characteristics better than artworks for each garment type. Also, no perception difference depending on garment type was reported between the t-shirt, sweatshirt and tracksuit bottom.

  2. Virtual Reality as an Educational and Training Tool for Medicine.

    PubMed

    Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo

    2018-02-01

    Until very recently, we considered Virtual Reality as something that was very close, but it was still science fiction. However, today Virtual Reality is being integrated into many different areas of our lives, from videogames to different industrial use cases and, of course, it is starting to be used in medicine. There are two broad general classifications of Virtual Reality. Firstly, there is a Virtual Reality in which we visualize a world completely created by computer, three-dimensional, and where we can appreciate that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. Secondly, there is a Virtual Reality that basically consists of a reflection of our reality. This type of Virtual Reality is created using spherical or 360 images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are more developed), but on the other hand we gain in terms of realism in the images. We could also mention a third classification that merges the previous two, where virtual elements created by computer coexist with 360 images and videos. In this article we will show two systems that we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each one. We will also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.

  3. Viewpoints: A High-Performance High-Dimensional Exploratory Data Analysis Tool

    NASA Astrophysics Data System (ADS)

    Gazis, P. R.; Levit, C.; Way, M. J.

    2010-12-01

    Scientific data sets continue to increase in both size and complexity. In the past, dedicated graphics systems at supercomputing centers were required to visualize large data sets, but as the price of commodity graphics hardware has dropped and its capability has increased, it is now possible, in principle, to view large complex data sets on a single workstation. To do this in practice, an investigator will need software that is written to take advantage of the relevant graphics hardware. The Viewpoints visualization package described herein is an example of such software. Viewpoints is an interactive tool for exploratory visual analysis of large high-dimensional (multivariate) data. It leverages the capabilities of modern graphics boards (GPUs) to run on a single workstation or laptop. Viewpoints is minimalist: it attempts to do a small set of useful things very well (or at least very quickly) in comparison with similar packages today. Its basic feature set includes linked scatter plots with brushing, dynamic histograms, normalization, and outlier detection/removal. Viewpoints was originally designed for astrophysicists, but it has since been used in a variety of fields that range from astronomy, quantum chemistry, fluid dynamics, machine learning, bioinformatics, and finance to information technology server log mining. In this article, we describe the Viewpoints package and show examples of its usage.
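
    Not the Viewpoints code itself, but a small NumPy sketch of two of the basic operations listed (normalization and simple outlier flagging) on a multivariate table; the z-score cut-off and the synthetic data are assumptions.

    ```python
    import numpy as np

    def normalize(data):
        """Column-wise z-score normalization of a (rows, columns) array."""
        mu = data.mean(axis=0)
        sd = data.std(axis=0)
        sd[sd == 0] = 1.0
        return (data - mu) / sd

    def outlier_mask(data, cutoff=4.0):
        """Flag rows whose normalized value exceeds the cutoff in any dimension."""
        return np.any(np.abs(normalize(data)) > cutoff, axis=1)

    rng = np.random.default_rng(0)
    table = rng.normal(size=(10_000, 5))   # stand-in for a large multivariate data set
    table[123] = 50.0                      # plant one gross outlier
    print("rows flagged:", np.where(outlier_mask(table))[0])
    ```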

  4. Deriving and Constraining 3D CME Kinematic Parameters from Multi-Viewpoint Coronagraph Images

    NASA Astrophysics Data System (ADS)

    Thompson, B. J.; Mei, H. F.; Barnes, D.; Colaninno, R. C.; Kwon, R.; Mays, M. L.; Mierla, M.; Moestl, C.; Richardson, I. G.; Verbeke, C.

    2017-12-01

    Determining the 3D properties of a coronal mass ejection using multi-viewpoint coronagraph observations can be a tremendously complicated process. There are many factors that inhibit the ability to unambiguously identify the speed, direction and shape of a CME. These factors include the need to separate the "true" CME mass from shock-associated brightenings, distinguish between non-radial or deflected trajectories, and identify asymmetric CME structures. Additionally, different measurement methods can produce different results, sometimes with great variations. Part of the reason for the wide range of values that can be reported for a single CME is the difficulty in determining the CME's longitude, since uncertainty in the angle of the CME relative to the observing image planes results in errors in the speed and topology of the CME. Often the errors quoted in an individual study are remarkably small when compared to the range of values that are reported by different authors for the same CME. For example, two authors may report speeds of 700 ± 50 km/s and 500 ± 50 km/s for the same CME. Clearly a better understanding of the accuracy of CME measurements, and an improved assessment of the limitations of the different methods, would be of benefit. We report on a survey of CME measurements, wherein we compare the values reported by different authors and catalogs. The survey will allow us to establish typical errors for the parameters that are commonly used as inputs for CME propagation models such as ENLIL and EUHFORIA. One way modelers handle inaccuracies in CME parameters is to use an ensemble of CMEs, sampled across ranges of latitude, longitude, speed and width. The CMEs are simulated in order to determine the probability of a "direct hit" and, for the cases with a "hit," to derive a range of possible arrival times. Our study will provide improved guidelines for generating CME ensembles that more accurately sample across the range of plausible values.
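
    A minimal sketch of the ensemble idea described above: draw CME input parameters from plausible ranges of latitude, longitude, speed, and width, with each member intended as input to a propagation model such as ENLIL. The ranges, the uniform sampling, and the crude hit criterion are illustrative assumptions, not the survey's derived error bars.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    def cme_ensemble(n, lat=(-5, 5), lon=(-10, 10), speed=(500, 700), width=(40, 60)):
        """Draw n CME parameter sets uniformly from the given (min, max) ranges.
        Each member would be passed to a propagation model such as ENLIL."""
        return [{
            "lat_deg": rng.uniform(*lat),
            "lon_deg": rng.uniform(*lon),
            "speed_km_s": rng.uniform(*speed),
            "half_width_deg": rng.uniform(*width),
        } for _ in range(n)]

    members = cme_ensemble(100)
    hits = sum(abs(m["lon_deg"]) < m["half_width_deg"] for m in members)  # crude Earth-hit proxy
    print(f"{hits}/{len(members)} members count as a 'direct hit' under this toy criterion")
    ```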

  5. Conservatism is not the missing viewpoint for true diversity.

    PubMed

    Seibt, Beate; Waldzus, Sven; Schubert, Thomas W; Brito, Rodrigo

    2015-01-01

    The target article diagnoses a dominance of liberal viewpoints with little evidence, promotes a conservative viewpoint without defining it, and wrongly projects the U.S. liberal-conservative spectrum to the whole field of social psychology. Instead, we propose to anticipate and reduce mixing of theorizing and ideology by using definitions that acknowledge divergence in perspective, and promote representative sampling and observation of the field, as well as dialogical publication.

  6. Prospective comparison of virtual fluoroscopy to fluoroscopy and plain radiographs for placement of lumbar pedicle screws.

    PubMed

    Resnick, Daniel K

    2003-06-01

    Fluoroscopy-based frameless stereotactic systems provide feedback to the surgeon using virtual fluoroscopic images. The real-life accuracy of these virtual images has not been compared with traditional fluoroscopy in a clinical setting. We prospectively studied 23 consecutive cases. In two cases, registration errors precluded the use of virtual fluoroscopy. Pedicle probes placed with virtual fluoroscopic imaging were imaged with traditional fluoroscopy in the remaining 21 cases. The position of the probes was judged to be ideal, acceptable but not ideal, or not acceptable based on the traditional fluoroscopic images. Virtual fluoroscopy was used to place probes for 97 pedicles from L1 to the sacrum. Eighty-eight probes were judged to be in ideal position, eight were judged to be acceptable but not ideal, and one probe was judged to be in an unacceptable position. This probe was angled toward an adjacent disc space. Therefore, 96 of 97 probes placed using virtual fluoroscopy were found to be in an acceptable position. The positive predictive value for acceptable screw placement with virtual fluoroscopy compared with traditional fluoroscopy was 99%. A probe placed with virtual fluoroscopic guidance will be judged to be in an acceptable position when imaged with traditional fluoroscopy 99% of the time.
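
    For clarity, the reported positive predictive value follows directly from the counts given above (96 of 97 virtually placed probes confirmed acceptable on traditional fluoroscopy):

    ```python
    true_positive = 96   # acceptable on both virtual and traditional fluoroscopy
    false_positive = 1   # acceptable virtually but unacceptable on traditional imaging
    ppv = true_positive / (true_positive + false_positive)
    print(f"PPV = {ppv:.0%}")   # prints "PPV = 99%"
    ```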

  7. Teenage Pregnancy. Opposing Viewpoints Series.

    ERIC Educational Resources Information Center

    Thompson, Stephen P.

    Books in the Opposing Viewpoints series challenge readers to question their own opinions and assumptions. By reading carefully balanced views, readers confront new ideas on the topic of interest. Although some experts believe that the problem of teenage pregnancy has been overstated, other recent studies have led many people to believe that…

  8. Getting a handle on virtual tools: An examination of the neuronal activity associated with virtual tool use.

    PubMed

    Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A

    2018-01-31

    Tool use is associated with three visual streams: dorso-dorsal, ventro-dorsal, and ventral visual streams. These streams are involved in processing online motor planning, action semantics, and tool semantics features, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation of one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found, including the right inferior parietal lobule, middle and superior temporal gyri, and supramarginal gyrus, regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Viewpoints from generation X: a survey of candidate and associate viewpoints on resident duty-hour regulations.

    PubMed

    Underwood, Willie; Boyd, Anne J; Fletcher, Kathlyn E; Lypson, Monica L

    2004-06-01

    The American Medical Student Association, the Committee of Interns and Residents, and Public Citizen petitioned the Occupational Safety and Health Administration for national resident duty-hour limitations. Subsequently, federal legislation was introduced to limit resident duty hours. To preempt the federal government, the Accreditation Council for Graduate Medical Education implemented resident duty-hour guidelines. To evaluate the viewpoints and attitudes of surgical resident and staff physicians as they pertain to the national resident duty-hour guidelines, we asked attendees of the American College of Surgeons' Candidate Associate Society Forum during the American College of Surgeons Clinical Congress meeting in 2001 to complete a self-administered questionnaire. Analyses were performed to determine the frequency of response for each survey item. Eighty-six of the 102 (84%) surgeons who attended the American College of Surgeons Forum completed the survey. Most disagreed with federal government involvement in regulating duty hours. Although most agreed that residents should not be on call more than every third night, viewpoints varied on the other duty-hour guidelines. Most (63.4%) reported that residents should work 81 to 100 hours per week, 11% reported that residents should work more than 101 hours per week, and the remaining 25.6% reported that residents should work fewer hours. The survey thus captured the viewpoints and attitudes of surgical resident and staff physicians with regard to resident duty-hour reform. These "front line" individuals may have unique insights into the benefits and barriers of duty-hour regulations.

  10. Developing a Virtual Rock Deformation Laboratory

    NASA Astrophysics Data System (ADS)

    Zhu, W.; Ougier-simonin, A.; Lisabeth, H. P.; Banker, J. S.

    2012-12-01

    Experimental rock physics plays an important role in advancing earthquake research. Despite its importance in geophysics, reservoir engineering, waste deposits and energy resources, most geology departments in U.S. universities don't have rock deformation facilities. A virtual deformation laboratory can serve as an efficient tool to help geology students nationally and internationally learn about rock deformation. Working with computer science engineers, we built a virtual deformation laboratory that aims at fostering user interaction to facilitate classroom and outreach teaching and learning. The virtual lab is built to center around a triaxial deformation apparatus in which laboratory measurements of mechanical and transport properties such as stress, axial and radial strains, acoustic emission activities, wave velocities, and permeability are demonstrated. A student user can create her avatar to enter the virtual lab. In the virtual lab, the avatar can browse and choose among various rock samples, determine the testing conditions (pressure, temperature, strain rate, loading paths), then operate the virtual deformation machine to observe how deformation changes the physical properties of rocks. Actual experimental results on the mechanical, frictional, sonic, acoustic and transport properties of different rocks at different conditions are compiled. The data acquisition system in the virtual lab is linked to the compiled experimental data. Structural and microstructural images of deformed rocks are uploaded and linked to different deformation tests. The integration of the microstructural images and the deformation data allows the student to visualize how forces reshape the structure of the rock and change the physical properties. The virtual lab is built using the Game Engine. The geological background, outstanding questions related to the geological environment, and the physical and mechanical concepts associated with the problem will be illustrated on the web portal. In

  11. Does Zika Virus Cause Microcephaly - Applying the Bradford Hill Viewpoints

    PubMed Central

    Awadh, Asma; Chughtai, Abrar Ahmad; Dyda, Amalie; Sheikh, Mohamud; Heslop, David J.; MacIntyre, Chandini Raina

    2017-01-01

    Introduction: Zika virus has been documented since 1952, but has been associated with mild, self-limiting disease. Zika virus is classified as an arbovirus from the family Flaviviridae and is primarily spread by Aedes aegypti mosquitoes. However, in a large outbreak in Brazil in 2015, Zika virus was associated with microcephaly. Methods: In this review we applied the Bradford Hill viewpoints to investigate the association between Zika virus and microcephaly. We examined historical studies and available data, and also compared historical rates of microcephaly prior to the Zika virus outbreak. The available evidence was reviewed against the Bradford Hill viewpoints. Results: All nine criteria were met to varying degrees: strength of association, consistency of the association, specificity, temporality, plausibility, coherence, experimental evidence, biological gradient and analogy. Conclusion: Using the Bradford Hill viewpoints as an evaluation framework for causation is highly suggestive that the association between Zika virus and microcephaly is causal. Further studies using animal models on the viewpoints which were not as strongly fulfilled would be helpful. PMID:28357156

  12. Concept of dual-resolution light field imaging using an organic photoelectric conversion film for high-resolution light field photography.

    PubMed

    Sugimura, Daisuke; Kobayashi, Suguru; Hamamoto, Takayuki

    2017-11-01

    Light field imaging is an emerging technique that is employed to realize various applications such as multi-viewpoint imaging, focal-point changing, and depth estimation. In this paper, we propose the concept of a dual-resolution light field imaging system to synthesize super-resolved multi-viewpoint images. The key novelty of this study is the use of an organic photoelectric conversion film (OPCF), a device that converts the spectral information of incoming light within a certain wavelength range into an electrical signal (pixel value), for light field imaging. In our imaging system, we place the OPCF, which has the green spectral sensitivity, onto the micro-lens array of a conventional light field camera. The OPCF allows us to acquire the green spectral information only at the center viewpoint, but at the full resolution of the image sensor. In contrast, the optical system of the light field camera in our imaging system captures the other spectral information (red and blue) at multiple viewpoints (sub-aperture images) but with low resolution. Thus, our dual-resolution light field imaging system enables us to simultaneously capture information about the target scene at a high spatial resolution as well as the direction information of the incoming light. By exploiting these advantages of our imaging system, our proposed method enables the synthesis of full-resolution multi-viewpoint images. We perform experiments using synthetic images, and the results demonstrate that our method outperforms other previous methods.
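
    The synthesis step is not spelled out in the abstract; as a generic stand-in (a pan-sharpening-style fusion, not the authors' algorithm), the sketch below combines a full-resolution green channel with upsampled low-resolution red and blue sub-aperture channels into one view. Array sizes are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import zoom

    def fuse_view(green_full, red_low, blue_low):
        """Combine a full-resolution green channel with upsampled low-resolution
        red/blue channels into one RGB view (pan-sharpening-style stand-in)."""
        factor = green_full.shape[0] / red_low.shape[0]
        red_up = zoom(red_low, factor, order=1)
        blue_up = zoom(blue_low, factor, order=1)
        return np.stack([red_up, green_full, blue_up], axis=-1)

    # Toy data: 64x64 green channel, 16x16 red/blue sub-aperture channels.
    g = np.random.rand(64, 64)
    r = np.random.rand(16, 16)
    b = np.random.rand(16, 16)
    print(fuse_view(g, r, b).shape)   # (64, 64, 3)
    ```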

  13. Interracial America. Opposing Viewpoints Series.

    ERIC Educational Resources Information Center

    Szumski, Bonnie, Ed.

    Books in the Opposing Viewpoints Series present debates about current issues that can be used to teach critical reading and thinking skills. The varied opinions in each book examine different aspects of a single issue. The topics covered in this volume explore the racial and ethnic tensions that concern many Americans today. The racial divide…

  14. Construction of a Virtual Scanning Electron Microscope (VSEM)

    NASA Technical Reports Server (NTRS)

    Fried, Glenn; Grosser, Benjamin

    2004-01-01

    The Imaging Technology Group (ITG) proposed to develop a Virtual SEM (VSEM) application and supporting materials as the first installed instrument in NASA's Virtual Laboratory Project. The instrument was to be a simulator modeled after an existing SEM, and was to mimic that real instrument as closely as possible. Virtual samples would be developed and provided along with the instrument, which would be written in Java.

  15. Is it possible to use highly realistic virtual reality in the elderly? A feasibility study with image-based rendering.

    PubMed

    Benoit, Michel; Guerchouche, Rachid; Petit, Pierre-David; Chapoulie, Emmanuelle; Manera, Valeria; Chaurasia, Gaurav; Drettakis, George; Robert, Philippe

    2015-01-01

    Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach and secondly to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories. Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant's home city (FamPhoto), and the last two conditions displayed VR, ie, a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice) and an unknown image-based virtual environment (UnknoIBVE), which was captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires to assess the task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also assessed after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task and quality of the recollection was assessed using the "remember/know" procedure. All subjects completed the experiment. Sense of security and fatigue were not significantly different between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the experiment across the VR

  16. Is it possible to use highly realistic virtual reality in the elderly? A feasibility study with image-based rendering

    PubMed Central

    Benoit, Michel; Guerchouche, Rachid; Petit, Pierre-David; Chapoulie, Emmanuelle; Manera, Valeria; Chaurasia, Gaurav; Drettakis, George; Robert, Philippe

    2015-01-01

    Background Virtual reality (VR) opens up a vast number of possibilities in many domains of therapy. The primary objective of the present study was to evaluate the acceptability for elderly subjects of a VR experience using the image-based rendering virtual environment (IBVE) approach and secondly to test the hypothesis that visual cues using VR may enhance the generation of autobiographical memories. Methods Eighteen healthy volunteers (mean age 68.2 years) presenting memory complaints with a Mini-Mental State Examination score higher than 27 and no history of neuropsychiatric disease were included. Participants were asked to perform an autobiographical fluency task in four conditions. The first condition was a baseline grey screen, the second was a photograph of a well-known location in the participant’s home city (FamPhoto), and the last two conditions displayed VR, ie, a familiar image-based virtual environment (FamIBVE) consisting of an image-based representation of a known landmark square in the center of the city of experimentation (Nice) and an unknown image-based virtual environment (UnknoIBVE), which was captured in a public housing neighborhood containing unrecognizable building fronts. After each of the four experimental conditions, participants filled in self-report questionnaires to assess the task acceptability (levels of emotion, motivation, security, fatigue, and familiarity). CyberSickness and Presence questionnaires were also assessed after the two VR conditions. Autobiographical memory was assessed using a verbal fluency task and quality of the recollection was assessed using the “remember/know” procedure. Results All subjects completed the experiment. Sense of security and fatigue were not significantly different between the conditions with and without VR. The FamPhoto condition yielded a higher emotion score than the other conditions (P<0.05). The CyberSickness questionnaire showed that participants did not experience sickness during the

  17. Should Lecture Recordings Be Mandated in Dental Schools? Two Viewpoints: Viewpoint 1: Lecture Recordings Should Be Mandatory in U.S. Dental Schools and Viewpoint 2: Lecture Recordings Should Not Be Mandatory in U.S. Dental Schools.

    PubMed

    Zandona, Andrea Ferreira; Kinney, Janet; Seong, WookJin; Kumar, Vandana; Bendayan, Alexander; Hewlett, Edmond

    2016-12-01

    Transcription or recording of lectures has been in use for many years, and with the availability of high-fidelity recording, the practice is now ubiquitous in higher education. Since technology has permeated education and today's tech-savvy students have expectations for on-demand learning, dental schools are motivated to record lectures, albeit with positive and negative implications. This Point/Counterpoint article addresses the question of whether lecture recording should be mandatory in U.S. dental schools. Viewpoint 1 supports the statement that lecture recording should be mandatory. Proponents of this viewpoint argue that the benefits, notably student satisfaction and the potential for improvement in student performance, outweigh the concerns. Viewpoint 2 takes the opposite position, arguing that lecture recording decreases students' classroom attendance and adversely affects the morale of educators. Additional arguments against mandatory lecture recordings involve the expense of incorporating technology that requires ongoing support.

  18. Constraint, Intelligence, and Control Hierarchy in Virtual Environments. Chapter 1

    NASA Technical Reports Server (NTRS)

    Sheridan, Thomas B.

    2007-01-01

    This paper seeks to deal directly with the question of what makes virtual actors and objects that are experienced in virtual environments seem real. (The term virtual reality, while more common in public usage, is an oxymoron; therefore virtual environment is the preferred term in this paper). Reality is a difficult topic, treated for centuries in those sub-fields of philosophy called ontology, "of or relating to being or existence", and epistemology, "the study of the method and grounds of knowledge, especially with reference to its limits and validity" (both from Webster's, 1965). Advances in recent decades in the technologies of computers, sensors and graphics software have permitted human users to feel present or experience immersion in computer-generated virtual environments. This has motivated a keen interest in probing this phenomenon of presence and immersion not only philosophically but also psychologically and physiologically, in terms of the parameters of the senses and sensory stimulation that correlate with the experience (Ellis, 1991). The pages of the journal Presence: Teleoperators and Virtual Environments have seen much discussion of what makes virtual environments seem real (see, e.g., Slater, 1999; Slater et al. 1994; Sheridan, 1992, 2000). Stephen Ellis, when organizing the meeting that motivated this paper, suggested to invited authors that "We may adopt as an organizing principle for the meeting that the genesis of apparently intelligent interaction arises from an upwelling of constraints determined by a hierarchy of lower levels of behavioral interaction." My first reaction was "huh?" and my second was "yeah, that seems to make sense." Accordingly the paper seeks to explain, from the author's viewpoint, why Ellis's hypothesis makes sense. What is the connection of "presence" or "immersion" of an observer in a virtual environment to "constraints", and what types of constraints? What of "intelligent interaction," and is it the intelligence of the

  19. Virtual egocenters as a function of display geometric field of view and eye station point

    NASA Technical Reports Server (NTRS)

    Psotka, Joseph

    1993-01-01

    The accurate location of one's virtual egocenter in a geometric space is of critical importance for immersion technologies. This experiment was conducted to investigate the role of field of view (FOV) and observer station points in the perception of the location of one's egocenter (the personal viewpoint) in virtual space. Rivalrous cues to the accurate location of one's egocenter may be one factor involved in simulator sickness. Fourteen subjects viewed an animated 3D model, of the room in which they sat, binocularly, from Eye Station Points (ESP) of either 300 or 800 millimeters. The display was on a 190 by 245 mm monitor, at a resolution of 320 by 200 pixels with 256 colors. They saw four models of the room designed with four geometric field of view (FOVg) conditions of 18, 48, 86, and 140 degrees. They drew the apparent paths of the camera in the room on a bitmap of the room as seen from infinity above. Large differences in the paths of the camera were seen as a function of both FOVg and ESP. Ten of the subjects were then asked to find the position for each display that minimized camera motion. The results fit well with predictions from an equation that took the ratio of human FOV (roughly 180 degrees) to FOVg times the Geometric Eye Point (GEP) of the imager: Zero Station Point = (180/FOVg)*GEP
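
    The predictive relation quoted at the end of the abstract can be written as a small function; the FOVg values below are the four display conditions used, while the 300 mm geometric eye point is simply an assumed example value.

    ```python
    def zero_station_point(fov_g_deg, gep_mm, human_fov_deg=180.0):
        """Eye station point at which camera motion is predicted to vanish:
        Zero Station Point = (180 / FOVg) * GEP."""
        return (human_fov_deg / fov_g_deg) * gep_mm

    # Example: assumed geometric eye point of 300 mm at the four FOVg conditions used.
    for fov in (18, 48, 86, 140):
        print(fov, round(zero_station_point(fov, 300.0), 1))
    ```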

  20. Egocentric virtual maze learning in adult survivors of childhood abuse with dissociative disorders: evidence from functional magnetic resonance imaging.

    PubMed

    Weniger, Godehard; Siemerkus, Jakob; Barke, Antonia; Lange, Claudia; Ruhleder, Mirjana; Sachsse, Ulrich; Schmidt-Samoa, Carsten; Dechent, Peter; Irle, Eva

    2013-05-30

    Present neuroimaging findings suggest two subtypes of trauma response, one characterized predominantly by hyperarousal and intrusions, and the other primarily by dissociative symptoms. The neural underpinnings of these two subtypes need to be better defined. Fourteen women with childhood abuse and the current diagnosis of dissociative amnesia or dissociative identity disorder but without posttraumatic stress disorder (PTSD) and 14 matched healthy comparison subjects underwent functional magnetic resonance imaging (fMRI) while finding their way in a virtual maze. The virtual maze presented a first-person view (egocentric), lacked any topographical landmarks and could be learned only by using egocentric navigation strategies. Participants with dissociative disorders (DD) were not impaired in learning the virtual maze when compared with controls, and showed a similar, although weaker, pattern of activity changes during egocentric learning when compared with controls. Stronger dissociative disorder severity of participants with DD was related to better virtual maze performance, and to stronger activity increase within the cingulate gyrus and the precuneus. Our results add to the present knowledge of preserved attentional and visuospatial mnemonic functioning in individuals with DD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  1. Virtual reality and interactive 3D as effective tools for medical training.

    PubMed

    Webb, George; Norcliffe, Alex; Cannings, Peter; Sharkey, Paul; Roberts, Dave

    2003-01-01

    CAVE-like displays allow a user to walk into a virtual environment and use natural movement to change the viewpoint of virtual objects, which they can manipulate with a hand-held device. This maps well to many surgical procedures, offering strong potential for training and planning. These devices may be networked together, allowing geographically remote users to share the interactive experience, which addresses the strong need for distance training and planning among surgeons. Our paper shows how the properties of a CAVE-like facility can be maximised in order to provide an ideal environment for medical training. The implementation of a large 3D eye is described. The resulting application is that of an eye that can be manipulated and examined by trainee medics under the guidance of a medical expert. The progression and effects of different ailments can be illustrated and corrective procedures demonstrated.

  2. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real world space. The interior simulator is developed as an example AR application of the proposed method. Using the interior simulator, users can visually simulate the placement of virtual furniture and articles in the living room, so that they can easily design the living room interior without placing real furniture and articles, viewing the result from many different locations and orientations in real time. In our system, two base images of a real world space are captured from two different views to define a projective coordinate frame for the 3D object space. Then each projective view of a virtual object in the base images is registered interactively. After this coordinate determination, an image sequence of the real world space is captured by a hand-held camera while non-metric feature points are tracked for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by using the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
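
    Not the authors' projective-reconstruction pipeline, but a related minimal sketch of marker-free planar overlay: estimate a homography from tracked point correspondences between a reference view and the current frame, then warp a rendered image of the virtual object into the frame. The OpenCV calls are standard; the file names and point coordinates are placeholders.

    ```python
    import cv2
    import numpy as np

    def overlay_virtual(frame, ref_pts, cur_pts, virtual_img):
        """Warp a virtual object's image into `frame` using the homography that
        maps matched reference-view points to current-frame points (planar case)."""
        H, _ = cv2.findHomography(np.float32(ref_pts), np.float32(cur_pts), cv2.RANSAC)
        h, w = frame.shape[:2]
        warped = cv2.warpPerspective(virtual_img, H, (w, h))
        mask = warped.sum(axis=2) > 0          # non-black pixels of the warped object
        out = frame.copy()
        out[mask] = warped[mask]
        return out

    # Placeholder inputs: a captured frame, four tracked point correspondences,
    # and a rendered image of the virtual furniture in the reference view.
    frame = cv2.imread("living_room_frame.png")
    virtual = cv2.imread("virtual_sofa_reference_view.png")
    ref = [(100, 100), (400, 100), (400, 300), (100, 300)]
    cur = [(120, 110), (410, 90), (430, 320), (110, 330)]
    cv2.imwrite("overlay.png", overlay_virtual(frame, ref, cur, virtual))
    ```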

  3. Immersive Virtual Reality for Visualization of Abdominal CT.

    PubMed

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A; Bodenheimer, Robert E

    2013-03-28

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  4. Immersive virtual reality for visualization of abdominal CT

    NASA Astrophysics Data System (ADS)

    Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.

    2013-03-01

    Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical settings. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.

  5. SU-E-J-167: Improvement of Time-Ordered Four Dimensional Cone-Beam CT; Image Mosaicing with Real and Virtual Projections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakano, M; Kida, S; Masutani, Y

    2014-06-01

    Purpose: In a previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize non-periodic organ motion, such as peristaltic motion of gastrointestinal organs and the adjacent area, using a half-scan reconstruction method. One important obstacle was truncation of the projections caused by the asymmetric location of the flat-panel detector (FPD), which is offset in order to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data and make it possible to reconstruct a full field-of-view (FOV) image using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on a Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: first, a normal three-dimensional (3D) reconstruction containing the whole pelvis was implemented using the real projections. Second, virtual projections were produced by a reprojection of the reconstructed 3D image. Third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp, Davis and Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between the real and virtual projections. Using the mosaic projections, 4D CBCT image sets were reconstructed without artifacts caused by the truncation, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections which contain the whole pelvis. The presented reconstruction method also enables time-ordered 4D
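
    A minimal sketch of the third step described above: for each gantry angle, the truncated real projection is combined with the virtual (reprojected) one so that the detector columns missing from the offset panel are filled in, with a small feather across the seam. Array sizes, the valid-column range, and the feather width are illustrative assumptions.

    ```python
    import numpy as np

    def mosaic_projection(real_proj, virtual_proj, valid_cols, feather=10):
        """Combine a truncated real projection with a virtual (reprojected) one.
        `valid_cols` marks the detector columns actually covered by the offset FPD;
        elsewhere the virtual projection is used. A linear feather is applied over
        the first `feather` valid columns to avoid a visible seam."""
        out = virtual_proj.astype(float).copy()
        out[:, valid_cols] = real_proj[:, valid_cols]
        start = valid_cols.start or 0
        if start > 0:
            w = np.linspace(0.0, 1.0, feather)            # 0 = virtual, 1 = real
            blend = slice(start, start + feather)
            out[:, blend] = (1 - w) * virtual_proj[:, blend] + w * real_proj[:, blend]
        return out

    # Toy example: 384-column detector, of which only columns 128..383 hold real data.
    real = np.random.rand(256, 384)
    virtual = np.random.rand(256, 384)
    extended = mosaic_projection(real, virtual, slice(128, 384))
    print(extended.shape)
    ```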

  6. Simplified Virtualization in a HEP/NP Environment with Condor

    NASA Astrophysics Data System (ADS)

    Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.

    2012-12-01

    In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
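
    One small piece of the workflow described, booting a KVM guest through the libvirt Python bindings, might look like the sketch below; the domain XML, image path, and resource sizes are placeholders, and the Condor integration and image distribution steps are not shown.

    ```python
    import libvirt  # requires the libvirt-python bindings

    # Placeholder domain definition for one Scientific Linux worker-node guest;
    # the name, image path, and sizes are illustrative, not the site's actual values.
    DOMAIN_XML = """
    <domain type='kvm'>
      <name>wn-prototype-01</name>
      <memory unit='MiB'>2048</memory>
      <vcpu>2</vcpu>
      <os><type arch='x86_64'>hvm</type></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/sl6-worker.qcow2'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open("qemu:///system")      # connect to the local KVM hypervisor
    dom = conn.createXML(DOMAIN_XML, 0)        # boot a transient guest from the XML
    print("started guest:", dom.name())
    conn.close()
    ```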

  7. An unconventional depiction of viewpoint in rock art.

    PubMed

    Pettigrew, Jack; Scott-Virtue, Lee

    2015-01-01

    Rock art in Africa sometimes takes advantage of three-dimensional features of the rock wall, such as fissures or protuberances, that can be incorporated into the artistic composition (Lewis-Williams, 2002). More commonly, rock artists choose uniform walls on which two-dimensional depictions may represent three-dimensional figures or objects. In this report we present such a two-dimensional depiction in rock art that we think reveals an intention by the artist to represent an unusual three-dimensional viewpoint, namely, with the two human figures facing into the rock wall, instead of the accustomed Western viewpoint facing out!

  8. [Virtual bronchoscopy: the correlation between endoscopic simulation and bronchoscopic findings].

    PubMed

    Salvolini, L; Gasparini, S; Baldelli, S; Bichi Secchi, E; Amici, F

    1997-11-01

    We carried out a preliminary clinical validation of 3D spiral CT virtual endoscopic reconstructions of the tracheobronchial tree by comparing virtual bronchoscopic images with actual endoscopic findings. Twenty-two patients with tracheobronchial disease suspected at preliminary clinical, cytopathological and plain chest film findings were submitted to spiral CT of the chest and bronchoscopy. CT was repeated after endobronchial therapy in 2 cases. Virtual endoscopic shaded-surface-display views of the tracheobronchial tree were reconstructed from reformatted CT data with Advantage Navigator software. Virtual bronchoscopic images were preliminarily evaluated with a semi-quantitative quality score (excellent/good/fair/poor). The depiction of consecutive airway branches was then considered. Virtual bronchoscopies were finally submitted to double-blind comparison with actual endoscopies. Virtual image quality was considered excellent in 8 cases, good in 14 and fair in 2. Virtual exploration was stopped at the lobar bronchi in one case only; the origin of segmental bronchi was depicted in 23 cases and that of some subsegmental branches in 2 cases. Agreement between actual and virtual bronchoscopic findings was good in all cases but 3, where it was nevertheless considered satisfactory. The yield of clinically useful information differed in 8/24 cases: virtual reconstructions provided more information than bronchoscopy in 5 cases and vice versa in 3. Virtual reconstructions are limited in that the procedure is long and difficult and needs a strictly standardized threshold value so as not to alter virtual findings. Moreover, the reconstructed surface lacks transparency, there is the partial volume effect, and branches ≤4 pixels in diameter and/or meandering branches are difficult to explore. Our preliminary data are encouraging. Segmental bronchi were depicted in nearly all cases, except for the branches involved by disease. Obstructing lesions could be bypassed in some cases

  9. Virtual endoscopy in neurosurgery: a review.

    PubMed

    Neubauer, André; Wolfsberger, Stefan

    2013-01-01

    Virtual endoscopy is the computerized creation of images depicting the inside of patient anatomy reconstructed in a virtual reality environment. It permits interactive, noninvasive, 3-dimensional visual inspection of anatomical cavities or vessels. This can aid in diagnostics, potentially replacing an actual endoscopic procedure, and help in the preparation of a surgical intervention by bridging the gap between plain 2-dimensional radiologic images and the 3-dimensional depiction of anatomy during actual endoscopy. If not only the endoscopic vision but also endoscopic handling, including realistic haptic feedback, is simulated, virtual endoscopy can be an effective training tool for novice surgeons. In neurosurgery, the main fields of the application of virtual endoscopy are third ventriculostomy, endonasal surgery, and the evaluation of pathologies in cerebral blood vessels. Progress in this very active field of research is achieved through cooperation between the technical and the medical communities. While the technology advances and new methods for modeling, reconstruction, and simulation are being developed, clinicians evaluate existing simulators, steer the development of new ones, and explore new fields of application. This review introduces some of the most interesting virtual reality systems for endoscopic neurosurgery developed in recent years and presents clinical studies conducted either on areas of application or specific systems. In addition, benefits and limitations of single products and simulated neuroendoscopy in general are pointed out.

  10. Analysis towards VMEM File of a Suspended Virtual Machine

    NASA Astrophysics Data System (ADS)

    Song, Zheng; Jin, Bo; Sun, Yongqing

    With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering the evidences in virtualized environment is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all pseudo-physical memory into an image. The internal file structure of .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with both advantages and limits analyzed. We conclude with an outlook.
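
    A minimal, generic sketch of the kind of first-pass inspection such work involves: memory-mapping a suspended VM's .vmem image and reporting the offsets of a byte pattern. It assumes nothing about VMware's or Windows' internal structures; the pattern and file path are supplied by the user and purely illustrative.

        # Sketch: scan a .vmem memory image for a byte pattern and print hit offsets.
        # Illustrative only; no VMware- or Windows-specific structure offsets are assumed.
        import mmap
        import re
        import sys

        def find_pattern(vmem_path: str, pattern: bytes, limit: int = 20):
            hits = []
            with open(vmem_path, "rb") as fh:
                with mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ) as mem:
                    for match in re.finditer(re.escape(pattern), mem):
                        hits.append(match.start())
                        if len(hits) >= limit:
                            break
            return hits

        if __name__ == "__main__":
            path, needle = sys.argv[1], sys.argv[2].encode()
            for offset in find_pattern(path, needle):
                print(f"0x{offset:08x}")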

  11. Detecting early egocentric and allocentric impairments deficits in Alzheimer's disease: an experimental study with virtual reality.

    PubMed

    Serino, Silvia; Morganti, Francesca; Di Stefano, Fabio; Riva, Giuseppe

    2015-01-01

    Several studies have pointed out that egocentric and allocentric spatial impairments are one of the earliest manifestations of Alzheimer's Disease (AD). It is less clear how a break in the continuous interaction between these two representations may be a crucial marker to detect patients who are at risk to develop dementia. The main objective of this study is to compare the performances of participants suffering from amnestic mild cognitive impairment (aMCI group), patients with AD (AD group) and a control group (CG), using a virtual reality (VR)-based procedure for assessing the abilities in encoding, storing and syncing different spatial representations. In the first task, participants were required to indicate on a real map the position of the object they had memorized, while in the second task they were invited to retrieve its position from an empty version of the same virtual room, starting from a different position. The entire procedure was repeated across three different trials, depending on the object location in the encoding phase. Our findings showed that aMCI patients performed significantly more poorly in the third trial of the first task, showing a deficit in the ability to encode and store an allocentric viewpoint independent representation. On the other hand, AD patients performed significantly more poorly when compared to the CG in the second task, indicating a specific impairment in storing an allocentric viewpoint independent representation and then syncing it with the allocentric viewpoint dependent representation. Furthermore, data suggested that these impairments are not a product of generalized cognitive decline or of general decay in spatial abilities, but instead may reflect a selective deficit in the spatial organization. Overall, these findings provide an initial insight into the cognitive underpinnings of amnestic impairment in aMCI and AD patients, exploiting the potentiality of VR.

  12. Should Live Patient Licensing Examinations in Dentistry Be Discontinued? Two Viewpoints: Viewpoint 1: Alternative Assessment Models Are Not Yet Viable Replacements for Live Patients in Clinical Licensure Exams and Viewpoint 2: Ethical and Patient Care Concerns About Live Patient Exams Require Full Acceptance of Justifiable Alternatives.

    PubMed

    Chu, Tien-Min Gabriel; Makhoul, Nicholas M; Silva, Daniela Rodrigues; Gonzales, Theresa S; Letra, Ariadne; Mays, Keith A

    2018-03-01

    This Point/Counterpoint article addresses a long-standing but still-unresolved debate on the advantages and disadvantages of using live patients in dental licensure exams. Two contrasting viewpoints are presented. Viewpoint 1 supports the traditional use of live patients, arguing that other assessment models have not yet been demonstrated to be viable alternatives to the actual treatment of patients in the clinical licensure process. This viewpoint also contends that the use of live patients and inherent variances in live patient treatment represent the realities of daily private practice. Viewpoint 2 argues that the use of live patients in licensure exams needs to be discontinued considering those exams' ethical dilemmas of exposing patients to potential harm, as well as their lack of reliability and validity and limited scope. According to this viewpoint, the current presence of viable alternatives means that the risk of harm inherent in live patient exams can finally be eliminated and those exams replaced with other means to confirm that candidates are qualified for licensure to practice.

  13. The Life Cycle of Images: Revisiting the Ethical Treatment of the Art Therapy Image

    ERIC Educational Resources Information Center

    Hinz, Lisa D.

    2013-01-01

    Using the metaphor of the human life cycle, the author of this viewpoint suggests that consideration of the birth, life, and death of images made in art therapy may promote a new perspective on their ethical treatment. A developmental view of images encourages art therapists to see art images as living entities that undergo a natural life cycle.…

  14. Library Searching: An Industrial User's Viewpoint.

    ERIC Educational Resources Information Center

    Hendrickson, W. A.

    1982-01-01

    Discusses library searching of chemical literature from an industrial user's viewpoint, focusing on differences between academic and industrial researcher's searching techniques of the same problem area. Indicates that industry users need more exposure to patents, work with abstracting services and continued improvement in computer searching…

  15. Automated flight path planning for virtual endoscopy.

    PubMed

    Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S

    1998-05-01

    In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
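
    The sketch below illustrates only the core idea described here, iteratively nudging path points toward the medial axis by ascending the distance-from-wall map of a binary lumen mask; it is not the published algorithm, and the step size, iteration count and smoothing weights are assumptions.

        # Sketch: center an initial flight path by gradient ascent on the
        # distance transform of the lumen segmentation (parameters illustrative).
        import numpy as np
        from scipy.ndimage import distance_transform_edt, map_coordinates

        def center_path(lumen_mask: np.ndarray, path: np.ndarray,
                        n_iter: int = 50, step: float = 0.5) -> np.ndarray:
            """lumen_mask: 3D binary array (True inside the airway/colon lumen).
               path: (N, 3) array of voxel coordinates along an initial path."""
            dist = distance_transform_edt(lumen_mask)      # distance to nearest wall
            grads = np.gradient(dist)                      # one gradient array per axis
            pts = path.astype(float).copy()
            for _ in range(n_iter):
                # sample the gradient at the (fractional) point positions
                g = np.stack([map_coordinates(gc, pts.T, order=1) for gc in grads], axis=1)
                norm = np.linalg.norm(g, axis=1, keepdims=True) + 1e-9
                pts += step * g / norm                     # move toward the medial axis
                pts[1:-1] = 0.5 * pts[1:-1] + 0.25 * (pts[:-2] + pts[2:])  # light smoothing
            return pts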

  16. Statistical virtual eye model based on wavefront aberration

    PubMed Central

    Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie

    2012-01-01

    Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112
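
    As a hedged illustration of the Zernike-based representation mentioned here, the sketch evaluates a wavefront from a few low-order, unnormalized Zernike terms on a unit pupil; indexing and normalization conventions vary in the literature, and the coefficient values are invented for the example.

        # Sketch: wavefront surface from a handful of low-order Zernike terms
        # (unnormalized; coefficients and sampling purely illustrative).
        import numpy as np

        def wavefront(coeffs, n_samples=256):
            """coeffs: dict of low-order Zernike coefficients (e.g. in micrometres)."""
            y, x = np.mgrid[-1:1:n_samples * 1j, -1:1:n_samples * 1j]
            rho, theta = np.hypot(x, y), np.arctan2(y, x)
            terms = {
                "piston":   np.ones_like(rho),
                "tilt_x":   rho * np.cos(theta),
                "tilt_y":   rho * np.sin(theta),
                "defocus":  2 * rho**2 - 1,
                "astig_0":  rho**2 * np.cos(2 * theta),
                "astig_45": rho**2 * np.sin(2 * theta),
            }
            w = sum(coeffs.get(name, 0.0) * z for name, z in terms.items())
            w[rho > 1] = np.nan                 # restrict to the unit pupil
            return w

        w = wavefront({"defocus": 0.25, "astig_0": 0.10})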

  17. Agreement and reliability of pelvic floor measurements during rest and on maximum Valsalva maneuver using three-dimensional translabial ultrasound and virtual reality imaging.

    PubMed

    Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B

    2016-08-01

    Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³) on conventional 3D ultrasound. Levator ani hiatal volume (in cm³) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q Stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or POP-Q Stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by
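
    Restating the slab arithmetic from the abstract as a tiny worked example (the area value is hypothetical, only the 1.5 cm slice thickness comes from the study design):

        # Hiatal volume from a rendered 1.5 cm slab: volume = levator area x 1.5.
        slab_thickness_cm = 1.5
        levator_area_cm2 = 14.2                               # hypothetical measurement at rest
        hiatal_volume_cm3 = levator_area_cm2 * slab_thickness_cm
        print(f"hiatal volume: {hiatal_volume_cm3:.1f} cm^3")  # -> 21.3 cm^3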

  18. Improving Student Achievement and Teacher Effectiveness through Scientifically Based Practices. NCREL Viewpoints, Number 11

    ERIC Educational Resources Information Center

    Schuch, Linda, Ed.

    2004-01-01

    "Viewpoints" is a multimedia package containing two audio CDs and a short, informative booklet. This volume of "Viewpoints" focuses on using scientifically based practices to improve student achievement and teacher effectiveness. The audio CDs provide the voices, or viewpoints, of various leaders from the education field who have worked closely…

  19. 75 FR 27119 - ViewPoint Financial Group, Inc., Plano, Texas; Approval of Conversion Application

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-13

    ... DEPARTMENT OF THE TREASURY Office of Thrift Supervision [AC-37: OTS No. H-47111] ViewPoint Financial Group, Inc., Plano, Texas; Approval of Conversion Application Notice is hereby given that on May 6, 2010, the Office of Thrift Supervision approved the application of ViewPoint MHC and ViewPoint Bank...

  20. OPTIMIZATION OF VIRTUAL FRISCH-GRID CdZnTe DETECTOR DESIGNS FOR IMAGING AND SPECTROSCOPY OF GAMMA RAYS.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BOLOTNIKOV,A.E.; ABDUL-JABBAR, N.M.; BABALOLA, S.

    2007-08-21

    In the past, various virtual Frisch-grid designs have been proposed for cadmium zinc telluride (CZT) and other compound semiconductor detectors. These include three-terminal, semi-spherical, CAPture, Frisch-ring, capacitive Frisch-grid and pixel devices (along with their modifications). Among them, the Frisch-grid design employing a non-contacting ring extended over the entire side surfaces of parallelepiped-shaped CZT crystals is the most promising. The defect-free parallelepiped-shaped crystals with typical dimensions of 5x5~12 mm3 are easy to produce and can be arranged into large arrays used for imaging and gamma-ray spectroscopy. In this paper, we report on further advances of the virtual Frisch-grid detector design for the parallelepiped-shaped CZT crystals. Both the experimental testing and modeling results are described.

  1. The National Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.

    2001-06-01

    The National Virtual Observatory is a distributed computational facility that will provide access to the "virtual sky": the federation of astronomical data archives, object catalogs, and associated information services. The NVO's "virtual telescope" is a common framework for requesting, retrieving, and manipulating information from diverse, distributed resources. The NVO will make it possible to seamlessly integrate data from the new all-sky surveys, enabling cross-correlations between multi-Terabyte catalogs and providing transparent access to the underlying image or spectral data. Success requires high performance computational systems, high bandwidth network services, agreed upon standards for the exchange of metadata, and collaboration among astronomers, astronomical data and information service providers, information technology specialists, funding agencies, and industry. International cooperation at the onset will help to assure that the NVO simultaneously becomes a global facility.

  2. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above the lens, so that luminosity over 360 degrees of the surroundings can be captured in one image. We apply the light field method, a technique of Image-Based Rendering (IBR), for generating the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that many view-direction images are collected in the light field. Thus our method allows the user to explore a wide scene and achieves a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.

  3. A virtual fluoroscopy system to verify seed positioning accuracy during prostate permanent seed implants.

    PubMed

    Sarkar, V; Gutierrez, A N; Stathakis, S; Swanson, G P; Papanikolaou, N

    2009-01-01

    The purpose of this project was to develop a software platform to produce a virtual fluoroscopic image as an aid for permanent prostate seed implants. Seed location information from a pre-plan was extracted and used as input to in-house developed software to produce a virtual fluoroscopic image. In order to account for differences in patient positioning on the day of treatment, the user was given the ability to make changes to the virtual image. The system has been shown to work as expected for all test cases. The system allows for quick (on average less than 10 sec) generation of a virtual fluoroscopic image of the planned seed pattern. The image can be used as a verification tool to aid the physician in evaluating how close the implant is to the planned distribution throughout the procedure and enable remedial action should a large deviation be observed.
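
    A hedged sketch of the central rendering step such a tool performs: perspective-projecting planned seed coordinates onto a virtual detector plane. The C-arm geometry (source-to-detector distance, pixel pitch, image size) and the seed coordinates are illustrative assumptions, not the system described in the paper.

        # Sketch: virtual fluoroscopic view of planned seed positions by
        # perspective projection onto a detector plane (geometry hypothetical).
        import numpy as np

        def project_seeds(seeds_mm, sid_mm=1000.0, pixel_mm=0.5, size=(512, 512)):
            """seeds_mm: (N, 3) seed coordinates with the x-ray source at the origin
               and the detector plane at z = sid_mm."""
            image = np.zeros(size, dtype=float)
            for x, y, z in seeds_mm:
                scale = sid_mm / z                         # perspective magnification
                u = int(round(x * scale / pixel_mm)) + size[1] // 2
                v = int(round(y * scale / pixel_mm)) + size[0] // 2
                if 0 <= u < size[1] and 0 <= v < size[0]:
                    image[v, u] += 1.0                     # splat one "seed" per pixel
            return image

        seeds = np.array([[10.0, -5.0, 600.0], [12.0, -3.0, 610.0]])
        virtual_fluoro = project_seeds(seeds)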

  4. Distance Perception of Stereoscopically Presented Virtual Objects Optically Superimposed on Physical Objects by a Head-Mounted See-Through Display

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R.; Bucher, Urs J.; Statler, Irving C. (Technical Monitor)

    1994-01-01

    The influence of physically presented background stimuli on the perceived depth of optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic, virtual image displays. These displays allow presentation of physically unrealizable stimulus combinations. Positioning of an opaque physical object either at the initial perceived depth of the virtual image or at a position substantially in front of the virtual image, causes the virtual image to perceptually move closer to the observer. In the case of objects positioned substantially in front of the virtual image, subjects often perceive the opaque object to become transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not due to occlusion cues. Accordingly, it may have an alternative cause such as variation in the binocular vergence position of the eyes caused by introduction of the physical object. This effect may complicate design of overlaid virtual image displays for near objects and appears to be related to the relative conspicuousness of the overlaid virtual image and the background. Consequently, it may be related to earlier analyses of John Foley which modeled open-loop pointing errors to stereoscopically presented points of light in terms of errors in determination of a reference point for interpretation of observed retinal disparities. Implications for the design of see-through displays for manufacturing will be discussed.

  5. High-extinction virtually imaged phased array-based Brillouin spectroscopy of turbid biological media

    NASA Astrophysics Data System (ADS)

    Fiore, Antonio; Zhang, Jitao; Shao, Peng; Yun, Seok Hyun; Scarcelli, Giuliano

    2016-05-01

    Brillouin microscopy has recently emerged as a powerful technique to characterize the mechanical properties of biological tissue, cell, and biomaterials. However, the potential of Brillouin microscopy is currently limited to transparent samples, because Brillouin spectrometers do not have sufficient spectral extinction to reject the predominant non-Brillouin scattered light of turbid media. To overcome this issue, we combined a multi-pass Fabry-Perot interferometer with a two-stage virtually imaged phased array spectrometer. The Fabry-Perot etalon acts as an ultra-narrow band-pass filter for Brillouin light with high spectral extinction and low loss. We report background-free Brillouin spectra from Intralipid solutions and up to 100 μm deep within chicken muscle tissue.

  6. Dynamic mapping of brain and cognitive control of virtual gameplay (study by functional magnetic resonance imaging).

    PubMed

    Rezakova, M V; Mazhirina, K G; Pokrovskiy, M A; Savelov, A A; Savelova, O A; Shtark, M B

    2013-04-01

    Using functional magnetic resonance imaging, we performed online brain mapping of gamers trained to voluntarily (cognitively) control their heart rate, the parameter that operated a competitive virtual gameplay in the adaptive feedback loop. With the default start picture, the regions of interest during the formation of optimal cognitive strategy were as follows: Brodmann areas 19, 37, 39 and 40, as well as cerebellar structures (vermis, amygdala, pyramids, clivus). The "localization" concept of the contribution of the cerebellum to cognitive processes is discussed.

  7. Detecting early egocentric and allocentric impairments deficits in Alzheimer’s disease: an experimental study with virtual reality

    PubMed Central

    Serino, Silvia; Morganti, Francesca; Di Stefano, Fabio; Riva, Giuseppe

    2015-01-01

    Several studies have pointed out that egocentric and allocentric spatial impairments are one of the earliest manifestations of Alzheimer's Disease (AD). It is less clear how a break in the continuous interaction between these two representations may be a crucial marker to detect patients who are at risk to develop dementia. The main objective of this study is to compare the performances of participants suffering from amnestic mild cognitive impairment (aMCI group), patients with AD (AD group) and a control group (CG), using a virtual reality (VR)-based procedure for assessing the abilities in encoding, storing and syncing different spatial representations. In the first task, participants were required to indicate on a real map the position of the object they had memorized, while in the second task they were invited to retrieve its position from an empty version of the same virtual room, starting from a different position. The entire procedure was repeated across three different trials, depending on the object location in the encoding phase. Our findings showed that aMCI patients performed significantly more poorly in the third trial of the first task, showing a deficit in the ability to encode and store an allocentric viewpoint independent representation. On the other hand, AD patients performed significantly more poorly when compared to the CG in the second task, indicating a specific impairment in storing an allocentric viewpoint independent representation and then syncing it with the allocentric viewpoint dependent representation. Furthermore, data suggested that these impairments are not a product of generalized cognitive decline or of general decay in spatial abilities, but instead may reflect a selective deficit in the spatial organization. Overall, these findings provide an initial insight into the cognitive underpinnings of amnestic impairment in aMCI and AD patients, exploiting the potentiality of VR. PMID:26042034

  8. Virtual Raters for Reproducible and Objective Assessments in Radiology

    NASA Astrophysics Data System (ADS)

    Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A.; Bendszus, Martin; Biller, Armin

    2016-04-01

    Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, the human rater is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV) we also investigate subcategories such as edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed Pearson Correlation, Intra-class Correlation Coefficient (ICC) and Dice score. Virtual raters always lead to an improvement in inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters results in a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.
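
    For illustration, the sketch below shows the Dice overlap used in the comparison plus one naive way of blending a trained model's probability map with a single human mask; the 50/50 weighting and thresholding are assumptions, not the authors' combination method.

        # Sketch: Dice overlap and a naive model-plus-human blend (weights assumed).
        import numpy as np

        def dice(a: np.ndarray, b: np.ndarray) -> float:
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        def virtual_rater(model_prob: np.ndarray, human_mask: np.ndarray,
                          weight: float = 0.5, threshold: float = 0.5) -> np.ndarray:
            """Blend a model probability map with one expert's binary mask."""
            blended = weight * model_prob + (1 - weight) * human_mask.astype(float)
            return blended >= threshold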

  9. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics.

    PubMed

    Ding, Yichen; Abiri, Arash; Abiri, Parinaz; Li, Shuoran; Chang, Chih-Chiang; Baek, Kyung In; Hsu, Jeffrey J; Sideris, Elias; Li, Yilei; Lee, Juhyun; Segura, Tatiana; Nguyen, Thao P; Bui, Alexander; Sevag Packard, René R; Fei, Peng; Hsiai, Tzung K

    2017-11-16

    Currently, there is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3D architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3D and 4D (3D spatial + 1D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods, such as routine optical microscopes. We hereby demonstrate multiscale applicability of VR-LSFM to (a) interrogate skin fibroblasts interacting with a hyaluronic acid-based hydrogel, (b) navigate through the endocardial trabecular network during zebrafish development, and (c) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation algorithm with deformable image registration to interface a VR environment with imaging computation for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution.

  10. Integrating light-sheet imaging with virtual reality to recapitulate developmental cardiac mechanics

    PubMed Central

    Ding, Yichen; Abiri, Arash; Abiri, Parinaz; Li, Shuoran; Chang, Chih-Chiang; Hsu, Jeffrey J.; Sideris, Elias; Li, Yilei; Lee, Juhyun; Segura, Tatiana; Nguyen, Thao P.; Bui, Alexander; Sevag Packard, René R.; Hsiai, Tzung K.

    2017-01-01

    Currently, there is a limited ability to interactively study developmental cardiac mechanics and physiology. We therefore combined light-sheet fluorescence microscopy (LSFM) with virtual reality (VR) to provide a hybrid platform for 3D architecture and time-dependent cardiac contractile function characterization. By taking advantage of the rapid acquisition, high axial resolution, low phototoxicity, and high fidelity in 3D and 4D (3D spatial + 1D time or spectra), this VR-LSFM hybrid methodology enables interactive visualization and quantification otherwise not available by conventional methods, such as routine optical microscopes. We hereby demonstrate multiscale applicability of VR-LSFM to (a) interrogate skin fibroblasts interacting with a hyaluronic acid–based hydrogel, (b) navigate through the endocardial trabecular network during zebrafish development, and (c) localize gene therapy-mediated potassium channel expression in adult murine hearts. We further combined our batch intensity normalized segmentation algorithm with deformable image registration to interface a VR environment with imaging computation for the analysis of cardiac contraction. Thus, the VR-LSFM hybrid platform demonstrates an efficient and robust framework for creating a user-directed microenvironment in which we uncovered developmental cardiac mechanics and physiology with high spatiotemporal resolution. PMID:29202458

  11. An approach to defect inspection for packing presswork with virtual orientation points and threshold template image

    NASA Astrophysics Data System (ADS)

    Hao, Xiangyang; Liu, Songlin; Zhao, Fulai; Jiang, Lixing

    2015-05-01

    Packing presswork is an important aspect of industrial products, especially luxury commodities such as cigarettes. To ensure that the packing presswork is qualified, products should be inspected piece by piece and unqualified ones picked out, which vision-based inspection can do with advantages such as non-contact inspection, high efficiency and automation. Vision-based inspection of packing presswork mainly consists of image acquisition, image registration and defect inspection. Registration between the inspected image and the reference image is the foundation and premise of visual inspection. To realize rapid, reliable and accurate image registration, a registration method based on virtual orientation points is put forward; its registration precision between inspected and reference images reaches sub-pixel level. Since defects have no fixed position, shape, size or color, three measures are taken to improve the inspection results. Firstly, the concept of a threshold template image is put forward to resolve the problem of a variable threshold on intensity differences. Secondly, the color difference is calculated by comparing each pixel with the pixels adjacent to its correspondence in the reference image, to avoid false defects resulting from color registration errors. Thirdly, an image pyramid strategy is applied in the inspection algorithm to improve efficiency. Experiments show that the algorithm is effective for defect inspection and takes 27.4 ms on average to inspect a piece of cigarette packing presswork.
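
    A minimal sketch of the threshold-template idea described above: a defect is flagged wherever the registered inspected image differs from the reference by more than a locally learned threshold. The way the template is built here (per-pixel spread over known-good samples) and all numeric values are assumptions for illustration.

        # Sketch: per-pixel threshold template for defect detection (values illustrative).
        import numpy as np

        def build_threshold_template(good_samples, k=3.0, floor=8.0):
            """good_samples: list of registered defect-free images, each (H, W)."""
            stack = np.stack(good_samples).astype(float)
            return np.maximum(k * stack.std(axis=0), floor)   # per-pixel tolerance

        def inspect(inspected, reference, threshold_template):
            diff = np.abs(inspected.astype(float) - reference.astype(float))
            return diff > threshold_template                  # boolean defect mask

        # Toy demonstration with synthetic images and an injected defect.
        good_imgs = [np.full((64, 64), 120.0) + np.random.randn(64, 64) for _ in range(10)]
        ref_img = np.full((64, 64), 120.0)
        test_img = ref_img.copy()
        test_img[20:24, 30:34] = 60.0
        defects = inspect(test_img, ref_img, build_threshold_template(good_imgs))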

  12. Virtual reality in rhinology-a new dimension of clinical experience.

    PubMed

    Klapan, Ivica; Raos, Pero; Galeta, Tomislav; Kubat, Goranka

    2016-07-01

    There is often a need to more precisely identify the extent of pathology and the fine elements of intracranial anatomic features during the diagnostic process and during many operations in the nose, sinus, orbit, and skull base region. In two case reports, we describe the methods used in the diagnostic workup and surgical therapy in the nose and paranasal sinus region. Besides baseline x-ray, multislice computed tomography, and magnetic resonance imaging, operative field imaging was performed via a rapid prototyping model, virtual endoscopy, and 3-D imaging. Different head tissues were visualized in different colors, showing their anatomic interrelations and the extent of pathologic tissue within the operative field. This approach has not yet been used as a standard preoperative or intraoperative procedure in otorhinolaryngology. In this way, we tried to understand the new, visualized "world of anatomic relations within the patient's head" by creating an impression of perception (virtual perception) of the given position of all elements in a particular anatomic region of the head, which does not exist in the real world (virtual world). This approach was aimed at upgrading the diagnostic workup and surgical therapy by ensuring a faster, safer and, above all, simpler operative procedure. In conclusion, any ENT specialist can provide virtual reality support in implementing surgical procedures, with additional control of risks and within the limits of normal tissue, without additional trauma to the surrounding tissue in the anatomic region. At the same time, the virtual reality support provides an impression of the virtual world as the specialist navigates through it and manipulates virtual objects.

  13. New approaches to virtual environment surgery

    NASA Technical Reports Server (NTRS)

    Ross, M. D.; Twombly, A.; Lee, A. W.; Cheng, R.; Senger, S.

    1999-01-01

    This research focused on two main problems: 1) low cost, high fidelity stereoscopic imaging of complex tissues and organs; and 2) virtual cutting of tissue. A further objective was to develop these images and virtual tissue cutting methods for use in a telemedicine project that would connect remote sites using the Next Generation Internet. For goal one we used a CT scan of a human heart, a desktop PC with an OpenGL graphics accelerator card, and LCD stereoscopic glasses. Use of multiresolution meshes ranging from approximately 1,000,000 to 20,000 polygons speeded interactive rendering rates enormously while retaining general topography of the dataset. For goal two, we used a CT scan of an infant skull with premature closure of the right coronal suture, a Silicon Graphics Onyx workstation, a Fakespace Immersive WorkBench and CrystalEyes LCD glasses. The high fidelity mesh of the skull was reduced from one million to 50,000 polygons. The cut path was automatically calculated as the shortest distance along the mesh between a small number of hand selected vertices. The region outlined by the cut path was then separated from the skull and translated/rotated to assume a new position. The results indicate that widespread high fidelity imaging in virtual environment is possible using ordinary PC capabilities if appropriate mesh reduction methods are employed. The software cutting tool is applicable to heart and other organs for surgery planning, for training surgeons in a virtual environment, and for telemedicine purposes.
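
    The cut-path step described above (shortest distance along the mesh between hand-selected vertices) can be sketched with an ordinary Dijkstra search over the mesh edge graph; the sketch below uses SciPy's graph routines on toy vertex/edge arrays, not the skull model from the study.

        # Sketch: shortest path along mesh edges between two selected vertices.
        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.csgraph import dijkstra

        def mesh_shortest_path(vertices, edges, start, goal):
            """vertices: (V, 3) float array; edges: (E, 2) int array of vertex indices.
               Assumes goal is reachable from start along the edge graph."""
            lengths = np.linalg.norm(vertices[edges[:, 0]] - vertices[edges[:, 1]], axis=1)
            n = len(vertices)
            graph = coo_matrix((lengths, (edges[:, 0], edges[:, 1])), shape=(n, n))
            dist, pred = dijkstra(graph, directed=False, indices=start,
                                  return_predecessors=True)
            path = [goal]
            while path[-1] != start:
                path.append(pred[path[-1]])       # walk predecessors back to the start
            return path[::-1], dist[goal]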

  14. Randomized Clinical Trial of Virtual Reality Simulation Training for Transvaginal Gynecologic Ultrasound Skills.

    PubMed

    Chao, Coline; Chalouhi, Gihad E; Bouhanna, Philippe; Ville, Yves; Dommergues, Marc

    2015-09-01

    To compare the impact of virtual reality simulation training and theoretical teaching on the ability of inexperienced trainees to produce adequate virtual transvaginal ultrasound images. We conducted a randomized controlled trial with parallel groups. Participants included inexperienced residents starting a training program in Paris. The intervention consisted of 40 minutes of virtual reality simulation training using a haptic transvaginal simulator versus 40 minutes of conventional teaching including a conference with slides and videos and answers to the students' questions. The outcome was a 19-point image quality score calculated from a set of 4 images (sagittal and coronal views of the uterus and left and right ovaries) produced by trainees immediately after the intervention, using the same simulator on which a new virtual patient had been uploaded. Experts assessed the outcome on stored images, presented in a random order, 2 months after the trial was completed. They were blinded to group assignment. The hypothesis was an improved outcome in the intervention group. Randomization was 1 to 1. The mean score was significantly greater in the simulation group (n = 16; mean score, 12; SEM, 0.8) than the control group (n = 18; mean score, 9; SEM, 1.0; P= .0302). The quality of virtual vaginal images produced by inexperienced trainees was greater immediately after a single virtual reality simulation training session than after a single theoretical teaching session. © 2015 by the American Institute of Ultrasound in Medicine.

  15. Testing the reliability of hands and ears as biometrics: the importance of viewpoint.

    PubMed

    Stevenage, Sarah V; Walpole, Catherine; Neil, Greg J; Black, Sue M

    2015-11-01

    Two experiments are presented to explore the limits when matching a sample to a suspect utilising the hand as a novel biometric. The results of Experiment 1 revealed that novice participants were able to match hands at above-chance levels as viewpoint changed. Notably, a moderate change in viewpoint had no notable effect, but a more substantial change in viewpoint affected performance significantly. Importantly, the impact of viewpoint when matching hands was smaller than that when matching ears in a control condition. This was consistent with the suggestion that the flexibility of the hand may have minimised the negative impact of a sub-optimal view. The results of Experiment 2 confirmed that training via a 10-min expert video was sufficient to reduce the impact of viewpoint in the most difficult case but not to remove it entirely. The implications of these results were discussed in terms of the theoretical importance of function when considering the canonical view and in terms of the applied value of the hand as a reliable biometric across viewing conditions.

  16. Virtual Reality Exploration and Planning for Precision Colorectal Surgery.

    PubMed

    Guerriero, Ludovica; Quero, Giuseppe; Diana, Michele; Soler, Luc; Agnus, Vincent; Marescaux, Jacques; Corcione, Francesco

    2018-06-01

    Medical software can build a digital clone of the patient with 3-dimensional reconstruction of Digital Imaging and Communication in Medicine images. The virtual clone can be manipulated (rotations, zooms, etc), and the various organs can be selectively displayed or hidden to facilitate a virtual reality preoperative surgical exploration and planning. We present preliminary cases showing the potential interest of virtual reality in colorectal surgery for both cases of diverticular disease and colonic neoplasms. This was a single-center feasibility study. The study was conducted at a tertiary care institution. Two patients underwent a laparoscopic left hemicolectomy for diverticular disease, and 1 patient underwent a laparoscopic right hemicolectomy for cancer. The 3-dimensional virtual models were obtained from preoperative CT scans. The virtual model was used to perform preoperative exploration and planning. Intraoperatively, one of the surgeons was manipulating the virtual reality model, using the touch screen of a tablet, which was interactively displayed to the surgical team. The main outcome was evaluation of the precision of virtual reality in colorectal surgery planning and exploration. In 1 patient undergoing laparoscopic left hemicolectomy, an abnormal origin of the left colic artery beginning as an extremely short common trunk from the inferior mesenteric artery was clearly seen in the virtual reality model. This finding was missed by the radiologist on CT scan. The precise identification of this vascular variant granted a safe and adequate surgery. In the remaining cases, the virtual reality model helped to precisely estimate the vascular anatomy, providing key landmarks for a safer dissection. A larger sample size would be necessary to definitively assess the efficacy of virtual reality in colorectal surgery. Virtual reality can provide an enhanced understanding of crucial anatomical details, both preoperatively and intraoperatively, which could

  17. 3D Point Cloud Model Colorization by Dense Registration of Digital Images

    NASA Astrophysics Data System (ADS)

    Crombez, N.; Caron, G.; Mouaddib, E.

    2015-02-01

    Architectural heritage is historic and artistic property which has to be protected, preserved, restored and shown to the public. Modern tools like 3D laser scanners are increasingly used in heritage documentation. Most of the time, the 3D laser scanner is complemented by a digital camera which is used to enrich the accurate geometric information with the colors of the scanned objects. However, the photometric quality of the acquired point clouds is generally rather low because of several problems presented below. We propose an accurate method for registering digital images acquired from any viewpoint onto point clouds, which is a crucial step for good colorization by color projection. We express this image-to-geometry registration as a pose estimation problem. The camera pose is computed using the entire image intensities under a photometric virtual visual servoing (VVS) framework. The camera extrinsic and intrinsic parameters are estimated automatically. Because we estimate the intrinsic parameters, we do not need any prior information about the camera that took the digital image. Finally, when the point cloud model and the digital image are correctly registered, we project the 3D model into the digital image frame and assign new colors to the visible points. The performance of the approach is demonstrated in simulation and in real experiments on indoor and outdoor datasets of the cathedral of Amiens, which highlight the success of our method, leading to point clouds with better photometric quality and resolution.
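
    Only the final color-assignment step lends itself to a short sketch: once a camera pose is known, each 3D point is projected through a pinhole model and the pixel color is copied onto it. The parameter names and the pinhole formulation are generic assumptions; the photometric VVS pose estimation itself is not reproduced here.

        # Sketch: colorize points from a registered image via pinhole projection.
        import numpy as np

        def colorize(points, image, K, R, t):
            """points: (N, 3) world coords; image: (H, W, 3); K: 3x3 intrinsics;
               R, t: world-to-camera rotation (3x3) and translation (3,)."""
            cam = points @ R.T + t                       # world -> camera frame
            in_front = cam[:, 2] > 0                     # keep points in front of the camera
            uvw = cam @ K.T
            z = np.where(np.abs(uvw[:, 2:3]) < 1e-9, 1e-9, uvw[:, 2:3])
            uv = uvw[:, :2] / z                          # perspective division
            u = uv[:, 0].round().astype(int)
            v = uv[:, 1].round().astype(int)
            h, w = image.shape[:2]
            inside = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
            colors = np.zeros((len(points), 3), dtype=image.dtype)
            colors[inside] = image[v[inside], u[inside]]
            return colors, inside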

  18. Reach for Reference. No Opposition Here! Opposing Viewpoints Resource Center Is a Very Good Database

    ERIC Educational Resources Information Center

    Safford, Barbara Ripp

    2004-01-01

    "Opposing Viewpoints" and "Opposing Viewpoints Juniors" have long been standard titles in upper elementary, middle level, and high school collections. "Opposing Viewpoints Juniors" should be required as information literacy/critical thinking curriculum tools as early as fifth grade as they use current controversies to teach students how to…

  19. Comparison of virtual unenhanced CT images of the abdomen under different iodine flow rates.

    PubMed

    Li, Yongrui; Li, Ye; Jackson, Alan; Li, Xiaodong; Huang, Ning; Guo, Chunjie; Zhang, Huimao

    2017-01-01

    To assess the effect of varying iodine flow rate (IFR) and iodine concentration on the quality of virtual unenhanced (VUE) images of the abdomen obtained with dual-energy CT. 94 subjects underwent unenhanced and triphasic contrast-enhanced CT scan of the abdomen, including arterial phase, portal venous phase, and delayed phase using dual-energy CT. Patients were randomized into 4 groups with different IFRs or iodine concentrations. VUE images were generated at 70 keV. The CT values, image noise, SNR and CNR of aorta, portal vein, liver, liver lesion, pancreatic parenchyma, spleen, erector spinae, and retroperitoneal fat were recorded. Dose-length product and effective dose for an examination with and without plain phase scan were calculated to assess the potential dose savings. Two radiologists independently assessed subjective image quality using a five-point scale. The Kolmogorov-Smirnov test was used first to test for normal distribution. Where data conformed to a normal distribution, analysis of variance was used to compare mean HU values, image noise, SNRs and CNRs for the 4 image sets. Where data distribution was not normal, a nonparametric test (Kruskal-Wallis test followed by stepwise step-down comparisons) was used. The significance level for all tests was 0.01 (two-sided) to allow for type 2 errors due to multiple testing. The CT numbers (HU) of VUE images showed no significant differences between the 4 groups (p > 0.05) or between different phases within the same group (p > 0.05). VUE images had equal or higher SNR and CNR than true unenhanced images. VUE images received equal or lower subjective image quality scores than unenhanced images but were of acceptable quality for diagnostic use. Calculated dose-length product and estimated dose showed that the use of VUE images in place of unenhanced images would be associated with a dose saving of 25%. VUE images can replace conventional unenhanced images. VUE images are not affected by varying iodine
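
    For orientation only, the sketch below spells out one common ROI-based definition of SNR and CNR and the dose-saving arithmetic implied by dropping the true-unenhanced phase; the per-phase dose-length products are invented, chosen equal so the saving comes out at the 25% quoted above.

        # Sketch: ROI-based SNR/CNR and the dose saving from skipping the plain phase.
        def snr(roi_mean_hu, noise_sd_hu):
            return roi_mean_hu / noise_sd_hu

        def cnr(roi_mean_hu, background_mean_hu, noise_sd_hu):
            return (roi_mean_hu - background_mean_hu) / noise_sd_hu

        # Hypothetical four-phase exam, DLP in mGy*cm (equal phases assumed):
        dlp_phases = {"unenhanced": 350, "arterial": 350, "portal": 350, "delayed": 350}
        with_plain = sum(dlp_phases.values())
        without_plain = with_plain - dlp_phases["unenhanced"]
        saving = 100 * (with_plain - without_plain) / with_plain
        print(f"dose saving: {saving:.0f}%")    # -> 25% when the four phases are equal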

  20. Should PGY-1 Be Mandatory in Dental Education? Two Viewpoints: Viewpoint 1: PGY-1 Provides Benefits That Support Making It Mandatory and Viewpoint 2: PGY-1 Should Be Available for Dental Graduates But Not Mandatory.

    PubMed

    Dhar, Vineet; Glascoe, Alison; Esfandiari, Shahrokh; Williams, Kelly B; McQuistan, Michelle R; Stevens, Mark R

    2016-11-01

    This Point/Counterpoint considers whether a general dentistry postgraduate year one (PGY-1) residency should be required for all new graduates who do not pursue specialty training. Currently, New York and Delaware require PGY-1 for dental licensure, while other states offer it as an alternative to a clinical examination for obtaining licensure. Viewpoint 1 supports the position that PGY-1 should be mandatory by presenting evidence that PGY-1 residencies fulfill new graduates' need for additional clinical training, enhance their professionalism and practice management skills, and improve access to care. The authors also discuss two barriers-the limited number of postdoctoral positions and the high cost-and suggest ways to overcome them. In contrast, Viewpoint 2 opposes mandatory PGY-1 training. While these authors consider the same core concepts as Viewpoint 1 (education and access to care), they present alternative methods for addressing perceived educational shortcomings in predoctoral curricula. They also examine the competing needs of underserved populations and residents and the resulting impact on access to care, and they discuss the potential conflict of interest associated with asking PGY-1 program directors to assess their residents' competence for licensure.

  1. VRUSE--a computerised diagnostic tool: for usability evaluation of virtual/synthetic environment systems.

    PubMed

    Kalawsky, R S

    1999-02-01

    A special questionnaire (VRUSE) has been designed to measure the usability of a VR system according to the attitude and perception of its users. Important aspects of VR systems were carefully derived to produce key usability factors for the questionnaire. Unlike questionnaires designed for generic interfaces VRUSE is specifically designed to cater for evaluating virtual environments, being a diagnostic tool providing a wealth of information about a user's viewpoint of the interface. VRUSE can be used to great effect with other evaluation techniques to pinpoint problematical areas of a VR interface. Other applications include bench-marking of competitor VR systems.

  2. Generating Contextual Descriptions of Virtual Reality (VR) Spaces

    NASA Astrophysics Data System (ADS)

    Olson, D. M.; Zaman, C. H.; Sutherland, A.

    2017-12-01

    Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.

  3. Highly Sophisticated Virtual Laboratory Instruments in Education

    NASA Astrophysics Data System (ADS)

    Gaskins, T.

    2006-12-01

    Many areas of science have advanced or stalled according to the ability to see what cannot normally be seen. Visual understanding has been key to many of the world's greatest breakthroughs, such as the discovery of DNA's double helix. Scientists use sophisticated instruments to see what the human eye cannot. Light microscopes, scanning electron microscopes (SEM), spectrometers and atomic force microscopes are employed to examine and learn the details of the extremely minute. It's rare that students prior to university have access to such instruments, or are granted full ability to probe and magnify as desired. Virtual Lab, by providing highly authentic software instruments and comprehensive imagery of real specimens, provides them this opportunity. Virtual Lab's instruments let explorers operate virtual devices on a personal computer to examine real specimens. Exhaustive sets of images systematically and robotically photographed at thousands of positions and multiple magnifications and focal points allow students to zoom in and focus on the most minute detail of each specimen. Controls on each Virtual Lab device interactively and smoothly move the viewer through these images to display the specimen as the instrument saw it. Users control position, magnification, focal length, filters and other parameters. Energy dispersion spectrometry is combined with SEM imagery to enable exploration of chemical composition at minute scale and arbitrary location. Annotation capabilities allow scientists, teachers and students to indicate important features or areas. Virtual Lab is a joint project of NASA and the Beckman Institute at the University of Illinois at Urbana-Champaign. Four instruments currently compose the Virtual Lab suite: a scanning electron microscope and companion energy dispersion spectrometer, a high-power light microscope, and a scanning probe microscope that captures surface properties to the level of atoms. Descriptions of instrument operating principles and

  4. Move to learn: Integrating spatial information from multiple viewpoints.

    PubMed

    Holmes, Corinne A; Newcombe, Nora S; Shipley, Thomas F

    2018-05-11

    Recalling a spatial layout from multiple orientations - spatial flexibility - is challenging, even when the global configuration can be viewed from a single vantage point, but more so when it must be viewed piecemeal. In the current study, we examined whether experiencing the transition between multiple viewpoints enhances spatial memory and flexible recall for a spatial configuration viewed simultaneously (Exp. 1) and sequentially (Exp. 2), whether the type of transition matters, and whether action provides an additional advantage over passive experience. In Experiment 1, participants viewed an array of dollhouse furniture from four viewpoints, but with all furniture simultaneously visible. In Experiment 2, participants viewed the same array piecemeal, from four partitioned viewpoints that allowed for viewing only a segment at a time. The transition between viewpoints involved rotation of the array or participant movement around it. Rotation and participant movement were passively experienced or actively generated. The control condition presented the dollhouse as a series of static views. Across both experiments, participant movement significantly enhanced spatial memory relative to array rotation or static views. However, in Exp. 2, there was a further advantage for actively walking around the array compared to being passively pushed. These findings suggest that movement around a stable environment is key to spatial memory and flexible recall, with action providing an additional boost to the integration of temporally segmented spatial events. Thus, spatial memory may be more flexible than prior data indicate, when studied under more natural acquisition conditions. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Virtual arthroscopy of the visible human female temporomandibular joint.

    PubMed

    Ishimaru, T; Lew, D; Haller, J; Vannier, M W

    1999-07-01

    This study was designed to obtain views of the temporomandibular joint (TMJ) by means of computed arthroscopic simulation (virtual arthroscopy) using three-dimensional (3D) processing. Volume renderings of the TMJ from very thin cryosection slices of the Visible Human Female were taken off the Internet. Analyze(AVW) software (Biomedical Imaging Resource, Mayo Foundation, Rochester, MN) on a Silicon Graphics 02 workstation (Mountain View, CA) was then used to obtain 3D images and allow the navigation "fly-through" of the simulated joint. Good virtual arthroscopic views of the upper and lower joint spaces of both TMJs were obtained by fly-through simulation from the lateral and endaural sides. It was possible to observe the presence of a partial defect in the articular disc and an osteophyte on the condyle. Virtual arthroscopy provided visualization of regions not accessible to real arthroscopy. These results indicate that virtual arthroscopy will be a new technique to investigate the TMJ of the patient with TMJ disorders in the near future.

  6. Welfare Impact of Virtual Trading on Wholesale Electricity Markets

    NASA Astrophysics Data System (ADS)

    Giraldo, Juan S.

    Virtual bidding has become a standard feature of multi-settlement wholesale electricity markets in the United States. Virtual bids are financial instruments that allow market participants to take financial positions in the Day-Ahead (DA) market that are automatically reversed/closed in the Real-Time (RT) market. Most U.S. wholesale electricity markets only have two types of virtual bids: a decrement bid (DEC), which is virtual load, and an increment offer (INC), which is virtual generation. In theory, financial participants create benefits by seeking out profitable bidding opportunities through arbitrage or speculation. Benefits have been argued to take the form of increased competition, price convergence, increased market liquidity, and a more efficient dispatch of generation resources. Studies have found that price convergence between the DA and RT markets improved following the introduction of virtual bidding into wholesale electricity markets. The improvement in price convergence was taken as evidence that market efficiency had increased and many of the theoretical benefits realized. Persistent price differences between the DA and RT markets have led to calls to further expand virtual bidding as a means to address remaining market inefficiencies. However, the argument that price convergence is beneficial is extrapolated from the study of commodity and financial markets and the role of futures for increasing market efficiency in that context. This viewpoint largely ignores details that differentiate wholesale electricity markets from other commodity markets. This dissertation advances the understanding of virtual bidding by evaluating the impact of virtual bidding based on the standard definition of economic efficiency which is social welfare. In addition, an examination of the impacts of another type of virtual bid, up-to-congestion (UTC) transactions is presented. This virtual product significantly increased virtual bidding activity in the PJM interconnection

  7. Quantitative Comparison of Virtual Monochromatic Images of Dual Energy Computed Tomography Systems: Beam Hardening Artifact Correction and Variance in Computed Tomography Numbers: A Phantom Study.

    PubMed

    Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki

    2018-05-21

    The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.

  8. Dual-energy CT in patients with abdominal malignant lymphoma: impact of noise-optimised virtual monoenergetic imaging on objective and subjective image quality.

    PubMed

    Lenga, L; Czwikla, R; Wichmann, J L; Leithner, D; Albrecht, M H; D'Angelo, T; Arendt, C T; Booz, C; Hammerstingl, R; Vogl, T J; Martin, S S

    2018-06-05

    To investigate the impact of noise-optimised virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with malignant lymphoma at dual-energy computed tomography (DECT) examinations of the abdomen. Thirty-five consecutive patients (mean age, 53.8±18.6 years; range, 21-82 years) with histologically proven malignant lymphoma of the abdomen were included retrospectively. Images were post-processed with standard linear blending (M_0.6), traditional VMI, and VMI+ technique at energy levels ranging from 40 to 100 keV in 10 keV increments. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were objectively measured in lymphoma lesions. Image quality, lesion delineation, and image noise were rated subjectively by three blinded observers using five-point Likert scales. Quantitative image quality parameters peaked at 40-keV VMI+ (SNR, 15.77±7.74; CNR, 18.27±8.04) with significant differences compared to standard linearly blended M_0.6 (SNR, 7.96±3.26; CNR, 13.55±3.47) and all traditional VMI series (p<0.001). Qualitative image quality assessment revealed significantly superior ratings for image quality at 60-keV VMI+ (median, 5) in comparison with all other image series (p<0.001). Assessment of lesion delineation showed the highest rating scores for 40-keV VMI+ series (median, 5), while lowest subjective image noise was found for 100-keV VMI+ reconstructions (median, 5). Low-keV VMI+ reconstructions led to improved image quality and lesion delineation of malignant lymphoma lesions compared to standard image reconstruction and traditional VMI at abdominal DECT examinations. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
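
    For readers unfamiliar with the objective metrics reported above, the minimal sketch below shows one common way SNR and CNR are computed from region-of-interest (ROI) statistics; the exact ROI placement and noise definition used in the cited study may differ, and the values here are synthetic.

```python
import numpy as np

def snr_cnr(lesion_roi, background_roi):
    """Compute lesion SNR and CNR from ROI pixel values (CT numbers, HU).

    SNR = mean(lesion) / noise, CNR = (mean(lesion) - mean(background)) / noise,
    with noise taken as the standard deviation of the background ROI.
    """
    noise = np.std(background_roi)
    snr = np.mean(lesion_roi) / noise
    cnr = (np.mean(lesion_roi) - np.mean(background_roi)) / noise
    return snr, cnr

# Synthetic example values in HU, not data from the study.
rng = np.random.default_rng(0)
lesion = rng.normal(90, 8, 500)
background = rng.normal(40, 8, 500)
print(snr_cnr(lesion, background))
```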

  9. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    NASA Astrophysics Data System (ADS)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smart phones, Tablet PCs, portable gaming consoles, and Pocket PCs.
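
    The HDR capture step mentioned above is not detailed in this record; as a rough sketch of the underlying idea, the code below merges a bracketed exposure stack into a radiance map assuming a linear camera response (real pipelines such as the one reviewed here typically recover the response curve first). All values are synthetic.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge a bracketed exposure stack into an HDR radiance map.

    Assumes a linear camera response and uses a triangular weight that
    favours well-exposed (mid-tone) pixels.
    images: list of float arrays in [0, 1], all of the same shape.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for im, t in zip(images, exposure_times):
        im = np.asarray(im, dtype=np.float64)
        w = 1.0 - np.abs(2.0 * im - 1.0)   # 1 at mid-gray, 0 at the extremes
        num += w * (im / t)                # per-exposure radiance estimate
        den += w
    return num / np.maximum(den, 1e-6)

# Hypothetical three-exposure stack (synthetic data, not CAVEPIPE output).
scene = np.random.rand(4, 4)
times = [0.25, 1.0, 4.0]
stack = [np.clip(scene * t, 0.0, 1.0) for t in times]
print(merge_hdr(stack, times))
```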

  10. Neurofeedback training with virtual reality for inattention and impulsiveness.

    PubMed

    Cho, Baek-Hwan; Kim, Saebyul; Shin, Dong Ik; Lee, Jang Han; Lee, Sang Min; Kim, In Young; Kim, Sun I

    2004-10-01

    In this research, the effectiveness of neurofeedback, along with virtual reality (VR), in reducing the level of inattention and impulsiveness was investigated. Twenty-eight male participants, aged 14-18, with social problems, took part in this study. They were separated into three groups: a control group, a VR group, and a non-VR group. The VR and non-VR groups underwent eight sessions of neurofeedback training over 2 weeks, while the control group just waited during the same period. The VR group used a head-mounted display (HMD) and a head tracker, which let them look around the virtual world. Conversely, the non-VR group used only a computer monitor with a fixed viewpoint. All participants performed a continuous performance task (CPT) before and after the complete training session. The results showed that both the VR and non-VR groups achieved better scores in the CPT after the training session, while the control group showed no significant difference. Compared with the other groups, the VR group presented a tendency to get better results, suggesting that immersive VR is applicable to neurofeedback for the rehabilitation of inattention and impulsiveness.

  11. Should Attendance Be Required in Lecture Classrooms in Dental Education? Two Viewpoints: Viewpoint 1: Attendance in the Lecture Classroom Should Be Required and Viewpoint 2: Attendance Should Not Be Required in the Lecture Classroom.

    PubMed

    Cutler, Christopher W; Parise, Mary; Seminario, Ana Lucia; Mendez, Maria Jose Cervantes; Piskorowski, Wilhelm; Silva, Renato

    2016-12-01

    This Point/Counterpoint discusses the long-argued debate over whether lecture attendance in dental school at the predoctoral level should be required. Current educational practice relies heavily on the delivery of content in a traditional lecture style. Viewpoint 1 asserts that attendance should be required for many reasons, including the positive impact that direct contact of students with faculty members and with each other has on learning outcomes. In lectures, students can more easily focus on subject matter that is often difficult to understand. A counter viewpoint argues that required attendance is not necessary and that student engagement is more important than physical classroom attendance. This viewpoint notes that recent technologies support active learning strategies that better engage student participation, fostering independent learning that is not supported in the traditional large lecture classroom, and it argues that dental education requires assimilating complex concepts and applying them to patient care, which passing a test does not ensure. The two positions agree that attendance does not guarantee learning and that, with the surge of information technologies, it is more important than ever to teach students how to learn. At this time, research does not show conclusively whether attendance in any type of setting equals improved learning or ability to apply knowledge.

  12. A digital atlas of breast histopathology: an application of web based virtual microscopy

    PubMed Central

    Lundin, M; Lundin, J; Helin, H; Isola, J

    2004-01-01

    Aims: To develop an educationally useful atlas of breast histopathology, using advanced web based virtual microscopy technology. Methods: By using a robotic microscope and software adopted and modified from the aerial and satellite imaging industry, a virtual microscopy system was developed that allows fully automated slide scanning and image distribution via the internet. More than 150 slides were scanned at high resolution with an oil immersion ×40 objective (numerical aperture, 1.3) and archived on an image server residing in a high speed university network. Results: A publicly available website was constructed, http://www.webmicroscope.net/breastatlas, which features a comprehensive virtual slide atlas of breast histopathology according to the World Health Organisation 2003 classification. Users can view any part of an entire specimen at any magnification within a standard web browser. The virtual slides are supplemented with concise textual descriptions, but can also be viewed without diagnostic information for self assessment of histopathology skills. Conclusions: Using the technology described here, it is feasible to develop clinically and educationally useful virtual microscopy applications. Web based virtual microscopy will probably become widely used at all levels in pathology teaching. PMID:15563669

  13. Virtual 3d City Modeling: Techniques and Applications

    NASA Astrophysics Data System (ADS)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other manmade features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Three main Geomatics approaches are generally used to generate virtual 3D city models: in the first, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; in the third, many researchers use terrestrial images through close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, the conclusions of this research are given, together with a short discussion of justification, analysis, and present trends in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using Geomatics techniques and the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  14. Manually locating physical and virtual reality objects.

    PubMed

    Chen, Karen B; Kimmel, Ryan A; Bartholomew, Aaron; Ponto, Kevin; Gleicher, Michael L; Radwin, Robert G

    2014-09-01

    In this study, we compared how users locate physical and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) using the hand, to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance. Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects. Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object. Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets. Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting a possible influence of cues other than binocular vision. Human physical interaction with objects in VR for simulation, training, and prototyping involving reaching and manually handling virtual objects in a CAVE is more accurate than predicted when locating farther objects.
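
    The geometric error model mentioned above is not reproduced in this record; the sketch below illustrates the kind of stereo geometry such a model rests on, relating on-screen disparity, interpupillary distance (IPD), and perceived depth for a projection screen. The function names and numbers are illustrative assumptions only.

```python
def screen_disparity(target_depth, screen_distance, ipd):
    """On-screen disparity (m) for a point rendered at target_depth (m) from
    the eyes, on a projection screen at screen_distance (m), for a viewer
    with interpupillary distance ipd (m)."""
    return ipd * (target_depth - screen_distance) / target_depth

def perceived_depth(disparity, screen_distance, ipd):
    """Depth (m) at which a viewer with the given ipd perceives a point drawn
    with the given on-screen disparity."""
    return ipd * screen_distance / (ipd - disparity)

# If imagery is rendered for an assumed 65 mm IPD but viewed with a 60 mm IPD,
# a target intended to sit 1.5 m away on a 2 m screen is perceived closer:
d = screen_disparity(1.5, 2.0, 0.065)   # disparity rendered for 65 mm IPD
print(perceived_depth(d, 2.0, 0.060))   # depth seen by a 60 mm IPD viewer
```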

  15. Virtual pyramid wavefront sensor for phase unwrapping.

    PubMed

    Akondi, Vyas; Vohnsen, Brian; Marcos, Susana

    2016-10-10

    Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.

  16. [The virtual reality simulation research of China Mechanical Virtual Human based on the Creator/Vega].

    PubMed

    Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng

    2010-10-01

    The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on the China Visible Human slice images, and it has great practical significance for realistic applications. This paper first introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and very large data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way for the virtual reality application of CMVH.

  17. Training software using virtual-reality technology and pre-calculated effective dose data.

    PubMed

    Ding, Aiping; Zhang, Di; Xu, X George

    2009-05-01

    This paper describes the development of a software package, called VR Dose Simulator, which aims to provide interactive radiation safety and ALARA training to radiation workers using virtual-reality (VR) simulations. Combined with a pre-calculated effective dose equivalent (EDE) database, a virtual radiation environment was constructed in VR authoring software, EON Studio, using 3-D models of a real nuclear power plant building. Models of avatars representing two workers were adopted, with the arms and legs of the avatar being controlled in the software to simulate walking and other postures. Collision detection algorithms were developed for various parts of the 3-D power plant building and avatars to confine the avatars to certain regions of the virtual environment. Ten different camera viewpoints were assigned to conveniently cover the entire virtual scenery from different viewing angles. A user can control the avatar to carry out radiological engineering tasks using two modes of avatar navigation. A user can also specify two types of radiation source: Cs and Co. The location of the avatar inside the virtual environment during the course of the avatar's movement is linked to the EDE database. The accumulated dose is calculated and displayed on the screen in real time. Based on the final accumulated dose and the completion status of all virtual tasks, a score is given to evaluate the performance of the user. The paper concludes that VR-based simulation technologies are interactive and engaging, and thus potentially useful in improving the quality of radiation safety training. The paper also summarizes several challenges: more streamlined data conversion, realistic avatar movement and posture, more intuitive implementation of the data communication between EON Studio and VB.NET, and more versatile utilization of EDE data such as a source near the body, all of which need to be addressed in future efforts to develop this type of software.
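
    The exact dose bookkeeping is not given in this record, but the core idea, linking avatar position to a pre-calculated effective dose rate and accumulating dose over time, can be sketched roughly as follows; the grid cells, dose rates, and time step are hypothetical values, not data from the tool.

```python
# Minimal sketch: accumulate effective dose from a pre-calculated dose-rate
# lookup keyed on the avatar's position (all values hypothetical).
dose_rate_msv_per_h = {        # coarse grid cell -> effective dose rate
    (0, 0): 0.002,
    (0, 1): 0.010,
    (1, 1): 0.150,             # cell near the simulated source
}

def accumulate_dose(path, dt_seconds):
    """Sum the dose picked up along a walked path of grid cells, assuming the
    avatar spends dt_seconds in each cell."""
    total_msv = 0.0
    for cell in path:
        rate = dose_rate_msv_per_h.get(cell, 0.0)
        total_msv += rate * dt_seconds / 3600.0
        # In the training tool, this running total would be shown on screen.
    return total_msv

print(accumulate_dose([(0, 0), (0, 1), (1, 1), (0, 1)], dt_seconds=30))
```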

  18. 2016 Federal Employee Viewpoint Survey Results and Analysis

    EPA Pesticide Factsheets

    The Office of Personnel Management’s Federal Employee Viewpoint Survey results are used to gauge the attitudes and perceptions of employees in key work experience areas that drive satisfaction and commitment.

  19. 2017 Federal Employee Viewpoint Survey Results and Analysis

    EPA Pesticide Factsheets

    The Office of Personnel Management’s Federal Employee Viewpoint Survey results are used to gauge the attitudes and perceptions of employees in key work experience areas that drive satisfaction and commitment.

  20. 2015 Federal Employee Viewpoint Survey Results and Analysis

    EPA Pesticide Factsheets

    The Office of Personnel Management’s Federal Employee Viewpoint Survey results are used to gauge the attitudes and perceptions of employees in key work experience areas that drive satisfaction and commitment.

  1. Virtual Reality: You Are There

    NASA Technical Reports Server (NTRS)

    1993-01-01

    Telepresence or "virtual reality," allows a person, with assistance from advanced technology devices, to figuratively project himself into another environment. This technology is marketed by several companies, among them Fakespace, Inc., a former Ames Research Center contractor. Fakespace developed a teleoperational motion platform for transmitting sounds and images from remote locations. The "Molly" matches the user's head motion and, when coupled with a stereo viewing device and appropriate software, creates the telepresence experience. Its companion piece is the BOOM-the user's viewing device that provides the sense of involvement in the virtual environment. Either system may be used alone. Because suits, gloves, headphones, etc. are not needed, a whole range of commercial applications is possible, including computer-aided design techniques and virtual reality visualizations. Customers include Sandia National Laboratories, Stanford Research Institute and Mattel Toys.

  2. Tele Hyper Virtuality

    NASA Technical Reports Server (NTRS)

    Terashima, Nobuyoshi

    1994-01-01

    In the future, remote images sent over communication lines will be reproduced in virtual reality (VR). This form of virtual telecommunications, which will allow observers to engage in an activity as though it were real, is the focus of considerable attention. Taken a step further, real and unreal objects will be placed in a single space to create an extremely realistic environment. Here, imaginary and other life forms as well as people and animals in remote locations will gather via telecommunication lines that create a common environment where life forms can work and interact together. Words, gestures, diagrams and other forms of communication will be used freely in performing work. Actual construction of a system based on this new concept will not only provide people with experiences that would have been impossible in the past, but will also inspire new applications in which people will function in environments where it would have been difficult if not impossible for them to function until now. This paper describes Tele Hyper Virtuality concept, its definition, applications, the key technologies to accomplish it and future prospects.

  3. Extending the Life of Virtual Heritage: Reuse of Tls Point Clouds in Synthetic Stereoscopic Spherical Images

    NASA Astrophysics Data System (ADS)

    Garcia Fernandez, J.; Tammi, K.; Joutsiniemi, A.

    2017-02-01

    Recent advances in Terrestrial Laser Scanner (TLS) technology, in terms of cost and flexibility, have consolidated it as an essential tool for the documentation and digitalization of Cultural Heritage. However, once the TLS data has been used, it typically remains in storage, unused. How can highly accurate and dense point clouds (of the built heritage) be processed for reuse, especially to engage a broader audience? This paper aims to answer this question through a channel that minimizes the need for expert knowledge while enhancing interactivity with the as-built digital data: Virtual Heritage dissemination through the production of VR content. Driven by the ProDigiOUs project's guidelines on data dissemination (EU funded), this paper develops a production path to transform the point cloud into virtual stereoscopic spherical images, taking into account the different visual features that produce depth perception, and especially those prompting visual fatigue while experiencing the VR content. Finally, we present the results of the Hiedanranta scans transformed into stereoscopic spherical animations.
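
    The production path from point cloud to stereoscopic spherical imagery is only outlined in this record; as a rough sketch of the underlying mapping, the code below projects a single 3D point into equirectangular (spherical panorama) pixel coordinates for one viewpoint. Stereo rendering and the visual-fatigue considerations discussed in the paper are beyond this sketch, and the coordinate convention is an assumption.

```python
import math

def point_to_equirect(x, y, z, width, height):
    """Map a 3D point (viewpoint-centred coordinates, y up, z forward) to
    pixel coordinates in an equirectangular panorama of size width x height."""
    r = math.sqrt(x * x + y * y + z * z)
    lon = math.atan2(x, z)      # -pi .. pi, 0 straight ahead
    lat = math.asin(y / r)      # -pi/2 .. pi/2
    u = (lon / (2.0 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# A point 2 m ahead and 0.5 m to the right, in a 4096 x 2048 panorama.
print(point_to_equirect(0.5, 0.0, 2.0, 4096, 2048))
```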

  4. Image reconstruction and system modeling techniques for virtual-pinhole PET insert systems

    PubMed Central

    Keesing, Daniel B; Mathews, Aswin; Komarov, Sergey; Wu, Heyu; Song, Tae Yong; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2012-01-01

    Virtual-pinhole PET (VP-PET) imaging is a new technology in which one or more high-resolution detector modules are integrated into a conventional PET scanner with lower-resolution detectors. It can locally enhance the spatial resolution and contrast recovery near the add-on detectors, and depending on the configuration, may also increase the sensitivity of the system. This novel scanner geometry makes the reconstruction problem more challenging compared to the reconstruction of data from a standalone PET scanner, as new techniques are needed to model and account for the non-standard acquisition. In this paper, we present a general framework for fully 3D modeling of an arbitrary VP-PET insert system. The model components are incorporated into a statistical reconstruction algorithm to estimate an image from the multi-resolution data. For validation, we apply the proposed model and reconstruction approach to one of our custom-built VP-PET systems – a half-ring insert device integrated into a clinical PET/CT scanner. Details regarding the most important implementation issues are provided. We show that the proposed data model is consistent with the measured data, and that our approach can lead to reconstructions with improved spatial resolution and lesion detectability. PMID:22490983
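
    The record does not spell out the statistical reconstruction algorithm; a common maximum-likelihood choice for PET data is an MLEM-style update, sketched below with a small dense system matrix standing in for the VP-PET forward model. The dimensions and data are synthetic, and this is not the authors' implementation.

```python
import numpy as np

def mlem(system_matrix, measured_counts, n_iter=20):
    """Maximum-likelihood expectation-maximization reconstruction.

    system_matrix: (n_bins, n_voxels) detection probabilities (a stand-in for
                   the multi-resolution VP-PET forward model).
    measured_counts: (n_bins,) sinogram counts.
    """
    A = np.asarray(system_matrix, dtype=np.float64)
    y = np.asarray(measured_counts, dtype=np.float64)
    x = np.ones(A.shape[1])                  # flat initial image
    sens = A.sum(axis=0)                     # sensitivity image
    for _ in range(n_iter):
        expected = A @ x
        ratio = y / np.maximum(expected, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny synthetic problem: 6 detector bins, 4 voxels.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
true_image = np.array([1.0, 4.0, 2.0, 0.5])
counts = rng.poisson(A @ (true_image * 50))
print(mlem(A, counts))
```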

  5. Multiple interpretations of a pair of images of a surface

    NASA Astrophysics Data System (ADS)

    Longuet-Higgins, H. C.

    1988-07-01

    It is known that, if two optical images of a visually textured surface, projected from finitely separated viewpoints, allow more than one three-dimensional interpretation, then the surface must be part of a quadric passing through the two viewpoints. It is here shown that this quadric is either a plane or a ruled surface of a type first considered by Maybank (1985) in a study of ambiguous optic flow fields. In the latter case, three is the maximum number of distinct interpretations that the two images can sustain.

  6. Synthetic aperture radar/LANDSAT MSS image registration

    NASA Technical Reports Server (NTRS)

    Maurer, H. E. (Editor); Oberholtzer, J. D. (Editor); Anuta, P. E. (Editor)

    1979-01-01

    Algorithms and procedures necessary to merge aircraft synthetic aperture radar (SAR) and LANDSAT multispectral scanner (MSS) imagery were determined. The design of a SAR/LANDSAT data merging system was developed. Aircraft SAR images were registered to the corresponding LANDSAT MSS scenes and were the subject of experimental investigations. Results indicate that the registration of SAR imagery with LANDSAT MSS imagery is feasible from a technical viewpoint, and useful from an information-content viewpoint.

  7. PVOL: The Planetary Virtual Observatory & Laboratory. An online database of the Outer Planets images.

    NASA Astrophysics Data System (ADS)

    Morgado, A.; Sánchez-Lavega, A.; Rojas, J. F.; Hueso, R.

    2005-08-01

    The collaboration between amateur astronomers and the professional community has been fruitful in many areas of astronomy. The development of the Internet has enabled unprecedented worldwide sharing of information and access to other observers' data. For many years the International Jupiter Watch (IJW) Atmospheric discipline has coordinated observational efforts for long-term studies of the atmosphere of Jupiter. The International Outer Planets Watch (IOPW) has extended this work to the four Outer Planets. Here we present the Planetary Virtual Observatory & Laboratory (PVOL), a website database where we integrate IJW and IOPW images. At PVOL, observers can submit their data and professionals can search for images under a wide variety of useful criteria such as date and time, filters used, observer, or central meridian longitude. PVOL is intended to grow into an organized, easy-to-use database of amateur images of the Outer Planets. The PVOL web address is http://www.pvol.ehu.es/ and coexists with the traditional IOPW site: http://www.ehu.es/iopw/ Acknowledgements: This work has been funded by Spanish MCYT PNAYA2003-03216, fondos FEDER and Grupos UPV 15946/2004. R. Hueso acknowledges a post-doc fellowship from Gobierno Vasco.

  8. Virtual surgery in a (tele-)radiology framework.

    PubMed

    Glombitza, G; Evers, H; Hassfeld, S; Engelmann, U; Meinzer, H P

    1999-09-01

    This paper presents telemedicine as an extension of a teleradiology framework through tools for virtual surgery. To classify the described methods and applications, the research field of virtual reality (VR) is broadly reviewed. Differences with respect to technical equipment, methodological requirements and areas of application are pointed out. Desktop VR, augmented reality, and virtual reality are differentiated and discussed in some typical contexts of diagnostic support, surgical planning, therapeutic procedures, simulation and training. Visualization techniques are compared as a prerequisite for virtual reality and assigned to distinct levels of immersion. The advantage of a hybrid visualization kernel is emphasized with respect to the desktop VR applications that are subsequently shown. Moreover, software design aspects are considered by outlining functional openness in the architecture of the host system. Here, a teleradiology workstation was extended by dedicated tools for surgical planning through a plug-in mechanism. Examples of recent areas of application are introduced such as liver tumor resection planning, diagnostic support in heart surgery, and craniofacial surgery planning. In the future, surgical planning systems will become more important. They will benefit from improvements in image acquisition and communication, new image processing approaches, and techniques for data presentation. This will facilitate preoperative planning and intraoperative applications.

  9. Imaging, Virtual Planning, Design, and Production of Patient-Specific Implants and Clinical Validation in Craniomaxillofacial Surgery

    PubMed Central

    Dérand, Per; Rännar, Lars-Erik; Hirsch, Jan-M

    2012-01-01

    The purpose of this article was to describe the workflow from imaging, via virtual design, to manufacturing of patient-specific titanium reconstruction plates, cutting guide and mesh, and its utility in connection with surgical treatment of acquired bone defects in the mandible using additive manufacturing by electron beam melting (EBM). Based on computed tomography scans, polygon skulls were created. Following that virtual treatment plans entailing free microvascular transfer of fibula flaps using patient-specific reconstruction plates, mesh, and cutting guides were designed. The design was based on the specification of a Compact UniLOCK 2.4 Large (Synthes®, Switzerland). The obtained polygon plates were bent virtually round the reconstructed mandibles. Next, the resections of the mandibles were planned virtually. A cutting guide was outlined to facilitate resection, as well as plates and titanium mesh for insertion of bone or bone substitutes. Polygon plates and meshes were converted to stereolithography format and used in the software Magics for preparation of input files for the successive step, additive manufacturing. EBM was used to manufacture the customized implants in a biocompatible titanium grade, Ti6Al4V ELI. The implants and the cutting guide were cleaned and sterilized, then transferred to the operating theater, and applied during surgery. Commercially available software programs are sufficient in order to virtually plan for production of patient-specific implants. Furthermore, EBM-produced implants are fully usable under clinical conditions in reconstruction of acquired defects in the mandible. A good compliance between the treatment plan and the fit was demonstrated during operation. Within the constraints of this article, the authors describe a workflow for production of patient-specific implants, using EBM manufacturing. Titanium cutting guides, reconstruction plates for fixation of microvascular transfer of osteomyocutaneous bone grafts, and

  10. Imaging, virtual planning, design, and production of patient-specific implants and clinical validation in craniomaxillofacial surgery.

    PubMed

    Dérand, Per; Rännar, Lars-Erik; Hirsch, Jan-M

    2012-09-01

    The purpose of this article was to describe the workflow from imaging, via virtual design, to manufacturing of patient-specific titanium reconstruction plates, cutting guide and mesh, and its utility in connection with surgical treatment of acquired bone defects in the mandible using additive manufacturing by electron beam melting (EBM). Based on computed tomography scans, polygon skulls were created. Following that virtual treatment plans entailing free microvascular transfer of fibula flaps using patient-specific reconstruction plates, mesh, and cutting guides were designed. The design was based on the specification of a Compact UniLOCK 2.4 Large (Synthes(®), Switzerland). The obtained polygon plates were bent virtually round the reconstructed mandibles. Next, the resections of the mandibles were planned virtually. A cutting guide was outlined to facilitate resection, as well as plates and titanium mesh for insertion of bone or bone substitutes. Polygon plates and meshes were converted to stereolithography format and used in the software Magics for preparation of input files for the successive step, additive manufacturing. EBM was used to manufacture the customized implants in a biocompatible titanium grade, Ti6Al4V ELI. The implants and the cutting guide were cleaned and sterilized, then transferred to the operating theater, and applied during surgery. Commercially available software programs are sufficient in order to virtually plan for production of patient-specific implants. Furthermore, EBM-produced implants are fully usable under clinical conditions in reconstruction of acquired defects in the mandible. A good compliance between the treatment plan and the fit was demonstrated during operation. Within the constraints of this article, the authors describe a workflow for production of patient-specific implants, using EBM manufacturing. Titanium cutting guides, reconstruction plates for fixation of microvascular transfer of osteomyocutaneous bone grafts, and

  11. TuBaFrost 6: virtual microscopy in virtual tumour banking.

    PubMed

    Teodorovic, I; Isabelle, M; Carbone, A; Passioukov, A; Lejeune, S; Jaminé, D; Therasse, P; Gloghini, A; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; van Damme, B; van de Vijver, M; van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; Lopez-Guerrero, J A; Llombart Bosch, A; van Veen, E-B; Oosterhuis, J W; Riegman, P H J

    2006-12-01

    Many systems have already been designed and successfully used for sharing histology images over large distances without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated virtual microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and to compress and store the large images on disc, which can subsequently be consulted through the Internet. The images are stored on an image server, which can deliver simple, easy-to-transfer pictures to the user at a specified magnification and position in the scan. This offers new opportunities in histology review, overcoming the need of dynamic telepathology systems for compatible software and microscopes and, in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high quality monitor. A system of complete pathology review supporting bio-repositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
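
    The record describes an image server that returns a view at a requested position and magnification; a minimal sketch of the arithmetic such a server might perform is shown below. The parameter names and the 40x scan magnification are assumptions for illustration, not details of the TuBaFrost implementation.

```python
def tile_region(center_x, center_y, view_w, view_h, requested_mag, scan_mag=40):
    """Convert a requested view (centre in full-resolution scan pixels,
    viewport size in display pixels, desired magnification) into the region
    of the full-resolution scan to read and downscale."""
    scale = scan_mag / requested_mag        # e.g. a 40x scan shown at 10x -> 4
    region_w = int(view_w * scale)
    region_h = int(view_h * scale)
    x0 = int(center_x - region_w / 2)
    y0 = int(center_y - region_h / 2)
    return x0, y0, region_w, region_h

# A 1024x768 viewport centred on scan pixel (50000, 32000), shown at 10x.
print(tile_region(50000, 32000, 1024, 768, requested_mag=10))
```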

  12. Virtual Tour Environment of Cuba's National School of Art

    NASA Astrophysics Data System (ADS)

    Napolitano, R. K.; Douglas, I. P.; Garlock, M. E.; Glisic, B.

    2017-08-01

    Innovative technologies have enabled new opportunities for collecting, analyzing, and sharing information about cultural heritage sites. Through a combination of two of these technologies, spherical imaging and virtual tour environments, we preliminarily documented one of Cuba's National Schools of Art, the National Ballet School. The Ballet School is one of the five National Art Schools built in Havana, Cuba after the revolution. Due to changes in the political climate, construction was halted on the schools before completion. The Ballet School in particular was partially completed but never used for the intended purpose. Over the years, the surrounding vegetation and environment have started to overtake the buildings; damage such as missing bricks, corroded rebar, and broken tie bars can be seen. We created a virtual tour through the Ballet School which highlights key satellite classrooms and the main domed performance spaces. Scenes of the virtual tour were captured utilizing the Ricoh Theta S spherical imaging camera and processed with Kolor Panotour virtual environment software. Different forms of data can be included in this environment in order to provide a user with pertinent information. Image galleries, hyperlinks to websites, videos, PDFs, and links to databases can be embedded within the scene and interacted with by a user. By including this information within the virtual tour, a user can better understand how the site was constructed as well as the existing types of damage. The results of this work are recommendations for how a site can be preliminarily documented and information can be initially organized and shared.

  13. [The virtual university in medicine. Context, concepts, specifications, users' manual].

    PubMed

    Duvauferrier, R; Séka, L P; Rolland, Y; Rambeau, M; Le Beux, P; Morcet, N

    1998-09-01

    The widespread use of Web servers, with the emergence of interactive functions and the possibility of credit card payment via Internet, together with the requirement for continuing education and the subsequent need for a computer to link into the health care network have incited the development of a virtual university scheme on Internet. The Virtual University of Radiology is not only a computer-assisted teaching tool with a set of attractive features, but also a powerful engine allowing the organization, distribution and control of medical knowledge available in the www.server. The scheme provides patient access to general information, a secretary's office for enrollment and the Virtual University itself, with its library, image database, a forum for subspecialties and clinical case reports, an evaluation module and various guides and help tools for diagnosis, prescription and indexing. Currently the Virtual University of Radiology offers diagnostic imaging, but can also be used by other specialties and for general practice.

  14. Practical design and evaluation methods of omnidirectional vision sensors

    NASA Astrophysics Data System (ADS)

    Ohte, Akira; Tsuzuki, Osamu

    2012-01-01

    A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
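
    Converting the circular image produced by a curved-mirror sensor into a rectangular panorama is essentially a polar-to-Cartesian resampling; the minimal sketch below uses nearest-neighbour sampling and a simple linear radius mapping with assumed inner and outer radii. The mirror-specific elevation mapping used by the authors' software would replace that linear mapping.

```python
import numpy as np

def unwrap_circular(image, cx, cy, r_inner, r_outer, out_w, out_h):
    """Unwrap a donut-shaped omnidirectional image into an out_h x out_w
    panorama (nearest-neighbour sampling, linear radius-to-row mapping)."""
    pano = np.zeros((out_h, out_w) + image.shape[2:], dtype=image.dtype)
    for v in range(out_h):
        r = r_outer - (r_outer - r_inner) * v / (out_h - 1)
        for u in range(out_w):
            theta = 2.0 * np.pi * u / out_w
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= y < image.shape[0] and 0 <= x < image.shape[1]:
                pano[v, u] = image[y, x]
    return pano

# Synthetic 200x200 circular image unwrapped to a 360x60 panorama.
img = np.random.randint(0, 255, (200, 200), dtype=np.uint8)
print(unwrap_circular(img, 100, 100, 20, 95, 360, 60).shape)
```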

  15. View compensated compression of volume rendered images for remote visualization.

    PubMed

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling significant reduction to the complexity of a compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000 performs better than AVC, the state of the art video compression standard.

  16. Virtually numbed: immersive video gaming alters real-life experience.

    PubMed

    Weger, Ulrich W; Loughnan, Stephen

    2014-04-01

    As actors in a highly mechanized environment, we are citizens of a world populated not only by fellow humans, but also by virtual characters (avatars). Does immersive video gaming, during which the player takes on the mantle of an avatar, prompt people to adopt the coldness and rigidity associated with robotic behavior and desensitize them to real-life experience? In one study, we correlated participants' reported video-gaming behavior with their emotional rigidity (as indicated by the number of paperclips that they removed from ice-cold water). In a second experiment, we manipulated immersive and nonimmersive gaming behavior and then likewise measured the extent of the participants' emotional rigidity. Both studies yielded reliable effects, suggesting that immersion in a robotic viewpoint desensitizes people to real-life experiences in themselves and others.

  17. Augmented virtuality for arthroscopic knee surgery.

    PubMed

    Li, John M; Bardana, Davide D; Stewart, A James

    2011-01-01

    This paper describes a computer system to visualize the location and alignment of an arthroscope using augmented virtuality. A 3D computer model of the patient's joint (from CT) is shown, along with a model of the tracked arthroscopic probe and the projection of the camera image onto the virtual joint. A user study, using plastic bones instead of live patients, was made to determine the effectiveness of this navigated display; the study showed that the navigated display improves target localization in novice residents.

  18. Creating photorealistic virtual model with polarization-based vision system

    NASA Astrophysics Data System (ADS)

    Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi

    2005-08-01

    Recently, 3D models have come into use in many fields such as education, medical services, entertainment, art, and digital archiving, thanks to advances in computational power, and the demand for photorealistic virtual models is increasing. In the computer vision field, a number of techniques have been developed for creating virtual models by observing real objects. In this paper, we propose a method for creating a photorealistic virtual model using a laser range sensor and a polarization-based image capture system. We capture the range and color images of an object rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that takes surface reflection into account. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then the reflectance parameters of each reflection component are estimated separately. In separating the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffuse reflection. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
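
    The polarization-based separation is only summarized above; a common first-order approach, sketched below, takes the per-pixel minimum and maximum intensity over polarizer orientations, treating the oscillating part as the (polarized) specular component and twice the minimum as the (unpolarized) diffuse component. This simplification ignores partially polarized diffuse light and is an assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

def separate_reflection(polarizer_stack):
    """Separate diffuse and specular components from images taken through a
    linear polarizer at several well-spread orientations.

    Model: diffuse light is unpolarized (halved by the polarizer at every
    angle); specular light is polarized (oscillates with polarizer angle).
    polarizer_stack: array (n_angles, H, W) of linear intensities.
    """
    stack = np.asarray(polarizer_stack, dtype=np.float64)
    i_min = stack.min(axis=0)
    i_max = stack.max(axis=0)
    diffuse = 2.0 * i_min          # unpolarized part passes half at any angle
    specular = i_max - i_min       # amplitude of the polarized part
    return diffuse, specular

# Synthetic stack at 0/45/90/135 degrees (values invented for illustration).
rng = np.random.default_rng(1)
stack = rng.random((4, 8, 8))
diffuse, specular = separate_reflection(stack)
print(diffuse.shape, specular.shape)
```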

  19. Treatment response assessment of radiofrequency ablation for hepatocellular carcinoma: usefulness of virtual CT sonography with magnetic navigation.

    PubMed

    Minami, Yasunori; Kitai, Satoshi; Kudo, Masatoshi

    2012-03-01

    Virtual CT sonography using magnetic navigation provides cross-sectional images of CT volume data corresponding to the angle of the transducer in the magnetic field in real time. The purpose of this study was to clarify the value of virtual CT sonography for assessing the treatment response of radiofrequency ablation for hepatocellular carcinoma. Sixty-one patients with 88 HCCs measuring 0.5-1.3 cm (mean±SD, 1.0±0.3 cm) were treated by radiofrequency ablation. For early treatment response, dynamic CT was performed 1-5 days after ablation (median, 2 days). We compared early treatment response between axial CT images and multi-angle CT images using virtual CT sonography. Residual tumor stains on axial CT images and multi-angle CT images were detected in 11.4% (10/88) and 13.6% (12/88) of tumors after the first session of RFA, respectively (P=0.65). Two patients were diagnosed as showing hyperemic enhancement after the initial radiofrequency ablation on axial CT images and showed local tumor progression shortly thereafter because of unnoticed residual tumors. Only virtual CT sonography with magnetic navigation retrospectively showed the residual tumor as circular enhancement. In the safety margin analysis, 10 patients were excluded because of residual tumors. A safety margin of more than 5 mm was determined in 71.8% (56/78) of nodules on virtual CT sonographic images and 82.1% (64/78) on transverse CT images (P=0.13). The safety margin appeared to be overestimated on axial CT images in 8 nodules. Virtual CT sonography with magnetic navigation was useful in evaluating the treatment response of radiofrequency ablation therapy for hepatocellular carcinoma. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  20. A Virtual Reality Full Body Illusion Improves Body Image Disturbance in Anorexia Nervosa.

    PubMed

    Keizer, Anouk; van Elburg, Annemarie; Helms, Rossa; Dijkerman, H Chris

    2016-01-01

    Patients with anorexia nervosa (AN) have a persistent distorted experience of the size of their body. Previously we found that the Rubber Hand Illusion (RHI) improves hand size estimation in this group. Here we investigated whether a Full Body Illusion (FBI) affects size estimation of body parts more emotionally salient than the hand. In the FBI, analogous to the RHI, participants experience ownership over an entire virtual body in VR after synchronous visuo-tactile stimulation of the actual and virtual body. We asked participants to estimate their body size (shoulders, abdomen, hips) before the FBI was induced, directly after induction, and at ~2 hours 45 minutes follow-up. The results showed that AN patients (N = 30) decrease the overestimation of their shoulders, abdomen and hips directly after the FBI was induced. This effect was strongest for estimates of circumference, and was also observed in the asynchronous control condition of the illusion. Moreover, at follow-up, the improvements in body size estimation could still be observed in the AN group. Notably, the healthy control (HC) group (N = 29) also showed changes in body size estimation after the FBI, but the effect showed a different pattern than that of the AN group. The results lead us to conclude that the disturbed experience of body size in AN is flexible and can be changed, even for highly emotional body parts. As such, this study offers novel starting points from which new interventions for body image disturbance in AN can be developed.

  1. Transition of a dental histology course from light to virtual microscopy.

    PubMed

    Weaker, Frank J; Herbert, Damon C

    2009-10-01

    The transition of the dental histology course at the University of Texas Health Science Center at San Antonio Dental School was completed gradually over a five-year period. A pilot project was initially conducted to study the feasibility of integrating virtual microscopy into a traditional light microscopic lecture and laboratory course. Because of the difficulty of procuring quality calcified and decalcified sections of teeth, slides from the student loan collection in the oral histology block of the course were outsourced for conversion to digital images and placed on DVDs along with a slide viewer. The slide viewer mimicked the light microscope, allowing horizontal and vertical movement and changing of magnification, and, in addition, a feature to capture static images. In a survey, students rated the ease of use of the software, quality of the images, maneuverability of the images, and questions regarding use of the software, effective use of laboratory, and faculty time. Because of the positive support from the students, our entire student loan collection of 153 glass slides was subsequently converted to virtual images and distributed on an Apricorn pocket external hard drive. Students were asked to assess the virtual microscope over a four-year period. As a result of the surveys, light microscopes have been totally eliminated, and microscope exams have been replaced with project slide examinations. In the future, we plan to expand our virtual slides and incorporate computer testing.

  2. Teaching Sexuality from Divergent Life-Style Viewpoints.

    ERIC Educational Resources Information Center

    Moy, Caryl T.; Hotvedt, Mary

    A unique approach to teaching human sexuality at the college level is to present the content and raise sociological and interpersonal value questions from different lifestyle viewpoints. Developing a course such as this has involved securing approval and encouragement from university administration who trust faculty judgment but who are under…

  3. View synthesis using parallax invariance

    NASA Astrophysics Data System (ADS)

    Dornaika, Fadi

    2001-06-01

    View synthesis has become a focus of attention in both the computer vision and computer graphics communities. It consists of creating novel images of a scene as it would appear from novel viewpoints. View synthesis can be used in a wide variety of applications such as video compression, graphics generation, virtual reality and entertainment. This paper addresses the following problem: given a dense disparity map between two reference images, we would like to synthesize a novel view of the same scene associated with a novel viewpoint. Most of the existing work relies on building a set of 3D meshes which are then projected onto the new image (the rendering process is performed using texture mapping). The advantages of our view synthesis approach are as follows. First, the novel view is specified by a rotation and a translation, which are the most natural way to express the virtual location of the camera. Second, the approach is able to synthesize highly realistic images whose viewing positions are significantly far away from the reference viewpoints. Third, the approach is able to handle the visibility problem during the synthesis process. Our framework has two main steps. The first step (analysis step) consists of computing the homography at infinity, the epipoles, and thus the parallax field associated with the reference images. The second step (synthesis step) consists of warping the reference image into a new one, based on the invariance of the computed parallax field. The analysis step works directly on the reference views and only needs to be performed once. Examples of synthesizing novel views using either feature correspondences or a dense disparity map have demonstrated the feasibility of the proposed approach.
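
    The warping relation itself is not reproduced in this record; the toy sketch below illustrates plane-plus-parallax forward warping of the kind such a synthesis step might use, where the homography at infinity, the epipole, and the per-pixel parallax field are assumed to come from the analysis step. It is an illustrative sketch with a crude visibility rule, not the authors' implementation.

```python
import numpy as np

def forward_warp_plane_plus_parallax(ref_image, parallax, H_inf, epipole):
    """Forward-warp a reference view into a novel view using the
    plane-plus-parallax relation  p' ~ H_inf @ p + parallax(p) * epipole.

    ref_image: (H, W) grayscale reference view.
    parallax:  (H, W) per-pixel parallax values (from the analysis step).
    H_inf:     3x3 homography induced by the plane at infinity.
    epipole:   epipole of the novel view in homogeneous coordinates.
    Visibility is resolved crudely with a parallax z-buffer (larger wins).
    """
    h, w = ref_image.shape
    out = np.zeros_like(ref_image)
    zbuf = np.full((h, w), -np.inf)
    e = np.asarray(epipole, dtype=np.float64)
    for y in range(h):
        for x in range(w):
            q = H_inf @ np.array([x, y, 1.0]) + parallax[y, x] * e
            if q[2] <= 0:
                continue
            u, v = int(round(q[0] / q[2])), int(round(q[1] / q[2]))
            if 0 <= v < h and 0 <= u < w and parallax[y, x] > zbuf[v, u]:
                out[v, u] = ref_image[y, x]
                zbuf[v, u] = parallax[y, x]
    return out

# Toy example: identity H_inf, horizontal epipole, constant 3-pixel parallax.
img = np.arange(64, dtype=float).reshape(8, 8)
warped = forward_warp_plane_plus_parallax(
    img, np.full((8, 8), 3.0), np.eye(3), [1.0, 0.0, 0.0])
print(warped)
```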

  4. Imaging System

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation (SBIR) contracts to Ames Research Center. For example, under one contract Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and in aviation cockpit displays.

  5. [Clinical pathology on the verge of virtual microscopy].

    PubMed

    Tolonen, Teemu; Näpänkangas, Juha; Isola, Jorma

    2015-01-01

    For more than 100 years, examinations of pathology specimens have relied on the use of the light microscope. The technological progress of the last few years is enabling the digitizing of histologic specimen slides and application of the virtual microscope in diagnostics. Virtual microscopy will facilitate consultation possibilities, and digital image analysis serves to enhance the level of diagnostics. Organizing and monitoring clinicopathological meetings will become easier. Digital archive of histologic specimens and the virtual microscopy network are expected to benefit training and research as well, particularly what applies to the Finnish biobank network which is currently being established.

  6. An Efficient, Hierarchical Viewpoint Planning Strategy for Terrestrial Laser Scanner Networks

    NASA Astrophysics Data System (ADS)

    Jia, F.; Lichti, D. D.

    2018-05-01

    Terrestrial laser scanner (TLS) techniques have been widely adopted in a variety of applications. However, unlike in geodesy or photogrammetry, insufficient attention has been paid to optimal TLS network design. It is valuable to develop a complete design system that can automatically provide an optimal plan, especially for high-accuracy, large-volume scanning networks. To achieve this goal, one should look at the "optimality" of the solution as well as the computational complexity in reaching it. In this paper, a hierarchical TLS viewpoint planning strategy is developed to solve the optimal scanner placement problem. If the targeted object to be scanned is simplified as discretized wall segments, any possible viewpoint can be evaluated by a score table representing its visible segments under certain scanning-geometry constraints. Thus, the design goal is to find a minimum number of viewpoints that achieves complete coverage of all wall segments. Efficiency is improved by densifying viewpoints hierarchically, instead of a "brute force" search within the entire workspace. The experimental environments in this paper were simulated from two buildings located on the University of Calgary campus. Compared with the "brute force" strategy in terms of the quality of the solutions and the runtime, it is shown that the proposed strategy can provide a scanning network of comparable quality with more than a 70% time saving.
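
    The hierarchical densification itself is not reproduced here, but the underlying optimization, choosing a minimum set of viewpoints whose visible-segment score tables jointly cover all wall segments, is a set-cover problem; a minimal greedy sketch over hypothetical candidate viewpoints is shown below.

```python
def greedy_viewpoint_selection(candidates, all_segments):
    """Greedy set cover: repeatedly pick the candidate viewpoint that sees
    the largest number of not-yet-covered wall segments.

    candidates: dict mapping viewpoint id -> set of visible segment ids
                (the 'score table' after applying scanning-geometry limits).
    all_segments: set of segment ids that must be covered.
    """
    uncovered = set(all_segments)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda v: len(candidates[v] & uncovered))
        gain = candidates[best] & uncovered
        if not gain:          # remaining segments are not visible from anywhere
            break
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

# Hypothetical visibility tables for four candidate scanner positions.
visibility = {"V1": {1, 2, 3}, "V2": {3, 4}, "V3": {4, 5, 6}, "V4": {2, 6}}
print(greedy_viewpoint_selection(visibility, {1, 2, 3, 4, 5, 6}))
```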

  7. What are we missing? Advantages of more than one viewpoint to estimate fish assemblages using baited video

    PubMed Central

    Huveneers, Charlie; Fairweather, Peter G.

    2018-01-01

    Counting errors can bias assessments of species abundance and richness, which can affect assessments of stock structure, population structure and monitoring programmes. Many methods for studying ecology use fixed viewpoints (e.g. camera traps, underwater video), but there is little known about how this biases the data obtained. In the marine realm, most studies using baited underwater video, a common method for monitoring fish and nekton, have previously only assessed fishes using a single bait-facing viewpoint. To investigate the biases stemming from using fixed viewpoints, we added cameras to cover 360° views around the units. We found similar species richness for all observed viewpoints but the bait-facing viewpoint recorded the highest fish abundance. Sightings of infrequently seen and shy species increased with the additional cameras and the extra viewpoints allowed the abundance estimates of highly abundant schooling species to be up to 60% higher. We specifically recommend the use of additional cameras for studies focusing on shyer species or those particularly interested in increasing the sensitivity of the method by avoiding saturation in highly abundant species. Studies may also benefit from using additional cameras to focus observation on the downstream viewpoint. PMID:29892386

  8. Should Dental Schools Invest in Training Predoctoral Students for Academic Careers? Two Viewpoints: Viewpoint 1: Dental Schools Should Add Academic Careers Training to Their Predoctoral Curricula to Enhance Faculty Recruitment and Viewpoint 2: Addition of Academic Careers Training for All Predoctoral Students Would Be Inefficient and Ineffective.

    PubMed

    Fung, Brent; Fatahzadeh, Mahnaz; Kirkwood, Keith L; Hicks, Jeffery; Timmons, Sherry R

    2018-04-01

    This Point/Counterpoint considers whether providing dental students with academic career training and teaching experiences during their predoctoral education would be valuable to recruit dental academicians. While training the next generation of dentists continues to be the primary focus for dental schools, the cultivation and recruitment of dental faculty members from the pool of dental students remain challenges. Viewpoint 1 supports the position that providing dental students with exposure to academic career opportunities has positive value in recruiting new dental faculty. The advantages of academic careers training as a required educational experience in dental schools and as a potential means to recruit dental students into the ranks of faculty are described in this viewpoint. In contrast, Viewpoint 2 contends that such career exposure has limited value and argues that, across the board, allocation of resources to support preparation for academic careers would have a poor cost-benefit return on investment. Adding a requirement for educational experiences for all students would overburden institutions, students, and faculty according to this viewpoint. The authors agree that research is needed to determine how and where to make predoctoral curricular changes that will have maximum impact on academic recruitment.

  9. Newborns' Face Recognition over Changes in Viewpoint

    ERIC Educational Resources Information Center

    Turati, Chiara; Bulf, Hermann; Simion, Francesca

    2008-01-01

    The study investigated the origins of the ability to recognize faces despite rotations in depth. Four experiments are reported that tested, using the habituation technique, whether 1-to-3-day-old infants are able to recognize the invariant aspects of a face over changes in viewpoint. Newborns failed to recognize facial perceptual invariances…

  10. Virtual k-Space Modulation Optical Microscopy

    NASA Astrophysics Data System (ADS)

    Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Zheng, Guoan; Fang, Yue; Xu, Yingke; Liu, Xu; So, Peter T. C.

    2016-07-01

    We report a novel superresolution microscopy approach for imaging fluorescence samples. The reported approach, termed virtual k-space modulation optical microscopy (VIKMOM), is able to improve the lateral resolution by a factor of 2, reduce the background level, improve the optical sectioning effect and correct for unknown optical aberrations. In the acquisition process of VIKMOM, we used a scanning confocal microscope setup with a 2D detector array to capture sample information at each scanned x-y position. In the recovery process of VIKMOM, we first modulated the captured data by virtual k-space coding and then employed a ptychography-inspired procedure to recover the sample information and correct for unknown optical aberrations. We demonstrated the performance of the reported approach by imaging fluorescent beads, fixed bovine pulmonary artery endothelial (BPAE) cells, and living human astrocytes (HA). As the VIKMOM approach is fully compatible with conventional confocal microscope setups, it may provide a turn-key solution for imaging biological samples with ~100 nm lateral resolution, in two or three dimensions, with improved optical sectioning capabilities and aberration correction.

  11. Stapleton Microburst Advisory Service Project : An Operational Viewpoint.

    DOT National Transportation Integrated Search

    1985-09-01

    A microburst advisory service project was conducted at Stapleton Airport for a six week period during the summer of 1984. This report describes what took place during the project and what was learned from an operational, air traffic control viewpoint...

  12. Visual appearance of a virtual upper limb modulates the temperature of the real hand: a thermal imaging study in Immersive Virtual Reality.

    PubMed

    Tieri, Gaetano; Gioia, Annamaria; Scandola, Michele; Pavone, Enea F; Aglioti, Salvatore M

    2017-05-01

    To explore the link between Sense of Embodiment (SoE) over a virtual hand and physiological regulation of skin temperature, 24 healthy participants were immersed in virtual reality through a Head Mounted Display and had their real limb temperature recorded by means of a high-sensitivity infrared camera. Participants observed a virtual right upper limb (appearing either normally, or with the hand detached from the forearm) or limb-shaped non-corporeal control objects (continuous or discontinuous wooden blocks) from a first-person perspective. Subjective ratings of SoE were collected in each observation condition, as well as temperatures of the right and left hand, wrist and forearm. The observation of these complex, body and body-related virtual scenes resulted in increased real hand temperature when compared to a baseline condition in which a 3D virtual ball was presented. Crucially, observation of non-natural appearances of the virtual limb (discontinuous limb) and limb-shaped non-corporeal objects elicited a high increase in real hand temperature and low SoE. In contrast, observation of the full virtual limb caused high SoE and low temperature changes in the real hand with respect to the other conditions. Interestingly, the temperature difference across the different conditions occurred according to a topographic rule that included both hands. Our study sheds new light on the role of an external hand's visual appearance and suggests a tight link between higher-order bodily self-representations and topographic regulation of skin temperature. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  13. Viewpoints of Higher Education Teachers about Technologies

    ERIC Educational Resources Information Center

    Bouras, Adel; Albe, Virginie

    2008-01-01

    In the context of recent debates on technological literacy, a renewed research effort has focused on the nature of technologies. The aim of this work, which considers "epistemological knowledge" as viewpoints which spring into gear in a given situation, is to use questionnaires and interviews to identify the opinions of teachers in a training…

  14. Detection of Bone Marrow Edema in Nondisplaced Hip Fractures: Utility of a Virtual Noncalcium Dual-Energy CT Application.

    PubMed

    Kellock, Trenton T; Nicolaou, Savvas; Kim, Sandra S Y; Al-Busaidi, Sultan; Louis, Luck J; O'Connell, Tim W; Ouellette, Hugue A; McLaughlin, Patrick D

    2017-09-01

    Purpose To quantify the sensitivity and specificity of dual-energy computed tomographic (CT) virtual noncalcium images in the detection of nondisplaced hip fractures and to assess whether obtaining these images as a complement to bone reconstructions alters sensitivity, specificity, or diagnostic confidence. Materials and Methods The clinical research ethics board approved chart review, and the requirement to obtain informed consent was waived. The authors retrospectively identified 118 patients who presented to a level 1 trauma center emergency department and who underwent dual-energy CT for suspicion of a nondisplaced traumatic hip fracture. Clinical follow-up was the standard of reference. Three radiologists interpreted virtual noncalcium images for traumatic bone marrow edema. Bone reconstructions for the same cases were interpreted alone and then with virtual noncalcium images. Diagnostic confidence was rated on a scale of 1 to 10. McNemar, Fleiss κ, and Wilcoxon signed-rank tests were used for statistical analysis. Results Twenty-two patients had nondisplaced hip fractures and 96 did not have hip fractures. Sensitivity with virtual noncalcium images was 77% and 91% (17 and 20 of 22 patients), and specificity was 92%-99% (89-95 of 96 patients). Sensitivity increased by 4%-5% over that with bone reconstruction images alone for two of the three readers when both bone reconstruction and virtual noncalcium images were used. Specificity remained unchanged (99% and 100%). Diagnostic confidence in the exclusion of fracture was improved with combined bone reconstruction and virtual noncalcium images (median score: 10, 9, and 10 for readers 1, 2, and 3, respectively) compared with bone reconstruction images alone (median score: 9, 8, and 9). Conclusion When used as a supplement to standard bone reconstructions, dual-energy CT virtual noncalcium images increased sensitivity for the detection of nondisplaced traumatic hip fractures and improved diagnostic confidence in

  15. Virtual goods recommendations in virtual worlds.

    PubMed

    Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren

    2015-01-01

    Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With the rapidly growing user volume in VWs, platform providers launch virtual goods in haste and stampede users to increase sales revenue. However, this rapid development introduces unrelated virtual items that are difficult to remarket. It not only wastes virtual global companies' intelligence resources, but also makes it difficult for users to find suitable virtual goods fit for their virtual home in daily virtual life. In the VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth. Users establish their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. The contact strength and contact influence result from interactions with social neighbors and influence users' buying intention. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, considering social interactions and social life circle, has better performance than existing recommendation methods.

  16. Virtual Goods Recommendations in Virtual Worlds

    PubMed Central

    Chen, Kuan-Yu; Liao, Hsiu-Yu; Chen, Jyun-Hung; Liu, Duen-Ren

    2015-01-01

    Virtual worlds (VWs) are computer-simulated environments which allow users to create their own virtual character as an avatar. With the rapidly growing user volume in VWs, platform providers launch virtual goods in haste and stampede users to increase sales revenue. However, this rapid development introduces unrelated virtual items that are difficult to remarket. It not only wastes virtual global companies' intelligence resources, but also makes it difficult for users to find suitable virtual goods fit for their virtual home in daily virtual life. In the VWs, users decorate their houses, visit others' homes, create families, host parties, and so forth. Users establish their social life circles through these activities. This research proposes a novel virtual goods recommendation method based on these social interactions. The contact strength and contact influence result from interactions with social neighbors and influence users' buying intention. Our research highlights the importance of social interactions in virtual goods recommendation. The experiment's data were retrieved from an online VW platform, and the results show that the proposed method, considering social interactions and social life circle, has better performance than existing recommendation methods. PMID:25834837

  17. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    PubMed Central

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2013-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, clusters, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10 times performance improvement on an 8-core workstation over the original sequential implementation of the system. PMID:23366803
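
    A minimal sketch of the block-volume idea is given below; the block size, the per-block thresholding operation, and the use of Python's multiprocessing pool are assumptions chosen for illustration and do not reflect the platform's actual implementation.

        # Minimal sketch (assumption): split a 3D volume into blocks and process
        # the blocks in parallel, mimicking the distributable block-volume idea.
        import numpy as np
        from multiprocessing import Pool

        def split_into_blocks(volume, block=64):
            """Yield (origin, sub-volume) pairs that cover the whole volume."""
            for z in range(0, volume.shape[0], block):
                for y in range(0, volume.shape[1], block):
                    for x in range(0, volume.shape[2], block):
                        yield (z, y, x), volume[z:z+block, y:y+block, x:x+block]

        def process_block(item):
            """Example per-block operation (here: simple mean thresholding)."""
            origin, sub = item
            return origin, (sub > sub.mean()).astype(np.uint8)

        if __name__ == "__main__":
            volume = np.random.rand(128, 128, 128)
            out = np.zeros(volume.shape, dtype=np.uint8)
            with Pool() as pool:
                for (z, y, x), res in pool.map(process_block, split_into_blocks(volume)):
                    out[z:z+res.shape[0], y:y+res.shape[1], x:x+res.shape[2]] = res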

  18. Pictorial communication in virtual and real environments

    NASA Technical Reports Server (NTRS)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  19. Viewpoints of fertile women on gestational surrogacy in East Azerbaijan Province, Iran.

    PubMed

    Rahmani, Azad; Howard, Fuchsia; Sattarzadeh, Nilofar; Ferguson, Caleb; Asgari, Afsaneh; Ebrahimi, Hossein

    2014-01-01

    The aim of this descriptive, cross-sectional study was to investigate the viewpoint of fertile Iranian women on gestational surrogacy. A convenience sample of 230 fertile women was invited to participate in the study and 185 consented. Data were collected via a 22-item scale that assessed the viewpoints of the participants in five domains related to gestational surrogacy. The viewpoints reported by the women were positive. However, a significant percentage of them believed that commissioning couples are not the biological owners of the baby, religious barriers need to be overcome prior to legal barriers, children born through surrogacy may face emotional issues, and the adoption of children may be a better option than surrogacy. The negative views of the women on some key aspects make it clear that public education is needed to increase the acceptability of gestational surrogacy.

  20. Real-time, rapidly updating severe weather products for virtual globes

    NASA Astrophysics Data System (ADS)

    Smith, Travis M.; Lakshmanan, Valliappa

    2011-01-01

    It is critical that weather forecasters are able to put severe weather information from a variety of observational and modeling platforms into a geographic context so that warning information can be effectively conveyed to the public, emergency managers, and disaster response teams. The availability of standards for the specification and transport of virtual globe data products has made it possible to generate spatially precise, geo-referenced images and to distribute these centrally created products via a web server to a wide audience. In this paper, we describe the data and methods for enabling severe weather threat analysis information inside a KML framework. The method of creating severe weather diagnosis products and translating them to KML and image files is described. We illustrate some of the practical applications of these data when they are integrated into a virtual globe display. The availability of standards for interoperable virtual globe clients has not completely alleviated the need for custom solutions. We conclude by pointing out several of the limitations of the general-purpose virtual globe clients currently available.
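
    As an illustration of how a geo-referenced severe weather image can be wrapped in KML for a virtual globe client, the sketch below writes a standard KML GroundOverlay; the file names and the bounding box are hypothetical placeholders, and the product-generation pipeline itself is not shown.

        # Minimal sketch: wrap a geo-referenced product image in a KML GroundOverlay.
        # The image name and the lat/lon bounds are hypothetical placeholders.
        KML_TEMPLATE = """<?xml version="1.0" encoding="UTF-8"?>
        <kml xmlns="http://www.opengis.net/kml/2.2">
          <GroundOverlay>
            <name>{name}</name>
            <Icon><href>{image}</href></Icon>
            <LatLonBox>
              <north>{north}</north><south>{south}</south>
              <east>{east}</east><west>{west}</west>
            </LatLonBox>
          </GroundOverlay>
        </kml>
        """

        def write_overlay_kml(path, name, image, north, south, east, west):
            with open(path, "w") as f:
                f.write(KML_TEMPLATE.format(name=name, image=image, north=north,
                                            south=south, east=east, west=west))

        write_overlay_kml("hail_swath.kml", "Hail swath product", "hail_swath.png",
                          north=37.5, south=35.0, east=-96.0, west=-99.5)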

  1. First Person Perspective of Seated Participants Over a Walking Virtual Body Leads to Illusory Agency Over the Walking.

    PubMed

    Kokkinara, Elena; Kilteni, Konstantina; Blom, Kristopher J; Slater, Mel

    2016-07-01

    Agency, the attribution of authorship to an action of our body, requires the intention to carry out the action, and subsequently a match between its predicted and actual sensory consequences. However, illusory agency can be generated through priming of the action together with perception of bodily action, even when there has been no actual corresponding action. Here we show that participants can have the illusion of agency over the walking of a virtual body even though in reality they are seated and only allowed head movements. The experiment (n = 28) had two factors: Perspective (1PP or 3PP) and Head Sway (Sway or NoSway). Participants in 1PP saw a life-sized virtual body spatially coincident with their own from a first person perspective, or the virtual body from third person perspective (3PP). In the Sway condition the viewpoint included a walking animation, but not in NoSway. The results show strong illusions of body ownership, agency and walking, in the 1PP compared to the 3PP condition, and an enhanced level of arousal while the walking was up a virtual hill. Sway reduced the level of agency. We conclude with a discussion of the results in the light of current theories of agency.

  2. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    PubMed

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching application value of virtual reality technology in preoperative planning and intraoperative guidance for glioma located in the central sulcus region. Method: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. The neuro-imaging data, including CT, CTA, DSA, MRI and fMRI, were input to 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures on the virtual reality image were obtained. These images were applied to the operative approach design, operation process simulation, intraoperative auxiliary decision and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative virtual reality simulation. Preoperative 3D reconstruction virtual reality images improved the feasibility of operation planning and operation accuracy. This technology had not only shown the advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D reconstruction based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving the operation safety, increasing the total resection rate, and facilitating the teaching and training of the specialist physician.

  3. A Virtual Reality Full Body Illusion Improves Body Image Disturbance in Anorexia Nervosa

    PubMed Central

    Keizer, Anouk; van Elburg, Annemarie; Helms, Rossa; Dijkerman, H. Chris

    2016-01-01

    Background Patients with anorexia nervosa (AN) have a persistent distorted experience of the size of their body. Previously we found that the Rubber Hand Illusion improves hand size estimation in this group. Here we investigated whether a Full Body Illusion (FBI) affects body size estimation of body parts more emotionally salient than the hand. In the FBI, analogous to the RHI, participants experience ownership over an entire virtual body in VR after synchronous visuo-tactile stimulation of the actual and virtual body. Methods and Results We asked participants to estimate their body size (shoulders, abdomen, hips) before the FBI was induced, directly after induction and at ~2 hours 45 minutes follow-up. The results showed that AN patients (N = 30) decrease the overestimation of their shoulders, abdomen and hips directly after the FBI was induced. This effect was strongest for estimates of circumference, and also observed in the asynchronous control condition of the illusion. Moreover, at follow-up, the improvements in body size estimation could still be observed in the AN group. Notably, the healthy control (HC) group (N = 29) also showed changes in body size estimation after the FBI, but the effect showed a different pattern than that of the AN group. Conclusion The results lead us to conclude that the disturbed experience of body size in AN is flexible and can be changed, even for highly emotional body parts. As such this study offers novel starting points from which new interventions for body image disturbance in AN can be developed. PMID:27711234

  4. Virtual exertions: evoking the sense of exerting forces in virtual reality using gestures and muscle activity.

    PubMed

    Chen, Karen B; Ponto, Kevin; Tredinnick, Ross D; Radwin, Robert G

    2015-06-01

    This study was a proof of concept for virtual exertions, a novel method that involves the use of body tracking and electromyography for grasping and moving projections of objects in virtual reality (VR). The user views objects in his or her hands during rehearsed co-contractions of the same agonist-antagonist muscles normally used for the desired activities to suggest exerting forces. Unlike physical objects, virtual objects are images and lack mass. There is currently no practical physically demanding way to interact with virtual objects to simulate strenuous activities. Eleven participants grasped and lifted similar physical and virtual objects of various weights in an immersive 3-D Cave Automatic Virtual Environment. Muscle activity, localized muscle fatigue, ratings of perceived exertions, and NASA Task Load Index were measured. Additionally, the relationship between levels of immersion (2-D vs. 3-D) was studied. Although the overall magnitude of biceps activity and workload were greater in VR, muscle activity trends and fatigue patterns for varying weights within VR and physical conditions were the same. Perceived exertions for varying weights were not significantly different between VR and physical conditions. Perceived exertion levels and muscle activity patterns corresponded to the assigned virtual loads, which supported the hypothesis that the method evoked the perception of physical exertions and showed that the method was promising. Ultimately this approach may offer opportunities for research and training individuals to perform strenuous activities under potentially safer conditions that mimic situations while seeing their own body and hands relative to the scene. © 2014, Human Factors and Ergonomics Society.

  5. GAIA virtual observatory - development and practices

    NASA Astrophysics Data System (ADS)

    Syrjäsuo, Mikko; Marple, Steve

    2010-05-01

    The Global Auroral Imaging Access, or GAIA, is a virtual observatory providing quick access to summary data from satellite and ground-based instruments that remote sense auroral precipitation (http://gaia-vxo.org). This web-based service facilitates locating data relevant to particular events by simultaneously displaying summary images from various data sets around the world. At the moment, there are GAIA server nodes in Canada, Finland, Norway and the UK. The development is an international effort and the software and metadata are freely available. The GAIA system is based on a relational database which is queried by a dedicated software suite that also creates the graphical end-user interface if such is needed. Most commonly, the virtual observatory is used interactively by using a web browser: the user provides the date and the type of data of interest. As the summary data from multiple instruments are displayed simultaneously, the user can conveniently explore the recorded data. The virtual observatory provides essentially instant access to the images originating from all major auroral instrument networks including THEMIS, NORSTAR, GLORIA and MIRACLE. The scientific, educational and outreach use is limited by creativity rather than access. The first version of the GAIA was developed at the University of Calgary (Alberta, Canada) in 2004-2005. This proof-of-concept included mainly THEMIS and MIRACLE data, which comprised millions of summary plots and thumbnail images. However, it was soon realised that a complete re-design was necessary to increase flexibility. In the presentation, we will discuss the early history and motivation of GAIA as well as how the development continued towards the current version. The emphasis will be on practical problems and their solutions. Relevant design choices will also be highlighted.

  6. Motion parallax in immersive cylindrical display systems

    NASA Astrophysics Data System (ADS)

    Filliard, N.; Reymond, G.; Kemeny, A.; Berthoz, A.

    2012-03-01

    Motion parallax is a crucial visual cue produced by translations of the observer for the perception of depth and self-motion. Therefore, tracking the observer viewpoint has become inevitable in immersive virtual reality (VR) systems (cylindrical screens, CAVE, head mounted displays) used e.g. in automotive industry (style reviews, architecture design, ergonomics studies) or in scientific studies of visual perception. The perception of a stable and rigid world requires that this visual cue be coherent with other extra-retinal (e.g. vestibular, kinesthetic) cues signaling ego-motion. Although world stability is never questioned in the real world, rendering a head-coupled viewpoint in VR can lead to an illusory perception of unstable environments, unless a non-unity scale factor is applied on recorded head movements. Besides, cylindrical screens are usually used with static observers due to image distortions when rendering images for viewpoints different from a sweet spot. We developed a technique to compensate for these non-linear visual distortions in real time, in an industrial VR setup, based on a cylindrical screen projection system. Additionally, to evaluate the amount of discrepancies tolerated without perceptual distortions between visual and extraretinal cues, a "motion parallax gain" between the velocity of the observer's head and that of the virtual camera was introduced in this system. The influence of this artificial gain was measured on the gait stability of free-standing participants. Results indicate that, below unity, gains significantly alter postural control. Conversely, the influence of higher gains remains limited, suggesting a certain tolerance of observers to these conditions. Parallax gain amplification is therefore proposed as a possible solution to provide a wider exploration of space to users of immersive virtual reality systems.
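
    The "motion parallax gain" described above amounts to scaling the tracked head displacement before it drives the virtual camera. The sketch below shows that relationship; the coordinate conventions and function names are assumptions, not the authors' rendering code.

        # Sketch (assumption): apply a motion parallax gain to tracked head motion.
        # gain = 1.0 reproduces head motion exactly; gain < 1.0 attenuates parallax.
        import numpy as np

        def virtual_camera_position(head_pos, sweet_spot, gain=1.0):
            """Scale head displacement from the sweet spot by the parallax gain."""
            head_pos = np.asarray(head_pos, dtype=float)
            sweet_spot = np.asarray(sweet_spot, dtype=float)
            return sweet_spot + gain * (head_pos - sweet_spot)

        # Example: head moved 0.2 m to the right of the sweet spot, gain of 0.5.
        print(virtual_camera_position([0.2, 0.0, 0.0], [0.0, 0.0, 0.0], gain=0.5))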

  7. Virtual Labs and Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Boehler, Ted

    2006-12-01

    Virtual Labs and Virtual Worlds Coastline Community College has under development several virtual lab simulations and activities that range from biology, to language labs, to virtual discussion environments. Imagine a virtual world that students enter online, by logging onto their computer from home or anywhere they have web access. Upon entering this world they select a personalized identity represented by a digitized character (avatar) that can freely move about, interact with the environment, and communicate with other characters. In these virtual worlds, buildings, gathering places, conference rooms, labs, science rooms, and a variety of other “real world” elements are evident. When characters move about and encounter other people (players) they may freely communicate. They can examine things, manipulate objects, read signs, watch video clips, hear sounds, and jump to other locations. Goals of critical thinking, social interaction, peer collaboration, group support, and enhanced learning can be achieved in surprising new ways with this innovative approach to peer-to-peer communication in a virtual discussion world. In this presentation, short demos will be given of several online learning environments including a virtual biology lab, a marine science module, a Spanish lab, and a virtual discussion world. Coastline College has been a leader in the development of distance learning and media-based education for nearly 30 years and currently offers courses through PDA, Internet, DVD, CD-ROM, TV, and Videoconferencing technologies. Its distance learning program serves over 20,000 students every year. sponsor Jerry Meisner

  8. Virtual Worlds for Virtual Organizing

    NASA Astrophysics Data System (ADS)

    Rhoten, Diana; Lutters, Wayne

    The members and resources of a virtual organization are dispersed across time and space, yet they function as a coherent entity through the use of technologies, networks, and alliances. As virtual organizations proliferate and become increasingly important in society, many may exploit the technical architectures of virtual worlds, which are the confluence of computer-mediated communication, telepresence, and virtual reality originally created for gaming. A brief socio-technical history describes their early origins and the waves of progress followed by stasis that brought us to the current period of renewed enthusiasm. Examination of contemporary examples demonstrates how three genres of virtual worlds have enabled new arenas for virtual organizing: developer-defined closed worlds, user-modifiable quasi-open worlds, and user-generated open worlds. Among expected future trends are an increase in collaboration born virtually rather than imported from existing organizations, a tension between high-fidelity recreations of the physical world and hyper-stylized imaginations of fantasy worlds, and the growth of specialized worlds optimized for particular sectors, companies, or cultures.

  9. Role of post-mapping computed tomography in virtual-assisted lung mapping.

    PubMed

    Sato, Masaaki; Nagayama, Kazuhiro; Kuwano, Hideki; Nitadori, Jun-Ichi; Anraku, Masaki; Nakajima, Jun

    2017-02-01

    Background Virtual-assisted lung mapping is a novel bronchoscopic preoperative lung marking technique in which virtual bronchoscopy is used to predict the locations of multiple dye markings. Post-mapping computed tomography is performed to confirm the locations of the actual markings. This study aimed to examine the accuracy of marking locations predicted by virtual bronchoscopy and elucidate the role of post-mapping computed tomography. Methods Automated and manual virtual bronchoscopy was used to predict marking locations. After bronchoscopic dye marking under local anesthesia, computed tomography was performed to confirm the actual marking locations before surgery. Discrepancies between marking locations predicted by the different methods and the actual markings were examined on computed tomography images. Forty-three markings in 11 patients were analyzed. Results The average difference between the predicted and actual marking locations was 30 mm. There was no significant difference between the latest version of the automated virtual bronchoscopy system (30.7 ± 17.2 mm) and manual virtual bronchoscopy (29.8 ± 19.1 mm). The difference was significantly greater in the upper vs. lower lobes (37.1 ± 20.1 vs. 23.0 ± 6.8 mm, for automated virtual bronchoscopy; p < 0.01). Despite this discrepancy, all targeted lesions were successfully resected using 3-dimensional image guidance based on post-mapping computed tomography reflecting the actual marking locations. Conclusions Markings predicted by virtual bronchoscopy were dislocated from the actual markings by an average of 3 cm. However, surgery was accurately performed using post-mapping computed tomography guidance, demonstrating the indispensable role of post-mapping computed tomography in virtual-assisted lung mapping.

  10. Three-dimensional virtual bronchoscopy using a tablet computer to guide real-time transbronchial needle aspiration.

    PubMed

    Fiorelli, Alfonso; Raucci, Antonio; Cascone, Roberto; Reginelli, Alfonso; Di Natale, Davide; Santoriello, Carlo; Capuozzo, Antonio; Grassi, Roberto; Serra, Nicola; Polverino, Mario; Santini, Mario

    2017-04-01

    We proposed a new virtual bronchoscopy tool to improve the accuracy of traditional transbronchial needle aspiration for mediastinal staging. Chest-computed tomographic images (1 mm thickness) were reconstructed with Osirix software to produce a virtual bronchoscopic simulation. The target adenopathy was identified by measuring its distance from the carina on multiplanar reconstruction images. The static images were uploaded into iMovie software, which produced a virtual bronchoscopic movie from the images; the movie was then transferred to a tablet computer to provide real-time guidance during a biopsy. To test the validity of our tool, we retrospectively divided all consecutive patients undergoing transbronchial needle aspiration into two groups based on whether the biopsy was guided by virtual bronchoscopy (virtual bronchoscopy group) or not (traditional group). The intergroup diagnostic yields were statistically compared. Our analysis included 53 patients in the traditional and 53 in the virtual bronchoscopy group. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy for the traditional group were 66.6%, 100%, 100%, 10.53% and 67.92%, respectively, and for the virtual bronchoscopy group were 84.31%, 100%, 100%, 20% and 84.91%, respectively. The sensitivity (P = 0.011) and diagnostic accuracy (P = 0.011) of sampling the paratracheal station were better for the virtual bronchoscopy group than for the traditional group; no significant differences were found for the subcarinal lymph node. Our tool is simple, economical and available in all centres. It guided the needle insertion in real time, thereby improving the accuracy of traditional transbronchial needle aspiration, especially when target lesions are located in a difficult site like the paratracheal station. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  11. Application of Curved MPR Algorithm to High Resolution 3 Dimensional T2 Weighted CISS Images for Virtual Uncoiling of Membranous Cochlea as an Aid for Cochlear Morphometry.

    PubMed

    Kumar, Joish Upendra; Kavitha, Y

    2017-02-01

    With the use of various surgical techniques and types of implants, the preoperative assessment of cochlear dimensions is becoming increasingly relevant prior to cochlear implantation. High resolution CISS protocol MRI gives a better assessment of the membranous cochlea, cochlear nerve, and membranous labyrinth. The Curved Multiplanar Reconstruction (MPR) algorithm provides better images that can be used for measuring dimensions of the membranous cochlea. To ascertain the value of the curved multiplanar reconstruction algorithm in high resolution 3-Dimensional T2 Weighted Gradient Echo Constructive Interference Steady State (3D T2W GRE CISS) imaging for accurate morphometry of the membranous cochlea. Fourteen children underwent MRI for inner ear assessment. A high resolution 3D T2W GRE CISS sequence was used to obtain images of the cochlea. The curved MPR reconstruction algorithm was used to virtually uncoil the membranous cochlea on the volume images and cochlear measurements were done. Virtually uncoiled images of the membranous cochlea of appropriate resolution were obtained from the volume data of the high resolution 3D T2W GRE CISS images after applying the curved MPR reconstruction algorithm. The mean membranous cochlear length in the children was 27.52 mm. Maximum apical turn diameter of the membranous cochlea was 1.13 mm, mid turn diameter was 1.38 mm, basal turn diameter was 1.81 mm. The curved MPR reconstruction algorithm applied to CISS protocol images facilitates obtaining appropriate quality images of the membranous cochlea for accurate measurements.

  12. Virtual slides in peer reviewed, open access medical publication

    PubMed Central

    2011-01-01

    Background The application of virtual slides (VS), the digitalization of complete glass slides, to routine diagnostic surgical pathology and to issues related to tissue-based diagnosis, such as education and scientific publication, is in its infancy. Approach Electronic publication in Pathology offers new features of scientific communication in pathology that cannot be obtained by conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are a tool to increase the scientific value of microscopic images. Technology and Performance The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu and Leica.com) for digitalization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI index (digital object identifier). The first articles that include VS were published in March 2011. Results and Perspectives Several logistic constraints had to be overcome until the first articles including VS could be published. Step by step an automated acquisition and distribution system had to be implemented to the corresponding article. The acceptance of

  13. Exploring 4D Flow Data in an Immersive Virtual Environment

    NASA Astrophysics Data System (ADS)

    Stevens, A. H.; Butkiewicz, T.

    2017-12-01

    Ocean models help us to understand and predict a wide range of intricate physical processes which comprise the atmospheric and oceanic systems of the Earth. Because these models output an abundance of complex time-varying three-dimensional (i.e., 4D) data, effectively conveying the myriad information from a given model poses a significant visualization challenge. The majority of the research effort into this problem has concentrated around synthesizing and examining methods for representing the data itself; by comparison, relatively few studies have looked into the potential merits of various viewing conditions and virtual environments. We seek to improve our understanding of the benefits offered by current consumer-grade virtual reality (VR) systems through an immersive, interactive 4D flow visualization system. Our dataset is a Regional Ocean Modeling System (ROMS) model representing a 12-hour tidal cycle of the currents within New Hampshire's Great Bay estuary. The model data was loaded into a custom VR particle system application using the OpenVR software library and the HTC Vive hardware, which tracks a headset and two six-degree-of-freedom (6DOF) controllers within a 5m-by-5m area. The resulting visualization system allows the user to coexist in the same virtual space as the data, enabling rapid and intuitive analysis of the flow model through natural interactions with the dataset and within the virtual environment. Whereas a traditional computer screen typically requires the user to reposition a virtual camera in the scene to obtain the desired view of the data, in virtual reality the user can simply move their head to the desired viewpoint, completely eliminating the mental context switches from data exploration/analysis to view adjustment and back. The tracked controllers become tools to quickly manipulate (reposition, reorient, and rescale) the dataset and to interrogate it by, e.g., releasing dye particles into the flow field, probing scalar velocities
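
    Releasing dye particles into the flow field, as described above, amounts to advecting seeded particles through the model's velocity field. A minimal sketch of such advection is given below; the synthetic 2D swirl field and the forward-Euler time stepping are assumptions standing in for the ROMS currents and the application's actual integrator.

        # Minimal sketch (assumption): advect dye particles through a velocity field
        # with forward-Euler steps; an analytic swirl field stands in for ROMS output.
        import numpy as np

        def velocity(p):
            """Synthetic 2D swirl field (placeholder for model currents)."""
            x, y = p[:, 0], p[:, 1]
            return np.stack([-y, x], axis=1)

        def advect(particles, n_steps=100, dt=0.01):
            p = np.array(particles, dtype=float)
            for _ in range(n_steps):
                p = p + dt * velocity(p)   # forward-Euler update
            return p

        seeds = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
        print(advect(seeds))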

  14. Dual-Energy CT in Enhancing Subdural Effusions that Masquerade as Subdural Hematomas: Diagnosis with Virtual High-Monochromatic (190-keV) Images.

    PubMed

    Bodanapally, U K; Dreizin, D; Issa, G; Archer-Arroyo, K L; Sudini, K; Fleiter, T R

    2017-10-01

    Extravasation of iodinated contrast into subdural space following contrast-enhanced radiographic studies results in hyperdense subdural effusions, which can be mistaken as acute subdural hematomas on follow-up noncontrast head CTs. Our aim was to identify the factors associated with contrast-enhancing subdural effusion, characterize diffusion and washout kinetics of iodine in enhancing subdural effusion, and assess the utility of dual-energy CT in differentiating enhancing subdural effusion from subdural hematoma. We retrospectively analyzed follow-up head dual-energy CT studies in 423 patients with polytrauma who had undergone contrast-enhanced whole-body CT. Twenty-four patients with enhancing subdural effusion composed the study group, and 24 randomly selected patients with subdural hematoma were enrolled in the comparison group. Postprocessing with syngo.via was performed to determine the diffusion and washout kinetics of iodine. The sensitivity and specificity of dual-energy CT for the diagnosis of enhancing subdural effusion were determined with 120-kV, virtual monochromatic energy (190-keV) and virtual noncontrast images. Patients with enhancing subdural effusion were significantly older (mean, 69 years; 95% CI, 60-78 years; P < .001) and had a higher incidence of intracranial hemorrhage ( P = .001). Peak iodine concentration in enhancing subdural effusions was reached within the first 8 hours of contrast administration with a mean of 0.98 mg/mL (95% CI, 0.81-1.13 mg/mL), and complete washout was achieved at 38 hours. For the presence of a hyperdense subdural collection on 120-kV images with a loss of hyperattenuation on 190-keV and virtual noncontrast images, when considered as a true-positive for enhancing subdural effusion, the sensitivity was 100% (95% CI, 85.75%-100%) and the specificity was 91.67% (95% CI, 73%-99%). Dual-energy CT has a high sensitivity and specificity in differentiating enhancing subdural effusion from subdural hematoma. Hence, dual
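
    The reading criterion described above (hyperdense on 120-kV images, with loss of hyperattenuation on 190-keV virtual monochromatic and virtual noncontrast images) can be expressed as a simple decision rule; the attenuation threshold and function name below are illustrative assumptions, not values from the study.

        # Sketch (assumption): decision rule separating an enhancing subdural
        # effusion from a hematoma using attenuation (HU) in three reconstructions.
        def is_enhancing_effusion(hu_120kv, hu_190kev, hu_vnc, hyperdense_hu=50.0):
            """True if the collection is hyperdense at 120 kV but loses its
            hyperattenuation on 190-keV and virtual noncontrast images."""
            return (hu_120kv >= hyperdense_hu and
                    hu_190kev < hyperdense_hu and
                    hu_vnc < hyperdense_hu)

        print(is_enhancing_effusion(65.0, 30.0, 25.0))  # iodine-driven density -> True
        print(is_enhancing_effusion(70.0, 68.0, 66.0))  # density persists -> False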

  15. High spatial resolution imaging for structural health monitoring based on virtual time reversal

    NASA Astrophysics Data System (ADS)

    Cai, Jian; Shi, Lihua; Yuan, Shenfang; Shao, Zhixue

    2011-05-01

    Lamb waves are widely used in structural health monitoring (SHM) of plate-like structures. Due to the dispersion effect, Lamb wavepackets will be elongated and the resolution for damage identification will be strongly affected. This effect can be automatically compensated by the time reversal process (TRP). However, the time information of the compensated waves is also removed at the same time. To improve the spatial resolution of Lamb wave detection, virtual time reversal (VTR) is presented in this paper. In VTR, a changing-element excitation and reception mechanism (CERM) rather than the traditional fixed excitation and reception mechanism (FERM) is adopted for time information conservation. Furthermore, the complicated TRP procedure is replaced by simple signal operations which can make savings in the hardware cost for recording and generating the time-reversed Lamb waves. After the effects of VTR for dispersive damage scattered signals are theoretically analyzed, the realization of VTR involving the acquisition of the transfer functions of damage detecting paths under step pulse excitation is discussed. Then, a VTR-based imaging method is developed to improve the spatial resolution of the delay-and-sum imaging with a sparse piezoelectric (PZT) wafer array. Experimental validation indicates that the damage scattered wavepackets of A0 mode in an aluminum plate are partly recompressed and focalized with their time information preserved by VTR. Both the single damage and the dual adjacent damages in the plate can be clearly displayed with high spatial resolution by the proposed VTR-based imaging method.
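
    The delay-and-sum imaging into which the VTR-compensated signals are fed can be sketched as follows; the pair geometry, sampling rate, and group velocity are assumptions, and the sketch deliberately omits the dispersion compensation that VTR itself provides.

        # Minimal delay-and-sum sketch (assumption): sum path-delayed scattered
        # signal envelopes over an image grid for a sparse actuator-sensor array.
        import numpy as np

        def delay_and_sum(signals, pairs, fs, c, xs, ys):
            """signals[k]: scattered waveform of actuator-sensor pair k;
            pairs[k]: ((xa, ya), (xr, yr)) coordinates of that pair;
            fs: sampling rate [Hz]; c: group velocity [m/s]."""
            image = np.zeros((len(ys), len(xs)))
            for sig, ((xa, ya), (xr, yr)) in zip(signals, pairs):
                for i, y in enumerate(ys):
                    for j, x in enumerate(xs):
                        d = np.hypot(x - xa, y - ya) + np.hypot(x - xr, y - yr)
                        idx = int(round(d / c * fs))   # time-of-flight in samples
                        if idx < len(sig):
                            image[i, j] += abs(sig[idx])
            return image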

  16. An Accurate Framework for Arbitrary View Pedestrian Detection in Images

    NASA Astrophysics Data System (ADS)

    Fan, Y.; Wen, G.; Qiu, S.

    2018-01-01

    We consider the problem of detecting pedestrians in images collected under various viewpoints. This paper utilizes a novel framework called locality-constrained affine subspace coding (LASC). Firstly, the positive training samples are clustered into similar entities which represent similar viewpoints. Then Principal Component Analysis (PCA) is used to obtain the shared features of each viewpoint. Finally, the samples that can be reconstructed by linear approximation using their top-k nearest shared features with a small error are regarded as correct detections. No negative samples are required for our method. Histograms of oriented gradient (HOG) features are used as the feature descriptors, and the sliding window scheme is adopted to detect humans in images. The proposed method exploits the sparse property of intrinsic information and the correlations among the multiple-view samples. Experimental results on the INRIA and SDL human datasets show that the proposed method achieves a higher performance than the state-of-the-art methods in terms of effectiveness and efficiency.
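
    The decision step of the framework, reconstructing a candidate window from its nearest per-viewpoint subspaces and thresholding the residual, can be sketched as below; the clustering call, the random stand-in for HOG descriptors, and the acceptance threshold are assumptions chosen for illustration.

        # Sketch (assumption): per-cluster PCA subspaces; a sample is accepted when
        # at least one subspace reconstructs it with a small residual error.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        def fit_viewpoint_subspaces(features, n_clusters=4, n_components=8):
            """Cluster training samples by viewpoint and fit one PCA subspace each."""
            km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
            subspaces = []
            for c in range(n_clusters):
                members = features[km.labels_ == c]
                k = min(n_components, len(members), members.shape[1])
                subspaces.append(PCA(n_components=k).fit(members))
            return subspaces

        def is_pedestrian(x, subspaces, threshold=2.0):
            errors = [np.linalg.norm(x - p.inverse_transform(p.transform(x[None, :]))[0])
                      for p in subspaces]
            return min(errors) < threshold

        rng = np.random.default_rng(0)
        features = rng.random((200, 64))          # stand-in for HOG descriptors
        print(is_pedestrian(features[0], fit_viewpoint_subspaces(features)))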

  17. JAMSTEC E-library of Deep-sea Images (J-EDI) Realizes a Virtual Journey to the Earth's Unexplored Deep Ocean

    NASA Astrophysics Data System (ADS)

    Sasaki, T.; Azuma, S.; Matsuda, S.; Nagayama, A.; Ogido, M.; Saito, H.; Hanafusa, Y.

    2016-12-01

    The Japan Agency for Marine-Earth Science and Technology (JAMSTEC) archives a large amount of deep-sea research videos and photos obtained by JAMSTEC's research submersibles and vehicles with cameras. The web site "JAMSTEC E-library of Deep-sea Images : J-EDI" (http://www.godac.jamstec.go.jp/jedi/e/) has made videos and photos available to the public via the Internet since 2011. Users can search for target videos and photos by keywords, easy-to-understand icons, and dive information at J-EDI because operating staff classify videos and photos by content, e.g. living organisms and geological environment, and add comments to them. Dive survey data including videos and photos are not only valuable academically but also helpful for education and outreach activities. To improve visibility for broader communities, this year we added new functions for 3-dimensional display that synchronize various dive survey data with videos. New functions: Users can search for dive survey data by 3D maps with plotted dive points using the WebGL virtual map engine "Cesium". By selecting a dive point, users can watch deep-sea videos and photos and associated environmental data, e.g. water temperature, salinity, rock and biological sample photos, obtained by the dive survey. Users can browse a dive track visualized in 3D virtual spaces using the WebGL JavaScript library. By synchronizing this virtual dive track with videos, users can watch deep-sea videos recorded at a point on a dive track. Users can play an animation in which a submersible-shaped polygon automatically traces a 3D virtual dive track while the displays of dive survey data are synchronized with the trace. Users can directly refer to additional information of other JAMSTEC data sites, such as the marine biodiversity database, marine biological sample database, rock sample database, and cruise and dive information database, on each page on which a 3D virtual dive track is displayed. A 3D visualization of a dive

  18. Virtual Laboratories and Virtual Worlds

    NASA Astrophysics Data System (ADS)

    Hut, Piet

    2008-05-01

    Since we cannot put stars in a laboratory, astrophysicists had to wait till the invention of computers before becoming laboratory scientists. For half a century now, we have been conducting experiments in our virtual laboratories. However, we ourselves have remained behind the keyboard, with the screen of the monitor separating us from the world we are simulating. Recently, 3D on-line technology, developed first for games but now deployed in virtual worlds like Second Life, is beginning to make it possible for astrophysicists to enter their virtual labs themselves, in virtual form as avatars. This has several advantages, from new possibilities to explore the results of the simulations to a shared presence in a virtual lab with remote collaborators on different continents. I will report my experiences with the use of Qwaq Forums, a virtual world developed by a new company (see http://www.qwaq.com).

  19. Integral imaging with multiple image planes using a uniaxial crystal plate.

    PubMed

    Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho

    2003-08-11

    Integral imaging has been attracting much attention recently for its several advantages such as full parallax, continuous viewpoints, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of the image resolution. In this paper, we propose a method to provide observers with enhanced perception of depth without severe resolution degradation by the use of the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and the dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.

  20. Teacher Viewpoints of Instructional Design Principles for Visuals in a Middle School Math Curriculum

    ERIC Educational Resources Information Center

    Clinton, Virginia; Cooper, Jennifer L.

    2015-01-01

    Instructional design principles for visuals in student materials have been developed through findings based on student-level measures. However, teacher viewpoints may be a rich source of information to better understand how visuals can be optimized for student learning. This study's purpose is to examine teacher viewpoints on visuals. In a…

  1. The geometrical precision of virtual bone models derived from clinical computed tomography data for forensic anthropology.

    PubMed

    Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J

    2017-07-01

    Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
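
    A simplified version of the reported precision metric, the fraction of corresponding mesh points whose point-to-point deviation stays within 2 mm, can be computed with a short sketch; the random reference mesh, the added noise, and the assumption of matched point ordering are purely illustrative.

        # Sketch (assumption): fraction of corresponding mesh points whose deviation
        # from a reference model stays within a tolerance (here 2 mm).
        import numpy as np

        def fraction_within(reference_points, test_points, tol_mm=2.0):
            d = np.linalg.norm(test_points - reference_points, axis=1)
            return np.mean(d <= tol_mm)

        rng = np.random.default_rng(0)
        ref = rng.random((1000, 3)) * 100.0               # hypothetical mesh (mm)
        test = ref + rng.normal(0.0, 0.5, ref.shape)      # a re-segmented variant
        print(f"{fraction_within(ref, test) * 100:.1f}% of points within 2 mm")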

  2. TU-EF-204-12: Quantitative Evaluation of Spectral Detector CT Using Virtual Monochromatic Images: Initial Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, X; Guild, J; Arbique, G

    2015-06-15

    Purpose To evaluate the image quality and spectral information of a spectral detector CT (SDCT) scanner using virtual monochromatic (VM) energy images. Methods The SDCT scanner (Philips Healthcare) was equipped with a dual-layer detector and spectral iterative reconstruction (IR), which generates conventional 80–140 kV polychromatic energy (PE) CT images using both detector layers, PE images from the low-energy (upper) and high-energy (lower) detector layers and VM images. A solid water phantom with iodine (2.0–20.0 mg I/ml) and calcium (50.0–600.0 mg Ca/ml) rod inserts was used to evaluate effective energy estimate (EEE) and iodine contrast to noise ratio (CNR). The EEE corresponding to an insert CT number in a PE image was calculated from a CT number fit to the VM image set. Since the PE image is prone to beam-hardening artifacts, the EEE may underestimate the actual energy separation between the two detector layers. A 30-cm-diameter water phantom was used to evaluate noise power spectrum (NPS). The phantoms were scanned at 120 and 140 kV with the same CTDIvol. Results The CT number difference for contrast inserts in VM images (50–150 keV) was 1.3±6% between 120 and 140 kV scans. The difference of EEE calculated from low- and high-energy detector images was 11.5 and 16.7 keV for 120 and 140 kV scans, respectively. The differences calculated from 140 and 100 kV conventional PE images were 12.8 keV, and 20.1 keV from 140 and 80 kV conventional PE images. The iodine CNR increased monotonically with decreased keV. Compared to conventional PE images, the peaks of NPS curves from VM images were shifted to lower frequency. Conclusion The EEE results indicate that SDCT at 120 and 140 kV may have energy separation comparable to 100/140 kV and 80/140 kV dual-kV imaging. The effects of IR on CNR and NPS require further investigation for SDCT. Authors YY and AD are Philips Healthcare employees.
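
    The effective energy estimate (EEE) described above, i.e., the virtual monochromatic energy whose CT number matches an insert's CT number in the polychromatic image, can be sketched as a simple inversion of the CT-number-versus-keV curve; the curve values below are invented for illustration and are not measurements from the study.

        # Sketch (assumption): estimate the effective energy by finding the virtual
        # monochromatic (VM) energy whose CT number matches the polychromatic value.
        import numpy as np

        def effective_energy(pe_hu, vm_energies_kev, vm_hu):
            """Interpolate the VM CT-number curve at the PE CT number; for an iodine
            insert the CT number decreases monotonically with energy."""
            order = np.argsort(vm_hu)                     # np.interp needs ascending x
            return np.interp(pe_hu, np.asarray(vm_hu)[order],
                             np.asarray(vm_energies_kev)[order])

        # Hypothetical iodine-insert curve: CT number falls as keV rises.
        energies = [50, 70, 90, 110, 130, 150]
        hu_curve = [420.0, 220.0, 140.0, 100.0, 80.0, 68.0]
        print(effective_energy(180.0, energies, hu_curve))   # ~80 keV for this curve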

  3. Measuring sensitivity to viewpoint change with and without stereoscopic cues.

    PubMed

    Bell, Jason; Dickinson, Edwin; Badcock, David R; Kingdom, Frederick A A

    2013-12-04

    The speed and accuracy of object recognition is compromised by a change in viewpoint; demonstrating that human observers are sensitive to this transformation. Here we discuss a novel method for simulating the appearance of an object that has undergone a rotation-in-depth, and include an exposition of the differences between perspective and orthographic projections. Next we describe a method by which human sensitivity to rotation-in-depth can be measured. Finally we discuss an apparatus for creating a vivid percept of a 3-dimensional rotation-in-depth; the Wheatstone Eight Mirror Stereoscope. By doing so, we reveal a means by which to evaluate the role of stereoscopic cues in the discrimination of viewpoint rotated shapes and objects.
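
    The difference between perspective and orthographic projections of a rotated-in-depth object can be made concrete with a small sketch; the square of 3D points, the rotation axis, and the viewing distance are illustrative assumptions.

        # Sketch: rotate 3D points about the vertical axis (a rotation-in-depth) and
        # project them with either an orthographic or a perspective projection.
        import numpy as np

        def rotate_y(points, angle_deg):
            a = np.radians(angle_deg)
            R = np.array([[np.cos(a), 0.0, np.sin(a)],
                          [0.0, 1.0, 0.0],
                          [-np.sin(a), 0.0, np.cos(a)]])
            return points @ R.T

        def project(points, mode="perspective", viewing_distance=5.0):
            if mode == "orthographic":
                return points[:, :2]                        # simply drop depth
            z = viewing_distance - points[:, 2]             # distance from the eye
            return points[:, :2] * (viewing_distance / z)[:, None]

        square = np.array([[-1.0, -1.0, 0.0], [1.0, -1.0, 0.0],
                           [1.0, 1.0, 0.0], [-1.0, 1.0, 0.0]])
        rotated = rotate_y(square, 30.0)
        print(project(rotated, "orthographic"))
        print(project(rotated, "perspective"))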

  4. The Virtual Pelvic Floor, a tele-immersive educational environment.

    PubMed Central

    Pearl, R. K.; Evenhouse, R.; Rasmussen, M.; Dech, F.; Silverstein, J. C.; Prokasy, S.; Panko, W. B.

    1999-01-01

    This paper describes the development of the Virtual Pelvic Floor, a new method of teaching the complex anatomy of the pelvic region utilizing virtual reality and advanced networking technology. Virtual reality technology allows improved visualization of three-dimensional structures over conventional media because it supports stereo vision, viewer-centered perspective, large angles of view, and interactivity. Two or more ImmersaDesk systems, drafting table format virtual reality displays, are networked together providing an environment where teacher and students share a high quality three-dimensional anatomical model, and are able to converse, see each other, and to point in three dimensions to indicate areas of interest. This project was realized by the teamwork of surgeons, medical artists and sculptors, computer scientists, and computer visualization experts. It demonstrates the future of virtual reality for surgical education and applications for the Next Generation Internet. PMID:10566378

  5. Transforming Clinical Imaging Data for Virtual Reality Learning Objects

    ERIC Educational Resources Information Center

    Trelease, Robert B.; Rosset, Antoine

    2008-01-01

    Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…

  6. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as the multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share the common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
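
    The scalability analysis mentioned above rests on Amdahl's law, speedup(N) = 1 / ((1 - p) + p / N), where p is the parallelizable fraction of the work and N the number of cores; the sketch below evaluates it, with p = 0.95 chosen purely for illustration rather than taken from the paper.

        # Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N), where p is the fraction
        # of the work that parallelizes and N is the number of cores.
        def amdahl_speedup(parallel_fraction, n_cores):
            return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_cores)

        # Illustration only: with p = 0.95, 12 cores give roughly a 7.7x speedup, and
        # the speedup can never exceed 1 / (1 - p) = 20x however many cores are added.
        for n in (1, 4, 12, 48):
            print(n, round(amdahl_speedup(0.95, n), 2))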

  7. Effect of familiarity and viewpoint on face recognition in chimpanzees

    PubMed Central

    Parr, Lisa A; Siebert, Erin; Taubert, Jessica

    2012-01-01

    Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influences of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, as in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions. PMID:22128558

  8. Can we use virtual reality tools in the planning of an experiment?

    NASA Astrophysics Data System (ADS)

    Kucaba-Pietal, Anna; Szumski, Marek; Szczerba, Piotr

    2015-03-01

    Virtual reality (VR) has proved to be a particularly useful tool in engineering and design. A related area of aviation in which VR is particularly significant is flight training, as it requires many hours of practice, and using real aircraft for all training is both expensive and dangerous. Research conducted at the Rzeszow University of Technology (RUT) showed that virtual reality can also be used successfully for planning experiments during flight tests. The motivation for the study was the measurement of wing deformation of a PW-6 glider in flight using the Image Pattern Correlation Technique (IPCT), planned within the framework of the AIM2 project. The VirlIPCT tool was constructed, which allows a virtual IPCT setup to be arranged on an airplane. Using it, we can test camera positions, camera resolution, and pattern application. Moreover, tests performed at RUT indicate that VirlIPCT can be used as a virtual IPCT image generator. This paper presents the results of the research on VirlIPCT.

  9. Production of the next-generation library virtual tour.

    PubMed

    Duncan, J M; Roth, L K

    2001-10-01

    While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree "virtual reality" views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project.

  10. LHCb experience with running jobs in virtual machines

    NASA Astrophysics Data System (ADS)

    McNab, A.; Stagni, F.; Luzzi, C.

    2015-12-01

    The LHCb experiment has been running production jobs in virtual machines since 2013 as part of its DIRAC-based infrastructure. We describe the architecture of these virtual machines and the steps taken to replicate the WLCG worker node environment expected by user and production jobs. This relies on the uCernVM system for providing root images for virtual machines. We use the CernVM-FS distributed filesystem to supply the root partition files, the LHCb software stack, and the bootstrapping scripts necessary to configure the virtual machines for us. Using this approach, we have been able to minimise the amount of contextualisation which must be provided by the virtual machine managers. We explain the process by which the virtual machine is able to receive payload jobs submitted to DIRAC by users and production managers, and how this differs from payloads executed within conventional DIRAC pilot jobs on batch queue based sites. We describe our operational experiences in running production on VM based sites managed using Vcycle/OpenStack, Vac, and HTCondor Vacuum. Finally we show how our use of these resources is monitored using Ganglia and DIRAC.
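
    As a rough illustration of the payload-retrieval pattern described above, the sketch below shows a generic pilot-style loop that could run inside such a virtual machine; all function names (fetch_matched_job, run_payload, report_status) are hypothetical placeholders and do not correspond to the DIRAC, uCernVM, or Vcycle APIs.

```python
import time

def vm_payload_loop(fetch_matched_job, run_payload, report_status,
                    idle_sleep_s=60, max_idle_cycles=10):
    """Generic pilot-style loop inside a worker-node VM: ask a central
    task queue for a job matching this resource, run it, report the
    outcome, and stop after a period with no matching work."""
    idle_cycles = 0
    while idle_cycles < max_idle_cycles:
        job = fetch_matched_job()          # hypothetical: query the task queue
        if job is None:
            idle_cycles += 1
            time.sleep(idle_sleep_s)       # no matching work right now
            continue
        idle_cycles = 0
        result = run_payload(job)          # hypothetical: execute the user/production job
        report_status(job, result)         # hypothetical: report back to the workload system
```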

  11. Virtual source reflection imaging of the Socorro Magma Body, New Mexico, using a dense seismic array

    NASA Astrophysics Data System (ADS)

    Finlay, T. S.; Worthington, L. L.; Schmandt, B.; Hansen, S. M.; Bilek, S. L.; Aster, R. C.; Ranasinghe, N. R.

    2017-12-01

    The Socorro Magma Body (SMB) is one of the largest known actively inflating continental magmatic intrusions. Previous studies have relied on sparse instrument coverage to determine its spatial extent, depth, and seismic signature, and characterized the body as a thin sill with its upper surface at 19 km below the Earth's surface. However, over the last two decades, InSAR and magneto-telluric (MT) studies have shed new light on the SMB and invigorated the scientific debate over its spatial distribution and uplift rate. We return to seismic imaging of the SMB with the Sevilleta Array, a 12-day deployment of approximately 800 vertical-component, 10-Hz geophones north of Socorro, New Mexico, above and around the estimated northern half of the SMB. Teleseismic virtual source reflection profiling (TVR) employs the free-surface reflection of a teleseismic P wave as a virtual source in dense arrays, and has been used successfully to image basin structure and the Moho in multiple tectonic environments. The Sevilleta Array recorded 62 teleseismic events greater than M5. Applying TVR to the data collected by the Sevilleta Array, we present stacks from four events with high signal-to-noise ratios and simple source-time functions: the February 11, 2015 M6.7 in northern Argentina, the February 19, 2015 M5.4 in Kamchatka, Russia, and the February 21, 2015 M5.1 and February 22, 2015 M5.5 in western Colombia. Preliminary results suggest eastward-dipping reflectors at approximately 5 km depth near the Sierra Ladrones range in the northwestern corner of the array. Further analysis will focus on creating profiles across the area of maximum SMB uplift and constraining basin geometry.
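
    The abstract does not spell out the processing chain, but the core of virtual-source reflection profiling is removing the source wavelet from each trace so that the free-surface reverberation (the PpPdp phase) stands out, and then stacking over events. A minimal numpy sketch of that idea, under those assumptions and not the authors' actual workflow, might look like this.

```python
import numpy as np

def waterlevel_deconvolve(trace, wavelet, water=1e-2):
    """Frequency-domain water-level deconvolution of an estimated source
    wavelet (e.g. the direct teleseismic P) from a recorded trace, so that
    later free-surface reverberations such as PpPdp become visible."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = (W * np.conj(W)).real
    power = np.maximum(power, water * power.max())   # water-level stabilization
    return np.fft.irfft(T * np.conj(W) / power, n)

def virtual_source_section(traces, wavelets):
    """traces: (n_events, n_receivers, n_samples); wavelets: (n_events, n_samples).
    Returns one deconvolved, event-stacked trace per receiver."""
    sections = np.array([[waterlevel_deconvolve(tr, wavelets[e]) for tr in ev]
                         for e, ev in enumerate(traces)])
    return sections.mean(axis=0)
```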

  12. Virtual Sonography Through the Internet: Volume Compression Issues

    PubMed Central

    Vilarchao-Cavia, Joseba; Troyano-Luque, Juan-Mario; Clavijo, Matilde

    2001-01-01

    Background Three-dimensional ultrasound images allow virtual sonography even at a distance. However, the size of the final 3-D files limits their transmission through slow networks such as the Internet. Objective To analyze compression techniques that transform ultrasound images into small 3-D volumes that can be transmitted through the Internet without loss of relevant medical information. Methods Samples were selected from ultrasound examinations performed during 1999-2000 in the Obstetrics and Gynecology Department at the University Hospital in La Laguna, Canary Islands, Spain. The conventional ultrasound video output was recorded at 25 fps (frames per second) on a PC, producing 100- to 120-MB files (for 500 to 550 frames). Processing to obtain 3-D images progressively reduced file size. Results The original frames passed through different compression stages: selecting the region of interest, rendering techniques, and compression for storage. Final 3-D volumes reached 1:25 compression rates (1.5- to 2-MB files). Those volumes need 7 to 8 minutes to be transmitted through the Internet at a mean data throughput of 6.6 Kbytes per second. At the receiving site, virtual sonography is possible using orthogonal projections or oblique cuts. Conclusions Modern volume-rendering techniques allowed distant virtual sonography through the Internet. This is the result of their efficient data compression, which maintains its attractiveness as a main criterion for distant diagnosis. PMID:11720963

  13. Mental rotation versus invariant features in object perception from different viewpoints: an fMRI study.

    PubMed

    Vanrie, Jan; Béatse, Erik; Wagemans, Johan; Sunaert, Stefan; Van Hecke, Paul

    2002-01-01

    It has been proposed that object perception can proceed through different routes, which can be situated on a continuum ranging from complete viewpoint-dependency to complete viewpoint-independency, depending on the objects and the task at hand. Although these different routes have been extensively demonstrated on the behavioral level, the corresponding distinction in the underlying neural substrate has not received the same attention. Our goal was to disentangle, on the behavioral and the neurofunctional level, a process associated with extreme viewpoint-dependency, i.e. mental rotation, and a process associated with extreme viewpoint-independency, i.e. the use of viewpoint-invariant, diagnostic features. Two sets of 3-D block figures were created that either differed in handedness (original versus mirrored) or in the angles joining the block components (orthogonal versus skewed). Behavioral measures on a same-different judgment task were predicted to be dependent on viewpoint in the rotation condition (same versus mirrored), but not in the invariance condition (same angles versus different angles). Six subjects participated in an fMRI experiment while presented with both conditions in alternating blocks. Both reaction times and accuracy confirmed the predicted dissociation between the two conditions. Neurofunctional results indicate that all cortical areas activated in the invariance condition were also activated in the rotation condition. Parietal areas were more activated than occipito-temporal areas in the rotation condition, while this pattern was reversed in the invariance condition. Furthermore, some areas were activated uniquely by the rotation condition, probably reflecting the additional processes apparent in the behavioral response patterns.

  14. The virtual mirror: a new interaction paradigm for augmented reality environments.

    PubMed

    Bichlmeier, Christoph; Heining, Sandro Michael; Feuerstein, Marco; Navab, Nassir

    2009-09-01

    Medical augmented reality (AR) has been widely discussed within the medical imaging as well as computer aided surgery communities. Different systems for exemplary medical applications have been proposed. Some of them produced promising results. One major issue still hindering AR technology from being regularly used in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance keyboard and mouse, are not adequate for interacting with visualized medical 3-D imaging data in an AR environment. This paper introduces the concept of a tangible/controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer aided interventions.

  15. Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.

    PubMed

    Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem

    2018-01-01

    Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and visualizing bone drilling ("virtual drilling"). To benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 inexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors as compared with real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. Feedback methods functioned properly in the clinical setting. Distance control reduced the risks of drill damage proportional to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible. Copyright © 2017 Elsevier Inc. All rights reserved.
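
    The distance-control mechanism amounts to continuously comparing the tracked drill-tip position against a protective perimeter around segmented critical structures. A minimal sketch of such a check (an illustrative assumption, not the authors' implementation) is shown below.

```python
import numpy as np
from scipy.spatial import cKDTree

class DistanceControl:
    """Warn when the tracked drill tip enters the protective perimeter
    around a segmented critical structure (e.g. the sigmoid sinus)."""

    def __init__(self, structure_points_mm, perimeter_mm):
        self.tree = cKDTree(np.asarray(structure_points_mm))   # Nx3 surface points
        self.perimeter_mm = perimeter_mm                        # e.g. 1, 2 or 3 mm

    def check(self, drill_tip_mm):
        distance, _ = self.tree.query(drill_tip_mm)             # nearest structure point
        return distance <= self.perimeter_mm, distance          # (warn?, distance in mm)
```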

  16. Virtual Rover Takes its First Turn

    NASA Image and Video Library

    2004-01-13

    This image shows a screenshot from the software used by engineers to drive the Mars Exploration Rover Spirit. The software simulates the rover's movements across the martian terrain, helping to plot a safe course for the rover. The virtual 3-D world around the rover is built from images taken by Spirit's stereo navigation cameras. Regions for which the rover has not yet acquired 3-D data are represented in beige. This image depicts the state of the rover before it backed up and turned 45 degrees on Sol 11 (01-13-04). http://photojournal.jpl.nasa.gov/catalog/PIA05063

  17. Orientation Encoding and Viewpoint Invariance in Face Recognition: Inferring Neural Properties from Large-Scale Signals.

    PubMed

    Ramírez, Fernando M

    2018-05-01

    Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance of explicit models relating neural properties to large-scale signals is stressed.
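
    The caveat about angular versus Euclidean distances can be made concrete with a small example: scaling a multivoxel response pattern (for instance by an overall amplitude change) alters its Euclidean distance to another pattern but leaves the angular distance unchanged. A minimal sketch:

```python
import numpy as np

def euclidean_distance(a, b):
    return float(np.linalg.norm(a - b))

def angular_distance(a, b):
    """Angle between two response patterns; insensitive to overall
    amplitude, unlike the Euclidean distance."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

a = np.array([1.0, 2.0, 0.5])     # a toy multivoxel pattern
b = 3.0 * a                        # same pattern, larger amplitude
print(euclidean_distance(a, b))    # > 0
print(angular_distance(a, b))      # 0.0
```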

  18. Mirror-image-induced magnetic modes.

    PubMed

    Xifré-Pérez, Elisabet; Shi, Lei; Tuzer, Umut; Fenollosa, Roberto; Ramiro-Manzano, Fernando; Quidant, Romain; Meseguer, Francisco

    2013-01-22

    Reflection in a mirror changes the handedness of the real world, and right-handed objects turn left-handed and vice versa (M. Gardner, The Ambidextrous Universe, Penguin Books, 1964). Also, we learn from electromagnetism textbooks that a flat metallic mirror transforms an electric charge into a virtual opposite charge. Consequently, the mirror image of a magnet is another parallel virtual magnet as the mirror image changes both the charge sign and the curl handedness. Here we report the dramatic modification in the optical response of a silicon nanocavity induced by the interaction with its image through a flat metallic mirror. The system of real and virtual dipoles can be interpreted as an effective magnetic dipole responsible for a strong enhancement of the cavity scattering cross section.

  19. Automated virtual colonoscopy

    NASA Astrophysics Data System (ADS)

    Hunt, Gordon W.; Hemler, Paul F.; Vining, David J.

    1997-05-01

    Virtual colonoscopy (VC) is a minimally invasive alternative to conventional fiberoptic endoscopy for colorectal cancer screening. The VC technique involves bowel cleansing, gas distension of the colon, spiral computed tomography (CT) scanning of a patient's abdomen and pelvis, and visual analysis of multiplanar 2D and 3D images created from the spiral CT data. Despite the ability of interactive computer graphics to assist a physician in visualizing 3D models of the colon, a correct diagnosis hinges upon a physician's ability to properly identify small and sometimes subtle polyps or masses within hundreds of multiplanar and 3D images. Human visual analysis is time-consuming, tedious, and often prone to errors of interpretation. We have addressed the problem of visual analysis by creating a software system that automatically highlights potential lesions in the 2D and 3D images in order to expedite a physician's interpretation of the colon data.

  20. Technology transfer from the viewpoint of a NASA prime contractor

    NASA Technical Reports Server (NTRS)

    Dyer, Gordon

    1992-01-01

    Viewgraphs on technology transfer from the viewpoint of a NASA prime contractor are provided. Technology Transfer Program for Manned Space Systems and the Technology Transfer Program status are addressed.

  1. [Virtual CT-pneumocystoscopy: indications, advantages and limitations. Our experience].

    PubMed

    Regine, Giovanni; Atzori, Maurizio; Buffa, Vitaliano; Miele, Vittorio; Ialongo, Pasquale; Adami, Loredana

    2003-09-01

    The use of CT volume-rendering techniques allows the evaluation of visceral organs without the need for endoscopy. Conventional endoscopic evaluation of the bladder is limited by the invasiveness of the technique and the difficulty of exploring the entire bladder. Virtual evaluation of the bladder by three-dimensional CT reconstruction offers potential advantages and can be used in place of endoscopy. This study investigates the sensitivity of virtual CT in assessing lesions of the bladder wall, compares it with that of conventional endoscopy, and outlines the indications, advantages and disadvantages of virtual CT-pneumocystography. Between September 2001 and May 2002, 21 patients with haematuria and positive cystoscopic findings were studied. After an initial assessment by ultrasound, the patients underwent pelvic CT as a single volumetric scan after preliminary air distension of the bladder by means of a 12 F Foley catheter. The images were processed on an independent workstation (Advantage 3.0 GE) running dedicated software for endoluminal navigation. The lesions detected by endoscopy were classified as sessile or pedunculated, and according to size (more or less than 5 mm). Finally, the results obtained at virtual cystoscopy were evaluated by two radiologists blinded to the conventional cystoscopy results. Thirty lesions (24 pedunculated, 6 sessile) were detected at conventional cystoscopy in 16 patients (multiple polyposis in 3 cases). Virtual cystoscopy identified 23 lesions (19 pedunculated and 4 sessile). The undetected lesions were pedunculated <5 mm (5 cases) and sessile (2 cases). One correctly identified pedunculated lesion was associated with a bladder stone. Good quality virtual images were obtained in all of the patients. In only one patient with multiple polyposis was the quality of the virtual endoscopic evaluation limited by the patient's intolerance to bladder distension, although identification of the lesions was not compromised. The overall
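
    The record is truncated before the overall figure, but the per-lesion detection rate implied by the reported counts can be computed directly (a quick arithmetic check, not a number stated in the record):

```python
detected, total = 23, 30            # lesions seen at virtual vs conventional cystoscopy
missed = {"pedunculated <5 mm": 5, "sessile": 2}
print(f"per-lesion detection rate: {detected / total:.1%}")   # 76.7%
print(f"missed lesions: {sum(missed.values())}")              # 7
```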

  2. [Development of a software for 3D virtual phantom design].

    PubMed

    Zou, Lian; Xie, Zhao; Wu, Qi

    2014-02-01

    In this paper, we present a 3D virtual phantom design software package, developed based on object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful software package for 3D phantom configuration, and has passed real-scene application tests. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research.
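
    As a minimal illustration of what such a phantom builder produces (not MPhantom's actual code or data model), the sketch below assembles a voxelized phantom from simple geometric primitives; the resulting array could then be sliced and written out as CT images.

```python
import numpy as np

def cylinder_with_sphere(shape=(64, 128, 128), voxel_mm=1.0,
                         cyl_radius_mm=50.0, sphere_radius_mm=10.0):
    """Voxel phantom: a water cylinder (0 HU) in air (-1000 HU) with a
    denser spherical insert (300 HU), built from geometric primitives."""
    z, y, x = np.indices(shape).astype(float) * voxel_mm
    cz, cy, cx = [s * voxel_mm / 2.0 for s in shape]
    vol = np.full(shape, -1000.0)                               # air background
    in_cyl = (y - cy) ** 2 + (x - cx) ** 2 <= cyl_radius_mm ** 2
    vol[in_cyl] = 0.0                                           # water cylinder
    in_sph = ((z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2
              <= sphere_radius_mm ** 2)
    vol[in_sph] = 300.0                                         # dense insert
    return vol                                                  # (slices, rows, cols)
```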

  3. The Virtual Climate Data Server (vCDS): An iRODS-Based Data Management Software Appliance Supporting Climate Data Services and Virtualization-as-a-Service in the NASA Center for Climate Simulation

    NASA Technical Reports Server (NTRS)

    Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.

    2012-01-01

    Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.

  4. Virtual autopsy using imaging: bridging radiologic and forensic sciences. A review of the Virtopsy and similar projects.

    PubMed

    Bolliger, Stephan A; Thali, Michael J; Ross, Steffen; Buck, Ursula; Naether, Silvio; Vock, Peter

    2008-02-01

    The transdisciplinary research project Virtopsy is dedicated to implementing modern imaging techniques into forensic medicine and pathology in order to augment current examination techniques or even to offer alternative methods. Our project relies on three pillars: three-dimensional (3D) surface scanning for the documentation of body surfaces, and both multislice computed tomography (MSCT) and magnetic resonance imaging (MRI) to visualise the internal body. Three-dimensional surface scanning has delivered remarkable results in the past in the 3D documentation of patterned injuries and of objects of forensic interest as well as whole crime scenes. Imaging of the interior of corpses is performed using MSCT and/or MRI. MRI, in addition, is also well suited to the examination of surviving victims of assault, especially choking, and helps visualise internal injuries not seen at external examination of the victim. Apart from the accuracy and three-dimensionality that conventional documentations lack, these techniques allow for the re-examination of the corpse and the crime scene even decades later, after burial of the corpse and liberation of the crime scene. We believe that this virtual, non-invasive or minimally invasive approach will improve forensic medicine in the near future.

  5. Development of a teledermatopathology consultation system using virtual slides

    PubMed Central

    2012-01-01

    Background An online consultation system using virtual slides (whole slide images; WSI) has been developed for pathological diagnosis, and could help compensate for the shortage of pathologists, especially in the field of dermatopathology and in other fields dealing with difficult cases. This study focused on the performance and future potential of the system. Method In our system, histological specimens on glass slides are digitalized by a virtual slide instrument, converted into web data, and uploaded to an open server. Using our own purpose-built online system, we then input patient details such as age, gender, affected region, clinical data, past history and other related items. We next select up to ten consultants. Finally we send an e-mail to all consultants simultaneously through a single command. The consultant receives an e-mail containing an ID and password which is used to access the open server and inspect the images and other data associated with the case. The consultant makes a diagnosis, which is sent to us along with comments. Because this was a pilot study, we also conducted several questionnaires with consultants concerning the quality of images, operability, usability, and other issues. Results We solicited consultations for 36 cases, including cases of tumor, involving one to eight consultants in the field of dermatopathology. No problems were noted concerning the images or the functioning of the system on the sender or receiver sides. The quickest diagnosis was received only 18 minutes after sending our data. This is much faster than in conventional consultation using glass slides. There were no major problems relating to the diagnosis, although there were some minor differences of opinion between consultants. The results of questionnaires answered by the consultants (16 out of 23) confirmed the usability of this system for pathological consultation. Conclusion We have developed a novel teledermatopathological consultation

  6. Virtual reality and telerobotics applications of an Address Recalculation Pipeline

    NASA Technical Reports Server (NTRS)

    Regan, Matthew; Pose, Ronald

    1994-01-01

    The technology described in this paper was designed to reduce latency in user interactions in immersive virtual reality environments. It is also ideally suited to telerobotic applications such as interaction with remote robotic manipulators in space or in deep-sea operations. In such circumstances, the significant latency in the observed response to user stimulus, which is due to communication delays, and the disturbing jerkiness due to low and unpredictable frame rates in compressed video feedback or computationally limited virtual worlds, can be masked by our techniques. The user is provided with highly responsive visual feedback independent of the communication or computational delays involved in providing physical video feedback or in rendering virtual world images. Virtual and physical environments can be combined seamlessly using these techniques.
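
    The core idea, as the abstract describes it, is to decouple what the user sees from the slow arrival of rendered or transmitted frames by re-selecting the displayed view from a wider rendered image according to the latest head orientation. The sketch below is a software approximation of that idea, assuming a simple planar over-rendered frame; the actual pipeline performs this address recalculation in display hardware.

```python
import numpy as np

def select_viewport(wide_frame, yaw_deg, pitch_deg, view_shape, deg_per_pixel):
    """Pick the sub-window of an over-rendered frame that corresponds to the
    most recent head orientation, so the displayed view keeps tracking the
    user even when new frames arrive slowly or irregularly."""
    view_h, view_w = view_shape
    frame_h, frame_w = wide_frame.shape[:2]
    centre_x = int(frame_w / 2 + yaw_deg / deg_per_pixel)
    centre_y = int(frame_h / 2 - pitch_deg / deg_per_pixel)
    x0 = int(np.clip(centre_x - view_w // 2, 0, frame_w - view_w))
    y0 = int(np.clip(centre_y - view_h // 2, 0, frame_h - view_h))
    return wide_frame[y0:y0 + view_h, x0:x0 + view_w]
```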

  7. Application of MR virtual endoscopy in children with hydrocephalus.

    PubMed

    Zhao, Cailei; Yang, Jian; Gan, Yungen; Liu, Jiangang; Tan, Zhen; Liang, Guohua; Meng, Xianlei; Sun, Longwei; Cao, Weiguo

    2015-12-01

    To evaluate the performance of MR virtual endoscopy (MRVE) in children with hydrocephalus. Clinical and imaging data were collected from 15 pediatric patients with hydrocephalus and 15 normal control children. All hydrocephalus patients were confirmed by ventriculoscopy or CT imaging. The cranial 3D T1-weighted imaging data from a fast spoiled gradient echo (FSPGR) scan were transferred to a workstation. VE images of the cerebral ventricular cavity were constructed with Navigator software. Cerebral ventricular MRVE can achieve results similar to ventriculoscopy in demonstrating the morphology of the ventricular wall or intracavity lesions. In addition, MRVE can observe the lesion from the distal end of an obstruction, as well as other areas that are inaccessible to ventriculoscopy. MRVE can also reveal pathological change of the ventricular inner wall surface, and help determine patency of the cerebral aqueduct and fourth ventricle outlet. MR virtual endoscopy provides a non-invasive diagnostic modality that can be used as a supplemental approach to ventriculoscopy. However, its sensitivity and specificity need to be determined in a larger study. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Virtual Astronomy: The Legacy of the Virtual Astronomical Observatory

    NASA Astrophysics Data System (ADS)

    Hanisch, Robert J.; Berriman, G. B.; Lazio, J.; Szalay, A. S.; Fabbiano, G.; Plante, R. L.; McGlynn, T. A.; Evans, J.; Emery Bunn, S.; Claro, M.; VAO Project Team

    2014-01-01

    Over the past ten years, the Virtual Astronomical Observatory (VAO, http://usvao.org) and its predecessor, the National Virtual Observatory (NVO), have developed and operated a software infrastructure consisting of standards and protocols for data and science software applications. The Virtual Observatory (VO) makes it possible to develop robust software for the discovery, access, and analysis of astronomical data. Every major publicly funded research organization in the US and worldwide has deployed at least some components of the VO infrastructure; tens of thousands of VO-enabled queries for data are invoked daily against catalog, image, and spectral data collections; and groups within the community have developed tools and applications building upon the VO infrastructure. Further, NVO and VAO have helped ensure access to data internationally by co-founding the International Virtual Observatory Alliance (IVOA, http://ivoa.net). The products of the VAO are being archived in a publicly accessible repository. Several science tools developed by the VAO will continue to be supported by the organizations that developed them: the Iris spectral energy distribution package (SAO), the Data Discovery Tool (STScI/MAST, HEASARC), and the scalable cross-comparison service (IPAC). The final year of VAO is focused on development of the data access protocol for data cubes, creation of Python language bindings to VO services, and deployment of a cloud-like data storage service that links to VO data discovery tools (SciDrive). We encourage the community to make use of these tools and services, to extend and improve them, and to carry on with the vision for virtual astronomy: astronomical research enabled by easy access to distributed data and computational resources. Funding for VAO development and operations has been provided jointly by NSF and NASA since May 2010. NSF funding will end in September 2014, though with the possibility of competitive solicitations for VO-based tool

  9. Production of the next-generation library virtual tour

    PubMed Central

    Duncan, James M.; Roth, Linda K.

    2001-01-01

    While many libraries offer overviews of their services through their Websites, only a small number of health sciences libraries provide Web-based virtual tours. These tours typically feature photographs of major service areas along with textual descriptions. This article describes the process for planning, producing, and implementing a next-generation virtual tour in which a variety of media elements are integrated: photographic images, 360-degree “virtual reality” views, textual descriptions, and contextual floor plans. Hardware and software tools used in the project are detailed, along with a production timeline and budget, tips for streamlining the process, and techniques for improving production. This paper is intended as a starting guide for other libraries considering an investment in such a project. PMID:11837254

  10. Educational utility of advanced three-dimensional virtual imaging in evaluating the anatomical configuration of the frontal recess.

    PubMed

    Agbetoba, Abib; Luong, Amber; Siow, Jin Keat; Senior, Brent; Callejas, Claudio; Szczygielski, Kornel; Citardi, Martin J

    2017-02-01

    Endoscopic sinus surgery represents a cornerstone in the professional development of otorhinolaryngology trainees. Mastery of these surgical skills requires an understanding of paranasal sinus and skull-base anatomy. The frontal sinus is associated with a wide range of variation and complex anatomical configuration, and thus represents an important challenge for all trainees performing endoscopic sinus surgery. Forty-five otorhinolaryngology trainees and 20 medical school students from 5 academic institutions were enrolled and randomized into 1 of 2 groups. Each subject underwent learning of frontal recess anatomy with both traditional 2-dimensional (2D) learning methods using a standard Digital Imaging and Communications in Medicine (DICOM) viewing software (RadiAnt Dicom Viewer Version 1.9.16) and 3-dimensional (3D) learning utilizing a novel preoperative virtual planning software (Scopis Building Blocks), with one half learning with the 2D method first and the other half learning with the 3D method first. Four questionnaires that included a total of 20 items were scored for subjects' self-assessment on knowledge of frontal recess and frontal sinus drainage pathway anatomy following each learned modality. A 2-sample Wilcoxon rank-sum test was used in the statistical analysis comparing the 2 groups. Most trainees (89%) believed that the virtual 3D planning software significantly improved their understanding of the spatial orientation of the frontal sinus drainage pathway. Incorporation of virtual 3D planning surgical software may help augment trainees' understanding and spatial orientation of the frontal recess and sinus anatomy. The potential increase in trainee proficiency and comprehension theoretically may translate to improved surgical skill and patient outcomes and in reduced surgical time. © 2016 ARS-AAOA, LLC.
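
    The group comparison described above uses a two-sample Wilcoxon rank-sum test. A minimal sketch of such a comparison in Python, with hypothetical questionnaire scores standing in for the study data:

```python
from scipy.stats import ranksums

# Hypothetical self-assessment scores (out of 20), for illustration only
scores_2d_first = [11, 13, 14, 12, 15, 13]
scores_3d_first = [16, 18, 15, 17, 19, 16]

statistic, p_value = ranksums(scores_2d_first, scores_3d_first)
print(statistic, p_value)   # reject the null if p_value is below the chosen alpha
```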

  11. WE-FG-207B-09: Experimental Assessment of Noise and Spatial Resolution in Virtual Non-Contrast Dual-Energy CT Images Across Multiple Patient Sizes and CT Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Montoya, J; Ferrero, A; Yu, L

    Purpose: To investigate the noise and spatial resolution properties of virtual non-contrast (VNC) dual-energy CT images compared to true non-contrast (TNC) images across multiple patient sizes and CT systems. Methods: Torso-shaped water phantoms with lateral widths of 25, 30, 35, 40 and 45 cm and a high resolution bar pattern phantom (Catphan CTP528) were scanned using 2nd and 3rd generation dual-source CT systems (Scanner A: Somatom Definition Flash, Scanner B: Somatom Force, Siemens Healthcare) in dual-energy scan mode with the same radiation dose for a given phantom size. Tube potentials of 80/Sn140 and 100/Sn140 on Scanner A and 80/Sn150, 90/Sn150 and 100/Sn150 on Scanner B were evaluated to examine the impact of spectral separation. Images were reconstructed using a medium sharp quantitative kernel (Qr40), 1.0-mm thickness, 1.0-mm interval and 20 cm field of view. Mixed images served as TNC images. VNC images were created using commercial software (Virtual Unenhanced, Syngo VIA Version VA30, Siemens Healthcare). The noise power spectrum (NPS), area under the NPS, peak frequency of the NPS and image noise were measured for every phantom size and tube potential combination in TNC and VNC images. Results were compared within and between CT systems. Results: Minimal shift in NPS peak frequencies was observed in VNC images compared to TNC for NPS having pronounced peaks. Image noise and area under the NPS were higher in VNC images compared to TNC images across all tube potentials and for scanner A compared to scanner B. Limiting spatial resolution was deemed to be identical between VNC and TNC images. Conclusion: Quantitative assessment of image quality in VNC images demonstrated higher noise but equivalent spatial resolution compared to TNC images. Decreased noise was observed in the 3rd generation dual-source CT system for tube potential pairs having greater spectral separation. Dr. McCollough receives research support from Siemens Healthcare.
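
    The noise power spectrum used here is typically estimated from noise-only regions of interest in a uniform phantom: each ROI is detrended, its 2-D DFT magnitude is squared, and the result is averaged over ROIs and scaled by pixel area. A minimal numpy sketch of that standard estimator (not the study's specific implementation):

```python
import numpy as np

def nps_2d(rois, pixel_mm):
    """2-D noise power spectrum from a stack of noise-only ROIs, shape
    (n_rois, ny, nx), taken from a uniform water phantom."""
    rois = np.asarray(rois, dtype=float)
    n_rois, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)   # remove the mean/DC term
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return (pixel_mm ** 2 / (nx * ny)) * spectra.mean(axis=0)

# The integral (area) under the NPS approximates the pixel noise variance.
```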

  12. The Use of Virtual Reality Tools in the Reading-Language Arts Classroom

    ERIC Educational Resources Information Center

    Pilgrim, J. Michael; Pilgrim, Jodi

    2016-01-01

    This article presents virtual reality as a tool for classroom literacy instruction. Building on the traditional use of images as a way to scaffold prior knowledge, we extend this idea to share ways virtual reality enables experiential learning through field trip-like experiences. The use of technology tools such as Google Street View, Google…

  13. Imaging Basin Structure with Teleseismic Virtual Source Reflection Profiles

    NASA Astrophysics Data System (ADS)

    Yang, Z.; Sheehan, A. F.; Yeck, W. L.; Miller, K. C.; Worthington, L. L.; Erslev, E.; Harder, S. H.; Anderson, M. L.; Siddoway, C. S.

    2011-12-01

    We demonstrate a case of using teleseisms recorded on single-channel high-frequency geophones to image upper crustal structure across the Bighorn Arch in north-central Wyoming. The dataset was obtained through the EarthScope FlexArray Bighorn Arch Seismic Experiment (BASE). In addition to traditional active and passive source seismic data acquisition, BASE included a 12-day continuous (passive source) deployment of 850 geophones with 'Texan' dataloggers. The geophones were deployed in three E-W lines in north-central Wyoming extending from the Powder River Basin across the Bighorn Mountains and across the Bighorn Basin, and two N-S lines on the east and west flanks of the Bighorn Mountains. The station interval is roughly 1.5-2 km, good for imaging coherent shallow structures. The approach used in this study uses the free-surface reflection as a virtual seismic source and the reverberated teleseismic P-wave phase (PpPdp), the teleseismic P-wave reflected at the receiver-side free surface and then reflected off a crustal seismic interface, to construct seismic profiles. These profiles are equivalent to conventional active source seismic reflection profiles except that high-frequency (up to 2.4 Hz) transmitted wave fields from distant earthquakes are used as sources. On the constructed seismic profiles, the coherent PpPdp phases beneath the Powder River and Bighorn Basins are distinct after the source wavelet is removed from the seismograms by deconvolution. Under the Bighorn Arch, no clear coherent signals are observed. We combine the PpPdp and Ps phases to constrain the averaged Vp/Vs: 2.05-2.15 for the Powder River Basin and 1.9-2.0 for the Bighorn Basin. These high Vp/Vs ratios suggest that the layers within which the P-wave reverberates are sedimentary. Assuming a Vp of 4 km/s under the Powder River Basin, the estimated thickness of the sedimentary layer above the reflector is 3-4.5 km, consistent with the depth of the top of the Tensleep Fm. Therefore we interpret the coherent Pp

  14. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal

    PubMed Central

    Corballis, Michael C.

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries, or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia. PMID:29706878

  15. Mirror-Image Equivalence and Interhemispheric Mirror-Image Reversal.

    PubMed

    Corballis, Michael C

    2018-01-01

    Mirror-image confusions are common, especially in children and in some cases of neurological impairment. They can be a special impediment in activities such as reading and writing directional scripts, where mirror-image patterns (such as b and d) must be distinguished. Treating mirror images as equivalent, though, can also be adaptive in the natural world, which carries no systematic left-right bias and where the same object or event can appear in opposite viewpoints. Mirror-image equivalence and confusion are natural consequences of a bilaterally symmetrical brain. In the course of learning, mirror-image equivalence may be established through a process of symmetrization, achieved through homotopic interhemispheric exchange in the formation of memory circuits. Such circuits would not distinguish between mirror images. Learning to make mirror-image discriminations may depend either on existing brain asymmetries, or on extensive learning overriding the symmetrization process. The balance between mirror-image equivalence and mirror-image discrimination may nevertheless be precarious, with spontaneous confusions or reversals, such as mirror writing, sometimes appearing naturally or as a manifestation of conditions like dyslexia.

  16. The eyes prefer real images

    NASA Technical Reports Server (NTRS)

    Roscoe, Stanley N.

    1989-01-01

    For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.

  17. Effectiveness of cognitive behavioral therapy supported by virtual reality in the treatment of body image in eating disorders: one year follow-up.

    PubMed

    Marco, José H; Perpiñá, Conxa; Botella, Cristina

    2013-10-30

    Body image disturbance is a significant maintenance and prognosis factor in eating disorders. Hence, existing eating disorder treatments can benefit from direct intervention in patients' body image. No controlled studies have yet compared eating disorder treatments with and without a treatment component centered on body image. This paper includes a controlled study comparing Cognitive Behavioral Treatment (CBT) for eating disorders with and without a component for body image treatment using Virtual Reality techniques. Thirty-four participants diagnosed with eating disorders were evaluated and treated. The clinical improvement was analyzed from statistical and clinical points of view. Results showed that the patients who received the component for body image treatment improved more than the group without this component. Furthermore, improvement was maintained in post-treatment and at one year follow-up. The results reveal the advantage of including a treatment component addressing body image disturbances in the protocol for general treatment of eating disorders. The implications and limitations of these results are discussed below. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. Creation of virtual patients from CT images of cadavers to enhance integration of clinical and basic science student learning in anatomy.

    PubMed

    Jacobson, Stanley; Epstein, Scott K; Albright, Susan; Ochieng, Joseph; Griffiths, Jeffrey; Coppersmith, Veronica; Polak, Joseph F

    2009-08-01

    The goal of this study was to determine whether computerized tomographic (CT) images of cadavers could be used in addition to images from patients to develop virtual patients (VPs) to enhance integrated learning of basic and clinical science. We imaged 13 cadavers on a Siemens CT system. The DICOM images from the CT were noted to be of high quality by a radiologist who systematically identified all abnormal and pathological findings. The pathological findings from the CT images and the cause of death were used to develop plausible clinical cases and study questions. Each case was designed to highlight and explain the abnormal anatomic findings encountered during the cadaveric dissection. A 3D reconstruction was produced using OsiriX and then formatted into a QuickTime movie, which was then stored on the Tufts University Sciences Knowledgebase (TUSK) as a VP. We conclude that CT scanning of cadavers produces high-quality images that can be used to develop VPs. Although the use of the VPs was optional and fewer than half of the students had an imaged cadaver for dissection, 59 of the 172 (34%) students accessed the cases, reviewed the cases and images positively, and encouraged us to continue.

  19. Re-identification of persons in multi-camera surveillance under varying viewpoints and illumination

    NASA Astrophysics Data System (ADS)

    Bouma, Henri; Borsboom, Sander; den Hollander, Richard J. M.; Landsmeer, Sander H.; Worring, Marcel

    2012-06-01

    The capability to track individuals in CCTV cameras is important for surveillance and forensics alike. However, it is laborious to do over multiple cameras, so an automated system is desirable. In the literature several methods have been proposed, but their robustness against varying viewpoints and illumination is limited; hence performance in realistic settings is also limited. In this paper, we present a novel method for the automatic re-identification of persons in video from surveillance cameras in a realistic setting. The method is computationally efficient, robust to a wide variety of viewpoints and illumination, simple to implement, and requires no training. We compare the performance of our method to several state-of-the-art methods on a publicly available dataset that contains the variety of viewpoints and illumination needed to allow benchmarking. The results indicate that our method shows good performance and enables a human operator to track persons five times faster.
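
    The abstract does not detail the descriptor, so the sketch below only illustrates the general re-identification pattern: compute an appearance descriptor for each detected person and rank candidates from other cameras by similarity. Both the colour-histogram descriptor and the cosine score are illustrative assumptions, not the paper's method.

```python
import numpy as np

def appearance_descriptor(person_rgb, bins=8):
    """Illustrative descriptor: a normalized joint colour histogram of a
    person crop (H x W x 3, values 0-255)."""
    pixels = person_rgb.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def match_score(descriptor_a, descriptor_b):
    """Cosine similarity; candidates from other cameras are ranked by it."""
    denom = np.linalg.norm(descriptor_a) * np.linalg.norm(descriptor_b) + 1e-9
    return float(np.dot(descriptor_a, descriptor_b) / denom)
```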

  20. Virtual reality simulators for gastrointestinal endoscopy training

    PubMed Central

    Triantafyllou, Konstantinos; Lazaridis, Lazaros Dimitrios; Dimitriadis, George D

    2014-01-01

    The use of simulators as educational tools for medical procedures is spreading rapidly, and many efforts have been made for their implementation in gastrointestinal endoscopy training. Endoscopy simulation training has been suggested for ensuring patient safety while positively influencing the trainees' learning curve. Virtual simulators are the most promising tool among all available types of simulators. These integrated modalities offer a human-like endoscopy experience by combining virtual images of the gastrointestinal tract and haptic realism using a customized endoscope. From their first steps in the 1980s until today, research involving virtual endoscopic simulators can be divided into two categories: investigation of the impact of virtual simulator training in acquiring endoscopy skills, and measuring competence. Emphasis should also be given to the financial impact of their implementation in endoscopy, including the cost of these state-of-the-art simulators and the potential economic benefits from their usage. Advances in technology will contribute to the upgrade of existing models and the development of new ones, while further research should be carried out to discover new fields of application. PMID:24527175

  1. Virtual Reality and the Virtual Library.

    ERIC Educational Resources Information Center

    Oppenheim, Charles

    1993-01-01

    Explains virtual reality, including proper and improper uses of the term, and suggests ways that libraries might be affected by it. Highlights include elements of virtual reality systems; possible virtual reality applications, including architecture, the chemical industry, transport planning, armed forces, and entertainment; and the virtual…

  2. The Story-Presenting Method: a Method for Constructing Multiple Viewpoints to Understand Different Cultures.

    PubMed

    Watanabe, Tadaharu

    2017-09-01

    This study will show the results of four dialogical cultural exchange classes, which were held between Japanese and Chinese high school students, and examine the shifts in students' viewpoints and changes in cultural understandings that occurred during those classes. In the first cultural exchange class, students of both countries read a story which described an older student who carelessly wore a T-shirt inside out, and younger students passed by without greeting him. Students of both countries were then asked to write their comments about it. From the second to the fourth class, students discussed the story with each other through exchanging their comments. By presenting another story, which introduced the viewpoint of a third person, and asking them questions that allowed them to reflect on their lives, students also experienced four different viewpoints during these cultural exchange classes. At the beginning of the cultural exchange, students of both countries tended to focus on the similarities in each other's comments, which led to the closing down of the discussion. However, through discussions and experiencing the four different viewpoints, they found there are some essential differences between them around 'ways of greeting' and 'hierarchical relationships between older and younger students', which motivated them to understand their counterparts' culture. Moreover, in the last comments of these cultural exchange classes, it was found that they acquired the viewpoints of cultural others. Given the results of these classes, it is shown that it is effective to present various stories to stimulate cultural understanding.

  3. VIRTOPSY - the Swiss virtual autopsy approach.

    PubMed

    Thali, Michael J; Jackowski, Christian; Oesterhelweg, Lars; Ross, Steffen G; Dirnhofer, Richard

    2007-03-01

    The aim of the VIRTOPSY project is to utilize radiological scanning to push low-tech documentation and autopsy procedures in a world of high-tech medicine, in order to improve scientific value and to increase significance and quality in the forensic field. The term VIRTOPSY was created from the terms virtual and autopsy: virtual is derived from the Latin word 'virtus', which means 'useful, efficient and good'. Autopsy is a combination of the old Greek terms 'autos' (=self) and 'opsomei' (=I will see); thus autopsy means 'to see with one's own eyes'. Because our goal was to eliminate the subjectivity of "autos", we merged the two terms virtual and autopsy - deleting "autos" - to create VIRTOPSY. Today the VIRTOPSY project, combining the research topics under one scientific umbrella, is characterized by a trans-disciplinary research approach that combines Forensic Medicine, Pathology, Radiology, Image Processing, Physics, and Biomechanics into an international scientific network. The paper gives an overview of the Virtopsy change process in forensic medicine.

  4. Image understanding in terms of semiotics

    NASA Astrophysics Data System (ADS)

    Zakharko, E.; Kaminsky, Roman M.; Shpytko, V.

    1995-06-01

    Human perception of pictorial visual information is investigated from the iconic-sign viewpoint, and an appropriate semiotic model is discussed. Image construction (syntactics) is analyzed as a complex hierarchical system, and various types of pictorial objects, their relations, and regular configurations are represented, studied, and modeled. The relations between image syntactics, semantics, and pragmatics are investigated. The application of the research results to problems of thematic interpretation of Earth-surface remote images is illustrated.

  5. Opportunities in Participatory Science and Citizen Science with MRO's High Resolution Imaging Science Experiment: A Virtual Science Team Experience

    NASA Astrophysics Data System (ADS)

    Gulick, Ginny

    2009-09-01

    We report on the accomplishments of the HiRISE EPO program over the last two and a half years of science operations. We have focused primarily on delivering high impact science opportunities through our various participatory science and citizen science websites. Uniquely, we have invited students from around the world to become virtual HiRISE team members by submitting target suggestions via our HiRISE Quest Image challenges, using HiWeb, the team's image suggestion facility web tools. When images are acquired, students analyze their returned images, write a report, and work with a HiRISE team member to write an image caption for release on the HiRISE website (http://hirise.lpl.arizona.edu). Another E/PO highlight has been our citizen scientist effort, HiRISE Clickworkers (http://clickworkers.arc.nasa.gov/hirise). Clickworkers enlists volunteers to identify geologic features (e.g., dunes, craters, wind streaks, gullies, etc.) in the HiRISE images and help generate searchable image databases. In addition, the large image sizes and incredible spatial resolution of the HiRISE camera can tax the capabilities of the most capable computers, so we have also focused on enabling typical users to browse, pan and zoom the HiRISE images using our HiRISE online image viewer (http://marsoweb.nas.nasa.gov/HiRISE/hirise_images/). Our educational materials available on the HiRISE EPO web site (http://hirise.seti.org/epo) include an assortment of K through college level, standards-based activity books, a K through 3 coloring/story book, a middle school level comic book, and several interactive educational games, including Mars jigsaw puzzles, crosswords, word searches and flash cards.

  6. Sensorimotor Training in Virtual Reality: A Review

    PubMed Central

    Adamovich, Sergei V.; Fluet, Gerard G.; Tunik, Eugene; Merians, Alma S.

    2010-01-01

    Recent experimental evidence suggests that rapid advancement of virtual reality (VR) technologies has great potential for the development of novel strategies for sensorimotor training in neurorehabilitation. We discuss what the adaptive and engaging virtual environments can provide for massive and intensive sensorimotor stimulation needed to induce brain reorganization. Second, discrepancies between the veridical and virtual feedback can be introduced in VR to facilitate activation of targeted brain networks, which in turn can potentially speed up the recovery process. Here we review the existing experimental evidence regarding the beneficial effects of training in virtual environments on the recovery of function in the areas of gait, upper extremity function and balance, in various patient populations. We also discuss possible mechanisms underlying these effects. We feel that future research in the area of virtual rehabilitation should follow several important paths. Imaging studies to evaluate the effects of sensory manipulation on brain activation patterns and the effect of various training parameters on long term changes in brain function are needed to guide future clinical inquiry. Larger clinical studies are also needed to establish the efficacy of sensorimotor rehabilitation using VR approaches in various clinical populations and most importantly, to identify VR training parameters that are associated with optimal transfer into real-world functional improvements. PMID:19713617

  7. Locally linear regression for pose-invariant face recognition.

    PubMed

    Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen

    2007-07-01

    The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, and is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapping local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
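
    As a rough illustration of the patch-wise prediction step described above, the sketch below fits a least-squares linear map from vectorized nonfrontal patches to their frontal counterparts and applies it to new patches. The function names, the ridge regularization term and the data layout are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def fit_linear_map(X_nonfrontal, X_frontal, reg=1e-3):
        """Least-squares map W such that X_frontal ~= X_nonfrontal @ W.
        Rows are vectorized training patches (or whole faces for the global case).
        A small ridge term keeps the normal equations well conditioned (assumption)."""
        d = X_nonfrontal.shape[1]
        A = X_nonfrontal.T @ X_nonfrontal + reg * np.eye(d)
        B = X_nonfrontal.T @ X_frontal
        return np.linalg.solve(A, B)

    def predict_frontal_patches(patches, maps):
        """Apply one learned linear map per local patch and return the predicted
        frontal patches, which would then be recombined into the frontal view."""
        return [p @ W for p, W in zip(patches, maps)]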

  8. SAKURA-viewer: intelligent order history viewer based on two-viewpoint architecture.

    PubMed

    Toyoda, Shuichi; Niki, Noboru; Nishitani, Hiromu

    2007-03-01

    We propose a new intelligent order history viewer for consolidating and visualizing data. SAKURA-viewer is a highly effective tool, as: 1) it visualizes both the semantic viewpoint and the temporal viewpoint of patient records simultaneously; 2) it promotes awareness of contextual information among the daily data; and 3) it implements patient-centric data entry methods. This viewer helps decrease the user's workload in an order entry system. The viewer is now incorporated into an order entry system being run on an experimental basis. We describe the evaluation of this system using results of a user satisfaction survey, analysis of information consolidation within the database, and analysis of the frequency of use of data entry methods.

  9. Fast Virtual Stenting with Active Contour Models in Intracranial Aneurysm

    PubMed Central

    Zhong, Jingru; Long, Yunling; Yan, Huagang; Meng, Qianqian; Zhao, Jing; Zhang, Ying; Yang, Xinjian; Li, Haiyun

    2016-01-01

    Intracranial stents are becoming an increasingly useful option in the treatment of intracranial aneurysms (IAs). Image-based simulation of the released stent configuration, together with computational fluid dynamics (CFD) simulation prior to intervention, will help surgeons optimize the intervention scheme. This paper proposes a fast virtual stenting method for IAs based on an active contour model (ACM), which can virtually release stents within any patient-specific shaped vessel and aneurysm model built on real medical image data. In this method, an initial stent mesh was generated along the centerline of the parent artery without the need for registration between the stent contour and the vessel. Additionally, the diameter of the initial stent volumetric mesh was set to the maximum inscribed-sphere diameter of the parent artery to improve the stenting accuracy and save computational cost. Finally, a novel criterion for terminating virtual stent expansion, based on collision detection with axis-aligned bounding boxes, was applied, making the stent expansion free of edge effects. The experimental results of the virtual stenting and the corresponding CFD simulations exhibited the efficacy and accuracy of the ACM-based method, which is valuable for intervention scheme selection and therapy plan confirmation. PMID:26876026
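
    The termination criterion mentioned above relies on collision detection between axis-aligned bounding boxes. A minimal sketch of such a test is given below, assuming stent cells and vessel-wall patches are available as point sets; the stopping rule shown is a simplified stand-in for the criterion actually used in the paper.

    import numpy as np

    def aabb(points):
        """Axis-aligned bounding box of a point set: (min_corner, max_corner)."""
        pts = np.asarray(points, dtype=float)
        return pts.min(axis=0), pts.max(axis=0)

    def aabb_overlap(box_a, box_b):
        """True if two axis-aligned boxes intersect (per-axis interval overlap test)."""
        (a_min, a_max), (b_min, b_max) = box_a, box_b
        return bool(np.all(a_min <= b_max) and np.all(b_min <= a_max))

    # Hypothetical stopping rule: halt expansion of a stent cell once its box
    # collides with the box of a nearby vessel-wall patch.
    def keep_expanding(stent_cell_pts, wall_patch_pts):
        return not aabb_overlap(aabb(stent_cell_pts), aabb(wall_patch_pts))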

  10. Building a virtual simulation platform for quasistatic breast ultrasound elastography using open source software: A preliminary investigation.

    PubMed

    Wang, Yu; Helminen, Emily; Jiang, Jingfeng

    2015-09-01

    Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time

  11. Building a virtual simulation platform for quasistatic breast ultrasound elastography using open source software: A preliminary investigation

    PubMed Central

    Wang, Yu; Helminen, Emily; Jiang, Jingfeng

    2015-01-01

    Purpose: Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. Methods: The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time.

  12. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method is presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis and algorithm optimization, and can help operators find ideal system parameter settings for actual measurements.
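
    A minimal sketch of the kind of iterative ray-surface intersection such a nonparallel-illumination simulator requires is shown below, assuming the measured object is modeled as a height field z = f(x, y) above the z = 0 reference plane; the fixed-point update and all names are illustrative assumptions, not the authors' formulation.

    import numpy as np

    def intersect_ray_heightfield(origin, direction, height, n_iter=20, tol=1e-6):
        """Iteratively intersect a projector/camera ray p(t) = origin + t * direction
        with a surface z = height(x, y) defined above the z = 0 reference plane.
        'height' is any callable returning the surface height at (x, y)."""
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        # Start from the intersection with the reference plane z = 0.
        t = -o[2] / d[2]
        for _ in range(n_iter):
            x, y = o[0] + t * d[0], o[1] + t * d[1]
            t_new = (height(x, y) - o[2]) / d[2]   # re-solve o_z + t*d_z = f(x, y)
            if abs(t_new - t) < tol:
                t = t_new
                break
            t = t_new
        return o + t * d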

  13. Possibilities and Determinants of Using Low-Cost Devices in Virtual Education Applications

    ERIC Educational Resources Information Center

    Bun, Pawel Kazimierz; Wichniarek, Radoslaw; Górski, Filip; Grajewski, Damian; Zawadzki, Przemyslaw; Hamrol, Adam

    2017-01-01

    Virtual reality (VR) may be used as an innovative educational tool. However, in order to fully exploit its potential, it is essential to achieve the effect of immersion. To more completely submerge the user in a virtual environment, it is necessary to ensure that the user's actions are directly translated into the image generated by the…

  14. Computer Assisted Virtual Environment - CAVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  15. Computer Assisted Virtual Environment - CAVE

    ScienceCinema

    Erickson, Phillip; Podgorney, Robert; Weingartner,

    2018-05-30

    Research at the Center for Advanced Energy Studies is taking on another dimension with a 3-D device known as a Computer Assisted Virtual Environment. The CAVE uses projection to display high-end computer graphics on three walls and the floor. By wearing 3-D glasses to create depth perception and holding a wand to move and rotate images, users can delve into data.

  16. Dual-energy CT with virtual monochromatic images and metal artifact reduction software for reducing metallic dental artifacts.

    PubMed

    Cha, Jihoon; Kim, Hyung-Jin; Kim, Sung Tae; Kim, Yi Kyung; Kim, Ha Youn; Park, Gyeong Min

    2017-11-01

    Background Metallic dental prostheses may degrade image quality on head and neck computed tomography (CT). However, there is little information available on the use of dual-energy CT (DECT) and metal artifact reduction software (MARS) in the head and neck regions to reduce metallic dental artifacts. Purpose To assess the usefulness of DECT with virtual monochromatic imaging and MARS to reduce metallic dental artifacts. Material and Methods DECT was performed using fast kilovoltage (kV)-switching between 80 kV and 140 kV in 20 patients with metallic dental prostheses. CT data were reconstructed with and without MARS, and with synthesized monochromatic energies in the range of 40-140 kiloelectron volts (keV). For quantitative analysis, the artifact index of the tongue, buccal, and parotid areas was calculated for each scan. For qualitative analysis, two radiologists evaluated 70-keV and 100-keV images with and without MARS for the tongue, buccal, and parotid areas, and the metallic denture. The locations and characteristics of the MARS-related artifacts, if any, were also recorded. Results DECT with MARS markedly reduced metallic dental artifacts and improved image quality in the buccal area (P < 0.001) and the tongue (P < 0.001), but not in the parotid area. The margin and internal architecture of the metallic dentures were more clearly delineated with MARS (P < 0.001) and in the higher-energy images than in the lower-energy images (P = 0.042). MARS-related artifacts most commonly occurred in the deep center of the neck. Conclusion DECT with MARS can reduce metallic dental artifacts and improve delineation of the metallic prosthesis and periprosthetic region.

  17. Virtual slides in peer reviewed, open access medical publication.

    PubMed

    Kayser, Klaus; Borkenfeld, Stephan; Goldmann, Torsten; Kayser, Gian

    2011-12-19

    Application of virtual slides (VS), the digitalization of complete glass slides, is in its infancy in routine diagnostic surgical pathology and in issues related to tissue-based diagnosis, such as education and scientific publication. Electronic publication in pathology offers new features of scientific communication that cannot be obtained by conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images, allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are a tool to increase the scientific value of microscopic images. The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer-reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images, the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu, and Leica.com) for digitalization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI index (digital object identifier). The first articles that include VS were published in March 2011. Several logistic constraints had to be overcome until the first articles including VS could be published. Step by step, an automated acquisition and distribution system had to be implemented and linked to the corresponding article. The acceptance of VS is high among readers as well as authors. Of specific value

  18. The Planetary Virtual Observatory and Laboratory (PVOL) and its integration into the Virtual European Solar and Planetary Access (VESPA)

    NASA Astrophysics Data System (ADS)

    Hueso, R.; Juaristi, J.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Erard, S.; Cecconi, B.; Le Sidaner, Pierre

    2018-01-01

    Since 2003 the Planetary Virtual Observatory and Laboratory (PVOL) has been storing and serving publicly through its web site a large database of amateur observations of the Giant Planets (Hueso et al., 2010a). These images are used for scientific research of the atmospheric dynamics and cloud structure on these planets and constitute a powerful resource to address time-variable phenomena in their atmospheres. Advances over the last decade in observation techniques, and a wider recognition by professional astronomers of the quality of amateur observations, have resulted in the need to upgrade this database. Here we present major advances in the PVOL database, which has evolved into a full virtual planetary observatory encompassing also observations of Mercury, Venus, Mars, the Moon and the Galilean satellites. Besides the new objects, the images can be tagged and the database allows simple and complex searches over the data. The new web service, PVOL2, is available online at http://pvol2.ehu.eus/.

  19. Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3

    NASA Astrophysics Data System (ADS)

    Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.

    2014-12-01

    The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.

  20. Virtual reality in the treatment of body image disturbances after bariatric surgery: a clinical case.

    PubMed

    Riva, Giuseppe; Cárdenas-López, Georgina; Duran, Ximena; Torres-Villalobos, Gonzalo M; Gaggioli, Andrea

    2012-01-01

    Bariatric surgery is an operation on the stomach and/or intestines that helps patients with extreme obesity to lose weight. Even though bariatric surgery, compared with traditional obesity treatment, is more effective in reducing BMI, this approach does not achieve equal results in every patient. Moreover, common problems following bariatric surgery are body image dissatisfaction and body disparagement: there is a significant difference between the weight loss clinicians consider successful (50% of excess weight) and the weight loss potential patients expect to achieve (at least 67% of the excess weight). The paper discusses the possible role of virtual reality (VR) in addressing this problem within an integrated treatment approach. In addition, the clinical case of a female bariatric patient who experienced body dissatisfaction even after a 30% body weight loss and a 62% excess body weight loss is presented and discussed.

  1. Use of 3D techniques for virtual production

    NASA Astrophysics Data System (ADS)

    Grau, Oliver; Price, Marc C.; Thomas, Graham A.

    2000-12-01

    Virtual production for broadcast is currently mainly used in the form of virtual studios, where the resulting media is a sequence of 2D images. With the steady increase of 3D computing power in home PCs and the technical progress in 3D display technology, the content industry is looking for new kinds of program material that make use of 3D technology. The applications range from the analysis of sport scenes and 3DTV up to the creation of fully immersive content. In a virtual studio a camera films one or more actors in a controlled environment. The pictures of the actors can be segmented very accurately in real time using chroma keying techniques. The isolated silhouette can be integrated into a new synthetic virtual environment using a studio mixer. The resulting shape description of the actors is 2D so far. For the realization of more sophisticated optical interactions of the actors with the virtual environment, such as occlusions and shadows, an object-based 3D description of scenes is needed. However, the requirements of shape accuracy, and the kind of representation, differ in accordance with the application. This contribution gives an overview of requirements and approaches for the generation of an object-based 3D description in various applications studied by the BBC R&D department. An enhanced Virtual Studio for 3D programs is proposed that covers a range of applications for virtual production.

  2. Can, Want and Try: Parents’ Viewpoints Regarding the Participation of Their Child with an Acquired Brain Injury

    PubMed Central

    Thompson, Melanie; Elliott, Catherine; Willis, Claire; Ward, Roslyn; Falkmer, Marita; Falkmer, Torbjörn; Gubbay, Anna; Girdler, Sonya

    2016-01-01

    Background Acquired brain injury (ABI) is a leading cause of permanent disability, currently affecting 20,000 Australian children. Community participation is essential for childhood development and enjoyment, yet children with ABI can often experience barriers to participation. The factors which act as barriers and facilitators to community participation for children with an ABI are not well understood. Aim To identify the viewpoints of parents of children with an ABI, regarding the barriers and facilitators most pertinent to community participation for their child. Methods Using Q-method, 41 parents of children with moderate/severe ABI sorted 37 statements regarding barriers and facilitators to community participation. Factor analysis identified three viewpoints. Results This study identified three distinct viewpoints, with the perceived ability to participate decreasing with a stepwise trend from parents who felt their child and family “can” participate in viewpoint one, to “want” in viewpoint two and “try” in viewpoint three. Conclusions Findings indicated good participation outcomes for most children and families, however some families who were motivated to participate experienced significant barriers. The most significant facilitators included child motivation, supportive relationships from immediate family and friends, and supportive community attitudes. The lack of supportive relationships and attitudes was perceived as a fundamental barrier to community participation. Significance This research begins to address the paucity of information regarding those factors that impact upon the participation of children with an ABI in Australia. Findings have implications for therapists, service providers and community organisations. PMID:27367231

  3. Panoramic imaging and virtual reality — filling the gaps between the lines

    NASA Astrophysics Data System (ADS)

    Chapman, David; Deacon, Andrew

    Close range photogrammetry projects rely upon a clear and unambiguous specification of end-user requirements to inform decisions relating to the format, coverage, accuracy and complexity of the final deliverable. Invariably such deliverables will be a partial and incomplete abstraction of the real world where the benefits of higher accuracy and increased complexity must be traded against the cost of the project. As photogrammetric technologies move into the digital era, computerisation offers opportunities for the photogrammetrist to revisit established mapping traditions in order to explore new markets. One such market is that for three-dimensional Virtual Reality (VR) models for clients who have previously had little exposure to the capabilities, and limitations, of photogrammetry and may have radically different views on the cost/benefit trade-offs in producing geometric models. This paper will present some examples of the authors' recent experience of such markets, drawn from a number of research and commercial projects directed towards the modelling of complex man-made objects. This experience seems to indicate that suitably configured digital image archives may form an important deliverable for a wide range of photogrammetric projects and supplement, or even replace, more traditional CAD models.

  4. A Business Studies Oriented Taxonomy for Assessing Viewpoint Change through Sustainability Education: Messages, Measures and Moves

    ERIC Educational Resources Information Center

    Woodward, Russell; Hagerup, Clare

    2017-01-01

    This paper explores and deploys a business oriented taxonomy of decisions from which to ascertain change in student viewpoint regarding the study of sustainability modules. A review of conceptual and empirical studies to date on business cohorts' viewpoints regarding sustainability study notes the lack of business contextualization and the…

  5. Generation of realistic virtual nodules based on three-dimensional spatial resolution in lung computed tomography: A pilot phantom study.

    PubMed

    Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi

    2017-10-01

    The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. Good agreement of the virtual nodules generated from
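
    The core idea, deconvolving a measured nodule by one scanner's resolution model and reconvolving it with another's, can be sketched in one dimension as below, assuming Gaussian PSFs of the same length as the signal and a simple regularized inverse filter; the actual method uses the measured in-plane PSF and SSP of each scanner, and all names here are illustrative.

    import numpy as np

    def gaussian_psf(n, sigma):
        """Centered, normalized 1D Gaussian point spread function of length n."""
        x = np.arange(n) - n // 2
        psf = np.exp(-0.5 * (x / sigma) ** 2)
        return psf / psf.sum()

    def transfer_resolution(image_a, psf_a, psf_b, eps=1e-3):
        """Deconvolve an image by scanner A's PSF and reconvolve with scanner B's PSF,
        yielding a 'virtual' image with B's spatial-resolution characteristics.
        Both PSF arrays are assumed to have the same length as the image."""
        F_img = np.fft.fft(image_a)
        F_a = np.fft.fft(np.fft.ifftshift(psf_a))
        F_b = np.fft.fft(np.fft.ifftshift(psf_b))
        # Regularized inverse filter to avoid amplifying noise where |F_a| is small.
        object_fn = F_img * np.conj(F_a) / (np.abs(F_a) ** 2 + eps)
        return np.real(np.fft.ifft(object_fn * F_b))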

  6. Consumer opinion on social policy approaches to promoting positive body image: Airbrushed media images and disclaimer labels.

    PubMed

    Paraskeva, Nicole; Lewis-Smith, Helena; Diedrichs, Phillippa C

    2017-02-01

    Disclaimer labels on airbrushed media images have generated political attention and advocacy as a social policy approach to promoting positive body image. Experimental research suggests that labelling is ineffective and consumers' viewpoints have been overlooked. A mixed-method study explored British consumers' (N = 1555, aged 11-78 years) opinions on body image and social policy approaches. Thematic analysis indicated scepticism about the effectiveness of labelling images. Quantitatively, adults, although not adolescents, reported that labelling was unlikely to improve body image. Appearance diversity in media and reorienting social norms from appearance to function and health were perceived as effective strategies. Social policy and research implications are discussed.

  7. A Downloadable Three-Dimensional Virtual Model of the Visible Ear

    PubMed Central

    Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.

    2008-01-01

    Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433

  8. VirSSPA- a virtual reality tool for surgical planning workflow.

    PubMed

    Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T

    2009-03-01

    A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure was used for soft tissues and a thresholding algorithm was implemented to segment bones. The algorithms operate semiautomatically, since they only require the user to select a seed with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
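
    A minimal 2D sketch of the seeded region growing used for soft tissue is shown below, assuming a single mouse-selected seed and a fixed HU tolerance; the threshold value and function names are illustrative, not those of VirSSPA.

    import numpy as np
    from collections import deque

    def region_grow(slice_hu, seed, tol=80):
        """Grow a region from a user-selected seed pixel, accepting 4-connected
        neighbors whose HU value lies within +/- tol of the seed value."""
        h, w = slice_hu.shape
        mask = np.zeros((h, w), dtype=bool)
        seed_val = float(slice_hu[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                        and abs(float(slice_hu[nr, nc]) - seed_val) <= tol):
                    mask[nr, nc] = True
                    queue.append((nr, nc))
        return mask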

  9. Using Virtual Observatory Services in Sky View

    NASA Technical Reports Server (NTRS)

    McGlynn, Thomas A.

    2007-01-01

    For over a decade SkyView has provided astronomers and the public with easy access to survey and imaging data from all wavelength regimes. SkyView has pioneered many of the concepts that underlie the Virtual Observatory. Recently, SkyView has been released as a distributable package that uses VO protocols to access image and catalog services. This chapter describes how to use SkyView as a local service and how to customize it to access additional VO services and local data.

  10. Ascending and Descending in Virtual Reality: Simple and Safe System Using Passive Haptics.

    PubMed

    Nagao, Ryohei; Matsumoto, Keigo; Narumi, Takuji; Tanikawa, Tomohiro; Hirose, Michitaka

    2018-04-01

    This paper presents a novel interactive system that provides users with virtual reality (VR) experiences, wherein users feel as if they are ascending/descending stairs through passive haptic feedback. The passive haptic stimuli are provided by small bumps under the feet of users; these stimuli are provided to represent the edges of the stairs in the virtual environment. The visual stimuli of the stairs and shoes, provided by head-mounted displays, evoke a visuo-haptic interaction that modifies a user's perception of the floor shape. Our system enables users to experience all types of stairs, such as half-turn and spiral stairs, in a VR setting. We conducted a preliminary user study and two experiments to evaluate the proposed technique. The preliminary user study investigated the effectiveness of the basic idea associated with the proposed technique for the case of a user ascending stairs. The results demonstrated that the passive haptic feedback produced by the small bumps enhanced the user's feeling of presence and sense of ascending. We subsequently performed an experiment to investigate an improved viewpoint manipulation method and the interaction of the manipulation and haptics for both the ascending and descending cases. The experimental results demonstrated that the participants had a feeling of presence and felt a steep stair gradient under the condition of haptic feedback and viewpoint manipulation based on the characteristics of actual stair walking data. However, these results also indicated that the proposed system may not be as effective in providing a sense of descending stairs without an optimization of the haptic stimuli. We then redesigned the shape of the small bumps, and evaluated the design in a second experiment. The results indicated that the best shape to present haptic stimuli is a right triangle cross section in both the ascending and descending cases. Although it is necessary to install small protrusions in the determined direction, by

  11. An Online Image Analysis Tool for Science Education

    ERIC Educational Resources Information Center

    Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.

    2008-01-01

    This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…

  12. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging.

    PubMed

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel; Halaweish, Ahmed; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.

  13. Technical Note: Improved CT number stability across patient size using dual-energy CT virtual monoenergetic imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Michalak, Gregory; Grimes, Joshua; Fletcher, Joel

    2016-01-15

    Purpose: The purpose of this study was to evaluate, over a wide range of phantom sizes, CT number stability achieved using two techniques for generating dual-energy computed tomography (DECT) virtual monoenergetic images. Methods: Water phantoms ranging in lateral diameter from 15 to 50 cm and containing a CT number test object were scanned on a DSCT scanner using both single-energy (SE) and dual-energy (DE) techniques. The SE tube potentials were 70, 80, 90, 100, 110, 120, 130, 140, and 150 kV; the DE tube potential pairs were 80/140, 70/150Sn, 80/150Sn, 90/150Sn, and 100/150Sn kV (Sn denotes that the 150 kV beam was filtered with a 0.6 mm tin filter). Virtual monoenergetic images at energies ranging from 40 to 140 keV were produced from the DECT data using two algorithms, monoenergetic (mono) and monoenergetic plus (mono+). Particularly in large phantoms, water CT number errors and/or artifacts were observed; thus, datasets with water CT numbers outside ±10 HU or with noticeable artifacts were excluded from the study. CT numbers were measured to determine CT number stability across all phantom sizes. Results: Data exclusions were generally limited to cases when a SE or DE technique with a tube potential of less than 90 kV was used to scan a phantom larger than 30 cm. The 90/150Sn DE technique provided the most accurate water background over the large range of phantom sizes evaluated. Mono and mono+ provided equally improved CT number stability as a function of phantom size compared to SE; the average deviation in CT number was only 1.4% using 40 keV and 1.8% using 70 keV, while SE had an average deviation of 11.8%. Conclusions: The authors' report demonstrates, across all phantom sizes, the improvement in CT number stability achieved with mono and mono+ relative to SE.

  14. Adapting line integral convolution for fabricating artistic virtual environment

    NASA Astrophysics Data System (ADS)

    Lee, Jiunn-Shyan; Wang, Chung-Ming

    2003-04-01

    Vector fields occur not only in scientific applications but also in treasured art such as sculptures and paintings. Artists depict the natural environment stressing valued directional features in addition to color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional images. In this paper we present several techniques that explore LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several extensions into the work, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions. In addition, we adopt a statistical technique that controls the integration length according to image variance in order to preserve details. Furthermore, we propose a method for generating a series of mip-maps, which reveals constant strokes under multi-resolution viewing and achieves frame coherence in an interactive walkthrough system. The experimental results show that the approach emulates the intended style convincingly and computes efficiently; as a consequence, the proposed technique can support a wide range of non-photorealistic rendering (NPR) applications such as interactive virtual environments with artistic perception.
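
    For reference, a bare-bones version of the underlying LIC operation, averaging a noise texture along short streamlines of a 2D vector field with simple Euler steps, is sketched below; the stroke length, nearest-neighbor sampling and all names are simplified assumptions rather than the authors' implementation.

    import numpy as np

    def lic(vx, vy, noise, length=15):
        """Basic line integral convolution: average a noise texture along short
        streamlines of the vector field (vx, vy), producing a directional image."""
        h, w = noise.shape
        out = np.zeros((h, w), dtype=float)
        mag = np.hypot(vx, vy) + 1e-8
        ux, uy = vx / mag, vy / mag                    # normalized field
        for r in range(h):
            for c in range(w):
                acc, n = 0.0, 0
                for sign in (1.0, -1.0):               # trace forward and backward
                    x, y = float(c), float(r)
                    for _ in range(length):
                        i, j = int(round(y)), int(round(x))
                        if not (0 <= i < h and 0 <= j < w):
                            break
                        acc += noise[i, j]
                        n += 1
                        x += sign * ux[i, j]           # Euler step along the streamline
                        y += sign * uy[i, j]
                out[r, c] = acc / max(n, 1)
        return out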

  15. Cytopathology whole slide images and virtual microscopy adaptive tutorials: A software pilot

    PubMed Central

    Van Es, Simone L.; Pryor, Wendy M.; Belinson, Zack; Salisbury, Elizabeth L.; Velan, Gary M.

    2015-01-01

    Background: The constant growth in the body of knowledge in medicine requires pathologists and pathology trainees to engage in continuing education. Providing them with equitable access to efficient and effective forms of education in pathology (especially in remote and rural settings) is important, but challenging. Methods: We developed three pilot cytopathology virtual microscopy adaptive tutorials (VMATs) to explore a novel adaptive E-learning platform (AeLP) which can incorporate whole slide images for pathology education. We collected user feedback to further develop this educational material and to subsequently deploy randomized trials in both pathology specialist trainee and medical student cohorts. Cytopathology whole slide images were first acquired, and then novel VMATs teaching cytopathology were created using the AeLP, an intelligent tutoring system developed by Smart Sparrow. The pilot was run for Australian pathologists and trainees through the education section of the Royal College of Pathologists of Australasia website over a period of 9 months. Feedback on the usability, impact on learning and any technical issues was obtained using 5-point Likert scale items and open-ended feedback in online questionnaires. Results: A total of 181 pathologists and pathology trainees anonymously attempted the three adaptive tutorials, a smaller proportion of whom went on to provide feedback at the end of each tutorial. VMATs were perceived as effective and efficient E-learning tools for pathology education. User feedback was positive. There were no significant technical issues. Conclusion: During this pilot, the user feedback on the educational content and interface and the lack of technical issues were helpful. Large-scale trials of similar online cytopathology adaptive tutorials were planned for the future. PMID:26605119

  16. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    NASA Astrophysics Data System (ADS)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A 3D virtual campus model can not only express real-world objects in a natural, realistic and vivid way, but can also expand the campus in time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land areas, and other objects. Dynamic interactive functions are then realized by programming the object models from 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for real-time processing during the scene design process. The approach preserves texture map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  17. Virtual Team Governance: Addressing the Governance Mechanisms and Virtual Team Performance

    NASA Astrophysics Data System (ADS)

    Zhan, Yihong; Bai, Yu; Liu, Ziheng

    As technology has improved and collaborative software has been developed, virtual teams with geographically dispersed members spread across diverse physical locations have become increasingly prominent. Supported by advancing communication technologies, virtual teams are largely able to transcend time and space. Virtual teams have changed the corporate landscape; they are more complex and dynamic than traditional teams since their members are spread across diverse geographical locations and play different roles within the team. Therefore, how to realize good governance of a virtual team and arrive at good virtual team performance is becoming critical and challenging. Good virtual team governance is essential for a high-performance virtual team. This paper explores the performance and the governance mechanisms of virtual teams and establishes a model to explain the relationship between performance and governance mechanisms. Focusing on managing virtual teams, it aims to identify strategies that help business organizations improve the performance of their virtual teams and meet the objectives of good virtual team management.

  18. Intra-prosthetic breast MR virtual navigation: a preliminary study for a new evaluation of silicone breast implants.

    PubMed

    Moschetta, Marco; Telegrafo, Michele; Capuano, Giulia; Rella, Leonarda; Scardapane, Arnaldo; Angelelli, Giuseppe; Stabile Ianora, Amato Antonio

    2013-10-01

    To assess the contribution of intra-prosthetic MRI virtual navigation for evaluating breast implants and detecting implant ruptures. Forty-five breast implants were evaluated by MR examination. Only patients with a clinical indication were assessed. A 1.5-T device equipped with a 4-channel breast coil was used, acquiring axial TSE-T2, axial silicone-only, axial silicone-suppression and sagittal STIR images. The obtained DICOM files were also analyzed using virtual navigation software. Two blinded radiologists evaluated all MR and virtual images. Eight patients, for a total of 13 implants, underwent surgical replacement. Sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for both imaging strategies. Intra-capsular rupture was diagnosed in 13 out of 45 (29%) implants using MRI. Based on virtual navigation, 9 (20%) cases of intra-capsular rupture were diagnosed. Sensitivity, specificity, accuracy, PPV and NPV values of 100%, 86%, 89%, 62% and 100%, respectively, were found for MRI. Virtual navigation increased these values to 100%, 97%, 98%, 89% and 100%. Intra-prosthetic breast MR virtual navigation can represent an additional promising tool for the evaluation of breast implants, as it is able to reduce false positives and provide more accurate detection of intra-capsular implant rupture signs. Copyright © 2013 Elsevier Inc. All rights reserved.

  19. Working Group Reports and Presentations: Virtual Worlds and Virtual Exploration

    NASA Technical Reports Server (NTRS)

    LAmoreaux, Claudia

    2006-01-01

    Scientists and engineers are continually developing innovative methods to capitalize on recent developments in computational power. Virtual worlds and virtual exploration present a new toolset for project design, implementation, and resolution. Replication of the physical world in the virtual domain provides stimulating displays to augment current data analysis techniques and to encourage public participation. In addition, the virtual domain provides stakeholders with a low cost, low risk design and test environment. The following document defines a virtual world and virtual exploration, categorizes the chief motivations for virtual exploration, elaborates upon specific objectives, identifies roadblocks and enablers for realizing the benefits, and highlights the more immediate areas of implementation (i.e. the action items). While the document attempts a comprehensive evaluation of virtual worlds and virtual exploration, the innovative nature of the opportunities presented precludes completeness. The authors strongly encourage readers to derive additional means of utilizing the virtual exploration toolset.

  20. [Virtual endoscopy with a volumetric reconstruction technic: the technical aspects].

    PubMed

    Pavone, P; Laghi, A; Panebianco, V; Catalano, C; Giura, R; Passariello, R

    1998-06-01

    We analyze the peculiar technical features of virtual endoscopy obtained with volume rendering. Our preliminary experience is based on virtual endoscopy images from volumetric data acquired with spiral CT (Siemens, Somatom Plus 4) using acquisition protocols standardized for different anatomic areas. Images are reformatted at the CT console to obtain 1 mm thick contiguous slices, and transferred in DICOM format to an O2 workstation (Silicon Graphics, Mountain View, CA, USA) with a processor speed of 180 MHz, 256 Mbyte RAM memory and a 4.1 Gbyte hard disk. The software is Vitrea 1.0 (Vital Images, Fairfield, Iowa), running on a Unix platform. Image output is obtained through the Ethernet network to a Macintosh computer and a thermal printer (Kodak 8600 XLS). Diagnostic-quality images were obtained in all cases. Fly-through in the airways allowed correct evaluation of the main bronchi and of the origin of segmentary bronchi. In the vascular district, both carotid strictures and abdominal aortic aneurysms were depicted with the same accuracy as with conventional reconstruction techniques. In the colon studies, polypoid lesions were correctly depicted in all cases, with good correlation with endoscopic and double-contrast barium enema findings. In a case of lipoma of the ascending colon, virtual endoscopy allowed the colon to be studied both cranially and caudally to the lesion. The simultaneous evaluation of axial CT images permitted the lesion to be correctly characterized on the basis of its density values. The peculiar feature of volume rendering is the use of all the information inside the imaging volume to reconstruct three-dimensional images; no threshold values are used and no data are lost, as opposed to conventional image reconstruction techniques. The different anatomic structures are visualized by modifying their reciprocal opacities, showing structures of no interest as translucent. The modulation of different opacities is obtained modifying the shape of the

  1. [Virtual reality in video-assisted thoracoscopic lung segmentectomy].

    PubMed

    Onuki, Takamasa

    2009-07-01

    The branching patterns of pulmonary arteries and veins vary greatly in the pulmonary hilar region and are very complicated. We attempted to reconstruct anatomically correct images using a freeware program. After uploading the images to a personal computer, bronchi, pulmonary arteries and veins were traced by moving up and down through the images, and the location and thickness of the bronchi and pulmonary vasculature were indicated as different-sized cylinders. Next, based on the resulting numerical data, a 3D image was reconstructed using the Metasequoia shareware. The reconstructed images can be manipulated by virtual surgical procedures such as reshaping, cutting and moving. This system would be very helpful in complicated video-assisted thoracic surgery such as lung segmentectomy.

  2. A virtual surgical environment for rehearsal of tympanomastoidectomy.

    PubMed

    Chan, Sonny; Li, Peter; Lee, Dong Hoon; Salisbury, J Kenneth; Blevins, Nikolas H

    2011-01-01

    This article presents a virtual surgical environment whose purpose is to assist the surgeon in preparation for individual cases. The system constructs interactive anatomical models from patient-specific, multi-modal preoperative image data, and incorporates new methods for visually and haptically rendering the volumetric data. Evaluation of the system's ability to replicate temporal bone dissections for tympanomastoidectomy, using intraoperative video of the same patients as guides, showed strong correlations between virtual and intraoperative anatomy. The result is a portable and cost-effective tool that may prove highly beneficial for the purposes of surgical planning and rehearsal.

  3. Virtual wall-based haptic-guided teleoperated surgical robotic system for single-port brain tumor removal surgery.

    PubMed

    Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young

    2017-01-01

    This article presents a haptic-guided teleoperation for a tumor removal surgical robotic system, the so-called SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even tissue seated deep inside the brain, and to remove the tissue with full maneuverability. For a safe and accurate operation that removes only tumor tissue completely while minimizing damage to normal tissue, a virtual wall-based haptic guidance together with medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and haptic-guided teleoperation using a virtual wall are described in the context of SIROMAN. A series of experiments using a simplified virtual wall were performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory is improved by 57% compared to teleoperation without it. The tissue removal performance is also improved by 21% (p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
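
    A minimal sketch of penalty-based virtual wall rendering, the kind of haptic guidance described above, is given below: when the tool tip crosses a planar wall patch extracted from the preoperative images, a spring force pushes it back. The plane representation, names and stiffness value are illustrative assumptions, not SIROMAN's actual controller.

    import numpy as np

    def virtual_wall_force(tip_pos, wall_point, wall_normal, stiffness=500.0):
        """Penalty-based haptic guidance: if the tool tip penetrates the virtual wall
        (a plane through wall_point whose normal points toward the allowed side),
        return a spring force pushing it back to the surface; otherwise zero force."""
        n = np.asarray(wall_normal, dtype=float)
        n = n / np.linalg.norm(n)
        # Positive penetration means the tip has crossed to the forbidden side.
        penetration = np.dot(np.asarray(wall_point, float) - np.asarray(tip_pos, float), n)
        if penetration <= 0.0:
            return np.zeros(3)
        return stiffness * penetration * n   # Hooke-law restoring force (assumed gain)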

  4. Tangible imaging systems

    NASA Astrophysics Data System (ADS)

    Ferwerda, James A.

    2013-03-01

    We are developing tangible imaging systems [1-4] that enable natural interaction with virtual objects. Tangible imaging systems are based on consumer mobile devices that incorporate electronic displays, graphics hardware, accelerometers, gyroscopes, and digital cameras, in laptop or tablet-shaped form factors. Custom software allows the orientation of a device and the position of the observer to be tracked in real time. Using this information, realistic images of three-dimensional objects with complex textures and material properties are rendered to the screen, and tilting or moving in front of the device produces realistic changes in surface lighting and material appearance. Tangible imaging systems thus allow virtual objects to be observed and manipulated as naturally as real ones, with the added benefit that object properties can be modified under user control. In this paper we describe four tangible imaging systems we have developed: the tangiBook - our first implementation on a laptop computer; tangiView - a more refined implementation on a tablet device; tangiPaint - a tangible digital painting application; and phantoView - an application that takes the tangible imaging concept into stereoscopic 3D.

  5. Introducing the Virtual Astronomy Multimedia Project

    NASA Astrophysics Data System (ADS)

    Wyatt, Ryan; Christensen, L. L.; Gauthier, A.; Hurt, R.

    2008-05-01

    The goal of the Virtual Astronomy Multimedia Project (VAMP) is to promote and vastly multiply the use of astronomy multimedia resources—from images and illustrations to animations, movies, and podcasts—and enable innovative future exploitation of a wide variety of outreach media by systematically linking resource archives worldwide. High-quality astronomical images, accompanied by rich caption and background information, abound on the web and yet prove notoriously difficult to locate efficiently using existing search tools. The Virtual Astronomy Multimedia Project offers a solution via the Astronomy Visualization Metadata (AVM) standard. Due to roll out in time for IYA2009, VAMP manages the design, implementation, and dissemination of the AVM standard for the education and public outreach astronomical imagery that observatories publish. VAMP will support implementations in World Wide Telescope, Google Sky, Portal to the Universe, and 365 Days of Astronomy, as well as Uniview and DigitalSky software designed specifically for planetariums. The VAMP workshop will introduce the AVM standard and describe its features, highlighting sample image tagging processes using diverse tools—the critical first step in getting media into VAMP. Participants with laptops will have an opportunity to experiment first hand, and workshop organizers will update a web page with system requirements and software options in advance of the conference (see http://virtualastronomy.org/ASP2008/ for links to resources). The workshop will also engage participants in a discussion and review of the innovative AVM image hierarchy taxonomy, which will soon be extended to other types of media.

  6. SU-E-J-104: Evaluation of Accuracy for Various Deformable Image Registrations with Virtual Deformation QA Software

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, S; Kim, K; Kim, M

    Purpose: The accuracy of deformable image registration (DIR) has a significant dosimetric impact in radiation treatment planning. We evaluated the accuracy of various DIR algorithms using virtual deformation QA software (ImSimQA, Oncology System Limited, UK). Methods: The reference image (Iref) and volume (Vref) were first generated with the ImSimQA software. We deformed Iref by axial movement of a deformation point and changed Vref according to the type of deformation: deformation 1 increases Vref (relaxation) and deformation 2 decreases Vref (contraction). The deformed image (Idef) and volume (Vdef) were then inversely deformed back toward Iref and Vref using DIR algorithms, yielding the resulting image (Iid) and volume (Vid). The DIR algorithms were the optical flow (HS, IOF) and demons (MD, FD) algorithms of DIRART. Image similarity between Iref and Iid was evaluated with Normalized Mutual Information (NMI) and Normalized Cross Correlation (NCC). The Dice Similarity Coefficient (DSC) was used to evaluate volume similarity. Results: When the moving distance of the deformation point was 4 mm, the NMI value was above 1.81 and the NCC value was above 0.99 for all DIR algorithms. As the degree of deformation increased, image similarity decreased. When Vref increased or decreased by about 12%, the difference between Vref and Vid was within ±5% regardless of the type of deformation. The DSC value was above 0.95 in deformation 1 for all algorithms except MD; in deformation 2 it was above 0.95 for all DIR algorithms. Conclusion: Idef and Vdef were not completely restored to Iref and Vref, and the accuracy of the DIR algorithms differed depending on the degree of deformation. Hence, the performance of DIR algorithms should be verified for the desired applications.
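
    As a reference for two of the similarity measures named above, the sketch below gives the standard definitions of the Dice Similarity Coefficient and zero-mean normalized cross correlation in plain NumPy; the volume-loading calls in the usage comment are hypothetical placeholders, not part of the abstract.

    ```python
    import numpy as np

    def dice_similarity(mask_a, mask_b):
        """Dice Similarity Coefficient (DSC) between two binary volumes."""
        a, b = mask_a.astype(bool), mask_b.astype(bool)
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def normalized_cross_correlation(img_a, img_b):
        """Zero-mean normalized cross correlation (NCC) between two images/volumes."""
        a = img_a.astype(float) - img_a.mean()
        b = img_b.astype(float) - img_b.mean()
        return float((a * b).sum() / (np.sqrt((a ** 2).sum() * (b ** 2).sum()) + 1e-12))

    # Hypothetical usage: compare the reference volume with the inversely deformed one.
    # v_ref, v_id = load_volume("reference"), load_volume("inverse_deformed")
    # print(dice_similarity(v_ref > 0, v_id > 0), normalized_cross_correlation(v_ref, v_id))
    ```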

  7. Fast-response LCDs for virtual reality applications

    NASA Astrophysics Data System (ADS)

    Chen, Haiwei; Peng, Fenglin; Gou, Fangwang; Wand, Michael; Wu, Shin-Tson

    2017-02-01

    We demonstrate a fast-response liquid crystal display (LCD) with an ultra-low-viscosity nematic LC mixture. The measured average motion picture response time is only 6.88 ms, which is comparable to 6.66 ms for an OLED at a 120 Hz frame rate. If we slightly increase the TFT frame rate and/or reduce the backlight duty ratio, image blur can be further suppressed to an unnoticeable level. Potential applications of such an image-blur-free LCD for virtual reality, gaming monitors, and TVs are foreseeable.

  8. The Adaptive Effects Of Virtual Interfaces: Vestibulo-Ocular Reflex and Simulator Sickness.

    DTIC Science & Technology

    1998-08-07

    rearrangement: a pattern of stimulation differing from that existing as a result of normal interactions with the real world. Stimulus rearrangements can... is immersive and interactive. virtual interface: a system of transducers, signal processors, computer hardware and software that create an... interactive medium through which: 1) information is transmitted to the senses in the form of two- and three-dimensional virtual images and 2) psychomotor

  9. A functional magnetic resonance imaging study of visuomotor processing in a virtual reality-based paradigm: Rehabilitation Gaming System.

    PubMed

    Prochnow, D; Bermúdez i Badia, S; Schmidt, J; Duff, A; Brunheim, S; Kleiser, R; Seitz, R J; Verschure, P F M J

    2013-05-01

    The Rehabilitation Gaming System (RGS) has been designed as a flexible, virtual-reality (VR)-based device for rehabilitation of neurological patients. Recently, training of visuomotor processing with the RGS was shown to effectively improve arm function in acute and chronic stroke patients. It is assumed that the VR-based training protocol related to RGS creates conditions that aid recovery by virtue of the human mirror neuron system. Here, we provide evidence for this assumption by identifying the brain areas involved in controlling the catching of approaching colored balls in the virtual environment of the RGS. We used functional magnetic resonance imaging of 18 right-handed healthy subjects (24 ± 3 years) in both active and imagination conditions. We observed that the imagery of target catching was related to activation of frontal, parietal, temporal, cingulate and cerebellar regions. We interpret these activations in relation to object processing, attention, mirror mechanisms, and motor intention. Active catching followed an anticipatory mode, and resulted in significantly less activity in the motor control areas. Our results provide preliminary support for the hypothesis underlying RGS that this novel neurorehabilitation approach engages human mirror mechanisms that can be employed for visuomotor training. © 2013 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Personal Virtual Libraries

    ERIC Educational Resources Information Center

    Pappas, Marjorie L.

    2004-01-01

    Virtual libraries are becoming more and more common. Most states have a virtual library. A growing number of public libraries have a virtual presence on the Web. Virtual libraries are a growing addition to school library media collections. The next logical step would be personal virtual libraries. A personal virtual library (PVL) is a collection…

  11. A Dialogic Vaccine to Bridge Opposing Cultural Viewpoints Based on Bakhtin's Views on Dialogue and Estrangement.

    PubMed

    Tajima, Atsushi

    2017-09-01

    Today, we face global conflicts between opposing ideologies that may be described in terms of cultural viewpoints and value judgments. It is difficult for individuals to determine whether ideologies are right or wrong because each ideology has its own worldview and sense of justice. Psychologists have an urgent mission to defuse the likelihood of fatal clashes between opposing cultural perspectives (ideologies), and to propose paradigms for peaceful coexistence. This paper examines the series of papers (Oh, Integrative Psychological and Behavioral Science, 51, 2017; Sakakibara, Integrative Psychological and Behavioral Science, 51, 2017; Watanabe, Integrative Psychological & Behavioral Science, 51, 2017) contributed to this volume that investigate the effects of high school and university educational programs promoting productive dialogue aimed at bridging, or transcending, conflicting perspectives among Japanese, Chinese, and Korean students. Here, I have evaluated the capacity of these educational programs to coordinate opposing cultural ideologies using the framework of Bakhtin's theories of dialogue and estrangement. Bakhtin viewed discourse with others who had opposing viewpoints as an opportunity to learn to overcome the one-sidedness of ideology, which ensues from automatic value judgments made by each speaker according to their culture, and he affirmed the value of flexible attitudes toward opposing viewpoints. In this paper, I review Bakhtin's theories relating to communication in a context of different cultural viewpoints, assess the general values of the educational practices mentioned above, and propose new concepts for applying these methods to other educational fields in the future using Bakhtin's theoretical viewpoints.

  12. Three-Dimensional Object Recognition and Registration for Robotic Grasping Systems Using a Modified Viewpoint Feature Histogram

    PubMed Central

    Chen, Chin-Sheng; Chen, Po-Chun; Hsu, Chih-Ming

    2016-01-01

    This paper presents a novel 3D feature descriptor for recognizing objects and identifying their poses with six degrees of freedom in mobile manipulation and grasping applications. Firstly, a Microsoft Kinect sensor is used to capture 3D point cloud data. A viewpoint feature histogram (VFH) descriptor for the 3D point cloud data then encodes the geometry and viewpoint, so an object can be simultaneously recognized and registered in a stable pose and the information is stored in a database. The VFH is robust to a large degree of surface noise and missing depth information so it is reliable for stereo data. However, pose estimation fails when the object is placed symmetrically to the viewpoint. To overcome this problem, this study proposes a modified viewpoint feature histogram (MVFH) descriptor that consists of two parts: a surface shape component that comprises an extended fast point feature histogram and an extended viewpoint direction component. The MVFH descriptor characterizes an object’s pose and enhances the system’s ability to identify objects with mirrored poses. Finally, the refined pose is estimated using iterative closest point once the object has been recognized, its pose roughly estimated by the MVFH descriptor, and registered in the database. The estimation results demonstrate that the MVFH feature descriptor allows more accurate pose estimation. The experiments also show that the proposed method can be applied in vision-guided robotic grasping systems. PMID:27886080
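
    For illustration only, the following NumPy sketch computes the viewpoint-direction component common to VFH-style descriptors (a histogram of angles between each surface normal and the direction from the point toward the sensor viewpoint); it is not the authors' full VFH/MVFH pipeline, and the point cloud, normals, and bin count are assumptions.

    ```python
    import numpy as np

    def viewpoint_direction_histogram(points, normals, viewpoint, bins=45):
        """Normalized histogram of angles between each surface normal and the
        direction from the point toward the sensor viewpoint."""
        to_view = (viewpoint - points).astype(float)
        to_view /= np.linalg.norm(to_view, axis=1, keepdims=True)
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
        cos_angles = np.clip(np.sum(to_view * n, axis=1), -1.0, 1.0)
        hist, _ = np.histogram(np.arccos(cos_angles), bins=bins, range=(0.0, np.pi))
        return hist / hist.sum()

    # Hypothetical usage with a point cloud and estimated normals:
    # descriptor_part = viewpoint_direction_histogram(cloud_xyz, cloud_normals, np.zeros(3))
    ```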

  13. Manifold decoding for neural representations of face viewpoint and gaze direction using magnetoencephalographic data.

    PubMed

    Kuo, Po-Chih; Chen, Yong-Sheng; Chen, Li-Fen

    2018-05-01

    The main challenge in decoding neural representations lies in linking neural activity to representational content or abstract concepts. The transformation from a neural-based to a low-dimensional representation may hold the key to encoding perceptual processes in the human brain. In this study, we developed a novel model by which to represent two changeable features of faces: face viewpoint and gaze direction. These features are embedded in spatiotemporal brain activity derived from magnetoencephalographic data. Our decoding results demonstrate that face viewpoint and gaze direction can be represented by manifold structures constructed from brain responses in the bilateral occipital face area and right superior temporal sulcus, respectively. Our results also show that the superposition of brain activity in the manifold space reveals the viewpoints of faces as well as directions of gazes as perceived by the subject. The proposed manifold representation model provides a novel opportunity to gain further insight into the processing of information in the human brain. © 2018 Wiley Periodicals, Inc.
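
    The abstract does not name the embedding algorithm, so the sketch below only illustrates the general idea of manifold-based decoding, using scikit-learn's Isomap as a stand-in; the data shapes, labels, and train/test split are placeholder assumptions.

    ```python
    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.neighbors import KNeighborsClassifier

    # Placeholder data: trials x (sensors * time points) MEG features and viewpoint labels.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 500))
    y = rng.integers(0, 5, size=200)

    # Embed the high-dimensional brain responses into a low-dimensional manifold space ...
    embedding = Isomap(n_neighbors=10, n_components=3).fit_transform(X)

    # ... and decode face viewpoint from position on the manifold.
    decoder = KNeighborsClassifier(n_neighbors=5).fit(embedding[:150], y[:150])
    print("held-out accuracy:", decoder.score(embedding[150:], y[150:]))
    ```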

  14. Virtual reality for stroke rehabilitation.

    PubMed

    Laver, Kate E; George, Stacey; Thomas, Susie; Deutsch, Judith E; Crotty, Maria

    2015-02-12

    Virtual reality and interactive video gaming have emerged as recent treatment approaches in stroke rehabilitation. In particular, commercial gaming consoles have been rapidly adopted in clinical settings. This is an update of a Cochrane Review published in 2011. To determine the efficacy of virtual reality compared with an alternative intervention or no intervention on upper limb function and activity. To determine the efficacy of virtual reality compared with an alternative intervention or no intervention on: gait and balance activity, global motor function, cognitive function, activity limitation, participation restriction and quality of life, voxels or regions of interest identified via imaging, and adverse events. Additionally, we aimed to comment on the feasibility of virtual reality for use with stroke patients by reporting on patient eligibility criteria and recruitment. We searched the Cochrane Stroke Group Trials Register (October 2013), the Cochrane Central Register of Controlled Trials (The Cochrane Library 2013, Issue 11), MEDLINE (1950 to November 2013), EMBASE (1980 to November 2013) and seven additional databases. We also searched trials registries and reference lists. Randomised and quasi-randomised trials of virtual reality ("an advanced form of human-computer interface that allows the user to 'interact' with and become 'immersed' in a computer-generated environment in a naturalistic fashion") in adults after stroke. The primary outcome of interest was upper limb function and activity. Secondary outcomes included gait and balance function and activity, and global motor function. Two review authors independently selected trials based on pre-defined inclusion criteria, extracted data and assessed risk of bias. A third review author moderated disagreements when required. The authors contacted investigators to obtain missing information. We included 37 trials that involved 1019 participants. Study sample sizes were generally small and interventions

  15. Hybrid rendering of the chest and virtual bronchoscopy [corrected].

    PubMed

    Seemann, M D; Seemann, O; Luboldt, W; Gebicke, K; Prime, G; Claussen, C D

    2000-10-30

    Thin-section spiral computed tomography was used to acquire the volume data sets of the thorax. The tracheobronchial system and pathological changes of the chest were visualized using a color-coded surface rendering method. The structures of interest were then superimposed on a volume rendering of the other thoracic structures, thus producing a hybrid rendering. The hybrid rendering technique exploits the advantages of both rendering methods and enables virtual bronchoscopic examinations using different representation models. Virtual bronchoscopic examination with a transparent color-coded shaded-surface model enables the simultaneous visualization of both the airways and the adjacent structures behind the tracheobronchial wall and therefore offers a practical alternative to fiberoptic bronchoscopy. Hybrid rendering and virtual endoscopy obviate the need for time-consuming detailed analysis and presentation of axial source images.

  16. Matching rendered and real world images by digital image processing

    NASA Astrophysics Data System (ADS)

    Mitjà, Carles; Bover, Toni; Bigas, Miquel; Escofet, Jaume

    2010-05-01

    Recent advances in computer-generated images (CGI) have been used in commercial and industrial photography, providing a broad scope in product advertising. Mixing real-world images with those rendered by virtual-space software reveals a more or less visible mismatch in image quality between the two. Rendered images are produced by software whose quality is limited only by the output resolution. Real-world images are taken with cameras subject to image degradation factors such as residual lens aberrations, diffraction, sensor low-pass anti-aliasing filters, color pattern demosaicing, etc. The effect of all these image quality degradation factors can be characterized by the system Point Spread Function (PSF). Because the image is the convolution of the object by the system PSF, its characterization shows the amount of image degradation added to any taken picture. This work explores the use of image processing to degrade the rendered images following the parameters indicated by the real system PSF, attempting to match the virtual and real-world image qualities. The system MTF is determined by the slanted-edge method both in laboratory conditions and in the real picture environment in order to compare the influence of the working conditions on the device performance; an approximation to the system PSF is derived from the two measurements. The rendered images are filtered through a Gaussian filter obtained from the taking system PSF. Results with and without filtering are shown and compared by measuring the contrast achieved in different final image regions.
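
    As a minimal sketch of the matching step described above, the snippet below degrades a rendered image with a Gaussian approximation of the capturing system's PSF using SciPy; the sigma value and the RGB layout are assumptions, and in the paper the filter would be derived from the slanted-edge MTF measurement.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def match_rendered_to_camera(rendered_rgb, sigma_px):
        """Blur an ideal rendered image with a Gaussian approximation of the
        camera system PSF so its quality matches the real-world capture."""
        # Blur the spatial axes only; leave the color channel axis untouched.
        return gaussian_filter(rendered_rgb.astype(float), sigma=(sigma_px, sigma_px, 0))

    # Hypothetical usage (sigma_px would come from the measured system MTF/PSF):
    # matched = match_rendered_to_camera(cgi_image, sigma_px=1.4)
    ```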

  17. Applications and challenges of digital pathology and whole slide imaging.

    PubMed

    Higgins, C

    2015-07-01

    Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.

  18. Modeling and Analysis Compute Environments, Utilizing Virtualization Technology in the Climate and Earth Systems Science domain

    NASA Astrophysics Data System (ADS)

    Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.

    2010-12-01

    Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archiveable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.

  19. A virtual simulator designed for collision prevention in proton therapy.

    PubMed

    Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho

    2015-10-01

    In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
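
    A voxel-overlap test of the kind described (two components collide if any voxel is occupied by both) can be sketched in a few lines of NumPy; the occupancy grids and component names here are hypothetical, and the real software also handles the gantry/couch transforms, visualization, and reporting.

    ```python
    import numpy as np

    def voxels_collide(occupancy_a, occupancy_b):
        """Return whether two boolean occupancy grids overlap and where."""
        overlap = np.logical_and(occupancy_a, occupancy_b)
        return bool(overlap.any()), np.argwhere(overlap)

    # Hypothetical usage with grids sampled on the treatment-room coordinate grid
    # after applying the current gantry, nozzle, and couch transforms:
    # collided, voxel_indices = voxels_collide(nozzle_grid, patient_grid)
    # if collided:
    #     print(f"collision in {len(voxel_indices)} voxels")
    ```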

  20. [What do virtual reality tools bring to child and adolescent psychiatry?]

    PubMed

    Bioulac, S; de Sevin, E; Sagaspe, P; Claret, A; Philip, P; Micoulaud-Franchi, J A; Bouvard, M P

    2018-06-01

    the opportunity to administer controlled tasks such as the typical neuropsychological tools, but in an environment much more like a standard classroom. The virtual reality classroom offers several advantages compared to classical tools, such as a more realistic and lifelike environment, and it also records various measures under standardized conditions. Most of the studies using a virtual classroom have found that children with Attention Deficit/Hyperactivity Disorder make significantly fewer correct hits and more commission errors compared with controls. The virtual classroom has proven to be a good clinical tool for the evaluation of attention in ADHD. For eating disorders, a cognitive behavioural therapy (CBT) program enhanced by a body image-specific component using virtual reality techniques was shown to be more efficient than cognitive behavioural therapy alone. The body image-specific component using virtual reality techniques boosts efficiency and accelerates the CBT change process for eating disorders. Virtual reality is a relatively new technology and its application in child and adolescent psychiatry is recent. However, this technique is still in its infancy and much work is needed, including controlled trials, before it can be introduced into routine clinical use. Virtual reality interventions should also investigate how newly acquired skills are transferred to the real world. At present, virtual reality can be considered a useful tool in the evaluation and treatment of child and adolescent disorders. Copyright © 2017 L'Encéphale, Paris. Published by Elsevier Masson SAS. All rights reserved.

  1. New developments in digital pathology: from telepathology to virtual pathology laboratory.

    PubMed

    Kayser, Klaus; Kayser, Gian; Radziszowski, Dominik; Oehmann, Alexander

    2004-01-01

    To analyse the present status and future development of computerized diagnostic pathology in terms of work-flow integrative telepathology and the virtual laboratory. Telepathology has left its childhood. The technical development of telepathology is mature, in contrast to that of virtual pathology. Two kinds of virtual pathology laboratories are emerging: a) those with distributed pathologists and distributed (>=1) laboratories associated to individual biopsy stations/surgical theatres, and b) distributed pathologists working in a centralized laboratory. Both are under technical development. Telepathology can be used for e-learning and e-training in pathology, as exemplarily demonstrated on Digital Lung Pathology (www.pathology-online.org). A virtual pathology institution (mode a) accepts a complete case with the patient's history, clinical findings, and (pre-selected) images for first diagnosis. The diagnostic responsibility is that of a conventional institution. The internet serves as the platform for information transfer, and an open server such as iPATH (http://telepath.patho.unibas.ch) for coordination and performance of the diagnostic procedure. The size of images has to be limited, and the usual different magnifications have to be used. A group of pathologists is "on duty", or selects one member for a predefined duty period. The diagnostic statement of the pathologist(s) on duty is retransmitted to the sender with full responsibility. First experiences of a virtual pathology institution group working with the iPATH server (Dr. L. Banach, Dr. G. Haroske, Dr. I. Hurwitz, Dr. K. Kayser, Dr. K.D. Kunze, Dr. M. Oberholzer) working with a small hospital of the Solomon Islands are promising. A centralized virtual pathology institution (mode b) depends upon the digitalisation of a complete slide, and the transfer of large-sized images to different pathologists working in one institution. The technical performance of complete slide digitalisation is still under

  2. Time-reversal in geophysics: the key for imaging a seismic source, generating a virtual source or imaging with no source (Invited)

    NASA Astrophysics Data System (ADS)

    Tourin, A.; Fink, M.

    2010-12-01

    The concept of time-reversal (TR) focusing was introduced in acoustics by Mathias Fink in the early nineties: a pulsed wave is sent from a source, propagates in an unknown medium and is captured at a transducer array termed a “Time Reversal Mirror (TRM)”. Then the waveforms received at each transducer are flipped in time and sent back, resulting in a wave converging at the original source regardless of the complexity of the propagation medium. TRMs have now been implemented in a variety of physical scenarios from GHz microwaves to MHz ultrasonics and to hundreds of Hz in ocean acoustics. Common to this broad range of scales is a remarkable robustness exemplified by observations that the more complex the medium (random or chaotic), the sharper the focus. A TRM acts as an antenna that uses complex environments to appear wider than it is, resulting, for a broadband pulse, in a refocusing quality that does not depend on the TRM aperture. We show that the time-reversal concept is also at the heart of very active research fields in seismology and applied geophysics: imaging of seismic sources, passive imaging based on noise correlations, seismic interferometry, and monitoring of CO2 storage using the virtual source method. All these methods can indeed be viewed in a unified framework as an application of the so-called time-reversal cavity approach. That approach uses the fact that a wave field can be predicted at any location inside a volume (without a source) from the knowledge of both the field and its normal derivative on the surrounding surface S, which for acoustic scalar waves is mathematically expressed in the Helmholtz-Kirchhoff (HK) integral. Thus, in the first step of an ideal TR process, the field coming from a point-like source as well as its normal derivative should be measured on S. In a second step, the initial source is removed and monopole and dipole sources re-emit the time reversal of the components measured in the first step. Instead of directly computing
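
    For reference, one common frequency-domain form of the Helmholtz-Kirchhoff integral mentioned above expresses the scalar field p inside the closed surface S in terms of the field and its normal derivative on S, with G the Green's function of the medium; the exact sign and normal conventions vary between texts.

    ```latex
    p(\mathbf{r},\omega) \;=\; \oint_{S}
    \left[
    G(\mathbf{r},\mathbf{r}_{s},\omega)\,\frac{\partial p(\mathbf{r}_{s},\omega)}{\partial n}
    \;-\;
    p(\mathbf{r}_{s},\omega)\,\frac{\partial G(\mathbf{r},\mathbf{r}_{s},\omega)}{\partial n}
    \right] \mathrm{d}S
    ```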

  3. Virtual reality: Avatars in human spaceflight training

    NASA Astrophysics Data System (ADS)

    Osterlund, Jeffrey; Lawrence, Brad

    2012-02-01

    familiarize and assess operational processes, allow the ability to train virtually, experiment with "what if" scenarios, and expedite immediate changes to validate the design implementation are all parameters of interest in human spaceflight. Training benefits encompass providing 3D animation for post-training assessment, placement of avatars within 3D replicated work environments in assembling or processing hardware, offering various viewpoints of processes viewed and assessed giving the evaluators the ability to assess task feasibility and identify potential support equipment needs; and provide human factors determinations, such as reach, visibility, and accessibility. Multiple object motion capture technology provides an effective tool to train and assess ergonomic risks, simulations for determination of negative interactions between technicians and their proposed workspaces, and evaluation of spaceflight systems prior to, and as part of, the design process to contain costs and reduce schedule delays.

  4. An intersubject variable regional anesthesia simulator with a virtual patient architecture.

    PubMed

    Ullrich, Sebastian; Grottke, Oliver; Fried, Eduard; Frommen, Thorsten; Liao, Wei; Rossaint, Rolf; Kuhlen, Torsten; Deserno, Thomas M

    2009-11-01

    The main purpose is to provide an intuitive VR-based training environment for regional anesthesia (RA). The research question is how to process subject-specific datasets, organize them in a meaningful way, and perform the simulation for peripheral regions. We propose a flexible virtual patient architecture and methods to process datasets. Image acquisition, image processing (especially segmentation), interactive nerve modeling and permutations (nerve instantiation) are described in detail. The simulation of electric impulse stimulation and the corresponding responses is essential for the training of peripheral RA and is solved by an approach based on the electric distance. We have created an XML-based virtual patient database with several subjects. Prototypes of the simulation are implemented and run on multimodal VR hardware (e.g., stereoscopic display and haptic device). A first user pilot study has confirmed our approach. The virtual patient architecture enables support for arbitrary scenarios on different subjects. This concept can also be used for other simulators. In future work, we plan to extend the simulation and conduct further evaluations in order to provide a tool for routine training for RA.

  5. Should Student Evaluation of Teaching Play a Significant Role in the Formal Assessment of Dental Faculty? Two Viewpoints: Viewpoint 1: Formal Faculty Assessment Should Include Student Evaluation of Teaching and Viewpoint 2: Student Evaluation of Teaching Should Not Be Part of Formal Faculty Assessment.

    PubMed

    Rowan, Susan; Newness, Elmer J; Tetradis, Sotirios; Prasad, Joanne L; Ko, Ching-Chang; Sanchez, Arlene

    2017-11-01

    Student evaluation of teaching (SET) is often used in the assessment of faculty members' job performance and promotion and tenure decisions, but debate over this use of student evaluations has centered on the validity, reliability, and application of the data in assessing teaching performance. Additionally, the fear of student criticism has the potential of influencing course content delivery and testing measures. This Point/Counterpoint article reviews the potential utility of and controversy surrounding the use of SETs in the formal assessment of dental school faculty. Viewpoint 1 supports the view that SETs are reliable and should be included in those formal assessments. Proponents of this opinion contend that SETs serve to measure a school's effectiveness in support of its core mission, are valid measures based on feedback from the recipients of educational delivery, and provide formative feedback to improve faculty accountability to the institution. Viewpoint 2 argues that SETs should not be used for promotion and tenure decisions, asserting that higher SET ratings do not correlate with improved student learning. The advocates of this viewpoint contend that faculty members may be influenced to focus on student satisfaction rather than pedagogy, resulting in grade inflation. They also argue that SETs are prone to gender and racial biases and that SET results are frequently misinterpreted by administrators. Low response rates and monotonic response patterns are other factors that compromise the reliability of SETs.

  6. Three-dimensional face model reproduction method using multiview images

    NASA Astrophysics Data System (ADS)

    Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio

    1991-11-01

    This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.

  7. Binary-space-partitioned images for resolving image-based visibility.

    PubMed

    Fu, Chi-Wing; Wong, Tien-Tsin; Tong, Wai-Shun; Tang, Chi-Keung; Hanson, Andrew J

    2004-01-01

    We propose a novel 2D representation for 3D visibility sorting, the Binary-Space-Partitioned Image (BSPI), to accelerate real-time image-based rendering. BSPI is an efficient 2D realization of a 3D BSP tree, which is commonly used in computer graphics for time-critical visibility sorting. Since the overall structure of a BSP tree is encoded in a BSPI, traversing a BSPI is comparable to traversing the corresponding BSP tree. BSPI performs visibility sorting efficiently and accurately in the 2D image space by warping the reference image triangle-by-triangle instead of pixel-by-pixel. Multiple BSPIs can be combined to solve "disocclusion," when an occluded portion of the scene becomes visible at a novel viewpoint. Our method is highly automatic, including a tensor voting preprocessing step that generates candidate image partition lines for BSPIs, filters the noisy input data by rejecting outliers, and interpolates missing information. Our system has been applied to a variety of real data, including stereo, motion, and range images.
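
    To make the visibility-sorting idea concrete, here is a minimal back-to-front (painter's algorithm) traversal of a BSP tree in Python; the node layout and plane representation are illustrative assumptions, and a BSPI encodes the equivalent ordering directly in 2D image space rather than as an explicit tree.

    ```python
    class BSPNode:
        """Node of a BSP tree: a partition plane plus front/back subtrees."""
        def __init__(self, plane, polygons, front=None, back=None):
            self.plane = plane          # (normal, offset) of the partition plane
            self.polygons = polygons    # primitives lying on the partition plane
            self.front = front
            self.back = back

    def signed_distance(plane, point):
        normal, offset = plane
        return sum(n * p for n, p in zip(normal, point)) - offset

    def back_to_front(node, viewpoint, out):
        """Emit primitives far-to-near with respect to the viewpoint."""
        if node is None:
            return
        if signed_distance(node.plane, viewpoint) >= 0:   # viewer on the front side
            back_to_front(node.back, viewpoint, out)
            out.extend(node.polygons)
            back_to_front(node.front, viewpoint, out)
        else:                                             # viewer on the back side
            back_to_front(node.front, viewpoint, out)
            out.extend(node.polygons)
            back_to_front(node.back, viewpoint, out)

    # Hypothetical usage:
    # order = []; back_to_front(root, camera_position, order)
    ```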

  8. Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.

    PubMed

    Benazzi, Stefano; Senck, Sascha

    2011-04-01

    In the present project, the virtual reconstruction of digital osteomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (ie, when the hemifaces are asymmetrical). In the present pilot study, we have verified that best fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  9. Virtual simulation as a learning method in interventional radiology.

    PubMed

    Avramov, Predrag; Avramov, Milena; Juković, Mirela; Kadić, Vuk; Till, Viktor

    2013-01-01

    Radiology is the fastest growing discipline of medicine thanks to the implementation of new technologies and the very rapid development of imaging diagnostic procedures in the last few decades. On the other hand, the development of imaging diagnostic procedures has put aside the traditional gaining of experience by working on real patients, and the need for other alternatives for learning interventional radiology procedures has emerged. A new method of virtual approach was added as an excellent alternative to the currently known methods of training on physical models and animals. Virtual reality represents a computer-generated reconstruction of an anatomical environment with tactile interactions, and it enables operators not only to learn from their own mistakes without compromising the patient's safety, but also to enhance their knowledge and experience. It is true that studies published so far on the validity of endovascular simulators have shown certain improvement of operators' technical skills and reduction in the time needed for the procedure, but on the other hand, it is still a question whether these skills are transferable to real patients in the angio room. With further improvement of technology, the shortcomings of the virtual approach to learning interventional procedures will become less significant, and this procedure is likely to become the only method of learning in the near future.

  10. Ultrasonic imaging of material flaws exploiting multipath information

    NASA Astrophysics Data System (ADS)

    Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.

    2011-05-01

    In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
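
    The fusion of physical and multipath "virtual" sensors ultimately feeds an image-formation step; the sketch below shows a basic monostatic delay-and-sum imager in NumPy as a stand-in for the authors' processing, with the array geometry, sampling rate, and wave speed all assumed.

    ```python
    import numpy as np

    def delay_and_sum(signals, sensor_positions, grid_points, c, fs):
        """Coherently sum echoes over all (physical or virtual) sensors for every
        candidate image point, assuming a monostatic pulse-echo geometry."""
        n_sensors, n_samples = signals.shape
        image = np.zeros(len(grid_points))
        for i, p in enumerate(grid_points):
            # Two-way travel time from each sensor to the image point and back.
            delays = 2.0 * np.linalg.norm(sensor_positions - p, axis=1) / c
            idx = np.round(delays * fs).astype(int)
            valid = idx < n_samples
            image[i] = np.abs(signals[np.arange(n_sensors)[valid], idx[valid]].sum())
        return image

    # Hypothetical usage: signals is (sensors x samples), positions and grid points
    # in meters, c the wave speed in the material, fs the sampling frequency.
    ```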

  11. Sensitivity-based virtual fields for the non-linear virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2017-09-01

    The virtual fields method is an approach to inversely identify material parameters using full-field deformation data. In this manuscript, a new set of automatically-defined virtual fields for non-linear constitutive models has been proposed. These new sensitivity-based virtual fields reduce the influence of noise on the parameter identification. The sensitivity-based virtual fields were applied to a numerical example involving small strain plasticity; however, the general formulation derived for these virtual fields is applicable to any non-linear constitutive model. To quantify the improvement offered by these new virtual fields, they were compared with stiffness-based and manually defined virtual fields. The proposed sensitivity-based virtual fields were consistently able to identify plastic model parameters and outperform the stiffness-based and manually defined virtual fields when the data was corrupted by noise.
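
    For context, the virtual fields method is built on the principle of virtual work; in its standard quasi-static form without body forces (as usually written in the VFM literature) it reads, for any kinematically admissible virtual displacement field u* with associated virtual strain field ε*:

    ```latex
    -\int_{V} \boldsymbol{\sigma} : \boldsymbol{\varepsilon}^{*}\,\mathrm{d}V
    \;+\;
    \int_{\partial V} \mathbf{T}\cdot\mathbf{u}^{*}\,\mathrm{d}S \;=\; 0
    ```

    Here σ is the stress computed from the measured strains through the candidate constitutive parameters and T are the applied tractions; different choices of the virtual field u* (manually defined, stiffness-based, or the sensitivity-based fields proposed in the paper) yield different identification equations.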

  12. Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback

    NASA Astrophysics Data System (ADS)

    Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve

    2011-03-01

    Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it was a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine as well as other tasks that require hand-eye coordination. One of the most unique characteristics of HUVR is that a user can place their hands inside of the virtual environment without occluding the 3D image. Built using open-source software and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.

  13. Self-management of chronic low back pain: Four viewpoints from patients and healthcare providers.

    PubMed

    Stenner, Paul; Cross, Vinnette; McCrum, Carol; McGowan, Janet; Defever, Emmanuel; Lloyd, Phil; Poole, Robert; Moore, Ann P

    2015-07-01

    A move towards self-management is central to health strategy around chronic low back pain, but its concept and meaning for those involved are poorly understood. In the reported study, four distinct and shared viewpoints on self-management were identified among people with pain and healthcare providers using Q methodology. Each construes self-management in a distinctive manner and articulates a different vision of change. Identification of similarities and differences among the viewpoints holds potential for enhancing communication between patients and healthcare providers and for better understanding the complexities of self-management in practice.

  14. Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology

    NASA Technical Reports Server (NTRS)

    Farley, Douglas L.

    2002-01-01

    The capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, are demonstrated to create a unique and valuable system that provides users with an efficient virtual capability to tour facilities while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instruction on operating both scripted and unscripted tours as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are the network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.

  15. Virtual Reality

    DTIC Science & Technology

    1993-04-01

    VIRTUAL REALITY. JAMES F. DAILEY, LIEUTENANT COLONEL...US. This paper reviews the exciting field of virtual reality. The author describes the basic concepts of virtual reality and finds that its numerous...potential benefits to society could revolutionize everyday life. The various components that make up a virtual reality system are described in detail

  16. Performance of today’s dual energy CT and future multi energy CT in virtual non-contrast imaging and in iodine quantification: A simulation study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian, E-mail: sebastian.faby@dkfz.de; Kuchenbecker, Stefan; Sawall, Stefan

    2015-07-15

    Purpose: To study the performance of different dual energy computed tomography (DECT) techniques, which are available today, and future multi energy CT (MECT) employing novel photon counting detectors in an image-based material decomposition task. Methods: The material decomposition performance of different energy-resolved CT acquisition techniques is assessed and compared in a simulation study of virtual non-contrast imaging and iodine quantification. The material-specific images are obtained via a statistically optimal image-based material decomposition. A projection-based maximum likelihood approach was used for comparison with the authors’ image-based method. The different dedicated dual energy CT techniques are simulated employing realistic noise models and x-ray spectra. The authors compare dual source DECT with fast kV switching DECT and the dual layer sandwich detector DECT approach. Subsequent scanning and a subtraction method are studied as well. Further, the authors benchmark future MECT with novel photon counting detectors in a dedicated DECT application against the performance of today’s DECT using a realistic model. Additionally, possible dual source concepts employing photon counting detectors are studied. Results: The DECT comparison study shows that dual source DECT has the best performance, followed by the fast kV switching technique and the sandwich detector approach. Comparing DECT with future MECT, the authors found noticeable material image quality improvements for an ideal photon counting detector; however, a realistic detector model with multiple energy bins predicts a performance on the level of dual source DECT at 100 kV/Sn 140 kV. Employing photon counting detectors in dual source concepts can improve the performance again above the level of a single realistic photon counting detector and also above the level of dual source DECT. Conclusions: Substantial differences in the performance of today’s DECT approaches were found
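
    The abstract does not spell out the decomposition model, but a common pixel-wise, image-based two-material formulation (used here purely for illustration) writes the low- and high-energy CT images f_L and f_H as linear combinations of two basis-material images a_1 and a_2 and inverts the resulting 2x2 system:

    ```latex
    \begin{pmatrix} f_{\mathrm{L}} \\ f_{\mathrm{H}} \end{pmatrix}
    =
    \underbrace{\begin{pmatrix}
    \mu_{1}(E_{\mathrm{L}}) & \mu_{2}(E_{\mathrm{L}}) \\
    \mu_{1}(E_{\mathrm{H}}) & \mu_{2}(E_{\mathrm{H}})
    \end{pmatrix}}_{M}
    \begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix}
    \quad\Longrightarrow\quad
    \begin{pmatrix} a_{1} \\ a_{2} \end{pmatrix}
    = M^{-1}
    \begin{pmatrix} f_{\mathrm{L}} \\ f_{\mathrm{H}} \end{pmatrix}
    ```

    A statistically optimal variant of this inversion would additionally account for the noise in f_L and f_H rather than inverting M directly.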

  17. Impact of Virtual and Augmented Reality Based on Intraoperative Magnetic Resonance Imaging and Functional Neuronavigation in Glioma Surgery Involving Eloquent Areas.

    PubMed

    Sun, Guo-Chen; Wang, Fei; Chen, Xiao-Lei; Yu, Xin-Guang; Ma, Xiao-Dong; Zhou, Ding-Biao; Zhu, Ru-Yuan; Xu, Bai-Nan

    2016-12-01

    The utility of virtual and augmented reality based on functional neuronavigation and intraoperative magnetic resonance imaging (MRI) for glioma surgery has not been previously investigated. The study population consisted of 79 glioma patients and 55 control subjects. Preoperatively, the lesion and related eloquent structures were visualized by diffusion tensor tractography and blood oxygen level-dependent functional MRI. Intraoperatively, microscope-based functional neuronavigation was used to integrate the reconstructed eloquent structure and the real head and brain, which enabled safe resection of the lesion. Intraoperative MRI was used to verify brain shift during the surgical process and provided quality control during surgery. The control group underwent surgery guided by anatomic neuronavigation. Virtual and augmented reality protocols based on functional neuronavigation and intraoperative MRI provided useful information for performing tailored and optimized surgery. Complete resection was achieved in 55 of 79 (69.6%) glioma patients and 20 of 55 (36.4%) control subjects, with average resection rates of 95.2% ± 8.5% and 84.9% ± 15.7%, respectively. Both the complete resection rate and average extent of resection differed significantly between the 2 groups (P < 0.01). Postoperatively, the rate of preservation of neural functions (motor, visual field, and language) was lower in controls than in glioma patients at 2 weeks and 3 months (P < 0.01). Combining virtual and augmented reality based on functional neuronavigation and intraoperative MRI can facilitate resection of gliomas involving eloquent areas. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. System and method for progressive band selection for hyperspectral images

    NASA Technical Reports Server (NTRS)

    Fisher, Kevin (Inventor)

    2013-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates the virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
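
    A toy version of the ranking and selection step, with per-band variance standing in for the application-specific metric and Q supplied manually rather than estimated from the virtual dimensionality, might look like this (all names are illustrative):

    ```python
    import numpy as np

    def progressive_band_selection(cube, q, metric=np.var):
        """Rank every band of a (rows x cols x bands) hyperspectral cube with a
        per-band statistic and keep the q highest-ranked bands."""
        scores = np.array([metric(cube[..., b]) for b in range(cube.shape[-1])])
        selected = np.sort(np.argsort(scores)[::-1][:q])   # best q bands, in order
        return selected, cube[..., selected]

    # Hypothetical usage:
    # bands, reduced_image = progressive_band_selection(hsi_cube, q=30)
    ```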

  19. Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.

    PubMed

    Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars

    2017-10-01

    3D reconstructions of motor vehicle collisions are used to identify the causes of these events and to identify potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem since they are often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured light scans of a mirror improve the accuracy of simulating the field of view of mirrors. We analyzed the performances of virtual mirror surfaces based on structured light scans using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discussed the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6m, and the method was not able to simulate an interior mirror. In conclusion, structured light scans of mirror surfaces can be used to simulate virtual mirror surfaces with regard to 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. [Application of virtual reality in surgical treatment of complex head and neck carcinoma].

    PubMed

    Zhou, Y Q; Li, C; Shui, C Y; Cai, Y C; Sun, R H; Zeng, D F; Wang, W; Li, Q L; Huang, L; Tu, J; Jiang, J

    2018-01-07

    Objective: To investigate the application of virtual reality technology in the preoperative evaluation of complex head and neck carcinoma and the value of virtual reality technology in the surgical treatment of head and neck carcinoma. Methods: The image data of eight patients with complex head and neck carcinoma treated from December 2016 to May 2017 were acquired. The data were imported into a virtual reality system to build a three-dimensional anatomical model of the carcinoma and to create the surgical scene. The surgical procedure was simulated by recognizing the relationship between the tumor and the surrounding important structures. Finally, all patients were treated with surgery, and two typical cases are reported. Results: With the help of virtual reality, surgeons could adequately assess the condition of the carcinoma and the safety of the operation, ensuring the safety of the procedures. Conclusions: Virtual reality can provide surgeons with a sensory experience of the virtual surgical scene and achieve human-computer cooperation and stereoscopic assessment, which ensures the safety of surgery. Virtual reality has a major impact on guiding the traditional surgical procedure for head and neck carcinoma.