Virtual pyramid wavefront sensor for phase unwrapping.
Akondi, Vyas; Vohnsen, Brian; Marcos, Susana
2016-10-10
Noise affects wavefront reconstruction from wrapped phase data. A novel method of phase unwrapping is proposed with the help of a virtual pyramid wavefront sensor. The method was tested on noisy wrapped phase images obtained experimentally with a digital phase-shifting point diffraction interferometer. The virtuality of the pyramid wavefront sensor allows easy tuning of the pyramid apex angle and modulation amplitude. It is shown that an optimal modulation amplitude obtained by monitoring the Strehl ratio helps in achieving better accuracy. Through simulation studies and iterative estimation, it is shown that the virtual pyramid wavefront sensor is robust to random noise.
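As a rough illustration of the quality-monitoring idea (this is not the authors' virtual pyramid sensor), the following Python sketch unwraps a noisy wrapped phase with a standard scikit-image unwrapper and scores the result with the extended Marechal approximation Strehl = exp(-sigma^2); all values are made up.

import numpy as np
from skimage.restoration import unwrap_phase

rng = np.random.default_rng(0)

# Synthetic "true" phase: a defocus-like quadratic term spanning a few waves.
y, x = np.mgrid[-1:1:256j, -1:1:256j]
true_phase = 6.0 * (x ** 2 + y ** 2)                      # radians

# Wrap it and add noise, mimicking noisy interferometric phase data.
noisy = true_phase + 0.3 * rng.standard_normal(true_phase.shape)
wrapped = np.angle(np.exp(1j * noisy))

# Unwrap with a generic algorithm (a stand-in for the virtual pyramid WFS step).
unwrapped = unwrap_phase(wrapped)

# Residual error after removing piston, and the Strehl estimate used as the metric.
residual = unwrapped - true_phase
residual -= residual.mean()
sigma2 = float(np.var(residual))
print(f"residual RMS = {np.sqrt(sigma2):.3f} rad, estimated Strehl = {np.exp(-sigma2):.3f}")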
Designing 3 Dimensional Virtual Reality Using Panoramic Image
NASA Astrophysics Data System (ADS)
Wan Abd Arif, Wan Norazlinawati; Wan Ahmad, Wan Fatimah; Nordin, Shahrina Md.; Abdullah, Azrai; Sivapalan, Subarna
The demand to improve the quality of presentation in the knowledge-sharing field is driven by rapidly growing technology. The need for technology-based learning and training led to the idea of developing an Oil and Gas Plant Virtual Environment (OGPVE). A panoramic virtual reality learning environment is essential to help educators overcome the limitations of traditional technical writing lessons. Virtual reality helps users understand better by simulating real-world and hard-to-reach environments with a high degree of realism and interactivity. Thus, in order to create courseware that achieves this objective, accurate images of the intended scenarios must be acquired. The panorama presents the OGPVE and helps users form ideas about what they have learnt. This paper discusses part of the development of panoramic virtual reality. The important phases in developing a successful panoramic image are image acquisition and image stitching, or mosaicing. The combination of wide field-of-view (FOV) and close-up images used in this panoramic development is also discussed.
Low loss jammed-array wideband sawtooth filter based on a finite reflection virtually imaged array
NASA Astrophysics Data System (ADS)
Tan, Zhongwei; Cao, Dandan; Ding, Zhichao
2018-03-01
An edge filter is a promising technology for fiber Bragg grating interrogation, with the advantages of fast response speed and suitability for dynamic measurement. To build a low-loss, wideband jammed-array wideband sawtooth (JAWS) filter, a finite reflection virtually imaged array (FRVIA) is proposed and demonstrated. The FRVIA differs from the virtually imaged phased array in that it has a front end of low reflectivity. This change leads to many differences in the device's performance in output optical intensity distribution, spectral resolution, output aperture, and tolerance to manufacturing errors. A low-loss, wideband JAWS filter based on an FRVIA can provide an edge filter for each channel.
Radiology of colorectal cancer.
Pijl, M E J; Chaoui, A S; Wahl, R L; van Oostayen, J A
2002-05-01
In the past 20 years, the radiology of colorectal cancer has evolved from the barium enema to advanced imaging modalities like phased array magnetic resonance imaging (MRI), virtual colonoscopy and positron emission tomography (PET). Nowadays, primary rectal cancers are preferably imaged with transrectal ultrasound or MRI, while barium enema is still the most often used technique for imaging of colonic cancers. Virtual colonoscopy is rapidly evolving and might considerably change the imaging of colorectal cancer in the near future. The use of virtual colonoscopy for screening purposes and imaging of the colon in occlusive cancer or incomplete colonoscopies is currently under evaluation. The main role of PET is in detecting tumour recurrences, both locally and distantly. Techniques to fuse cross-sectional anatomical (computed tomography (CT) and MRI) and functional (PET) images are being developed. Apart from diagnostic imaging, radiologists have added image-guided minimally invasive treatments of colorectal liver metastases to their arsenal. The radio-frequency ablation technique is now widely available, and can be used during laparotomy or percutaneously in selected cases.
Mangold, Stefanie; Thomas, Christoph; Fenchel, Michael; Vuust, Morten; Krauss, Bernhard; Ketelsen, Dominik; Tsiflikas, Ilias; Claussen, Claus D; Heuschmid, Martin
2012-07-01
To retrospectively determine which features of urinary calculi are associated with their detection after virtual elimination of contrast medium at dual-energy computed tomographic (CT) urography by using a novel tin filter. The institutional ethics committee approved this retrospective study, with waiver of informed consent. A total of 152 patients were examined with single-energy nonenhanced CT and dual-energy CT urography in the excretory phase (either 140 and 80 kV [n=44] or 140 and 100 kV [n=108], with tin filtration at 140 kV). The contrast medium in the renal pelvis and ureters was virtually removed from excretory phase images by using postprocessing software, resulting in virtual nonenhanced (VNE) images. The sensitivity regarding the detection of calculi on VNE images compared with true nonenhanced (TNE) images was determined, and interrater agreement was evaluated by using the Cohen κ test. By using logistic regression, the influences of image noise, attenuation, and stone size, as well as attenuation of the contrast medium, on the stone detection rate were assessed. Threshold values with maximal sensitivity and specificity were calculated by means of receiver operating characteristic analyses. Eighty-seven stones were detected on TNE images; 46 calculi were identified on VNE images (sensitivity, 52.9%). Interrater agreement revealed a κ value of 0.95 with TNE images and 0.91 with VNE data. Size (long-axis diameter, P=.005; short-axis diameter, P=.041) and attenuation (P=.0005) of the calculi and image noise (P=.0031) were significantly associated with the detection rate on VNE images. As threshold values, size larger than 2.9 mm, maximum attenuation of the calculi greater than 387 HU, and image noise less than 20 HU were found. After virtual elimination of contrast medium, large (>2.9 mm) and high-attenuation (>387 HU) calculi can be detected with good reliability; smaller and lower attenuation calculi might be erased from images, especially with increased image noise. © RSNA, 2012.
Fat ViP MRI: Virtual Phantom Magnetic Resonance Imaging of water-fat systems.
Salvati, Roberto; Hitti, Eric; Bellanger, Jean-Jacques; Saint-Jalmes, Hervé; Gambarota, Giulio
2016-06-01
Virtual Phantom Magnetic Resonance Imaging (ViP MRI) is a method to generate reference signals on MR images, using external radiofrequency (RF) signals. The aim of this study was to assess the feasibility of ViP MRI to generate complex-data images of phantoms mimicking water-fat systems. Various numerical phantoms with a given fat fraction, T2* and field map were designed. The k-space of numerical phantoms was converted into RF signals to generate virtual phantoms. MRI experiments were performed at 4.7T using a multi-gradient-echo sequence on virtual and physical phantoms. The data acquisition of virtual and physical phantoms was simultaneous. Decomposition of the water and fat signals was performed using a complex-based water-fat separation algorithm. Overall, a good agreement was observed between the fat fraction, T2* and phase map values of the virtual and numerical phantoms. In particular, fat fractions of 10.5±0.1 (vs 10% of the numerical phantom), 20.3±0.1 (vs 20%) and 30.4±0.1 (vs 30%) were obtained in virtual phantoms. The ViP MRI method allows for generating imaging phantoms that i) mimic water-fat systems and ii) can be analyzed with water-fat separation algorithms based on complex data. Copyright © 2016 Elsevier Inc. All rights reserved.
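The water-fat signal model can be illustrated with a simplified magnitude-based two-point Dixon calculation on one simulated voxel; the paper itself uses a complex-based separation algorithm at 4.7 T, and the fat shift, relaxation and field-map values below are illustrative assumptions.

import numpy as np

def simulate_voxel(water, fat, te_s, delta_f_hz, r2star=30.0, psi_hz=20.0):
    """Complex multi-echo signal of one water-fat voxel (single fat peak model).
    r2star is in 1/s, psi_hz is a field-map off-resonance in Hz."""
    te = np.asarray(te_s)
    return ((water + fat * np.exp(2j * np.pi * delta_f_hz * te))
            * np.exp(-te * r2star) * np.exp(2j * np.pi * psi_hz * te))

delta_f = -440.0                        # Hz, illustrative fat-water shift
te_ip = 1.0 / abs(delta_f)              # in-phase echo time
te_op = 0.5 / abs(delta_f)              # opposed-phase echo time

s = simulate_voxel(water=0.8, fat=0.2, te_s=[te_ip, te_op], delta_f_hz=delta_f)
s_ip, s_op = np.abs(s)

w_est = (s_ip + s_op) / 2.0
f_est = (s_ip - s_op) / 2.0
# Expect roughly 20%; the small bias comes from T2* decay between the two echoes,
# which is what more complete complex-based multi-echo models account for.
print("estimated fat fraction: %.1f %%" % (100 * f_est / (w_est + f_est)))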
Ferraz, Eduardo Gomes; Andrade, Lucio Costa Safira; dos Santos, Aline Rode; Torregrossa, Vinicius Rabelo; Rubira-Bullen, Izabel Regina Fischer; Sarmento, Viviane Almeida
2013-12-01
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles, produced from two segmentation protocols ("outline only" and "all-boundary lines"). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at a 5% significance level. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the "outline only" and "all-boundary lines" segmentation protocols can be used. Virtual processing of CT images is the most complex stage in the manufacture of a biomodel. Establishing a better protocol for this phase allows the construction of a biomodel with characteristics that are closer to the original anatomical structures. This is essential to ensure correct preoperative planning and suitable treatment.
Computational method for multi-modal microscopy based on transport of intensity equation
NASA Astrophysics Data System (ADS)
Li, Jiaji; Chen, Qian; Sun, Jiasong; Zhang, Jialin; Zuo, Chao
2017-02-01
In this paper, we develop the requisite theory to describe a hybrid virtual-physical multi-modal imaging system which yields quantitative phase, Zernike phase contrast, differential interference contrast (DIC), and light field moment imaging simultaneously, based on the transport of intensity equation (TIE). We then give an experimental demonstration of these ideas by time-lapse imaging of live HeLa cell mitosis. Experimental results verify that a tunable lens based TIE system, combined with the appropriate post-processing algorithm, can achieve a variety of promising imaging modalities in parallel with the quantitative phase images for the dynamic study of cellular processes.
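For readers unfamiliar with TIE phase retrieval, a minimal FFT-based solver under the simplifying assumption of a uniform in-focus intensity is sketched below; it is a self-contained toy, not the authors' tunable-lens pipeline, and the optical parameters are arbitrary.

import numpy as np

def tie_phase(i_minus, i_plus, dz, wavelength, pixel_size, i0=None, reg=1e-3):
    """Recover phase from a defocus pair under the uniform-intensity TIE."""
    k = 2 * np.pi / wavelength
    didz = (i_plus - i_minus) / (2 * dz)          # finite-difference axial derivative
    if i0 is None:
        i0 = 0.5 * (i_plus + i_minus).mean()
    ny, nx = didz.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    f2 = fx[None, :] ** 2 + fy[:, None] ** 2
    # Inverse Laplacian in Fourier space, regularised at the zero frequency.
    phi_hat = np.fft.fft2(k * didz / i0) / (4 * np.pi ** 2 * f2 + reg)
    return np.fft.ifft2(phi_hat).real

# Self-consistent toy demo: synthesize the defocus pair from a known phase.
ny, nx, pix, lam, dz = 128, 128, 1e-6, 550e-9, 2e-6
yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
phi_true = 2.0 * np.exp(-(xx ** 2 + yy ** 2) / 0.1)       # radians
fx = np.fft.fftfreq(nx, d=pix)
fy = np.fft.fftfreq(ny, d=pix)
lap = np.fft.ifft2(-4 * np.pi ** 2 * (fx[None, :] ** 2 + fy[:, None] ** 2)
                   * np.fft.fft2(phi_true)).real
didz_true = -lap / (2 * np.pi / lam)                        # dI/dz for I0 = 1
i_plus, i_minus = 1.0 + dz * didz_true, 1.0 - dz * didz_true
phi_rec = tie_phase(i_minus, i_plus, dz, lam, pix)
err = phi_rec - phi_true
print("rms phase error (rad):", float(np.std(err - err.mean())))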
Laboratory-based x-ray phase-contrast tomography enables 3D virtual histology
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Quade, Felix; Salditt, Tim
2016-09-01
Due to their large penetration depth and small wavelength, hard x-rays offer a unique potential for 3D biomedical and biological imaging, combining capabilities of high resolution and large sample volume. However, in classical absorption-based computed tomography, soft tissue only shows a weak contrast, limiting the actual resolution. With the advent of phase-contrast methods, the much stronger phase shift induced by the sample can now be exploited. For high resolution, free space propagation behind the sample is particularly well suited to make the phase shift visible. Contrast formation is based on the self-interference of the transmitted beam, resulting in object-induced intensity modulations in the detector plane. As this method requires a sufficiently high degree of spatial coherence, it has long been perceived as a synchrotron-based imaging technique. In this contribution we show that by combination of high-brightness liquid-metal jet microfocus sources and suitable sample preparation techniques, as well as optimized geometry, detection and phase retrieval, excellent three-dimensional image quality can be obtained, revealing the anatomy of a cobweb spider in high detail. This opens up new opportunities for 3D virtual histology of small organisms. Importantly, the image quality is thereby raised to a level accessible to automatic 3D segmentation.
NASA Astrophysics Data System (ADS)
Fiore, Antonio; Zhang, Jitao; Shao, Peng; Yun, Seok Hyun; Scarcelli, Giuliano
2016-05-01
Brillouin microscopy has recently emerged as a powerful technique to characterize the mechanical properties of biological tissue, cell, and biomaterials. However, the potential of Brillouin microscopy is currently limited to transparent samples, because Brillouin spectrometers do not have sufficient spectral extinction to reject the predominant non-Brillouin scattered light of turbid media. To overcome this issue, we combined a multi-pass Fabry-Perot interferometer with a two-stage virtually imaged phased array spectrometer. The Fabry-Perot etalon acts as an ultra-narrow band-pass filter for Brillouin light with high spectral extinction and low loss. We report background-free Brillouin spectra from Intralipid solutions and up to 100 μm deep within chicken muscle tissue.
Three-dimensional Image Fusion Guidance for Transjugular Intrahepatic Portosystemic Shunt Placement.
Tacher, Vania; Petit, Arthur; Derbel, Haytham; Novelli, Luigi; Vitellius, Manuel; Ridouani, Fourat; Luciani, Alain; Rahmouni, Alain; Duvoux, Christophe; Salloum, Chady; Chiaradia, Mélanie; Kobeiter, Hicham
2017-11-01
To assess the safety, feasibility and effectiveness of image fusion guidance, combining pre-procedural portal-phase computed tomography with intraprocedural fluoroscopy, for transjugular intrahepatic portosystemic shunt (TIPS) placement. All consecutive cirrhotic patients presenting at our interventional unit for TIPS creation from January 2015 to January 2016 were prospectively enrolled. Procedures were performed under general anesthesia in an interventional suite equipped with a flat panel detector, cone-beam computed tomography (CBCT) and image fusion technique. All TIPSs were placed under image fusion guidance. After hepatic vein catheterization, an unenhanced CBCT acquisition was performed and co-registered with the pre-procedural portal phase CT images. A virtual path between the hepatic vein and a portal branch was made using the virtual needle path trajectory software. Subsequently, the 3D virtual path was overlaid on 2D fluoroscopy for guidance during portal branch cannulation. Safety, feasibility, effectiveness and per-procedural data were evaluated. Sixteen patients (12 males; median age 56 years) were included. Procedures were technically feasible in 15 of the 16 patients (94%). One procedure was aborted due to hepatic vein catheterization failure related to severe liver distortion. No periprocedural complications occurred within 48 h of the procedure. The median dose-area product was 91 Gy·cm², fluoroscopy time 15 min, procedure time 40 min and contrast media consumption 65 mL. Clinical benefit of the TIPS placement was observed in nine patients (56%). This study suggests that 3D image fusion guidance for TIPS is feasible, safe and effective. By identifying a virtual needle path, CBCT enables real-time multiplanar guidance and may facilitate TIPS placement.
Generating virtual training samples for sparse representation of face images and face recognition
NASA Astrophysics Data System (ADS)
Du, Yong; Wang, Yu
2016-03-01
There are many challenges in face recognition. In real-world scenes, images of the same face vary with changing illuminations, different expressions and poses, multiform ornaments, or even altered mental status. Limited available training samples cannot convey these possible changes in the training phase sufficiently, and this has become one of the restrictions to improve the face recognition accuracy. In this article, we view the multiplication of two images of the face as a virtual face image to expand the training set and devise a representation-based method to perform face recognition. The generated virtual samples really reflect some possible appearance and pose variations of the face. By multiplying a training sample with another sample from the same subject, we can strengthen the facial contour feature and greatly suppress the noise. Thus, more human essential information is retained. Also, uncertainty of the training data is simultaneously reduced with the increase of the training samples, which is beneficial for the training phase. The devised representation-based classifier uses both the original and new generated samples to perform the classification. In the classification phase, we first determine K nearest training samples for the current test sample by calculating the Euclidean distances between the test sample and training samples. Then, a linear combination of these selected training samples is used to represent the test sample, and the representation result is used to classify the test sample. The experimental results show that the proposed method outperforms some state-of-the-art face recognition methods.
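A compact sketch of the two ingredients described above, virtual samples formed by element-wise multiplication of images from the same subject and a K-nearest least-squares representation classifier, is given below on synthetic data; it paraphrases the described procedure and is not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def make_virtual(samples_by_class):
    """Element-wise products of pairs of images from the same subject."""
    virtual, labels = [], []
    for c, imgs in samples_by_class.items():
        for i in range(len(imgs)):
            for j in range(i + 1, len(imgs)):
                virtual.append(imgs[i] * imgs[j])
                labels.append(c)
    return np.array(virtual), np.array(labels)

def representation_classify(train, labels, test, k=10):
    """Pick K nearest training samples, fit them to the test sample by least
    squares, and assign the class with the smallest reconstruction residual."""
    d = np.linalg.norm(train - test, axis=1)
    idx = np.argsort(d)[:k]
    A, y = train[idx].T, test                      # columns = selected samples
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = {}
    for c in np.unique(labels[idx]):
        mask = labels[idx] == c
        residuals[c] = np.linalg.norm(y - A[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)

# Toy data: 3 subjects, 4 original 32x32 "face" images each (flattened).
originals = {c: [rng.random(32 * 32) + c for _ in range(4)] for c in range(3)}
virt_x, virt_y = make_virtual(originals)
train_x = np.vstack([np.array(v) for v in originals.values()] + [virt_x])
train_y = np.concatenate([np.full(4, c) for c in originals] + [virt_y])
test = originals[1][0] + 0.05 * rng.random(32 * 32)
print("predicted subject:", representation_classify(train_x, train_y, test))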
Evaluation of large format electron bombarded virtual phase CCDs as ultraviolet imaging detectors
NASA Technical Reports Server (NTRS)
Opal, Chet B.; Carruthers, George R.
1989-01-01
In conjunction with an external UV-sensitive cathode, an electron-bombarded CCD may be used as a high quantum efficiency, wide dynamic range photon-counting UV detector. Results are presented for the case of a 1024 x 1024, 18-micron square pixel virtual phase CCD used with an electromagnetically focused f/2 Schmidt camera, which yields excellent single-photoevent discrimination and counting efficiency. Attention is given to the vacuum-chamber arrangement used to conduct system tests and the CCD electronics and data-acquisition systems employed.
Hickethier, Tilman; Iuga, Andra-Iza; Lennartz, Simon; Hauger, Myriam; Byrtus, Jonathan; Luetkens, Julian A; Haneder, Stefan; Maintz, David; Doerner, Jonas
We aimed to determine optimal window settings for conventional polyenergetic (PolyE) and virtual monoenergetic images (MonoE) derived from abdominal portal venous phase computed tomography (CT) examinations on a novel dual-layer spectral-detector CT (SDCT). From 50 patients, SDCT data sets were reconstructed as MonoE at 40 kiloelectron volts (keV) as well as PolyE, and the best individual window width and level (W/L) values were assessed manually and separately for evaluation of abdominal arteries as well as for liver lesions. Via regression analysis, optimized individual values were mathematically calculated. Subjective image quality parameters, vessel, and liver lesion diameters were measured to determine influences of different W/L settings. Attenuation and contrast-to-noise values were significantly higher in MonoE compared with PolyE. Compared with standard settings, almost all adjusted W/L settings varied significantly and yielded higher subjective scoring. No differences were found between manually adjusted and mathematically calculated W/L settings. PolyE and MonoE from abdominal portal venous phase SDCT examinations require appropriate W/L settings depending on reconstruction technique and assessment focus.
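For reference, window width/level settings simply define the linear mapping from CT numbers to display grey levels; a small illustrative helper (not taken from the paper, with arbitrary example values):

import numpy as np

def apply_window(hu_image, width, level):
    """Map HU values to the [0, 1] display range for a given window width/level."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu_image - lo) / (hi - lo), 0.0, 1.0)

hu = np.array([[-100.0, 40.0, 300.0, 1200.0]])
print(apply_window(hu, width=400, level=60))   # e.g. a generic abdomen window
print(apply_window(hu, width=250, level=110))  # a narrower, higher-level setting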
Yi, Jianbing; Yang, Xuan; Chen, Guoliang; Li, Yan-Ran
2015-10-01
Image-guided radiotherapy is an advanced 4D radiotherapy technique that has been developed in recent years. However, respiratory motion causes significant uncertainties in image-guided radiotherapy procedures. To address these issues, an innovative lung motion estimation model based on a robust point matching is proposed in this paper. An innovative robust point matching algorithm using dynamic point shifting is proposed to estimate patient-specific lung motion during free breathing from 4D computed tomography data. The correspondence of the landmark points is determined from the Euclidean distance between the landmark points and the similarity between the local images that are centered at points at the same time. To ensure that the points in the source image correspond to the points in the target image during other phases, the virtual target points are first created and shifted based on the similarity between the local image centered at the source point and the local image centered at the virtual target point. Second, the target points are shifted by the constrained inverse function mapping the target points to the virtual target points. The source point set and shifted target point set are used to estimate the transformation function between the source image and target image. The performances of the authors' method are evaluated on two publicly available DIR-lab and POPI-model lung datasets. For computing target registration errors on 750 landmark points in six phases of the DIR-lab dataset and 37 landmark points in ten phases of the POPI-model dataset, the mean and standard deviation by the authors' method are 1.11 and 1.11 mm, but they are 2.33 and 2.32 mm without considering image intensity, and 1.17 and 1.19 mm with sliding conditions. For the two phases of maximum inhalation and maximum exhalation in the DIR-lab dataset with 300 landmark points of each case, the mean and standard deviation of target registration errors on the 3000 landmark points of ten cases by the authors' method are 1.21 and 1.04 mm. In the EMPIRE10 lung registration challenge, the authors' method ranks 24 of 39. According to the index of the maximum shear stretch, the authors' method is also efficient to describe the discontinuous motion at the lung boundaries. By establishing the correspondence of the landmark points in the source phase and the other target phases combining shape matching and image intensity matching together, the mismatching issue in the robust point matching algorithm is adequately addressed. The target registration errors are statistically reduced by shifting the virtual target points and target points. The authors' method with consideration of sliding conditions can effectively estimate the discontinuous motion, and the estimated motion is natural. The primary limitation of the proposed method is that the temporal constraints of the trajectories of voxels are not introduced into the motion model. However, the proposed method provides satisfactory motion information, which results in precise tumor coverage by the radiation dose during radiotherapy.
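The core idea of scoring candidate correspondences by combining the Euclidean distance between landmark points with the similarity of the local image patches centred on them can be sketched as follows (synthetic data, nearest-cost assignment only; this is not the authors' full robust point matching with dynamic point shifting):

import numpy as np

def correspondence_cost(src_pts, dst_pts, src_img, dst_img, half=3, alpha=0.5):
    """Cost matrix mixing spatial distance and local patch dissimilarity."""
    def patch(img, p):
        r, c = int(round(p[0])), int(round(p[1]))
        return img[r - half:r + half + 1, c - half:c + half + 1].ravel()

    cost = np.zeros((len(src_pts), len(dst_pts)))
    for i, ps in enumerate(src_pts):
        for j, pd in enumerate(dst_pts):
            d_spatial = np.linalg.norm(ps - pd)
            d_patch = np.linalg.norm(patch(src_img, ps) - patch(dst_img, pd))
            cost[i, j] = alpha * d_spatial + (1 - alpha) * d_patch
    return cost

rng = np.random.default_rng(0)
src_img = rng.random((64, 64))
dst_img = np.roll(src_img, shift=(2, 1), axis=(0, 1))          # known small motion
src_pts = np.array([[20.0, 20.0], [40.0, 30.0], [30.0, 45.0]])
dst_pts = src_pts + np.array([2.0, 1.0])                        # shifted landmarks
cost = correspondence_cost(src_pts, dst_pts, src_img, dst_img)
print("matched indices:", cost.argmin(axis=1))                  # expect [0, 1, 2]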
Multi-ray medical ultrasound simulation without explicit speckle modelling.
Tuzer, Mert; Yazıcı, Abdulkadir; Türkay, Rüştü; Boyman, Michael; Acar, Burak
2018-05-04
To develop a medical ultrasound (US) simulation method using T1-weighted magnetic resonance images (MRI) as the input that offers a compromise between low-cost ray-based and high-cost realistic wave-based simulations. The proposed method uses a novel multi-ray image formation approach with a virtual phased array transducer probe. A domain model is built from input MR images. Multiple virtual acoustic rays emerge from each element of the linear transducer array. Reflected and transmitted acoustic energy at discrete points along each ray is computed independently. Simulated US images are computed by fusion of the reflected energy along multiple rays from multiple transducers, while phase delays due to differences in distances to transducers are taken into account. A preliminary implementation using GPUs is presented. Preliminary results show that the multi-ray approach is capable of generating viewpoint-dependent realistic US images with an inherent Rician distributed speckle pattern automatically. The proposed simulator can reproduce the shadowing artefacts and demonstrates frequency dependence apt for practical training purposes. We also present preliminary results towards the utilization of the method for real-time simulations. The proposed method offers a low-cost near-real-time wave-like simulation of realistic US images from input MR data. It can further be improved to cover pathological findings using an improved domain model, without any algorithmic updates. Such a domain model would require lesion segmentation or manual embedding of virtual pathologies for training purposes.
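A toy sketch of the ray-based energy bookkeeping described above, intensity reflection and transmission at impedance interfaces along a single ray with depth attenuation, is shown below; the impedance and attenuation values are rough textbook-style assumptions, not those of the simulator.

import numpy as np

def echoes_along_ray(impedances, depths_cm, atten_db_per_cm=0.7):
    """Echo energy reaching each interface and returning (single scattering only)."""
    echoes = []
    transmitted = 1.0
    for i in range(len(impedances) - 1):
        z1, z2 = impedances[i], impedances[i + 1]
        r = ((z2 - z1) / (z2 + z1)) ** 2             # intensity reflection coefficient
        two_way_db = 2 * atten_db_per_cm * depths_cm[i]
        echoes.append(transmitted * r * 10 ** (-two_way_db / 10))
        transmitted *= (1 - r)                        # energy passing the interface
    return np.array(echoes)

# Illustrative impedance profile (MRayl): fat -> liver -> vessel wall -> blood
z = [1.38, 1.65, 1.70, 1.61]
d = [2.0, 5.0, 5.2]                                   # interface depths in cm
print(echoes_along_ray(z, d))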
Comparison of virtual unenhanced CT images of the abdomen under different iodine flow rates.
Li, Yongrui; Li, Ye; Jackson, Alan; Li, Xiaodong; Huang, Ning; Guo, Chunjie; Zhang, Huimao
2017-01-01
To assess the effect of varying iodine flow rate (IFR) and iodine concentration on the quality of virtual unenhanced (VUE) images of the abdomen obtained with dual-energy CT. 94 subjects underwent unenhanced and triphasic contrast-enhanced CT scan of the abdomen, including arterial phase, portal venous phase, and delayed phase using dual-energy CT. Patients were randomized into 4 groups with different IFRs or iodine concentrations. VUE images were generated at 70 keV. The CT values, image noise, SNR and CNR of aorta, portal vein, liver, liver lesion, pancreatic parenchyma, spleen, erector spinae, and retroperitoneal fat were recorded. Dose-length product and effective dose for an examination with and without plain phase scan were calculated to assess the potential dose savings. Two radiologists independently assessed subjective image quality using a five-point scale. The Kolmogorov-Smirnov test was used first to test for normal distribution. Where data conformed to a normal distribution, analysis of variance was used to compare mean HU values, image noise, SNRs and CNRs for the 4 image sets. Where data distribution was not normal, a nonparametric test (Kruskal-Wallis test followed by stepwise step-down comparisons) was used. The significance level for all tests was 0.01 (two-sided) to allow for type 2 errors due to multiple testing. The CT numbers (HU) of VUE images showed no significant differences between the 4 groups (p > 0.05) or between different phases within the same group (p > 0.05). VUE images had equal or higher SNR and CNR than true unenhanced images. VUE images received equal or lower subjective image quality scores than unenhanced images but were of acceptable quality for diagnostic use. Calculated dose-length product and estimated dose showed that the use of VUE images in place of unenhanced images would be associated with a dose saving of 25%. VUE images can replace conventional unenhanced images. VUE images are not affected by varying iodine flow rates and iodine concentrations, and diagnostic examinations could be acquired with a potential dose saving of 25%.
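For reference, the SNR and CNR figures reported in studies like this are typically computed from region-of-interest statistics; a minimal illustrative helper (definitions vary between studies, and the HU values below are synthetic):

import numpy as np

def snr(roi):
    return roi.mean() / roi.std()

def cnr(roi_lesion, roi_background):
    noise = roi_background.std()
    return abs(roi_lesion.mean() - roi_background.mean()) / noise

rng = np.random.default_rng(0)
liver = rng.normal(60, 12, 500)      # HU samples from a liver ROI (synthetic)
lesion = rng.normal(35, 12, 200)     # HU samples from a hypodense lesion ROI
print(f"SNR(liver) = {snr(liver):.1f}, CNR(lesion vs liver) = {cnr(lesion, liver):.1f}")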
True 3D digital holographic tomography for virtual reality applications
NASA Astrophysics Data System (ADS)
Downham, A.; Abeywickrema, U.; Banerjee, P. P.
2017-09-01
Previously, a single CCD camera has been used to record holograms of an object while the object is rotated about a single axis to reconstruct a pseudo-3D image, which does not show detailed depth information from all perspectives. To generate a true 3D image, the object has to be rotated through multiple angles and along multiple axes. In this work, to reconstruct a true 3D image including depth information, a die is rotated along two orthogonal axes, and holograms are recorded using a Mach-Zehnder setup, which are subsequently numerically reconstructed. This allows for the generation of multiple images containing phase (i.e., depth) information. These images, when combined, create a true 3D image with depth information which can be exported to a Microsoft® HoloLens for true 3D virtual reality.
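The numerical reconstruction step mentioned above can be illustrated with a minimal angular-spectrum propagation sketch; the hologram values and geometry are placeholders, not the authors' Mach-Zehnder processing chain.

import numpy as np

def angular_spectrum(field, wavelength, pixel_size, z):
    """Propagate a complex field over distance z with the angular spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    fxx, fyy = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z)
    transfer[arg < 0] = 0.0            # suppress evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Placeholder "hologram": a plane wave plus a weak random object term.
rng = np.random.default_rng(0)
holo = 1.0 + 0.1 * rng.random((256, 256))
recon = angular_spectrum(holo.astype(complex), wavelength=633e-9,
                         pixel_size=3.45e-6, z=0.05)
amplitude, phase = np.abs(recon), np.angle(recon)   # depth information lives in the phase
print(amplitude.shape, float(phase.min()), float(phase.max()))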
Digital image compression for a 2f multiplexing optical setup
NASA Astrophysics Data System (ADS)
Vargas, J.; Amaya, D.; Rueda, E.
2016-07-01
In this work a virtual 2f multiplexing system was implemented in combination with digital image compression techniques and redundant information elimination. Depending on the image type to be multiplexed, a memory-usage saving of as much as 99% was obtained. The feasibility of the system was tested using three types of images, binary characters, QR codes, and grey level images. A multiplexing step was implemented digitally, while a demultiplexing step was implemented in a virtual 2f optical setup following real experimental parameters. To avoid cross-talk noise, each image was codified with a specially designed phase diffraction carrier that would allow the separation and relocation of the multiplexed images on the observation plane by simple light propagation. A description of the system is presented together with simulations that corroborate the method. The present work may allow future experimental implementations that will make use of all the parallel processing capabilities of optical systems.
Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.
Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua
2018-03-01
To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
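Conventional SVD-based coil compression, the baseline against which the shot-coil compression is compared, can be sketched as follows (array sizes and data are arbitrary; the shot-aware variant proposed in the paper is not reproduced here):

import numpy as np

def svd_coil_compress(kspace, n_virtual):
    """kspace: (..., n_coils) complex samples -> (..., n_virtual) virtual coils."""
    samples = kspace.reshape(-1, kspace.shape[-1])           # (N, n_coils)
    _, _, vh = np.linalg.svd(samples, full_matrices=False)   # principal coil modes
    compress = vh[:n_virtual].conj().T                       # (n_coils, n_virtual)
    return kspace @ compress

rng = np.random.default_rng(0)
kspace = (rng.standard_normal((128, 128, 32)) +
          1j * rng.standard_normal((128, 128, 32)))          # 32 physical coils
compressed = svd_coil_compress(kspace, n_virtual=8)
print(compressed.shape)                                       # (128, 128, 8)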
Ultrasound based mitral valve annulus tracking for off-pump beating heart mitral valve repair
NASA Astrophysics Data System (ADS)
Li, Feng P.; Rajchl, Martin; Moore, John; Peters, Terry M.
2014-03-01
Mitral regurgitation (MR) occurs when the mitral valve cannot close properly during systole. The NeoChord tool aims to repair MR by implanting artificial chordae tendineae on flail leaflets inside the beating heart, without a cardiopulmonary bypass. Image guidance is crucial for such a procedure due to the lack of direct vision of the targets or instruments. While this procedure is currently guided solely by transesophageal echocardiography (TEE), our previous work has demonstrated that guidance safety and efficiency can be significantly improved by employing augmented virtuality to provide virtual presentation of the mitral valve annulus (MVA) and tools integrated with real time ultrasound image data. However, real-time mitral annulus tracking remains a challenge. In this paper, we describe an image-based approach to rapidly track MVA points on 2D/biplane TEE images. This approach is composed of two components: an image-based phasing component identifying images at optimal cardiac phases for tracking, and a registration component updating the coordinates of MVA points. Preliminary validation has been performed on porcine data with an average difference between manually and automatically identified MVA points of 2.5 mm. Using a parallelized implementation, this approach is able to track the mitral valve at up to 10 images per second.
Material Separation Using Dual-Energy CT: Current and Emerging Applications.
Patino, Manuel; Prochowski, Andrea; Agrawal, Mukta D; Simeone, Frank J; Gupta, Rajiv; Hahn, Peter F; Sahani, Dushyant V
2016-01-01
Dual-energy (DE) computed tomography (CT) offers the opportunity to generate material-specific images on the basis of the atomic number Z and the unique mass attenuation coefficient of a particular material at different x-ray energies. Material-specific images provide qualitative and quantitative information about tissue composition and contrast media distribution. The most significant contribution of DE CT-based material characterization comes from the capability to assess iodine distribution through the creation of an image that exclusively shows iodine. These iodine-specific images increase tissue contrast and amplify subtle differences in attenuation between normal and abnormal tissues, improving lesion detection and characterization in the abdomen. In addition, DE CT enables computational removal of iodine influence from a CT image, generating virtual noncontrast images. Several additional materials, including calcium, fat, and uric acid, can be separated, permitting imaging assessment of metabolic imbalances, elemental deficiencies, and abnormal deposition of materials within tissues. The ability to obtain material-specific images from a single, contrast-enhanced CT acquisition can complement the anatomic knowledge with functional information, and may be used to reduce the radiation dose by decreasing the number of phases in a multiphasic CT examination. DE CT also enables generation of energy-specific and virtual monochromatic images. Clinical applications of DE CT leverage both material-specific images and virtual monochromatic images to expand the current role of CT and overcome several limitations of single-energy CT. (©)RSNA, 2016.
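The basis-material idea can be illustrated with a toy two-material decomposition from measurements at two energies; the attenuation coefficients below are made-up placeholders rather than calibrated values.

import numpy as np

# Rows: low/high energy; columns: water and iodine basis attenuation (made-up units).
basis = np.array([[1.00, 30.0],
                  [0.95, 12.0]])

def decompose(mu_low, mu_high):
    """Solve for (water, iodine) contributions from measurements at two energies."""
    return np.linalg.solve(basis, np.array([mu_low, mu_high]))

# A voxel containing mostly water plus a little iodine:
true = np.array([1.0, 0.02])                 # water, iodine amounts
mu_low, mu_high = basis @ true
water, iodine = decompose(mu_low, mu_high)
print(f"water = {water:.3f}, iodine = {iodine:.3f}")
# A virtual non-contrast value could then be formed from the water component alone.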
Augmented reality for breast imaging.
Rancati, Alberto; Angrigiani, Claudio; Nava, Maurizio B; Catanuto, Giuseppe; Rocco, Nicola; Ventrice, Fernando; Dorr, Julio
2018-06-01
Augmented reality (AR) enables the superimposition of virtual reality reconstructions onto clinical images of a real patient, in real time. This allows visualization of internal structures through overlying tissues, thereby providing a virtual transparency vision of surgical anatomy. AR has been applied to neurosurgery, which utilizes a relatively fixed space, frames, and bony references; the application of AR facilitates the relationship between virtual and real data. Augmented breast imaging (ABI) is described. Breast MRI studies for breast implant patients with seroma were performed using a Siemens 3T system with a body coil and a four-channel bilateral phased-array breast coil as the transmitter and receiver, respectively. Gadolinium was injected as a contrast agent (0.1 mmol/kg at 2 mL/s) using a programmable power injector. DICOM-formatted image data from 10 MRI cases of breast implant seroma and 10 MRI cases with T1-2 N0 M0 breast cancer were imported and transformed into augmented reality images. ABI demonstrated stereoscopic depth perception, focal point convergence, 3D cursor use, and joystick fly-through. ABI can improve clinical outcomes, providing an enhanced view of the structures to work on. It should be further studied to determine its utility in clinical practice.
Component-based target recognition inspired by human vision
NASA Astrophysics Data System (ADS)
Zheng, Yufeng; Agyepong, Kwabena
2009-05-01
In contrast with machine vision, humans can recognize an object in a complex background with great flexibility. For example, given the task of finding and circling all cars (no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method that simulates the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image are extracted by using a difference of Gaussian (DOG) model that simulates the spatiotemporal response in the visual process. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
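A compact sketch of the two building blocks named above, difference-of-Gaussians edge extraction and FFT phase correlation, is given below on synthetic data; it is not the authors' pipeline and the filter scales are arbitrary.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(img, sigma1=1.0, sigma2=2.0):
    """Difference-of-Gaussians band-pass response used as a simple edge image."""
    return gaussian_filter(img, sigma1) - gaussian_filter(img, sigma2)

def phase_correlation(reference, shifted, eps=1e-9):
    """Return the cyclic offset d (row, col) such that shifted[x] ~ reference[x + d]."""
    cross = np.fft.fft2(reference) * np.conj(np.fft.fft2(shifted))
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak, corr.max()

rng = np.random.default_rng(0)
scene = rng.random((128, 128))
shifted = np.roll(scene, shift=(-7, -12), axis=(0, 1))    # shifted[x] = scene[x + (7, 12)]
offset, score = phase_correlation(dog_edges(scene), dog_edges(shifted))
print("estimated offset:", offset, "peak:", round(float(score), 3))  # expect near (7, 12)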
Generation of realistic scene using illuminant estimation and mixed chromatic adaptation
NASA Astrophysics Data System (ADS)
Kim, Jae-Chul; Hong, Sang-Gi; Kim, Dong-Ho; Park, Jong-Hyun
2003-12-01
An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current approaches to synthesizing a real image with a virtual model rely on surface reflection models and various geometric techniques. In these methods, the characteristics of the various illuminants in the real image are not sufficiently considered. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants in the two media viewing conditions, it is not taken into account in the existing methods. Thus, it is hard to obtain high-quality synthesized images. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum highlight region (MHR) was estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value, i.e., the product of the surface reflectance and the spectral power distribution (SPD) of an illuminant, of the MHR was then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE Standard Illuminants (A, C, D50, D65). By dividing the average combined spectral values of the MHR by the average surface reflectances of the MHR, we could estimate the illuminant of a real image. Second, mixed chromatic adaptation (S-LMS) using the estimated and an external illuminant was applied to the virtual-model image. For evaluating the proposed algorithm, experiments with synthetic and real scenes were performed. It was shown that the proposed method was effective in synthesizing real and virtual scenes under various illuminants.
Virtual dissection of Thoropa miliaris tadpole using phase-contrast synchrotron microtomography
NASA Astrophysics Data System (ADS)
Fidalgo, G.; Colaço, M. V.; Nogueira, L. P.; Braz, D.; Silva, H. R.; Colaço, G.; Barroso, R. C.
2018-05-01
In this work, in-line phase-contrast synchrotron microtomography was used to study the external and internal morphology of Thoropa miliaris tadpoles. Whole specimens of T. miliaris at larval development stages 28, 37 and 42, collected in the municipality of Mangaratiba (Rio de Janeiro, Brazil), were used for the study. The samples were scanned at the microtomography beamline (IMX) of the Brazilian Synchrotron Light Laboratory (LNLS). The phase-contrast technique allowed us to obtain high quality images, which made possible the segmentation of structures in the rendered volume with the Avizo graphic image editing software. The combination of high quality images and the segmentation process provides adequate visualization of different organs and of soft (liver, notochord, brain, crystalline lens, cartilages) and hard (elements of the bony skeleton) tissues.
Mouse blood vessel imaging by in-line x-ray phase-contrast imaging
NASA Astrophysics Data System (ADS)
Zhang, Xi; Liu, Xiao-Song; Yang, Xin-Rong; Chen, Shao-Liang; Zhu, Pei-Ping; Yuan, Qing-Xi
2008-10-01
It is virtually impossible to observe blood vessels by conventional x-ray imaging techniques without using contrast agents. In addition, such x-ray systems are typically incapable of detecting vessels with diameters less than 200 µm. Here we show that vessels as small as 30 µm could be detected using in-line phase-contrast x-ray imaging without the use of contrast agents. Image quality was greatly improved by replacing resident blood with physiological saline. Furthermore, an entire branch of the portal vein from the main axial portal vein to the eighth generation of branching could be captured in a single phase-contrast image. Prior to our work, detection of 30 µm diameter blood vessels could only be achieved using x-ray interferometry, which requires sophisticated x-ray optics. Our results thus demonstrate that in-line phase-contrast x-ray imaging, using physiological saline as a contrast agent, provides an alternative to the interferometric method that can be much more easily implemented and also offers the advantage of a larger field of view. A possible application of this methodology is in animal tumor models, where it can be used to observe tumor angiogenesis and the treatment effects of antineoplastic agents.
On the role of spatial phase and phase correlation in vision, illusion, and cognition.
Gladilin, Evgeny; Eils, Roland
2015-01-01
Numerous findings indicate that spatial phase bears an important cognitive information. Distortion of phase affects topology of edge structures and makes images unrecognizable. In turn, appropriately phase-structured patterns give rise to various illusions of virtual image content and apparent motion. Despite a large body of phenomenological evidence not much is known yet about the role of phase information in neural mechanisms of visual perception and cognition. Here, we are concerned with analysis of the role of spatial phase in computational and biological vision, emergence of visual illusions and pattern recognition. We hypothesize that fundamental importance of phase information for invariant retrieval of structural image features and motion detection promoted development of phase-based mechanisms of neural image processing in course of evolution of biological vision. Using an extension of Fourier phase correlation technique, we show that the core functions of visual system such as motion detection and pattern recognition can be facilitated by the same basic mechanism. Our analysis suggests that emergence of visual illusions can be attributed to presence of coherently phase-shifted repetitive patterns as well as the effects of acuity compensation by saccadic eye movements. We speculate that biological vision relies on perceptual mechanisms effectively similar to phase correlation, and predict neural features of visual pattern (dis)similarity that can be used for experimental validation of our hypothesis of "cognition by phase correlation."
Spatial cell firing during virtual navigation of open arenas by head-restrained mice.
Chen, Guifen; King, John Andrew; Lu, Yi; Cacucci, Francesca; Burgess, Neil
2018-06-18
We present a mouse virtual reality (VR) system which restrains head-movements to horizontal rotations, compatible with multi-photon imaging. This system allows expression of the spatial navigation and neuronal firing patterns characteristic of real open arenas (R). Comparing VR to R: place and grid, but not head-direction, cell firing had broader spatial tuning; place, but not grid, cell firing was more directional; theta frequency increased less with running speed; whereas increases in firing rates with running speed and place and grid cells' theta phase precession were similar. These results suggest that the omni-directional place cell firing in R may require local-cues unavailable in VR, and that the scale of grid and place cell firing patterns, and theta frequency, reflect translational motion inferred from both virtual (visual and proprioceptive) and real (vestibular translation and extra-maze) cues. By contrast, firing rates and theta phase precession appear to reflect visual and proprioceptive cues alone. © 2018, Chen et al.
Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc
2009-12-01
Cardiac CT achieves its high temporal resolution by lowering the scan range from 2π to π plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the π range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2π] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan p_n^AF by projectionwise averaging a set of neighboring partial scans p_n^P from the same perfusion examination (typically N ≈ 30 phase-correlated partial scans distributed over 20 s and n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans p_n^V from the artificial full scan p_n^AF. A standard reconstruction yields the corresponding images f_n^P, f_n^AF, and f_n^V. Subtracting the virtual partial scan image f_n^V from the artificial full scan image f_n^AF yields an artifact image that can be used to correct the original partial scan image: f_n^C = f_n^P - f_n^V + f_n^AF, where f_n^C is the corrected image. The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide theoretical reference values. The improvement in the root mean square errors between the full and the partial scans with respect to the errors between the full and the corrected scans is up to 54% for the simulations and 90% for the measurements. The phase-correlated data now appear accurate enough for a quantitative analysis of cardiac perfusion.
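A minimal sketch of the final correction step only is given below; the partial, virtual-partial and artificial-full-scan images are assumed to have been reconstructed beforehand, and the numbers are toy values chosen so that the per-phase bias cancels as in f_n^C = f_n^P - f_n^V + f_n^AF.

import numpy as np

def psar_correct(f_partial, f_virtual, f_artificial_full):
    """Apply the PSAR-style correction f_C = f_P - f_V + f_AF for every phase.

    All inputs are stacks of already-reconstructed images, shape (n_phases, ny, nx):
      f_partial         -- phase-correlated partial-scan reconstructions
      f_virtual         -- virtual partial scans extracted from the artificial full scan
      f_artificial_full -- reconstructions of the averaged (artificial) full scan
    """
    return f_partial - f_virtual + f_artificial_full

# Toy numbers only, to show the bookkeeping (30 phases of 64x64 images).
rng = np.random.default_rng(0)
n_phases, ny, nx = 30, 64, 64
bias = rng.normal(0.0, 3.0, (n_phases, 1, 1))           # per-phase partial-scan offset
f_p = 50.0 + bias + np.zeros((n_phases, ny, nx))        # partial scans carry the bias
f_af = np.full((n_phases, ny, nx), 50.0)                # artificial full scan: unbiased
f_v = f_af + bias                                       # virtual partials repeat the bias
f_c = psar_correct(f_p, f_v, f_af)
print(float(np.abs(f_c - 50.0).max()))                  # ~0: the bias cancels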
Virtual imaging in sports broadcasting: an overview
NASA Astrophysics Data System (ADS)
Tan, Yi
2003-04-01
Virtual imaging technology is being used to augment television broadcasts -- virtual objects are seamlessly inserted into the video stream to appear as real entities to TV audiences. Virtual advertisements, the main application of this technology, are providing opportunities to improve the commercial value of television programming while enhancing the contents and the entertainment aspect of these programs. State-of-the-art technologies, such as image recognition, motion tracking and chroma keying, are central to a virtual imaging system. This paper reviews the general framework, the key techniques, and the sports broadcasting applications of virtual imaging technology.
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double-cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor of the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After an on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper subtly uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining the geometric accuracy.
Neuhaus, Victor; Große Hokamp, Nils; Abdullayev, Nuran; Maus, Volker; Kabbasch, Christoph; Mpotsaris, Anastasios; Maintz, David; Borggrefe, Jan
2018-03-01
To compare the image quality of virtual monoenergetic images and polyenergetic images reconstructed from dual-layer detector CT angiography (DLCTA). Thirty patients who underwent DLCTA of the head and neck were retrospectively identified and polyenergetic as well as virtual monoenergetic images (40 to 120 keV) were reconstructed. Signals (± SD) of the cervical and cerebral vessels as well as lateral pterygoid muscle and the air surrounding the head were measured to calculate the CNR and SNR. In addition, subjective image quality was assessed using a 5-point Likert scale. Student's t-test and Wilcoxon test were used to determine statistical significance. Compared to polyenergetic images, although noise increased with lower keV, CNR (p < 0.02) and SNR (p > 0.05) of the cervical, petrous and intracranial vessels were improved in virtual monoenergetic images at 40 keV and virtual monoenergetic images at 45 keV were also rated superior regarding vascular contrast, assessment of arteries close to the skull base and small arterial branches (p < 0.0001 each). Compared to polyenergetic images, virtual monoenergetic images reconstructed from DLCTA at low keV ranging from 40 to 45 keV improve the objective and subjective image quality of extra- and intracranial vessels and facilitate assessment of vessels close to the skull base and of small arterial branches. • Virtual monoenergetic images greatly improve attenuation, while noise only slightly increases. • Virtual monoenergetic images show superior contrast-to-noise ratios compared to polyenergetic images. • Virtual monoenergetic images significantly improve image quality at low keV.
NASA Astrophysics Data System (ADS)
Koehl, M.; Brigand, N.
2012-08-01
The site of the ruined Engelbourg castle in Thann, Alsace, France, has for some years been the object of close attention by the city, which owns it, and by partners such as historians and archaeologists who are in charge of its study. The valorization of the site is one of the main objectives, as well as its conservation and documentation. The aim of this project is to use the environment of a virtual tour viewer as a new base for an Archaeological Knowledge and Information System (AKIS). With available development tools we add functionalities, in particular through various scripts that convert the viewer into a real 3D interface. Starting with a first virtual tour that contains about fifteen panoramic images, the site of about 150 by 150 meters can be completely documented, offering the user real interactivity and making the visualization very concrete, almost lively. After the choice of pertinent points of view, panoramic images were produced. For the documentation, other sets of images were acquired in various seasons and weather conditions, which allows documenting the site in different environments and states of vegetation. The final virtual tour was derived from them. The initial 3D model of the castle, which is also virtual, was likewise included in the form of panoramic images to complete the understanding of the site. A variety of types of hotspots were used to connect the whole digital documentation to the site, including videos (such as reports made during the acquisition phases, the restoration works, and the excavations) and georeferenced digital documents (archaeological reports on the various constituent elements of the castle, interpretation of the excavations and surveys, descriptions of the sets of collected objects, etc.). The fully customized interface of the system allows the user either to switch from one panoramic image to another, which is the classic case of virtual tours, or to go from a panoramic photographic image to a panoramic virtual image. It also allows visualizing, as overlays, digital data such as ancient or recent plans, cross sections, descriptions, explanatory videos, sound commentaries, etc. This project has led to very convincing results, which were validated by the historians and archaeologists, who now have an interactive tool, disseminated through the internet, allowing them both to visit the castle virtually and to query the system, which returns localized information. The various levels of understanding and detail allow a first-level approach for broad Internet users, but also a deeper approach for the group of scientists involved in the development of the castle ruins and their environment.
DEKF system for crowding estimation by a multiple-model approach
NASA Astrophysics Data System (ADS)
Cravino, F.; Dellucca, M.; Tesei, A.
1994-03-01
A distributed extended Kalman filter (DEKF) network devoted to real-time crowding estimation for surveillance in complex scenes is presented. Estimation is carried out by extracting a set of significant features from sequences of images. Feature values are associated by virtual sensors with the estimated number of people using nonlinear models obtained in an off-line training phase. Different models are used, depending on the positions and dimensions of the crowded subareas detected in each image.
Time multiplexing for increased FOV and resolution in virtual reality
NASA Astrophysics Data System (ADS)
Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj
2017-06-01
We introduce a time-multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, of the field of view (FOV), or of both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole of it. Each partial real image uses the full set of physical pixels available in the display. The partial real images are formed successively and combine spatially and temporally into a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or in the wrong time slot. This time-multiplexing strategy requires the real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time-multiplexing scheme in a compact format are shown. This scheme allows the resolution/FOV of the virtual image to be increased not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
ViRPET--combination of virtual reality and PET brain imaging
Majewski, Stanislaw; Brefczynski-Lewis, Julie
2017-05-23
Various methods, systems and apparatus are provided for brain imaging during virtual reality stimulation. In one example, among others, a system for virtual ambulatory environment brain imaging includes a mobile brain imager configured to obtain positron emission tomography (PET) scans of a subject in motion, and a virtual reality (VR) system configured to provide one or more stimuli to the subject during the PET scans. In another example, a method for virtual ambulatory environment brain imaging includes providing stimulation to a subject through a virtual reality (VR) system; and obtaining a positron emission tomography (PET) scan of the subject while moving in response to the stimulation from the VR system. The mobile brain imager can be positioned on the subject with an array of imaging photodetector modules distributed about the head of the subject.
Ancient administrative handwritten documents: X-ray analysis and imaging
Albertin, F.; Astolfo, A.; Stampanoni, M.; Peccenini, Eva; Hwu, Y.; Kaplan, F.; Margaritondo, G.
2015-01-01
Handwritten characters in administrative antique documents from three centuries have been detected using different synchrotron X-ray imaging techniques. Heavy elements in ancient inks, present even for everyday administrative manuscripts as shown by X-ray fluorescence spectra, produce attenuation contrast. In most cases the image quality is good enough for tomography reconstruction in view of future applications to virtual page-by-page ‘reading’. When attenuation is too low, differential phase contrast imaging can reveal the characters from refractive index effects. The results are potentially important for new information harvesting strategies, for example from the huge Archivio di Stato collection, objective of the Venice Time Machine project. PMID:25723946
A glimpse of gluons through deeply virtual compton scattering on the proton
DOE Office of Scientific and Technical Information (OSTI.GOV)
Defurne, Maxime; Jimenez-Arguello, A. Marti; Ahmed, Z.
The proton is composed of quarks and gluons, bound by the most elusive mechanism of strong interaction called confinement. In this work, the dynamics of quarks and gluons are investigated using deeply virtual Compton scattering (DVCS): produced by a multi-GeV electron, a highly virtual photon scatters off the proton, which subsequently radiates a high-energy photon. Similarly to holography, measuring not only the magnitude but also the phase of the DVCS amplitude allows one to form 3D images of the internal structure of the proton. The phase is made accessible through the quantum-mechanical interference of DVCS with the Bethe-Heitler (BH) process, in which the final photon is emitted by the electron rather than the proton. Here, we report the first full determination of the BH-DVCS interference, obtained by exploiting the distinct energy dependences of the DVCS and BH amplitudes. In the high-energy regime, where the scattering process is expected to occur off a single quark in the proton, these accurate measurements show an intriguing sensitivity to gluons, the carriers of the strong interaction.
Software system design for the non-null digital Moiré interferometer
NASA Astrophysics Data System (ADS)
Chen, Meng; Hao, Qun; Hu, Yao; Wang, Shaopu; Li, Tengfei; Li, Lin
2016-11-01
Aspheric optical components are an indispensable part of modern optical systems. With the development of fabrication techniques for aspheric optical elements, high-precision figure-error testing of aspheric surfaces has become an urgent issue. We proposed a digital Moiré interferometer technique (DMIT) based on the partial-compensation principle for aspheric and freeform surface measurement. Unlike a traditional interferometer, DMIT consists of a real and a virtual interferometer. The virtual interferometer is simulated with Zemax software to perform phase shifting and alignment. The results are obtained by a series of calculations with the real interferogram and the computer-generated virtual interferograms. DMIT requires a specific, reliable software system to ensure its normal operation. Image acquisition and data processing are two important parts of this system, and realizing the connection between the real and virtual interferometers is also a challenge. In this paper, we present a software system design for DMIT with a friendly user interface and robust data-processing features, enabling us to acquire the figure error of the measured asphere. We chose Visual C++ as the software development platform and control the ideal interferometer by hybrid programming with Zemax. After image acquisition and data transmission, the system calls image-processing algorithms written in Matlab to calculate the figure error of the measured asphere. We tested the software system experimentally. In the experiment, we realized the measurement of an aspheric surface and proved the feasibility of the software system.
B0 concomitant field compensation for MRI systems employing asymmetric transverse gradient coils.
Weavers, Paul T; Tao, Shengzhen; Trzasko, Joshua D; Frigo, Louis M; Shu, Yunhong; Frick, Matthew A; Lee, Seung-Kyun; Foo, Thomas K-F; Bernstein, Matt A
2018-03-01
Imaging gradients result in the generation of concomitant fields, or Maxwell fields, which are of increasing importance at higher gradient amplitudes. These time-varying fields cause additional phase accumulation, which must be compensated for to avoid image artifacts. In the case of gradient systems employing a symmetric design, the concomitant fields are well described with second-order spatial variation. Gradient systems employing an asymmetric design additionally generate concomitant fields with global (zeroth-order, or B0) and linear (first-order) spatial dependence. This work demonstrates a general solution that eliminates the zeroth-order concomitant field by applying the correct B0 frequency shift in real time to counteract the concomitant fields. Results are demonstrated for phase contrast, spiral, echo-planar imaging (EPI), and fast spin-echo imaging. A global phase offset is reduced in the phase-contrast exam, and blurring is virtually eliminated in spiral images. The bulk image shift in the phase-encode direction is compensated for in EPI, whereas signal loss, ghosting, and blurring are corrected in the fast spin-echo images. A user-transparent method to compensate the zeroth-order concomitant field term by center-frequency shifting is proposed and implemented. This solution allows all existing pulse sequences-both product and research-to be retained without any modifications. Magn Reson Med 79:1538-1544, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
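As an illustration of the kind of correction described above, the sketch below (not taken from the paper) computes a zeroth-order concomitant-field frequency offset from the transverse gradient waveforms and accumulates the phase that would otherwise be acquired. The field strength, the gradient-coil offsets z0x and z0y, and the waveforms are hypothetical, and the expression for the zeroth-order term follows the standard Maxwell-field expansion for transverse gradients whose symmetry point is displaced from isocenter.

```python
import numpy as np

GAMMA_BAR = 42.577e6  # gyromagnetic ratio of 1H, Hz/T (approximate)

def zeroth_order_concomitant_shift(gx, gy, b0=3.0, z0x=0.12, z0y=0.12):
    """Zeroth-order (spatially uniform) concomitant field, expressed in Hz,
    for transverse gradients gx, gy [T/m] whose coil symmetry points are
    offset from isocenter by z0x, z0y [m] (hypothetical values).

    Assumed model: B_c0(t) = (gx^2 * z0x^2 + gy^2 * z0y^2) / (2 * B0)."""
    b_c0 = (gx**2 * z0x**2 + gy**2 * z0y**2) / (2.0 * b0)  # Tesla
    return GAMMA_BAR * b_c0                                # Hz

# Hypothetical trapezoidal readout gradient on x, nothing on y.
t = np.linspace(0.0, 4e-3, 401)                 # 4 ms, 10 us raster
gx = 0.05 * np.clip(t / 0.5e-3, 0.0, 1.0)       # ramp up to 50 mT/m
gy = np.zeros_like(t)

df = zeroth_order_concomitant_shift(gx, gy)
phase_error = 2.0 * np.pi * np.cumsum(df) * (t[1] - t[0])  # rad, if uncorrected
print(f"peak B0 shift: {df.max():.1f} Hz, accrued phase: {phase_error[-1]:.3f} rad")
```

A real-time correction would apply -df(t) as a center-frequency offset during the sequence, which is the spirit of the user-transparent method the abstract describes.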
Effects of whispering gallery mode in microsphere super-resolution imaging
NASA Astrophysics Data System (ADS)
Zhou, Song; Deng, Yongbo; Zhou, Wenchao; Yu, Muxin; Urbach, H. P.; Wu, Yihui
2017-09-01
Whispering Gallery modes have been observed in microscopic glass spheres or toruses and have many applications. In this paper, possible approaches to enhance imaging resolution by Whispering Gallery modes are discussed, including evanescent-wave coupling and transformation and illumination by Whispering Gallery modes. It is shown that the high-order scattering modes play the dominant role in the reconstructed virtual image when Whispering Gallery modes exist. Furthermore, we find that high image resolution of electric dipoles can be achieved when out-of-phase components are present under illumination by Whispering Gallery modes. These simulation results could contribute to the understanding of microsphere-assisted super-resolution imaging and its potential applications.
Augmented reality 3D display based on integral imaging
NASA Astrophysics Data System (ADS)
Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua
2017-02-01
Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which provides a double-sided 3D display capability.
NASA Technical Reports Server (NTRS)
Ellis, Stephen R.; Bucher, Urs J.; Statler, Irving C. (Technical Monitor)
1994-01-01
The influence of physically presented background stimuli on the perceived depth of optically overlaid, stereoscopic virtual images has been studied using head-mounted stereoscopic, virtual image displays. These displays allow presentation of physically unrealizable stimulus combinations. Positioning of an opaque physical object either at the initial perceived depth of the virtual image or at a position substantially in front of the virtual image causes the virtual image to perceptually move closer to the observer. In the case of objects positioned substantially in front of the virtual image, subjects often perceive the opaque object to become transparent. Evidence is presented that the apparent change of position caused by interposition of the physical object is not due to occlusion cues. Accordingly, it may have an alternative cause, such as variation in the binocular vergence position of the eyes caused by introduction of the physical object. This effect may complicate the design of overlaid virtual image displays for near objects and appears to be related to the relative conspicuousness of the overlaid virtual image and the background. Consequently, it may be related to earlier analyses of John Foley, which modeled open-loop pointing errors to stereoscopically presented points of light in terms of errors in determining a reference point for interpretation of observed retinal disparities. Implications for the design of see-through displays for manufacturing will be discussed.
Wave-aberration control with a liquid crystal on silicon (LCOS) spatial phase modulator.
Fernández, Enrique J; Prieto, Pedro M; Artal, Pablo
2009-06-22
Liquid crystal on silicon (LCOS) spatial phase modulators offer enhanced possibilities for adaptive optics applications in terms of response velocity and fidelity. Unlike deformable mirrors, they are capable of reproducing discontinuous phase profiles. This ability also allows an increase in the effective stroke of the device by means of phase wrapping. The latter is limited only by the diffraction-related effects that become noticeable as the number of phase cycles increases. In this work we estimated the generation ranges of the Zernike polynomials as a means of characterizing the performance of the device. Sets of images systematically degraded with the different Zernike polynomials generated using an LCOS phase modulator were recorded and compared with their theoretical digital counterparts. For each Zernike mode, we found that image degradation reaches a limit for a certain coefficient value; further increase in the amount of aberration has no additional effect on image quality. This behavior is attributed to the intensification of the 0-order diffraction. These results allowed us to determine the usable limits of the phase modulator, virtually free from diffraction artifacts. The results are particularly important for visual simulation and ophthalmic testing applications, although they are equally interesting for any adaptive optics application with liquid crystal based devices.
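The phase-wrapping idea the abstract relies on can be illustrated with a short sketch: a smooth defocus-like phase (an assumed example of a Zernike mode) is wrapped modulo 2π so that a modulator with limited stroke can display it, and larger coefficients simply produce more wrapped cycles. Parameter values are illustrative only.

```python
import numpy as np

def defocus_phase(n=512, coeff_rad=20.0):
    """Defocus-like phase map; peak-to-valley set by coeff_rad (radians)."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    r2 = x**2 + y**2
    phase = coeff_rad * (2.0 * r2 - 1.0)   # Zernike defocus, unnormalized
    phase[r2 > 1.0] = 0.0                  # restrict to the unit pupil
    return phase

def wrap_phase(phase):
    """Wrap a continuous phase into [0, 2*pi), as an LCOS-type modulator would."""
    return np.mod(phase, 2.0 * np.pi)

wrapped = wrap_phase(defocus_phase(coeff_rad=40.0))
print(f"wrapped range: {wrapped.min():.2f} .. {wrapped.max():.2f} rad")
```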
Ping Gong; Pengfei Song; Shigao Chen
2017-06-01
The development of ultrafast ultrasound imaging offers great opportunities to improve imaging technologies such as shear wave elastography and ultrafast Doppler imaging. In ultrafast imaging, there are tradeoffs among image signal-to-noise ratio (SNR), resolution, and post-compounded frame rate. Various approaches have been proposed to address this tradeoff, such as multiplane wave imaging or attempts at implementing synthetic transmit aperture imaging. In this paper, we propose an ultrafast synthetic transmit aperture (USTA) imaging technique using Hadamard-encoded virtual sources with overlapping sub-apertures to enhance both image SNR and resolution without sacrificing frame rate. The method includes three steps: 1) create virtual sources using sub-apertures; 2) encode the virtual sources using a Hadamard matrix; and 3) add short time intervals (a few microseconds) between transmissions of different virtual sources to allow overlapping sub-apertures. USTA was tested experimentally with a point target, a B-mode phantom, and in vivo human kidney micro-vessel imaging. Compared with standard coherent diverging wave compounding at the same frame rate, improvements in image SNR, lateral resolution (+33%, with B-mode phantom imaging), and contrast ratio (+3.8 dB, with in vivo human kidney micro-vessel imaging) were achieved. The f-number of the virtual sources, the number of virtual sources used, and the number of elements used in each sub-aperture can be flexibly adjusted to enhance resolution and SNR. This allows very flexible optimization of USTA for different applications.
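A minimal sketch of the Hadamard encoding/decoding step (step 2 of the method), under the simplifying assumption of a linear medium and with synthetic per-source echoes standing in for real channel data; the array size and noise level are arbitrary.

```python
import numpy as np
from scipy.linalg import hadamard

# Hypothetical setup: 4 virtual sources, each producing a (here random) echo trace.
n_src, n_samples = 4, 1024
rng = np.random.default_rng(0)
echoes = rng.standard_normal((n_src, n_samples))   # stand-ins for per-source echoes

H = hadamard(n_src)                                 # +1/-1 encoding matrix

# Encoded transmissions: the k-th event fires all virtual sources with signs H[k, :].
# In a linear medium, the received trace is the same signed sum of per-source echoes.
received = H @ echoes + 0.5 * rng.standard_normal((n_src, n_samples))  # add noise

# Decoding recovers each virtual source; encoding provides an SNR gain vs. single firing.
decoded = (H.T @ received) / n_src

err = np.linalg.norm(decoded - echoes) / np.linalg.norm(echoes)
print(f"relative recovery error with noise: {err:.3f}")
```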
Bubble behavior characteristics based on virtual binocular stereo vision
NASA Astrophysics Data System (ADS)
Xue, Ting; Xu, Ling-shuang; Zhang, Shang-zhen
2018-01-01
The three-dimensional (3D) behavior characteristics of bubbles rising in gas-liquid two-phase flow are of great importance for studying bubbly flow mechanisms and guiding engineering practice. Based on the dual-perspective imaging of virtual binocular stereo vision, the 3D behavior characteristics of bubbles in gas-liquid two-phase flow are studied in detail, which effectively increases the projection information of the bubbles and yields more accurate behavior features. In this paper, the variations of bubble equivalent diameter, volume, velocity and trajectory during the rising process are estimated, and the factors affecting bubble behavior characteristics are analyzed. It is shown that the method is real-time and valid, that the equivalent diameter of a bubble rising in stagnant water changes periodically, and that crests and troughs in the equivalent-diameter curve appear alternately. The bubble behavior characteristics, as well as the spiral amplitude, are affected by the orifice diameter and the gas volume flow.
Multi-viewpoint Image Array Virtual Viewpoint Rapid Generation Algorithm Based on Image Layering
NASA Astrophysics Data System (ADS)
Jiang, Lu; Piao, Yan
2018-04-01
The use of a multi-view image array combined with virtual viewpoint generation technology to record 3D scene information in large scenes has become one of the key technologies for the development of integrated imaging. This paper presents a virtual viewpoint rendering method based on an image layering algorithm. First, the depth information of the reference viewpoint image is quickly obtained; during this process, SAD is chosen as the similarity measure function. Then the reference image is layered and the parallax is calculated based on the depth information. According to the relative distance between the virtual viewpoint and the reference viewpoint, the image layers are weighted and panned. Finally, the virtual viewpoint image is rendered layer by layer according to the distance between the image layers and the viewer. This method avoids the disadvantages of the DIBR algorithm, such as the high precision required of the depth map and the complex mapping operations. Experiments show that this algorithm can synthesize virtual viewpoints at any position within a 2×2 viewpoint range at a high rendering speed. On average, the method yields satisfactory image quality: the SSIM value of the results relative to real viewpoint images reaches 0.9525, the PSNR reaches 38.353, and the image histogram similarity reaches 93.77%.
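A toy version of the SAD-based depth step is sketched below; the block size, disparity range, and synthetic images are assumptions, and the layering, weighting, and panning stages of the full algorithm are not reproduced.

```python
import numpy as np

def sad_disparity(left, right, max_disp=16, block=5):
    """Block-matching disparity via sum of absolute differences (SAD).

    left, right: rectified grayscale images (2D float arrays) from adjacent
    viewpoints of the array; max_disp and block are assumed parameters."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y-half:y+half+1, x-half:x+half+1]
            costs = [np.abs(patch - right[y-half:y+half+1,
                                          x-d-half:x-d+half+1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Tiny synthetic test: the right view is the left view shifted by 3 pixels.
rng = np.random.default_rng(1)
left = rng.random((60, 80))
right = np.roll(left, -3, axis=1)
disp = sad_disparity(left, right)
print(np.median(disp[10:-10, 30:-10]))  # expect 3
```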
Resnick, Daniel K
2003-06-01
Fluoroscopy-based frameless stereotactic systems provide feedback to the surgeon using virtual fluoroscopic images. The real-life accuracy of these virtual images has not been compared with traditional fluoroscopy in a clinical setting. We prospectively studied 23 consecutive cases. In two cases, registration errors precluded the use of virtual fluoroscopy. Pedicle probes placed with virtual fluoroscopic imaging were imaged with traditional fluoroscopy in the remaining 21 cases. The position of the probes was judged to be ideal, acceptable but not ideal, or not acceptable based on the traditional fluoroscopic images. Virtual fluoroscopy was used to place probes in 97 pedicles from L1 to the sacrum. Eighty-eight probes were judged to be in ideal position, eight were judged to be acceptable but not ideal, and one probe was judged to be in an unacceptable position. This probe was angled toward an adjacent disc space. Therefore, 96 of 97 probes placed using virtual fluoroscopy were found to be in an acceptable position. The positive predictive value for acceptable screw placement with virtual fluoroscopy compared with traditional fluoroscopy was 99%. A probe placed with virtual fluoroscopic guidance will be judged to be in an acceptable position when imaged with traditional fluoroscopy 99% of the time.
Virtual Images: Going Through the Looking Glass
NASA Astrophysics Data System (ADS)
Mota, Ana Rita; dos Santos, João Lopes
2017-01-01
Virtual images are often introduced through a "geometric" perspective, with little conceptual or qualitative illustrations, hindering a deeper understanding of this physical concept. In this paper, we present two rather simple observations that force a critical reflection on the optical nature of a virtual image. This approach is supported by the reflect-view, a useful device in geometrical optics classes because it allows a visual confrontation between virtual images and real objects that seemingly occupy the same region of space.
Ling, Hangjian; Katz, Joseph
2014-09-20
This paper deals with two issues affecting the application of digital holographic microscopy (DHM) for measuring the spatial distribution of particles in a dense suspension, namely discriminating between real and virtual images and accurate detection of the particle center. Previous methods to separate real and virtual fields have involved applications of multiple phase-shifted holograms, combining reconstructed fields of multiple axially displaced holograms, and analysis of intensity distributions of weakly scattering objects. Here, we introduce a simple approach based on simultaneously recording two in-line holograms, whose planes are separated by a short distance from each other. This distance is chosen to be longer than the elongated trace of the particle. During reconstruction, the real images overlap, whereas the virtual images are displaced by twice the distance between hologram planes. Data analysis is based on correlating the spatial intensity distributions of the two reconstructed fields to measure displacement between traces. This method has been implemented for both synthetic particles and a dense suspension of 2 μm particles. The correlation analysis readily discriminates between real and virtual images of a sample containing more than 1300 particles. Consequently, we can now implement DHM for three-dimensional tracking of particles when the hologram plane is located inside the sample volume. Spatial correlations within the same reconstructed field are also used to improve the detection of the axial location of the particle center, extending previously introduced procedures to suspensions of microscopic particles. For each cross section within a particle trace, we sum the correlations among intensity distributions in all planes located symmetrically on both sides of the section. This cumulative correlation has a sharp peak at the particle center. Using both synthetic and recorded particle fields, we show that the uncertainty in localizing the axial location of the center is reduced to about one particle's diameter.
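A compact illustration of the correlation-based discrimination step, using one-dimensional synthetic axial intensity traces: for the real image the traces from the two holograms overlap (zero lag), whereas for the virtual image the correlation peak appears at a lag of roughly twice the hologram separation. The trace shapes and separation are invented for the example.

```python
import numpy as np

def axial_shift_by_correlation(i1, i2):
    """Estimate the 1D shift (in samples) between two intensity profiles by
    locating the peak of their FFT-based cross-correlation."""
    n = len(i1)
    f = np.fft.rfft(i1 - i1.mean(), 2 * n) * np.conj(np.fft.rfft(i2 - i2.mean(), 2 * n))
    xcorr = np.fft.irfft(f, 2 * n)
    lag = int(np.argmax(xcorr))
    return lag if lag < n else lag - 2 * n

# Synthetic axial intensity traces of one particle reconstructed from two holograms
# recorded a short distance dz apart (hypothetical numbers): the real image stays
# put, the virtual image appears displaced by ~2*dz (here 12 units = 120 samples).
z = np.linspace(-50, 50, 1001)
trace = np.exp(-z**2 / 8.0)
shifted = np.exp(-(z - 12.0)**2 / 8.0)

print(axial_shift_by_correlation(trace, trace))         # 0: traces overlap -> real image
print(abs(axial_shift_by_correlation(trace, shifted)))  # ~120 samples -> virtual image
```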
Holographic near-eye display system based on double-convergence light Gerchberg-Saxton algorithm.
Sun, Peng; Chang, Shengqian; Liu, Siqi; Tao, Xiao; Wang, Chang; Zheng, Zhenrong
2018-04-16
In this paper, a method is proposed to implement noise-reduced three-dimensional (3D) holographic near-eye display by a phase-only computer-generated hologram (CGH). The CGH is calculated with a double-convergence light Gerchberg-Saxton (GS) algorithm, in which the phases of two virtual convergence lights are introduced into the GS algorithm simultaneously. The first convergence-light phase replaces the random phase as the iterative initial value, and the second convergence-light phase modulates the phase distribution calculated by the GS algorithm. Both simulations and experiments were carried out to verify the feasibility of the proposed method. The results indicate that this method can effectively reduce the noise in the reconstruction. The field of view (FOV) of the reconstructed image reaches 40 degrees, and the experimental light path in the 4-f system is shortened. As for the 3D experiments, the results demonstrate that the proposed algorithm can present 3D images with a 180 cm zooming range and continuous depth cues. This method may provide a promising solution for future 3D augmented reality (AR) realization.
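A bare-bones sketch of a Gerchberg-Saxton loop in which the usual random initial phase is replaced by a converging (quadratic) phase, loosely mirroring the first convergence light of the proposed method; the second convergence phase is only added schematically at the end. The Fourier-plane geometry, target pattern, and focal-strength values are assumptions, not the authors' configuration.

```python
import numpy as np

def quadratic_phase(n, focal_strength):
    """Phase of a converging wavefront (paraxial, arbitrary units)."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    return focal_strength * (x**2 + y**2)

def gs_phase_hologram(target_amp, n_iter=30, init_phase=None):
    """Basic Gerchberg-Saxton loop: unit amplitude in the hologram plane,
    target amplitude in the (Fourier) image plane. Returns the hologram phase."""
    phase = np.zeros_like(target_amp) if init_phase is None else init_phase
    field = np.exp(1j * phase)
    for _ in range(n_iter):
        img = np.fft.fftshift(np.fft.fft2(field))
        img = target_amp * np.exp(1j * np.angle(img))   # impose target amplitude
        field = np.fft.ifft2(np.fft.ifftshift(img))
        field = np.exp(1j * np.angle(field))            # phase-only constraint
    return np.angle(field)

# Hypothetical target: a bright square on a dark background.
n = 256
target = np.zeros((n, n))
target[96:160, 96:160] = 1.0

# First convergence light: replaces the usual random initial phase.
holo_phase = gs_phase_hologram(target, init_phase=quadratic_phase(n, 40.0))

# Second convergence light: added to the computed hologram phase (schematic only).
holo_phase = np.mod(holo_phase + quadratic_phase(n, 10.0), 2 * np.pi)
print(holo_phase.shape, holo_phase.min(), holo_phase.max())
```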
NASA Astrophysics Data System (ADS)
Phan, Khoi A.; Spence, Chris A.; Dakshina-Murthy, S.; Bala, Vidya; Williams, Alvina M.; Strener, Steve; Eandi, Richard D.; Li, Junling; Karklin, Linard
1999-12-01
As advanced process technologies in the wafer fabs push patterning processes toward lower k1 factors for sub-wavelength resolution printing, reticles are required to use optical proximity correction (OPC) and phase-shifted masks (PSM) for resolution enhancement. For OPC/PSM mask technology, defect printability is one of the major concerns. Current reticle inspection tools available on the market are sometimes not capable of consistently differentiating between an OPC feature and a true random defect. Due to the process complexity and the high cost associated with making OPC/PSM reticles, it is important for both mask shops and lithography engineers to understand the impact of different defect types and sizes on printability. The Aerial Image Measurement System (AIMS) has been used in mask shops for a number of years for reticle applications such as aerial image simulation and transmission measurement of repaired defects. The Virtual Stepper System (VSS) provides an alternative method for defect printability simulation and analysis using reticle images captured by an optical inspection or review system. In this paper, pre-programmed defects and repairs from a Defect Sensitivity Monitor (DSM) reticle with 200 nm minimum features (at 1x) are studied for printability. The resist lines simulated by AIMS and VSS are both compared to SEM images of resist wafers, qualitatively and quantitatively, using CD verification. Process-window comparisons between unrepaired and repaired defects, for both good and bad repair cases, are shown. The effect of mask repairs on resist pattern images for the binary mask case is discussed. AIMS simulation was done at International Sematech, Virtual Stepper simulation at Zygo, and resist wafers were processed at the AMD Submicron Development Center using a DUV lithographic process for 0.18 micrometer Logic process technology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakano, M; Kida, S; Masutani, Y
2014-06-01
Purpose: In a previous study, we developed a time-ordered four-dimensional (4D) cone-beam CT (CBCT) technique to visualize non-periodic organ motion, such as the peristaltic motion of gastrointestinal organs and adjacent areas, using a half-scan reconstruction method. One important obstacle was that truncation of the projections was caused by the asymmetric location of the flat-panel detector (FPD), needed to cover the whole abdomen or pelvis in one rotation. In this study, we propose image mosaicing to extend the projection data so that full field-of-view (FOV) images can be reconstructed using half-scan reconstruction. Methods: The projections of prostate cancer patients were acquired using the X-ray Volume Imaging system (XVI, version 4.5) on a Synergy linear accelerator system (Elekta, UK). The XVI system has three FOV options, S, M and L; the M FOV was chosen for pelvic CBCT acquisition, with the FPD panel offset by 11.5 cm. The method to produce extended projections consists of three main steps: First, a normal three-dimensional (3D) reconstruction containing the whole pelvis was performed using the real projections. Second, virtual projections were produced by reprojection of the reconstructed 3D image. Third, the real and virtual projections at each angle were combined into one extended mosaic projection. Then, 4D CBCT images were reconstructed using our in-house reconstruction software based on the Feldkamp, Davis and Kress algorithm. The angular range of each reconstruction phase in the 4D reconstruction was 180 degrees, and the range moved as time progressed. Results: Projection data were successfully extended without a discontinuous boundary between the real and virtual projections. Using the mosaic projections, 4D CBCT image sets were reconstructed without artifacts caused by the truncation, and thus the whole pelvis was clearly visible. Conclusion: The present method provides extended projections that contain the whole pelvis. The presented reconstruction method also enables time-ordered 4D CBCT reconstruction of organs with non-periodic motion with full FOV and without projection-truncation artifacts. This work was partly supported by the JSPS Core-to-Core Program (No. 23003) and by JSPS KAKENHI 24234567.
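The projection-extension step can be pictured with the following sketch, in which a truncated real projection is completed by a virtual (reprojected) one and a short linear ramp hides the seam. Array sizes, the overlap width, and the blending scheme are illustrative assumptions rather than the implementation used with the XVI data.

```python
import numpy as np

def mosaic_projection(real_proj, virtual_proj, overlap=16):
    """Combine a truncated real projection with a virtual (reprojected) one.

    real_proj covers detector columns [0, w_real); virtual_proj covers the full
    width. A linear ramp over `overlap` columns hides the seam."""
    w_real = real_proj.shape[1]
    out = virtual_proj.copy()
    out[:, :w_real - overlap] = real_proj[:, :w_real - overlap]
    ramp = np.linspace(1.0, 0.0, overlap)  # weight of the real data across the seam
    out[:, w_real - overlap:w_real] = (ramp * real_proj[:, -overlap:]
                                       + (1.0 - ramp) * virtual_proj[:, w_real - overlap:w_real])
    return out

# Hypothetical sizes: 256-row detector, real data truncated to 300 of 400 columns.
rng = np.random.default_rng(2)
real = rng.random((256, 300))
virtual = rng.random((256, 400))
print(mosaic_projection(real, virtual).shape)  # (256, 400)
```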
Planning Image-Based Measurements in Wind Tunnels by Virtual Imaging
NASA Technical Reports Server (NTRS)
Kushner, Laura Kathryn; Schairer, Edward T.
2011-01-01
Virtual imaging is routinely used at NASA Ames Research Center to plan the placement of cameras and light sources for image-based measurements in production wind tunnel tests. Virtual imaging allows users to quickly and comprehensively model a given test situation, well before the test occurs, in order to verify that all optical testing requirements will be met. It allows optimization of the placement of cameras and light sources and leads to faster set-up times, thereby decreasing tunnel occupancy costs. This paper describes how virtual imaging was used to plan optical measurements for three tests in production wind tunnels at NASA Ames.
Kim, Dae-Seung; Woo, Sang-Yoon; Yang, Hoon Joo; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon Jung; Yi, Won-Jin
2014-12-01
Accurate surgical planning and transfer of the planning in orthognathic surgery are very important in achieving a successful surgical outcome with appropriate improvement. Conventionally, the paper surgery is performed based on a 2D cephalometric radiograph, and the results are expressed using cast models and an articulator. We developed an integrated orthognathic surgery system with 3D virtual planning and image-guided transfer. The maxillary surgery of orthognathic patients was planned virtually, and the planning results were transferred to the cast model by image guidance. During virtual planning, the displacement of the reference points was confirmed by the displacement from conventional paper surgery at each procedure. The results of virtual surgery were transferred to the physical cast models directly through image guidance. The root mean square (RMS) difference between virtual surgery and conventional model surgery was 0.75 ± 0.51 mm for 12 patients. The RMS difference between virtual surgery and image-guidance results was 0.78 ± 0.52 mm, which showed no significant difference from the difference of conventional model surgery. The image-guided orthognathic surgery system integrated with virtual planning will replace physical model surgical planning and enable transfer of the virtual planning directly without the need for an intermediate splint. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
A novel augmented reality system of image projection for image-guided neurosurgery.
Mahvash, Mehran; Besharati Tabrizi, Leila
2013-05-01
Augmented reality systems combine virtual images with a real environment. The aim was to design and develop an augmented reality system for image-guided surgery of brain tumors using image projection. A virtual image was created in two ways: (1) an MRI-based 3D model of the head matched with the segmented lesion of a patient using MRIcro software (version 1.4, freeware, Chris Rorden) and (2) a digital-photograph-based model in which the tumor region was drawn using image-editing software. The real environment was simulated with a head phantom. For direct projection of the virtual image onto the head phantom, a commercially available video projector (PicoPix 1020, Philips) was used. The position and size of the virtual image were adjusted manually for registration, which was performed using anatomical landmarks and fiducial marker positions. An augmented reality system for image-guided neurosurgery using direct image projection has been designed successfully and implemented in a first evaluation with promising results. The virtual image could be projected onto the head phantom and was registered manually. Accurate registration (mean projection error: 0.3 mm) was performed using anatomical landmarks and fiducial marker positions. The direct projection of a virtual image onto the patient's head, skull, or brain surface in real time is an augmented reality approach that can be used for image-guided neurosurgery. In this paper, the first evaluation of the system is presented. The encouraging first visualization results indicate that the presented augmented reality system might be an important enhancement of image-guided neurosurgery.
Sakabe, Daisuke; Funama, Yoshinori; Taguchi, Katsuyuki; Nakaura, Takeshi; Utsunomiya, Daisuke; Oda, Seitaro; Kidoh, Masafumi; Nagayama, Yasunori; Yamashita, Yasuyuki
2018-05-01
To investigate the image quality characteristics of virtual monoenergetic images compared with conventional tube-voltage images with dual-layer spectral CT (DLCT). Helical scans were performed using a first-generation DLCT scanner, two different sizes of acrylic cylindrical phantoms, and a Catphan phantom. Three different iodine concentrations were inserted into the phantom center. The tube voltage for obtaining virtual monoenergetic images was set to 120 or 140 kVp. Conventional 120- and 140-kVp images and virtual monoenergetic images (40-200-keV images) were reconstructed with a slice thickness of 1.0 mm. The CT number and image noise were measured for each iodine concentration and for water on the 120-kVp images and the virtual monoenergetic images. The noise power spectrum (NPS) was also calculated. The iodine CT numbers of the iodinated enhancing materials were similar regardless of phantom size and acquisition method. Compared with the iodine CT numbers of the conventional 120-kVp images, those of the monoenergetic 40-, 50-, and 60-keV images increased by approximately 3.0-, 1.9-, and 1.3-fold, respectively. The image noise values of the virtual monoenergetic images were similar across energies (for example, 24.6 HU at 40 keV and 23.3 HU at 200 keV, obtained at 120 kVp and a 30-cm phantom size). The NPS curves of the 70-keV and 120-kVp images for a 1.0-mm slice thickness were similar over the entire frequency range. Virtual monoenergetic images show stable image noise over the entire energy spectrum and an improved contrast-to-noise ratio compared with conventional tube-voltage images on the dual-layer spectral detector CT. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Toepker, Michael; Moritz, Thomas; Krauss, Bernhard; Weber, Michael; Euller, Gordon; Mang, Thomas; Wolf, Florian; Herold, Christian J; Ringl, Helmut
2012-03-01
To evaluate the reliability of attenuation values in virtual non-contrast images (VNC) reconstructed from contrast-enhanced, dual-energy scans performed on a second-generation dual-energy CT scanner, compared to single-energy, non-contrast images (TNC). Sixteen phantoms containing a mixture of contrast agent and water at different attenuations (0-1400 HU) were investigated on a Definition Flash CT scanner using a single-energy scan at 120 kV and a DE-CT protocol (100 kV/SN140 kV). For clinical assessment, 86 patients who received a dual-phase CT, containing an unenhanced single-energy scan at 120 kV and a contrast-enhanced (110 ml Iomeron 400 mg/ml; 4 ml/s) DE-CT (100 kV/SN140 kV) in an arterial (n=43) or a venous phase, were retrospectively analyzed. Mean attenuation was measured within regions of interest of the phantoms and in different tissue types of the patients within the corresponding VNC and TNC images. Paired t-tests and Pearson correlation were used for statistical analysis. For all phantoms, mean attenuation in VNC was 5.3±18.4 HU with respect to water. In the 86 patients overall, 2637 regions were measured in TNC and VNC images, with a mean difference between TNC and VNC of -3.6±8.3 HU. In 91.5% (n=2412) of all cases, absolute differences between TNC and VNC were under 15 HU, and in 75.3% (n=1986) differences were under 10 HU. Second-generation dual-energy CT based VNC images provide attenuation values close to those of TNC. To avoid possible outliers, multiple measurements are recommended, especially for measurements in the spleen, the mesenteric fat, and the aorta. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Research on inosculation between master of ceremonies or players and virtual scene in virtual studio
NASA Astrophysics Data System (ADS)
Li, Zili; Zhu, Guangxi; Zhu, Yaoting
2003-04-01
A technical approach to the construction of a virtual studio is proposed, in which an orientation tracker and a telemeter are used to upgrade a conventional BETACAM pickup camera and to connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward; it differs from the common model in computer graphics and is bound to the real BETACAM pickup camera used for shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth information of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency in spatial position, perspective relationship and image-object masking is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene. The experimental results show that the proposed scheme for constructing a virtual studio is feasible and is more applicable and more effective than the existing technology for building a virtual studio based on color-keying and image synthesis with a background using non-linear video editing techniques.
Gain and phase of perceived virtual rotation evoked by electrical vestibular stimuli
Peters, Ryan M.; Rasman, Brandon G.; Inglis, J. Timothy
2015-01-01
Galvanic vestibular stimulation (GVS) evokes a perception of rotation; however, very few quantitative data exist on the matter. We performed psychophysical experiments on virtual rotations experienced when binaural bipolar electrical stimulation is applied over the mastoids. We also performed analogous real whole body yaw rotation experiments, allowing us to compare the frequency response of vestibular perception with (real) and without (virtual) natural mechanical stimulation of the semicircular canals. To estimate the gain of vestibular perception, we measured direction discrimination thresholds for virtual and real rotations. Real direction discrimination thresholds decreased at higher frequencies, confirming multiple previous studies. Conversely, virtual direction discrimination thresholds increased at higher frequencies, implying low-pass filtering of the virtual perception process occurring potentially anywhere between afferent transduction and cortical responses. To estimate the phase of vestibular perception, participants manually tracked their perceived position during sinusoidal virtual and real kinetic stimulation. For real rotations, perceived velocity was approximately in phase with actual velocity across all frequencies. Perceived virtual velocity was in phase with the GVS waveform at low frequencies (0.05 and 0.1 Hz). As frequency was increased to 1 Hz, the phase of perceived velocity advanced relative to the GVS waveform. Therefore, at low frequencies GVS is interpreted as an angular velocity signal and at higher frequencies GVS becomes interpreted increasingly as an angular position signal. These estimated gain and phase spectra for vestibular perception are a first step toward generating well-controlled virtual vestibular percepts, an endeavor that may reveal the usefulness of GVS in the areas of clinical assessment, neuroprosthetics, and virtual reality. PMID:25925318
Estimation of Image Sensor Fill Factor Using a Single Arbitrary Image
Wen, Wei; Khatibi, Siamak
2017-01-01
Achieving a high fill factor is a bottleneck for capturing high-quality images. There are hardware and software solutions to this problem, in which the fill factor is assumed to be known. However, the fill factor is kept as an industrial secret by most image sensor manufacturers because of its direct effect on the assessment of sensor quality. In this paper, we propose a method to estimate the fill factor of a camera sensor from an arbitrary single image. The virtual response function of the imaging process and the sensor irradiance are estimated from generated virtual images. Then the global intensity values of the virtual images are obtained by fusing the virtual images into a single high-dynamic-range radiance map. A non-linear function is inferred from the original and global intensity values of the virtual images, and the fill factor is estimated from the conditional minimum of the inferred function. The method is verified using images from two datasets. The results show that our method estimates the fill factor correctly, with significant stability and accuracy, from a single arbitrary image, as indicated by the low standard deviation of the estimated fill factors across the images for each camera. PMID:28335459
Virtual Seismic Observation (VSO) with Sparsity-Promotion Inversion
NASA Astrophysics Data System (ADS)
Tiezhao, B.; Ning, J.; Jianwei, M.
2017-12-01
Large station intervals lead to low-resolution images and sometimes prevent imaging of the regions of interest altogether. Sparsity-promotion inversion, a useful method to recover missing data in industrial field acquisition, can be borrowed to interpolate seismic data at non-sampled sites, forming Virtual Seismic Observations (VSO). Traditional sparsity-promotion inversion suffers when there are large time differences between adjacent sites, which is the case of greatest concern here, and we use a shift method to mitigate this. The interpolation procedure is as follows: we first employ a low-pass filter to obtain long-wavelength waveform data and shift the waveforms of the same wave in different seismograms to nearly the same arrival time. Then we use wavelet-transform-based sparsity-promotion inversion to interpolate the waveform data at non-sampled sites, filling a phase into each missing trace. Finally, we shift the waveforms back to their original arrival times. We call this FSIS (Filtering, Shift, Interpolation, Shift) interpolation. In this way, we can insert different virtually observed seismic phases into non-sampled sites and obtain dense seismic observation data. To test the method, we randomly hide the real data at a site and use the rest to interpolate the observation at that site, using either direct interpolation or the FSIS method. Compared with directly interpolated data, data interpolated with FSIS preserve amplitude better. The results also show that the arrival times and waveforms of the VSOs match the real data well, which convinces us that the method for forming VSOs is applicable. In this way, we can provide the data needed by advanced seismic techniques such as RTM to illuminate shallow structures.
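A miniature version of the FSIS idea (with the low-pass filtering step omitted): traces are shifted to align the target phase, missing traces are filled by iterative soft-thresholding of 2D FFT coefficients, a stand-in for the wavelet transform used in the work, and the shifts are then undone. The synthetic gather, shifts, and thresholding parameters are assumptions.

```python
import numpy as np

def soft_complex(c, thr):
    """Complex soft-thresholding: shrink coefficient magnitudes by thr."""
    mag = np.abs(c)
    return c * np.maximum(1.0 - thr / (mag + 1e-12), 0.0)

def fsis_interpolate(gather, sampled, shifts, n_iter=150, lam_frac=0.05):
    """Shift traces to align arrivals, fill missing traces by iterative
    soft-thresholding of 2D FFT coefficients, then shift back.

    gather : (nt, nx) array with zeros in the missing traces
    sampled: boolean (nx,) flag of observed traces
    shifts : integer (nx,) per-trace alignment shifts, in samples"""
    m = sampled[None, :].astype(float)
    aligned = np.column_stack([np.roll(gather[:, j], -shifts[j])
                               for j in range(gather.shape[1])])
    x = aligned * m
    for _ in range(n_iter):
        x = aligned * m + x * (1.0 - m)          # keep the observed traces
        c = np.fft.fft2(x)
        x = np.fft.ifft2(soft_complex(c, lam_frac * np.abs(c).max())).real
    x = aligned * m + x * (1.0 - m)              # final data consistency
    return np.column_stack([np.roll(x[:, j], shifts[j])
                            for j in range(gather.shape[1])])

# Synthetic gather: a Ricker-like wavelet with linear moveout, every third trace missing.
nt, nx = 256, 40
t = np.arange(nt)
shifts = 3 * np.arange(nx)                       # known/estimated moveout, in samples
wavelet = lambda t0: (1 - 2 * ((t - t0) / 6.0)**2) * np.exp(-((t - t0) / 6.0)**2)
full = np.column_stack([wavelet(60 + s) for s in shifts])
sampled = (np.arange(nx) % 3 != 0)
rec = fsis_interpolate(full * sampled[None, :], sampled, shifts)
print(np.linalg.norm(rec - full) / np.linalg.norm(full))  # relative reconstruction error
```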
NASA Astrophysics Data System (ADS)
Feng, Bin; Shi, Zelin; Zhang, Chengshuo; Xu, Baoshu; Zhang, Xiaodong
2016-05-01
The point spread function (PSF) inconsistency caused by temperature variation leads to artifacts in the decoded images of a wavefront coding infrared imaging system. Therefore, this paper proposes an analytical model for the effect of temperature variation on PSF consistency. In the proposed model, a formula for the thermal deformation of an optical phase mask is derived. This formula indicates that a cubic optical phase mask (CPM) remains cubic after thermal deformation. The proposed equivalent cubic phase mask (E-CPM) is a virtual, room-temperature lens that characterizes the optical effect of temperature variation on the CPM. Additionally, a method for calculating PSF consistency after temperature variation is presented. Numerical simulation illustrates the validity of the proposed model, and some significant conclusions are drawn. For a given form parameter, the PSF consistency achieved by a Ge-material CPM is better than that achieved by a ZnSe-material CPM. The effect of the optical phase mask on PSF inconsistency is much smaller than that of the auxiliary lens group. A large form parameter of the CPM introduces large defocus-insensitive aberrations, which improves the PSF consistency but degrades the room-temperature MTF.
Evaluation of the Elekta Symmetry™ 4D IGRT system by using a moving lung phantom
NASA Astrophysics Data System (ADS)
Shin, Hun-Joo; Kim, Shin-Wook; Kay, Chul Seung; Seo, Jae-Hyuk; Lee, Gi-Woong; Kang, Ki-Mun; Jang, Hong Seok; Kang, Young-nam
2015-07-01
Purpose: 4D cone-beam computed tomography (CBCT) is a beneficial tool for the treatment of movable tumors because it can help us understand where the tumors are actually located and supports a precise treatment plan. However, general CBCT images have the limitation that they cannot perfectly support a sophisticated registration. The Symmetry™ 4D image-guided radiation therapy (IGRT) system from Elekta, on the other hand, offers a 4D CBCT registration option. In this study, we evaluated the usefulness of Symmetry™. Method and Materials: Planning CT images of the CIRS moving lung phantom were acquired with 4D multi-detector CT (MDCT), and the images were sorted into 10 phases from the 0% phase to the 90% phase. The thickness of the CT images was 1 mm. The acquired MDCT images were transferred to the contouring software, and a virtual target was generated. A one-arc volumetric-modulated arc therapy (VMAT) plan was made with the treatment planning system for the virtual target. Finally, the movement of the phantom was verified using the XVI Symmetry™ system. Results: The physical movement of the CIRS moving lung phantom was ±10.0 mm in the superior-inferior direction, ±1.0 mm in the lateral direction, and ±2.5 mm in the anterior-posterior direction. The movement of the phantom measured from the 4D MDCT registration was ±10.2 mm in the superior-inferior direction, ±0.9 mm in the lateral direction, and ±2.45 mm in the anterior-posterior direction. The movement of the phantom measured from the Symmetry™ registration was ±10.1 mm in the superior-inferior direction, ±0.9 mm in the lateral direction, and ±2.4 mm in the anterior-posterior direction. Conclusion: We confirmed that 4D CBCT is a beneficial tool for the treatment of movable tumors and that the 4D registration of Symmetry™ can increase the precision of the registration when a movable tumor is the target of radiation treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, L; Yin, F; Cai, J
Purpose: To develop a methodology for constructing a physiologically based virtual thorax phantom from hyperpolarized (HP) gas tagging MRI for evaluating deformable image registration (DIR). Methods: Three healthy subjects were imaged at both the end-of-inhalation (EOI) and the end-of-exhalation (EOE) phases using high-resolution (2.5 mm isovoxel) 3D proton MRI, as well as a hybrid MRI that combines HP gas tagging MRI and low-resolution (4.5 mm isovoxel) proton MRI. A sparse tagging displacement vector field (tDVF) was derived from the HP gas tagging MRI by tracking the displacement of the tagging grid between EOI and EOE. Using the tDVF and the high-resolution MR images, we determined the motion model of the entire thorax in two steps: 1) the DVF inside the lungs was estimated from the sparse tDVF using a novel multi-step natural-neighbor interpolation method; 2) the DVF outside the lungs was estimated from the DIR between the EOI and EOE images (Velocity AI). The derived motion model was then applied to the high-resolution EOI image to create a deformed EOE image, forming the virtual phantom in which the motion model provides the ground truth of deformation. Five DIR methods were evaluated using the developed virtual phantom. Errors in DVF magnitude (Em) and angle (Ea) were determined and compared for each DIR method. Results: Among the five DIR methods, free-form deformation produced DVF results that most closely resembled the ground truth (Em = 1.04 mm, Ea = 6.63°). The two B-spline-based DIR methods produced comparable results (Em = 2.04 mm, Ea = 13.66°; and Em = 2.62 mm, Ea = 17.67°), and the two optical-flow methods produced the least accurate results (Em = 7.8 mm, Ea = 53.04°; Em = 4.45 mm, Ea = 31.02°). Conclusion: A methodology for constructing a physiologically based virtual thorax phantom from HP gas tagging MRI has been developed. Initial evaluation demonstrated its potential as an effective tool for robust evaluation of DIR in the lung.
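The two error measures quoted above can be computed along the following lines; the per-voxel definitions (magnitude of the vector difference and angle between estimated and reference vectors) are our assumptions, since the abstract does not spell out the formulas.

```python
import numpy as np

def dvf_errors(dvf_est, dvf_ref, eps=1e-9):
    """Mean magnitude error Em (same units as the DVF) and mean angular error
    Ea (degrees) between an estimated and a reference displacement field.

    Both fields have shape (..., 3): one 3D displacement vector per voxel."""
    diff = dvf_est - dvf_ref
    em = np.linalg.norm(diff, axis=-1).mean()
    dot = np.sum(dvf_est * dvf_ref, axis=-1)
    norms = np.linalg.norm(dvf_est, axis=-1) * np.linalg.norm(dvf_ref, axis=-1)
    cos = np.clip(dot / (norms + eps), -1.0, 1.0)
    ea = np.degrees(np.arccos(cos)).mean()
    return em, ea

# Hypothetical displacement fields on a small grid.
rng = np.random.default_rng(3)
ref = rng.normal(size=(32, 32, 32, 3))
est = ref + 0.5 * rng.normal(size=ref.shape)
print(dvf_errors(est, ref))
```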
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edelen, A. L.; Biedron, S. G.; Milton, S. V.
At present, a variety of image-based diagnostics are used in particle accelerator systems. Often, these are viewed by a human operator who then makes appropriate adjustments to the machine. Given recent advances in using convolutional neural networks (CNNs) for image processing, it should be possible to use image diagnostics directly in control routines (NN-based or otherwise). This is especially appealing for non-intercepting diagnostics that could run continuously during beam operation. Here, we show results of a first step toward implementing such a controller: our trained CNN can predict multiple simulated downstream beam parameters at the Fermilab Accelerator Science and Technology (FAST) facility's low-energy beamline using simulated virtual cathode laser images, gun phases, and solenoid strengths.
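As a hedged illustration of such a controller's front end, the sketch below (assuming PyTorch) maps a virtual-cathode laser image plus two scalar settings (gun phase, solenoid strength) to a handful of downstream beam parameters. The architecture, input sizes, and output count are invented for the example and are not the network used at FAST.

```python
import torch
import torch.nn as nn

class BeamParamCNN(nn.Module):
    """Toy regressor: laser image + (gun phase, solenoid strength) -> beam parameters."""
    def __init__(self, n_scalars=2, n_outputs=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 4 * 4 + n_scalars, 64), nn.ReLU(),
            nn.Linear(64, n_outputs),
        )

    def forward(self, image, scalars):
        x = self.features(image).flatten(1)       # image features
        return self.head(torch.cat([x, scalars], dim=1))

# Hypothetical batch: 8 grayscale 64x64 laser images and 2 machine settings each.
model = BeamParamCNN()
pred = model(torch.rand(8, 1, 64, 64), torch.rand(8, 2))
print(pred.shape)  # torch.Size([8, 4])
```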
Stereoscopic virtual reality models for planning tumor resection in the sellar region.
Wang, Shou-sen; Zhang, Shang-ming; Jing, Jun-jie
2012-11-28
It is difficult for neurosurgeons to perceive the complex three-dimensional anatomical relationships in the sellar region. To investigate the value of using a virtual reality system for planning resection of sellar region tumors. The study included 60 patients with sellar tumors. All patients underwent computed tomography angiography, MRI-T1W1, and contrast enhanced MRI-T1W1 image sequence scanning. The CT and MRI scanning data were collected and then imported into a Dextroscope imaging workstation, a virtual reality system that allows structures to be viewed stereoscopically. During preoperative assessment, typical images for each patient were chosen and printed out for use by the surgeons as references during surgery. All sellar tumor models clearly displayed bone, the internal carotid artery, circle of Willis and its branches, the optic nerve and chiasm, ventricular system, tumor, brain, soft tissue and adjacent structures. Depending on the location of the tumors, we simulated the transmononasal sphenoid sinus approach, transpterional approach, and other approaches. Eleven surgeons who used virtual reality models completed a survey questionnaire. Nine of the participants said that the virtual reality images were superior to other images but that other images needed to be used in combination with the virtual reality images. The three-dimensional virtual reality models were helpful for individualized planning of surgery in the sellar region. Virtual reality appears to be promising as a valuable tool for sellar region surgery in the future.
Fluidic Control of Virtual Aerosurfaces
2007-04-01
… are measured phase-locked to the actuation waveform (the imaged field measures 32 x 32 mm, and the magnification is 33 µm/pixel). It should be noted … aerodynamic effects through pulse-modulated actuation near the trailing edge, it is possible to maintain the same aerodynamic performance at …
Vision-based overlay of a virtual object into real scene for designing room interior
NASA Astrophysics Data System (ADS)
Harasaki, Shunsuke; Saito, Hideo
2001-10-01
In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, interior simulator, in which a virtual (CG) object can be overlaid into a real world space. Interior simulator is developed as an example of an AR application of the proposed method. Using interior simulator, users can visually simulate the location of virtual furniture and articles in the living room so that they can easily design the living room interior without placing real furniture and articles, by viewing from many different locations and orientations in real-time. In our system, two base images of a real world space are captured from two different views for defining a projective coordinate of object 3D space. Then each projective view of a virtual object in the base images are registered interactively. After such coordinate determination, an image sequence of a real world space is captured by hand-held camera with tracking non-metric measured feature points for overlaying a virtual object. Virtual objects can be overlaid onto the image sequence by taking each relationship between the images. With the proposed system, 3D position tracking device, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid into an image sequence of the scene of a living room nearly at video rate (20 frames per second).
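A minimal OpenCV sketch of the core warping step described above: an input view is projected toward the virtual viewpoint with a homography estimated from feature correspondences. The point arrays and image are placeholders; in the actual system the correspondences would come from the tracked, non-metric feature points.

```python
import cv2
import numpy as np

# Hypothetical pixel correspondences between the current camera frame and the
# virtual (base) view; in practice these come from the tracked feature points.
pts_src = np.float32([[120, 80], [400, 95], [410, 300], [130, 310]])
pts_dst = np.float32([[100, 100], [380, 100], [380, 320], [100, 320]])

H, _ = cv2.findHomography(pts_src, pts_dst)

# Stand-in for one frame of the hand-held camera sequence of the living room
frame = np.zeros((480, 640, 3), dtype=np.uint8)
warped = cv2.warpPerspective(frame, H, (frame.shape[1], frame.shape[0]))

# `warped` approximates the frame as seen from the virtual viewpoint;
# CG furniture rendered for that viewpoint can then be composited over it.
```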
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Because true presence capture is still a very new technology, the technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is has to have several camera modules, several microphones, and especially technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range create the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors. New quality features must also be validated: for example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? The work describes quality factors which are still valid in presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work contains considerations of how well current measurement methods can be used with presence capture cameras.
Localized intraoperative virtual endoscopy (LIVE) for surgical guidance in 16 skull base patients.
Haerle, Stephan K; Daly, Michael J; Chan, Harley; Vescan, Allan; Witterick, Ian; Gentili, Fred; Zadeh, Gelareh; Kucharczyk, Walter; Irish, Jonathan C
2015-01-01
Previous preclinical studies of localized intraoperative virtual endoscopy-image-guided surgery (LIVE-IGS) for skull base surgery suggest a potential clinical benefit. The first aim was to evaluate the registration accuracy of virtual endoscopy based on high-resolution magnetic resonance imaging under clinical conditions. The second aim was to implement and assess real-time proximity alerts for critical structures during skull base drilling. Patients consecutively referred for sinus and skull base surgery were enrolled in this prospective case series. Five patients were used to check registration accuracy and feasibility, with the subsequent 11 patients being treated under LIVE-IGS conditions with presentation to the operating surgeon (phase 2). Sixteen skull base patients were endoscopically operated on using image-based navigation while LIVE-IGS was tested in a clinical setting. Workload was quantitatively assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Real-time localization of the surgical drill was accurate to ~1 to 2 mm in all cases. The use of 3-mm proximity alert zones around the carotid arteries and optic nerve found regular clinical use, as the median minimum distance between the tracked drill and these structures was 1 mm (0.2-3.1 mm) and 0.6 mm (0.2-2.5 mm), respectively. No statistical differences were found in the NASA-TLX indicators for this experienced surgical cohort. Real-time proximity alerts with virtual endoscopic guidance were sufficiently accurate under clinical conditions. Further clinical evaluation is required to evaluate the potential surgical benefits, particularly for less experienced surgeons or for teaching purposes. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2014.
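The proximity-alert logic can be pictured as a nearest-neighbour query between the tracked drill tip and the segmented critical structures, with an alert raised inside a 3 mm zone as in the study. The sketch below, with hypothetical surface points and coordinates, is an illustration rather than the LIVE-IGS implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical surface points (mm) of a segmented critical structure, e.g. the carotid artery
structure_points = np.random.rand(5000, 3) * 100.0
tree = cKDTree(structure_points)

ALERT_RADIUS_MM = 3.0   # proximity alert zone, matching the zones described above

def check_proximity(drill_tip_mm):
    """Return (distance, alert) for the current tracked drill-tip position."""
    distance, _ = tree.query(drill_tip_mm)
    return distance, distance < ALERT_RADIUS_MM

dist, alert = check_proximity(np.array([50.0, 50.0, 50.0]))
if alert:
    print(f"WARNING: drill within {dist:.1f} mm of a critical structure")
```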
The Virtual Microscopy Database: sharing digital microscope images for research and education.
Lee, Lisa M J; Goldman, Haviva M; Hortsch, Michael
2018-02-14
Over the last 20 years, virtual microscopy has become the predominant modus of teaching the structural organization of cells, tissues, and organs, replacing the use of optical microscopes and glass slides in a traditional histology or pathology laboratory setting. Although virtual microscopy image files can easily be duplicated, creating them requires not only quality histological glass slides but also an expensive whole slide microscopic scanner and massive data storage devices. These resources are not available to all educators and researchers, especially at new institutions in developing countries. This leaves many schools without access to virtual microscopy resources. The Virtual Microscopy Database (VMD) is a new resource established to address this problem. It is a virtual image file-sharing website that allows researchers and educators easy access to a large repository of virtual histology and pathology image files. With support from the American Association of Anatomists (Bethesda, MD) and MBF Bioscience Inc. (Williston, VT), registration and use of the VMD are currently free of charge. However, the VMD site is restricted to faculty and staff of research and educational institutions. Virtual Microscopy Database users can upload their own collection of virtual slide files, as well as view and download image files deposited by other VMD clients for their own non-profit educational and research purposes. Anat Sci Educ. © 2018 American Association of Anatomists.
Factors to keep in mind when introducing virtual microscopy.
Glatz-Krieger, Katharina; Spornitz, Udo; Spatz, Alain; Mihatsch, Michael J; Glatz, Dieter
2006-03-01
Digitization of glass slides and delivery of so-called virtual slides (VS) emulating a real microscope over the Internet have become reality due to recent improvements in technology. We have implemented a virtual microscope for instruction of medical students and for continuing medical education. Up to 30,000 images per slide are captured using a microscope with an automated stage. The images are post-processed and then served by a plain hypertext transfer protocol (http)-server. A virtual slide client (vMic) based on Macromedia's Flash MX, a highly accepted technology available on every modern Web browser, has been developed. All necessary virtual slide parameters are stored in an XML file together with the image. Evaluation of the courses by questionnaire indicated that most students and many but not all pathologists regard virtual slides as an adequate replacement for traditional slides. All our virtual slides are publicly accessible over the World Wide Web (WWW) at http://vmic.unibas.ch . Recently, several commercially available virtual slide acquisition systems (VSAS) have been developed that use various technologies to acquire and distribute virtual slides. These systems differ in speed, image quality, compatibility, viewer functionalities and price. This paper gives an overview of the factors to keep in mind when introducing virtual microscopy.
Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F
2018-05-08
Objective: To explore the clinical and teaching application value of virtual reality technology in preoperative planning and intraoperative guidance of gliomas located in the central sulcus region. Method: Ten patients with gliomas in the central sulcus region were scheduled for surgical treatment. The neuroimaging data, including CT, CTA, DSA, MRI, and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained on the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative auxiliary decision-making, and the training of specialist physicians. Results: Intraoperative findings in the 10 patients were highly consistent with the preoperative simulation using virtual reality technology. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operation accuracy. This technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Image fusion and 3D reconstruction based virtual reality technology in glioma resection is helpful for formulating the operation plan, improving operation safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.
Photorealistic scene presentation: virtual video camera
NASA Astrophysics Data System (ADS)
Johnson, Michael J.; Rogers, Joel Clark W.
1994-07-01
This paper presents a low cost alternative for presenting photo-realistic imagery during the final approach, which often is a peak workload phase of flight. The method capitalizes on `a priori' information. It accesses out-the-window `snapshots' from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a `clear-day' video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate an all-weather virtual video camera is possible.
A Magnifying Glass for Virtual Imaging of Subwavelength Resolution by Transformation Optics.
Sun, Fei; Guo, Shuwei; Liu, Yichao; He, Sailing
2018-06-14
Traditional magnifying glasses can give magnified virtual images with diffraction-limited resolution, that is, detailed information is lost. Here, a novel magnifying glass by transformation optics, referred to as a "superresolution magnifying glass" (SMG) is designed, which can produce magnified virtual images with a predetermined magnification factor and resolve subwavelength details (i.e., light sources with subwavelength distances can be resolved). Based on theoretical calculations and reductions, a metallic plate structure to produce the reduced SMG in microwave frequencies, which gives good performance verified by both numerical simulations and experimental results, is proposed and realized. The function of SMG is to create a superresolution virtual image, unlike traditional superresolution imaging devices that create real images. The proposed SMG will create a new branch of superresolution imaging technology. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Sarkar, V; Gutierrez, A N; Stathakis, S; Swanson, G P; Papanikolaou, N
2009-01-01
The purpose of this project was to develop a software platform to produce a virtual fluoroscopic image as an aid for permanent prostate seed implants. Seed location information from a pre-plan was extracted and used as input to in-house developed software to produce a virtual fluoroscopic image. In order to account for differences in patient positioning on the day of treatment, the user was given the ability to make changes to the virtual image. The system has been shown to work as expected for all test cases. The system allows for quick (on average less than 10 sec) generation of a virtual fluoroscopic image of the planned seed pattern. The image can be used as a verification tool to aid the physician in evaluating how close the implant is to the planned distribution throughout the procedure and enable remedial action should a large deviation be observed.
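Generating a virtual fluoroscopic view from planned seed coordinates amounts to projecting each 3D seed position onto the imaging plane of an assumed fluoroscope geometry. The pinhole model, source-to-image distance, and pixel size below are illustrative assumptions, not the in-house software.

```python
import numpy as np

def project_seeds(seeds_mm, sid_mm=1000.0, pixel_mm=0.3, image_size=(512, 512)):
    """Project 3D seed positions (x, y, z in mm; source at origin, detector at z = sid_mm)
    onto a 2D virtual fluoroscopic image using a simple pinhole model."""
    img = np.zeros(image_size, dtype=np.uint8)
    for x, y, z in seeds_mm:
        u = x * sid_mm / z / pixel_mm + image_size[1] / 2   # column (pixels)
        v = y * sid_mm / z / pixel_mm + image_size[0] / 2   # row (pixels)
        if 0 <= int(v) < image_size[0] and 0 <= int(u) < image_size[1]:
            img[int(v), int(u)] = 255                       # mark the projected seed
    return img

# Planned seed pattern from the pre-plan (hypothetical coordinates, mm)
seeds = np.array([[5.0, -3.0, 600.0], [8.0, 2.0, 610.0], [-4.0, 6.0, 595.0]])
virtual_fluoro = project_seeds(seeds)
```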
Shinohara, Gen; Morita, Kiyozo; Hoshino, Masato; Ko, Yoshihiro; Tsukube, Takuro; Kaneko, Yukihiro; Morishita, Hiroyuki; Oshima, Yoshihiro; Matsuhisa, Hironori; Iwaki, Ryuma; Takahashi, Masashi; Matsuyama, Takaaki; Hashimoto, Kazuhiro; Yagi, Naoto
2016-11-01
The feasibility of synchrotron radiation-based phase-contrast computed tomography (PCCT) for visualization of the atrioventricular (AV) conduction axis in human whole heart specimens was tested using four postmortem structurally normal newborn hearts obtained at autopsy. A PCCT imaging system at the beamline BL20B2 in a SPring-8 synchrotron radiation facility was used. The PCCT imaging of the conduction system was performed with "virtual" slicing of the three-dimensional reconstructed images. For histological verification, specimens were cut into planes similar to the PCCT images, then cut into 5-μm serial sections and stained with Masson's trichrome. In PCCT images of all four of the whole hearts of newborns, the AV conduction axis was distinguished as a low-density structure, which was serially traceable from the compact node to the penetrating bundle within the central fibrous body, and to the branching bundle into the left and right bundle branches. This was verified by histological serial sectioning. This is the first demonstration that visualization of the AV conduction axis within human whole heart specimens is feasible with PCCT. © The Author(s) 2016.
Solar Resource Assessment with Sky Imagery and a Virtual Testbed for Sky Imager Solar Forecasting
NASA Astrophysics Data System (ADS)
Kurtz, Benjamin Bernard
In recent years, ground-based sky imagers have emerged as a promising tool for forecasting solar energy on short time scales (0 to 30 minutes ahead). Following the development of sky imager hardware and algorithms at UC San Diego, we present three new or improved algorithms for sky imager forecasting and forecast evaluation. First, we present an algorithm for measuring irradiance with a sky imager. Sky imager forecasts are often used in conjunction with other instruments for measuring irradiance, so this has the potential to decrease instrumentation costs and logistical complexity. In particular, the forecast algorithm itself often relies on knowledge of the current irradiance which can now be provided directly from the sky images. Irradiance measurements are accurate to within about 10%. Second, we demonstrate a virtual sky imager testbed that can be used for validating and enhancing the forecast algorithm. The testbed uses high-quality (but slow) simulations to produce virtual clouds and sky images. Because virtual cloud locations are known, much more advanced validation procedures are possible with the virtual testbed than with measured data. In this way, we are able to determine that camera geometry and non-uniform evolution of the cloud field are the two largest sources of forecast error. Finally, with the assistance of the virtual sky imager testbed, we develop improvements to the cloud advection model used for forecasting. The new advection schemes are 10-20% better at short time horizons.
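The frozen-cloud advection step at the heart of such forecasts can be sketched as shifting the current cloud map by the estimated cloud-motion vector times the forecast horizon. The grid spacing, motion vector, and binary cloud map below are assumed values for illustration, not the UC San Diego forecast algorithm.

```python
import numpy as np

def advect_cloud_map(cloud_map, motion_m_per_s, horizon_s, grid_m=10.0):
    """Shift a binary cloud map by (motion vector * horizon) under a frozen-cloud assumption."""
    shift_px = np.round(np.array(motion_m_per_s) * horizon_s / grid_m).astype(int)
    return np.roll(cloud_map, shift=(shift_px[1], shift_px[0]), axis=(0, 1))

clouds_now = np.zeros((200, 200), dtype=bool)
clouds_now[80:100, 40:60] = True                  # one cloud, detected from the sky image
clouds_in_10min = advect_cloud_map(clouds_now, motion_m_per_s=(5.0, 0.0), horizon_s=600)
```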
Wu, Huawei; Zhang, Qing; Hua, Jia; Hua, Xiaolan; Xu, Jianrong
2013-01-01
Background The aim of this study was to determine the optimal monochromatic spectral CT pulmonary angiography (sCTPA) levels to obtain the highest image quality and diagnostic confidence for pulmonary embolism detection. Methods The Institutional Review Board of the Shanghai Jiao Tong University School of Medicine approved this study, and written informed consent was obtained from all participating patients. Seventy-two patients with pulmonary embolism were scanned with spectral CT mode in the arterial phase. One hundred and one sets of virtual monochromatic spectral (VMS) images were generated ranging from 40 keV to 140 keV. Image noise, clot diameter and clot to artery contrast-to-noise ratio (CNR) from seven sets of VMS images at selected monochromatic levels in sCTPA were measured and compared. Subjective image quality and diagnostic confidence for these images were also assessed and compared. Data were analyzed by paired t test and Wilcoxon rank sum test. Results The lowest noise and the highest image quality score for the VMS images were obtained at 65 keV. The VMS images at 65 keV also had the second highest CNR value behind that of 50 keV VMS images. There was no difference in the mean noise and CNR between the 65 keV and 70 keV VMS images. The apparent clot diameter correlated with the keV levels. Conclusions The optimal energy level for detecting pulmonary embolism using dual-energy spectral CT pulmonary angiography was 65–70 keV. Virtual monochromatic spectral images at approximately 65–70 keV yielded the lowest image noise, high CNR and highest diagnostic confidence for the detection of pulmonary embolism. PMID:23667583
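The clot-to-artery contrast-to-noise ratio compared across keV levels is, in the usual definition (an assumption here, since the abstract does not spell out the formula):

```latex
\mathrm{CNR} = \frac{\lvert \overline{HU}_{\text{artery}} - \overline{HU}_{\text{clot}} \rvert}{\sigma_{\text{noise}}}
```

where the numerator is the difference in mean attenuation between the enhanced artery and the clot, and the denominator is the image noise measured as the standard deviation in a homogeneous region of interest.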
Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.
Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F
2013-09-01
The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed by Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed by 3dMD patient software. All 3D CT models were exported in the stereolithography file format, and the 3dMD model was exported in the Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.
Minami, Yasunori; Kitai, Satoshi; Kudo, Masatoshi
2012-03-01
Virtual CT sonography using magnetic navigation provides cross-sectional images of CT volume data corresponding to the angle of the transducer in the magnetic field in real time. The purpose of this study was to clarify the value of this virtual CT sonography for assessing the treatment response of radiofrequency ablation for hepatocellular carcinoma. Sixty-one patients with 88 HCCs measuring 0.5-1.3 cm (mean±SD, 1.0±0.3 cm) were treated by radiofrequency ablation. For early treatment response, dynamic CT was performed 1-5 days after treatment (median, 2 days). We compared early treatment response between axial CT images and multi-angle CT images using virtual CT sonography. Residual tumor stains on axial CT images and multi-angle CT images were detected in 11.4% (10/88) and 13.6% (12/88) after the first session of RFA, respectively (P=0.65). Two patients were diagnosed as showing hyperemic enhancement after the initial radiofrequency ablation on axial CT images and showed local tumor progression shortly thereafter because of unnoticed residual tumors. Only virtual CT sonography with magnetic navigation retrospectively showed the residual tumor as circular enhancement. In the safety margin analysis, 10 patients were excluded because of residual tumors. A safety margin of more than 5 mm was confirmed on virtual CT sonographic images and transverse CT images in 71.8% (56/78) and 82.1% (64/78), respectively (P=0.13). The safety margin may have been overestimated on axial CT images in 8 nodules. Virtual CT sonography with magnetic navigation was useful in evaluating the treatment response of radiofrequency ablation therapy for hepatocellular carcinoma. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Virtual view image synthesis for eye-contact in TV conversation system
NASA Astrophysics Data System (ADS)
Murayama, Daisuke; Kimura, Keiichi; Hosaka, Tadaaki; Hamamoto, Takayuki; Shibuhisa, Nao; Tanaka, Seiichi; Sato, Shunichi; Saito, Sakae
2010-02-01
Eye-contact plays an important role in human communication in the sense that it can convey unspoken information. However, it is highly difficult to realize eye-contact in teleconferencing systems because of camera configurations. Conventional methods to overcome this difficulty mainly resorted to space-consuming optical devices such as half mirrors. In this paper, we propose an alternative approach to achieve eye-contact by techniques of arbitrary view image synthesis. In our method, multiple images captured by real cameras are converted to the virtual viewpoint (the center of the display) by homography, and evaluation of matching errors among these projected images provides the depth map and the virtual image. Furthermore, we also propose a simpler version of this method using a single camera to save computational costs, in which only one real image is transformed to the virtual viewpoint based on the hypothesis that the subject is located at a predetermined distance. In this simple implementation, eye regions are separately generated by comparison with pre-captured frontal face images. Experimental results of both methods show that the synthesized virtual images enable eye-contact favorably.
Lee, Jae M; Ku, Jeong H; Jang, Dong P; Kim, Dong H; Choi, Young H; Kim, In Y; Kim, Sun I
2002-06-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology enabled us to use virtual reality (VR) for the treatment of the fear of public speaking. There have been two techniques used to construct a virtual environment for the treatment of the fear of public speaking: model-based and movie-based. Virtual audiences and virtual environments made by the model-based technique are unrealistic and unnatural. The movie-based technique has a disadvantage in that each virtual audience member cannot be controlled individually, because all virtual audiences are included in one moving picture file. To address this disadvantage, this paper presents a virtual environment made by using image-based rendering (IBR) and chroma keying simultaneously. IBR enables us to make the virtual environment realistic because the images are stitched panoramically from photos taken with a digital camera. The use of chroma keying allows each virtual audience member to be controlled individually. In addition, a real-time capture technique was applied in constructing the virtual environment to give the subjects more interaction, in that they can talk with a therapist or another subject.
Development of a virtual speaking simulator using Image Based Rendering.
Lee, J M; Kim, H; Oh, M J; Ku, J H; Jang, D P; Kim, I Y; Kim, S I
2002-01-01
The fear of speaking is often cited as the world's most common social phobia. The rapid growth of computer technology has enabled the use of virtual reality (VR) for the treatment of the fear of public speaking. There are two techniques for building virtual environments for the treatment of this fear: a model-based and a movie-based method. Both methods have the weakness that they are unrealistic and not controllable individually. To address these disadvantages, this paper presents a virtual environment produced with Image Based Rendering (IBR) and chroma keying simultaneously. IBR enables the creation of realistic virtual environments where the images are stitched panoramically from photos taken with a digital camera. The use of chroma keying puts virtual audience members under individual control in the environment. In addition, a real-time capture technique is used in constructing the virtual environments, enabling spoken interaction between the subject and a therapist or another subject.
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument, like the theremin, in the virtual space, or a performer puts on a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image using a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method for measuring the positions of the performer, his/her head, and both eyes using a single camera.
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a substrate network, is extremely challenging. This paper proposes a constraint optimization based mapping method for solving the virtual network mapping problem. The method divides the problem into two phases, a node mapping phase and a link mapping phase, both of which are NP-hard problems. A node mapping algorithm and a link mapping algorithm are proposed for solving the node mapping phase and the link mapping phase, respectively. The node mapping algorithm adopts a greedy strategy and mainly considers two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm is based on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that the method performs very well.
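A toy version of the greedy node-mapping idea (rank substrate nodes by available resources, break ties by distance to already-mapped nodes) is sketched below. The data structures, scoring rule, and single CPU resource are assumptions for illustration, not the paper's algorithm.

```python
def greedy_node_mapping(virtual_nodes, substrate):
    """virtual_nodes: {v: cpu_demand}; substrate: {s: {"cpu": capacity, "pos": (x, y)}}.
    Returns {virtual_node: substrate_node} or None if a demand cannot be satisfied."""
    def dist(a, b):
        (x1, y1), (x2, y2) = substrate[a]["pos"], substrate[b]["pos"]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    mapping = {}
    free_cpu = {s: d["cpu"] for s, d in substrate.items()}
    # Map the most demanding virtual nodes first
    for v, demand in sorted(virtual_nodes.items(), key=lambda kv: -kv[1]):
        candidates = [s for s in substrate
                      if s not in mapping.values() and free_cpu[s] >= demand]
        if not candidates:
            return None
        # Prefer high residual capacity, then closeness to nodes already chosen
        def score(s):
            d = sum(dist(s, t) for t in mapping.values()) if mapping else 0.0
            return (-free_cpu[s], d)
        best = min(candidates, key=score)
        mapping[v] = best
        free_cpu[best] -= demand
    return mapping

substrate = {"A": {"cpu": 10, "pos": (0, 0)}, "B": {"cpu": 6, "pos": (1, 0)},
             "C": {"cpu": 8, "pos": (0, 1)}}
print(greedy_node_mapping({"v1": 7, "v2": 5}, substrate))
```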
Wang, Yu-Jen; Chen, Po-Ju; Liang, Xiao; Lin, Yi-Hsin
2017-03-27
Augmented reality (AR), which uses computer-generated projected information to augment our senses, has an important impact on human life, especially for elderly people. However, there are three major challenges regarding the optical system in an AR system: registration, vision correction, and readability under strong ambient light. Here, we solve the three challenges simultaneously for the first time using two liquid crystal (LC) lenses and a polarizer-free attenuator integrated into an optical-see-through AR system. One of the LC lenses is used to electrically adjust the position of the projected virtual image, the so-called registration. The other LC lens, with a larger aperture and a polarization-independent characteristic, is in charge of vision correction, such as for myopia and presbyopia. The linearity of the lens powers of the two LC lenses is also discussed. The readability of virtual images under strong ambient light is addressed by the electrically switchable transmittance of the LC attenuator, originating from light scattering and light absorption. The concept demonstrated in this paper could be further extended to other electro-optical devices as long as the devices exhibit the capability of phase modulation and amplitude modulation.
NASA Astrophysics Data System (ADS)
Liu, T.; Klemperer, S. L.; Yu, C.; Ning, J.
2017-12-01
In the past decades, P wave receiver functions (PRF) have been routinely used to image the Moho, although it is well known that PRFs are susceptible to contamination from sedimentary multiples. Recently, Virtual Deep Seismic Sounding (VDSS) emerged as a novel method to image the Moho. However, despite successful applications of VDSS on multiple datasets from different areas, how sedimentary basins affect the waveforms of post-critical SsPmp, the Moho reflection phase used in VDSS, is not widely understood. Here, motivated by a dataset collected in the Ordos plateau, which shows distinct effects of sedimentary basins on SsPmp and Pms waveforms, we use synthetic seismograms to study the effects of sedimentary basins on SsPmp and Pms, the phases used in VDSS and PRF respectively. The results show that when the sedimentary thickness is on the same order of magnitude as the dominant wavelength of the incident S wave, SsPmp amplitude decreases significantly with S velocity of the sedimentary layer, whereas increasing sedimentary thickness has little effect in SsPmp amplitude. Our explanation is that the low S velocity layer at the virtual source reduces the incident angle of S wave at the free surface, thus decreases the S-to-P reflection coefficient at the virtual source. In addition, transmission loss associated with the bottom of sedimentary basins also contributes to reducing SsPmp amplitude. This explains not only our observations from the Ordos plateau, but also observations from other areas where post-critical SsPmp is expected to be observable, but instead is too weak to be identified. As for Pms, we observe that increasing sedimentary thickness and decreasing sedimentary velocities both can cause interference between sedimentary multiples and Pms, rendering the Moho depths inferred from Pms arrival times unreliable. The reason is that although Pms amplitude does not vary with sedimentary thickness or velocities, as sedimentary velocities decrease and thickness grows, the sedimentary multiples will become stronger and arrive later, and will eventually interfere with Pms. In summary, although both VDSS and PRF are subject to sedimentary effects, when the sedimentary velocity is relatively high, we can still expect VDSS to give reasonable estimations of Moho depths, whereas PRF in such cases might be too noisy to use.
Duan, Xinhui; Arbique, Gary; Guild, Jeffrey; Xi, Yin; Anderson, Jon
2018-05-01
The purpose of this study was to evaluate the quantitative accuracy of spectral images from a detector-based spectral CT scanner using a phantom with iodine-loaded inserts. A 40-cm long-body phantom with seven iodine inserts (2-20 mg/ml of iodine) was used in the study. The inserts could be placed at 5.5 or 10.5 cm from the phantom axis. The phantom was scanned five times for each insert configuration using 120 kVp tube voltage. A set of iodine, virtual noncontrast, effective atomic number, and virtual monoenergetic spectral CT images were generated and measurements were made for all the iodine rods. Measured values were compared with reference values calculated from the chemical composition information provided by the phantom manufacturer. Radiation dose from the spectral CT was compared to a conventional CT using a CTDI (32 cm) phantom. Good agreement between measurements and reference values was achieved for all types of spectral images. The differences ranged from -0.46 to 0.1 mg/ml for iodine concentration, -9.95 to 6.41 HU for virtual noncontrast images, 0.12 to 0.35 for effective Z images, and -17.7 to 55.7 HU for virtual monoenergetic images. For a similar CTDIvol, image noise from the conventional CT was 10% lower than the spectral CT. The detector-based spectral CT can achieve accurate spectral measurements on iodine concentration, virtual non-contrast images, effective atomic numbers, and virtual monoenergetic images. © 2018 American Association of Physicists in Medicine.
Matsushima, Kyoji; Sonobe, Noriaki
2018-01-01
Digitized holography techniques are used to reconstruct three-dimensional (3D) images of physical objects using large-scale computer-generated holograms (CGHs). The object field is captured at three wavelengths over a wide area at high densities. Synthetic aperture techniques using single sensors are used for image capture in phase-shifting digital holography. The captured object field is incorporated into a virtual 3D scene that includes nonphysical objects, e.g., polygon-meshed CG models. The synthetic object field is optically reconstructed as a large-scale full-color CGH using red-green-blue color filters. The CGH has a wide full-parallax viewing zone and reconstructs a deep 3D scene with natural motion parallax.
Reynoso, Exequiel; Capunay, Carlos; Rasumoff, Alejandro; Vallejos, Javier; Carpio, Jimena; Lago, Karen; Carrascosa, Patricia
2016-01-01
The aim of this study was to explore the usefulness of combined virtual monochromatic imaging and metal artifact reduction software (MARS) for the evaluation of musculoskeletal periprosthetic tissue. Measurements were performed in periprosthetic and remote regions in 80 patients using a high-definition scanner. Polychromatic images with and without MARS and virtual monochromatic images were obtained. Periprosthetic polychromatic imaging (PI) showed significant differences compared with remote areas among the 3 tissues explored (P < 0.0001). No significant differences were observed between periprosthetic and remote tissues using monochromatic imaging with MARS (P = 0.053 bone, P = 0.32 soft tissue, and P = 0.13 fat). However, such differences were significant using PI with MARS among bone (P = 0.005) and fat (P = 0.02) tissues. All periprosthetic areas were noninterpretable using PI, compared with 11 (9%) using monochromatic imaging. The combined use of virtual monochromatic imaging and MARS reduced periprosthetic artifacts, achieving attenuation levels comparable to implant-free tissue.
Planning and Management of Real-Time Geospatial UAS Missions Within a Virtual Globe Environment
NASA Astrophysics Data System (ADS)
Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.
2011-09-01
This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.
NASA Astrophysics Data System (ADS)
Hara, Hidetake; Muraishi, Hiroshi; Matsuzawa, Hiroki; Inoue, Toshiyuki; Nakajima, Yasuo; Satoh, Hitoshi; Abe, Shinji
2015-07-01
We have recently developed a phantom that simulates acute ischemic stroke. We attempted to visualize an acute-stage cerebral infarction by using dual-energy computed tomography (DECT) to obtain virtual monochromatic images of this phantom. Virtual monochromatic images were created at energies from 40 to 100 keV in steps of 10 keV and from 60 to 80 keV in steps of 1 keV, under three tube voltage conditions with tin (Sn) filters. Calculation of the CNR values allowed us to evaluate the visualization of acute-stage cerebral infarction. The CNR value of a virtual monochromatic image was the highest at 68 keV under 80 kV / Sn 140 kV, at 72 keV under 100 kV / Sn 140 kV, and at 67 keV under 140 kV / 80 kV. The CNR values of virtual monochromatic images at energies between 65 and 75 keV were significantly higher than those obtained for all other created images. Therefore, the optimal conditions for visualizing acute ischemic stroke were achievable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
A seismic refraction survey is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude at far offsets due to attenuation. This phenomenon makes it difficult to pick the first refraction arrival and hence challenging to produce a near-surface image. Seismic interferometry is a technique that manipulates seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving the quality of the first refraction arrival at far offsets. This research shows that we could estimate physical properties such as seismic velocity and layer thickness from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
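The core of the virtual-refraction idea, cross-correlating the traces recorded at two receivers for each source and stacking over sources to build a virtual trace with improved signal-to-noise ratio, can be sketched as follows. The random arrays are stand-ins for real shot gathers, and the simple correlate-and-stack loop is a schematic of the general interferometry workflow, not this study's processing sequence.

```python
import numpy as np

def virtual_trace(gather_a, gather_b):
    """Cross-correlate the traces recorded at receivers A and B for every source,
    then stack over sources. gather_a, gather_b: arrays of shape (n_sources, n_samples)."""
    n_src, n_samp = gather_a.shape
    stack = np.zeros(2 * n_samp - 1)
    for s in range(n_src):
        stack += np.correlate(gather_b[s], gather_a[s], mode="full")
    return stack / n_src     # stacking suppresses incoherent noise

rng = np.random.default_rng(1)
shots_a = rng.normal(size=(50, 1000))       # stand-in traces recorded at receiver A
shots_b = rng.normal(size=(50, 1000))       # stand-in traces recorded at receiver B
vt = virtual_trace(shots_a, shots_b)
```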
HVS: an image-based approach for constructing virtual environments
NASA Astrophysics Data System (ADS)
Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao
1998-09-01
Virtual reality systems can construct virtual environments which provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. The real-world or synthesized images can be recorded by a camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on the fly to simulate walking forward/backward, moving left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video by supporting walking forward/backward.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding psychological effects. To create truly fascinating three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and deep investigation are focused on depth extraction from captured integral 3D images. The method of depth calculation from disparity and the multiple-baseline method that is used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
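For a rectified pair of viewpoints, depth extraction from disparity as discussed above follows the standard triangulation relation (quoted here as background; the abstract itself does not state the formula):

```latex
Z = \frac{f \, B}{d}
```

where Z is the depth, f the focal length, B the baseline between the two viewpoints (here, between micro-lens centres), and d the measured disparity; the multiple-baseline approach combines SSD matching costs over several baselines to stabilise the disparity estimate.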
Chang, Hsiao‐Han; Lee, Hsiao‐Fei; Sung, Chien‐Cheng; Liao, Tsung‐I
2013-01-01
A frameless radiosurgery system uses a set of thermoplastic masks for fixation and stereoscopic X-ray imaging for alignment. The accuracy depends on mask fixation and imaging. Under certain circumstances, the guidance images may contain insufficient bony structures, resulting in lower accuracy. A virtual isocenter function is designed for such scenarios. In this study, we investigated the immobilization and the indications for using the virtual isocenter. Twenty-four arbitrary imaginary treatment targets (ITTs) in a phantom were evaluated. The external localizer with positioner films was used as the reference. The alignments obtained using the actual and virtual isocenter in image guidance were compared. The deviation of the alignment after removing and then resetting the mask was also checked. The results illustrated that the mean deviation between the alignment by image guidance using the actual isocenter (Isoimg) and the localizer (Isoloc) was 2.26 mm ± 1.16 mm (standard deviation, SD), and 1.66 mm ± 0.83 mm when using the virtual isocenter. The deviation of the alignment by image guidance using the actual isocenter relative to the localizer before and after mask resetting was 7.02 mm ± 5.8 mm. The deviations before and after mask resetting were insignificant for target centers more than 80 mm from the skull edge in the craniocaudal direction. The deviations between the alignments using the actual and virtual isocenter in image guidance were not significant if the minimum distance from the target center to the skull edge was greater than or equal to 30 mm. Due to an unacceptable deviation after mask resetting, image guidance is necessary to improve the accuracy of frameless immobilization. A treatment isocenter less than 30 mm from the skull bone should be an indication for using the virtual isocenter for alignment in image guidance. The virtual isocenter should be set as caudally as possible, and the sella of the skull should be the ideal point. PACS numbers: 87.55.kh, 87.55.ne, 87.55.tm PMID:23835379
Kellock, Trenton T; Nicolaou, Savvas; Kim, Sandra S Y; Al-Busaidi, Sultan; Louis, Luck J; O'Connell, Tim W; Ouellette, Hugue A; McLaughlin, Patrick D
2017-09-01
Purpose To quantify the sensitivity and specificity of dual-energy computed tomographic (CT) virtual noncalcium images in the detection of nondisplaced hip fractures and to assess whether obtaining these images as a complement to bone reconstructions alters sensitivity, specificity, or diagnostic confidence. Materials and Methods The clinical research ethics board approved chart review, and the requirement to obtain informed consent was waived. The authors retrospectively identified 118 patients who presented to a level 1 trauma center emergency department and who underwent dual-energy CT for suspicion of a nondisplaced traumatic hip fracture. Clinical follow-up was the standard of reference. Three radiologists interpreted virtual noncalcium images for traumatic bone marrow edema. Bone reconstructions for the same cases were interpreted alone and then with virtual noncalcium images. Diagnostic confidence was rated on a scale of 1 to 10. McNemar, Fleiss κ, and Wilcoxon signed-rank tests were used for statistical analysis. Results Twenty-two patients had nondisplaced hip fractures and 96 did not have hip fractures. Sensitivity with virtual noncalcium images was 77% and 91% (17 and 20 of 22 patients), and specificity was 92%-99% (89-95 of 96 patients). Sensitivity increased by 4%-5% over that with bone reconstruction images alone for two of the three readers when both bone reconstruction and virtual noncalcium images were used. Specificity remained unchanged (99% and 100%). Diagnostic confidence in the exclusion of fracture was improved with combined bone reconstruction and virtual noncalcium images (median score: 10, 9, and 10 for readers 1, 2, and 3, respectively) compared with bone reconstruction images alone (median score: 9, 8, and 9). Conclusion When used as a supplement to standard bone reconstructions, dual-energy CT virtual noncalcium images increased sensitivity for the detection of nondisplaced traumatic hip fractures and improved diagnostic confidence in the exclusion of these fractures. © RSNA, 2017 Online supplemental material is available for this article. An earlier incorrect version of this article appeared online. This article was corrected on March 17, 2017.
Two-photon calcium imaging in mice navigating a virtual reality environment.
Leinweber, Marcus; Zmarz, Pawel; Buchmann, Peter; Argast, Paul; Hübener, Mark; Bonhoeffer, Tobias; Keller, Georg B
2014-02-20
In recent years, two-photon imaging has become an invaluable tool in neuroscience, as it allows for chronic measurement of the activity of genetically identified cells during behavior. Here we describe methods to perform two-photon imaging in mouse cortex while the animal navigates a virtual reality environment. We focus on the aspects of the experimental procedures that are key to imaging in a behaving animal in a brightly lit virtual environment. The key problems that arise in this experimental setup and that we address here are: minimizing brain motion related artifacts, minimizing light leak from the virtual reality projection system, and minimizing laser-induced tissue damage. We also provide sample software to control the virtual reality environment and to do pupil tracking. With these procedures and resources it should be possible to convert a conventional two-photon microscope for use in behaving mice.
Virtually distortion-free imaging system for large field, high resolution lithography
Hawryluk, A.M.; Ceglio, N.M.
1993-01-05
Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position.
A coherent through-wall MIMO phased array imaging radar based on time-duplexed switching
NASA Astrophysics Data System (ADS)
Chen, Qingchao; Chetty, Kevin; Brennan, Paul; Lok, Lai Bun; Ritchie, Matthiew; Woodbridge, Karl
2017-05-01
Through-the-Wall (TW) radar sensors are gaining increasing interest for security, surveillance and search and rescue applications. Additionally, the integration of Multiple-Input, Multiple-Output (MIMO) techniques with phased array radar is allowing higher performance at lower cost. In this paper we present a 4-by-4 TW MIMO phased array imaging radar operating at 2.4 GHz with 200 MHz bandwidth. To achieve high imaging resolution in a cost-effective manner, the 4 Tx and 4 Rx elements are used to synthesize a uniform linear array (ULA) of 16 virtual elements. Furthermore, the transmitter is based on a single-channel 4-element time-multiplexed switched array. In transmission, the radar utilizes frequency modulated continuous wave (FMCW) waveforms that undergo de-ramping on receive to allow digitization at relatively low sampling rates, which then simplifies the imaging process. This architecture has been designed for the short-range TW scenarios envisaged, and permits sufficient time to switch between antenna elements. The paper first outlines the system characteristics before describing the key signal processing and imaging algorithms which are based on traditional Fast Fourier Transform (FFT) processing. These techniques are implemented in LabVIEW software. Finally, we report results from an experimental campaign that investigated the imaging capabilities of the system and demonstrated the detection of personnel targets. Moreover, we show that multiple targets within a room with greater than approximately 1 meter separation can be distinguished from one another.
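The 16-element virtual ULA arises because each transmit-receive pair behaves like a single element located at the midpoint of the two phase centres (spatial convolution of the Tx and Rx apertures). A small sketch with assumed element spacings illustrates the geometry; the spacings are illustrative and not taken from the paper.

```python
import numpy as np

wavelength = 0.125                    # roughly a 2.4 GHz carrier
tx = np.arange(4) * 2.0 * wavelength  # assumed 4 Tx positions (sparse spacing)
rx = np.arange(4) * 0.5 * wavelength  # assumed 4 Rx positions (half-wavelength spacing)

# Each Tx/Rx pair contributes a virtual element at the midpoint of the two phase centres
virtual = np.sort((tx[:, None] + rx[None, :]).ravel() / 2.0)
print(len(virtual), "virtual elements")   # 16 elements forming a uniform linear array
```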
Real-time interactive virtual tour on the World Wide Web (WWW)
NASA Astrophysics Data System (ADS)
Yoon, Sanghyuk; Chen, Hai-jung; Hsu, Tom; Yoon, Ilmi
2003-12-01
A web-based virtual tour has become a desirable and in-demand application, yet a challenging one due to the nature of a web application's running environment, such as limited bandwidth and no guarantee of high computation power on the client side. The image-based rendering approach has attractive advantages over the traditional 3D rendering approach in such web applications. The traditional approach, such as VRML, requires a labor-intensive 3D modeling process, high bandwidth, and high computation power, especially for photo-realistic virtual scenes. QuickTime VR and IPIX, as examples of the image-based approach, use panoramic photos, and the virtual scenes can be generated directly from photos, skipping the modeling process. However, these image-based approaches may require special cameras or effort to take panoramic views, and they provide only fixed-point look-around and zooming in/out rather than 'walk around', which is a very important feature for providing an immersive experience to virtual tourists. The web-based virtual tour using Tour into the Picture employs pseudo-3D geometry with an image-based rendering approach to provide viewers with the immersive experience of walking around the virtual space from several snapshots of conventional photos.
Screening of a virtual mirror-image library of natural products.
Noguchi, Taro; Oishi, Shinya; Honda, Kaori; Kondoh, Yasumitsu; Saito, Tamio; Ohno, Hiroaki; Osada, Hiroyuki; Fujii, Nobutaka
2016-06-08
We established facile access to an unexplored mirror-image library of chiral natural product derivatives using D-protein technology. In this process, two chemical syntheses of mirror-image substances, including a target protein and hit compound(s), allow lead discovery from a virtual mirror-image library without the synthesis of numerous mirror-image compounds.
Virtual Oscillator Controls | Grid Modernization | NREL
NREL is developing virtual oscillator controls ... Santa Barbara, and SunPower. Publications: "Synthesizing Virtual Oscillators To Control Islanded Inverters"; "Synchronization of Parallel Single-Phase Inverters Using Virtual Oscillator Control," IEEE Transactions on Power ...
Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis
2016-01-01
Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. PMID:27843356
Applications and challenges of digital pathology and whole slide imaging.
Higgins, C
2015-07-01
Virtual microscopy is a method for digitizing images of tissue on glass slides and using a computer to view, navigate, change magnification, focus and mark areas of interest. Virtual microscope systems (also called digital pathology or whole slide imaging systems) offer several advantages for biological scientists who use slides as part of their general, pharmaceutical, biotechnology or clinical research. The systems usually are based on one of two methodologies: area scanning or line scanning. Virtual microscope systems enable automatic sample detection, virtual-Z acquisition and creation of focal maps. Virtual slides are layered with multiple resolutions at each location, including the highest resolution needed to allow more detailed review of specific regions of interest. Scans may be acquired at 2, 10, 20, 40, 60 and 100 × or a combination of magnifications to highlight important detail. Digital microscopy starts when a slide collection is put into an automated or manual scanning system. The original slides are archived, then a server allows users to review multilayer digital images of the captured slides either by a closed network or by the internet. One challenge for adopting the technology is the lack of a universally accepted file format for virtual slides. Additional challenges include maintaining focus in an uneven sample, detecting specimens accurately, maximizing color fidelity with optimal brightness and contrast, optimizing resolution and keeping the images artifact-free. There are several manufacturers in the field and each has not only its own approach to these issues, but also its own image analysis software, which provides many options for users to enhance the speed, quality and accuracy of their process through virtual microscopy. Virtual microscope systems are widely used and are trusted to provide high quality solutions for teleconsultation, education, quality control, archiving, veterinary medicine, research and other fields.
Sakakibara, Shunsuke; Onishi, Hiroyuki; Hashikawa, Kazunobu; Akashi, Masaya; Sakakibara, Akiko; Nomura, Tadashi; Terashi, Hiroto
2015-05-01
Most free flap reconstruction complications involve vascular compromise. Evaluation of vascular anatomy provides considerable information that can potentially minimize these complications. Previous reports have shown that contrast-enhanced computed tomography is effective for understanding three-dimensional arterial anatomy. However, most vascular complications result from venous thromboses, making imaging of venous anatomy highly desirable. The phase-lag computed tomography angiography (pl-CTA) technique involves 64-channel (virtually, 128-channel) multidetector CT and is used to acquire arterial images using conventional CTA. Venous images are three-dimensionally reconstructed using a subtraction technique involving combined venous phase and arterial phase images, using a computer workstation. This technique was used to examine 48 patients (12 lower leg reconstructions, 34 head and neck reconstructions, and 2 upper extremity reconstructions) without complications. The pl-CTA technique can be used for three-dimensional visualization of peripheral veins measuring approximately 1 mm in diameter. The pl-CTA information was especially helpful for secondary free flap reconstructions in the head and neck region after malignant tumor recurrence. In such cases, radical dissection of the neck was performed as part of the first operation, and many vessels, including veins, were resected and used in the first free-tissue transfer. The pl-CTA images also allowed visualization of varicose changes in the lower leg region and helped us avoid selecting those vessels for anastomosis. Thus, the pl-CTA-derived venous anatomy information was useful for exact evaluations during the planning of free-tissue transfers. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Leng, Shuai; Yu, Lifeng; Fletcher, Joel G; McCollough, Cynthia H
2015-08-01
To determine the iodine contrast-to-noise ratio (CNR) for abdominal computed tomography (CT) when using energy domain noise reduction and virtual monoenergetic dual-energy (DE) CT images and to compare the CNR to that attained with single-energy CT at 80, 100, 120, and 140 kV. This HIPAA-compliant study was approved by the institutional review board with waiver of informed consent. A syringe filled with diluted iodine contrast material was placed into 30-, 35-, and 45-cm-wide water phantoms and scanned with a dual-source CT scanner in both DE and single-energy modes with matched scanner output. Virtual monoenergetic images were generated, with energies ranging from 40 to 110 keV in 10-keV steps. A previously developed energy domain noise reduction algorithm was applied to reduce image noise by exploiting information redundancies in the energy domain. Image noise and iodine CNR were calculated. To show the potential clinical benefit of this technique, it was retrospectively applied to a clinical DE CT study of the liver in a 59-year-old male patient by using conventional and iterative reconstruction techniques. Image noise and CNR were compared for virtual monoenergetic images with and without energy domain noise reduction at each virtual monoenergetic energy (in kiloelectron volts) and phantom size by using a paired t test. CNR of virtual monoenergetic images was also compared with that of single-energy images acquired with 80, 100, 120, and 140 kV. Noise reduction of up to 59% (28.7 of 65.7) was achieved for DE virtual monoenergetic images by using an energy domain noise reduction technique. For the commercial virtual monoenergetic images, the maximum iodine CNR was achieved at 70 keV and was 18.6, 16.6, and 10.8 for the 30-, 35-, and 45-cm phantoms. After energy domain noise reduction, maximum iodine CNR was achieved at 40 keV and increased to 30.6, 25.4, and 16.5. These CNRs represented improvement of up to 64% (12.0 of 18.6) with the energy domain noise reduction technique. For single-energy CT at the optimal tube potential, iodine CNR was 29.1 (80 kV), 21.2 (80 kV), and 11.5 (100 kV). For patient images, 39% (24 of 61) noise reduction and 67% (0.74 of 1.10) CNR improvement were observed with the energy domain noise reduction technique when compared with standard filtered back-projection images. Iodine CNR for adult abdominal CT may be maximized with energy domain noise reduction and virtual monoenergetic DE CT. (©) RSNA, 2015.
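For readers unfamiliar with the metric, the following minimal sketch shows one common way to compute an iodine contrast-to-noise ratio from region-of-interest statistics; it is a generic illustration, not the authors' analysis code, and the synthetic phantom numbers are assumptions.

```python
# Minimal sketch (not the authors' analysis code): iodine contrast-to-noise
# ratio from region-of-interest statistics, CNR = (mean_iodine - mean_bkg) / sigma_bkg.
import numpy as np

def cnr(image, iodine_mask, background_mask):
    contrast = image[iodine_mask].mean() - image[background_mask].mean()
    noise = image[background_mask].std()
    return contrast / noise

rng = np.random.default_rng(0)
img = rng.normal(0.0, 15.0, (256, 256))                # water background, ~15 HU noise
yy, xx = np.mgrid[:256, :256]
iodine = (yy - 128) ** 2 + (xx - 128) ** 2 < 30 ** 2   # iodine insert ROI
img[iodine] += 100.0                                   # 100 HU enhancement (illustrative)
background = (yy - 60) ** 2 + (xx - 60) ** 2 < 30 ** 2
print(f"CNR = {cnr(img, iodine, background):.1f}")
```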
Van Es, Simone L; Kumar, Rakesh K; Pryor, Wendy M; Salisbury, Elizabeth L; Velan, Gary M
2015-09-01
To determine whether cytopathology whole slide images and virtual microscopy adaptive tutorials aid learning by postgraduate trainees, we designed a randomized crossover trial to evaluate the quantitative and qualitative impact of whole slide images and virtual microscopy adaptive tutorials compared with traditional glass slide and textbook methods of learning cytopathology. Forty-three anatomical pathology registrars were recruited from Australia, New Zealand, and Malaysia. Online assessments were used to determine efficacy, whereas user experience and perceptions of efficiency were evaluated using online Likert scales and open-ended questions. Outcomes of online assessments indicated that, with respect to performance, learning with whole slide images and virtual microscopy adaptive tutorials was equivalent to using traditional methods. High-impact learning, efficiency, and equity of learning from virtual microscopy adaptive tutorials were strong themes identified in open-ended responses. Participants raised concern about the lack of z-axis capability in the cytopathology whole slide images, suggesting that delivery of z-stacked whole slide images online may be important for future educational development. In this trial, learning cytopathology with whole slide images and virtual microscopy adaptive tutorials was found to be as effective as and perceived as more efficient than learning from glass slides and textbooks. The use of whole slide images and virtual microscopy adaptive tutorials has the potential to provide equitable access to effective learning from teaching material of consistently high quality. It also has broader implications for continuing professional development and maintenance of competence and quality assurance in specialist practice. Copyright © 2015 Elsevier Inc. All rights reserved.
Real and Virtual Images Using a Classroom Hologram.
ERIC Educational Resources Information Center
Olson, Dale W.
1992-01-01
Describes the design and fabrication of a classroom hologram and activities utilizing the hologram to teach the concepts of real and virtual images to high school and introductory college students. Contrasts this method with three other approaches to teach about images. (MDH)
Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng
2017-02-15
Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the "integrated image" on semi-transparent glass. Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications.
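Point-based registration of the kind described above, mapping CT-derived virtual fiducials onto tracked real-world fiducials, is commonly solved with a least-squares rigid transform. The sketch below uses the standard Kabsch/Procrustes solution with synthetic fiducials and assumed tracking noise; it is a generic illustration, not the authors' registration algorithm.

```python
# Hedged sketch of point-based rigid registration: map CT-derived (virtual)
# fiducials onto tracked real-world fiducials with the Kabsch/Procrustes
# solution. Synthetic points and noise level are assumptions for illustration.
import numpy as np

def rigid_register(src, dst):
    """Least-squares R, t such that dst ~ src @ R.T + t."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(1)
virtual_pts = rng.uniform(0.0, 100.0, (6, 3))                    # fiducials in CT space (mm)
a = np.deg2rad(20.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
real_pts = virtual_pts @ R_true.T + np.array([10.0, -5.0, 30.0])
real_pts += rng.normal(0.0, 0.5, real_pts.shape)                 # 0.5 mm tracking noise
R, t = rigid_register(virtual_pts, real_pts)
err = np.linalg.norm(virtual_pts @ R.T + t - real_pts, axis=1)
print(f"mean fiducial registration error = {err.mean():.2f} mm")
```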
Stereo 3D vision adapter using commercial DIY goods
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Ohara, Takashi
2009-10-01
A conventional display can show only a single screen, and its screen area cannot simply be enlarged, for example doubled. A mirror, meanwhile, supplies the same image, but the mirror image is usually upside down. Assume instead that the images on the original screen and on the virtual screen in the mirror are completely different and can be displayed independently; the screen area could then effectively be doubled. This extension method allows observers to view the virtual image plane and thereby doubles the display area. Although the displayed region is doubled, such a virtual display cannot produce 3D images. In this paper, we present an extension method using a unidirectional diffusing image screen and an improvement for displaying a 3D image using orthogonal polarized image projection.
The HEPiX Virtualisation Working Group: Towards a Grid of Clouds
NASA Astrophysics Data System (ADS)
Cass, Tony
2012-12-01
The use of virtual machine images, as for example with Cloud services such as Amazon's Elastic Compute Cloud, is attractive for users as they have a guaranteed execution environment, something that cannot today be provided across sites participating in computing grids such as the Worldwide LHC Computing Grid. However, Grid sites often operate within computer security frameworks which preclude the use of remotely generated images. The HEPiX Virtualisation Working Group was set up with the objective of enabling the use of remotely generated virtual machine images at Grid sites and, to this end, has introduced the idea of trusted virtual machine images which are guaranteed to be secure and configurable by sites such that security policy commitments can be met. This paper describes the requirements and details of these trusted virtual machine images and presents a model for their use to facilitate the integration of Grid- and Cloud-based computing environments for High Energy Physics.
Imaging Basin Structure with Teleseismic Virtual Source Reflection Profiles
NASA Astrophysics Data System (ADS)
Yang, Z.; Sheehan, A. F.; Yeck, W. L.; Miller, K. C.; Worthington, L. L.; Erslev, E.; Harder, S. H.; Anderson, M. L.; Siddoway, C. S.
2011-12-01
We demonstrate a case of using teleseisms recorded on single-channel, high-frequency geophones to image upper crustal structure across the Bighorn Arch in north-central Wyoming. The dataset was obtained through the EarthScope FlexArray Bighorn Arch Seismic Experiment (BASE). In addition to traditional active and passive source seismic data acquisition, BASE included a 12 day continuous (passive source) deployment of 850 geophones with 'Texan' dataloggers. The geophones were deployed in three E-W lines in north-central Wyoming extending from the Powder River Basin across the Bighorn Mountains and across the Bighorn Basin, and two N-S lines on east and west flanks of the Bighorn Mountains. The station interval is roughly 1.5-2 km, which is suitable for imaging coherent shallow structures. The approach used in this study treats the free-surface reflection as a virtual seismic source and uses the reverberated teleseismic P-wave phase (PpPdp; the teleseismic P-wave reflected at the receiver-side free surface and then reflected off a crustal seismic interface) to construct seismic profiles. These profiles are equivalent to conventional active source seismic reflection profiles except that high-frequency (up to 2.4 Hz) transmitted wave fields from distant earthquakes are used as sources. On the constructed seismic profiles, the coherent PpPdp phases beneath Powder River and Bighorn Basins are distinct after the source wavelet is removed from the seismograms by deconvolution. Under the Bighorn Arch, no clear coherent signals are observed. We combine phases PpPdp and Ps to constrain the averaged Vp/Vs: 2.05-2.15 for the Powder River Basin and 1.9-2.0 for the Bighorn Basin. These high Vp/Vs ratios suggest that the layers within which the P-wave reverberates are sedimentary. Assuming a Vp of 4 km/s under the Powder River Basin, the estimated thickness of the sedimentary layer above the reflector beneath the profile is 3-4.5 km, consistent with the depth of the top of the Tensleep Fm. We therefore interpret the coherent PpPdp phases about 1-3 s after the direct P-wave arrival as the reflections off the interface between the Paleozoic carbonates/sandstones and Mesozoic shales.
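As a rough numeric companion to the interpretation above, a vertical-incidence approximation relates the PpPdp-minus-P delay t to sediment thickness via h ≈ Vp·t/2. The sketch below applies this with the 4 km/s velocity assumed in the abstract; the specific delay values are illustrative, and real estimates also depend on ray incidence angle and the Vp/Vs structure.

```python
# Rough numeric companion (an assumption, not the authors' processing code):
# under a vertical-incidence approximation the PpPdp-minus-P delay t maps to
# sediment thickness as h ~ Vp * t / 2, using the 4 km/s velocity assumed above.
vp = 4.0                        # km/s, assumed sediment P velocity
for t in (1.5, 2.0, 2.25):      # illustrative two-way delays in seconds
    print(f"t = {t:4.2f} s  ->  h = {vp * t / 2.0:.2f} km")
# Delays of roughly 1.5-2.25 s correspond to roughly 3-4.5 km under this
# approximation; real estimates also account for ray geometry.
```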
Reconfigurable and responsive droplet-based compound micro-lenses.
Nagelberg, Sara; Zarzar, Lauren D; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M; Kolle, Mathias
2017-03-07
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications-integral micro-scale imaging devices and light field display technology-thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses.
Reconfigurable and responsive droplet-based compound micro-lenses
Nagelberg, Sara; Zarzar, Lauren D.; Nicolas, Natalie; Subramanian, Kaushikaram; Kalow, Julia A.; Sresht, Vishnu; Blankschtein, Daniel; Barbastathis, George; Kreysing, Moritz; Swager, Timothy M.; Kolle, Mathias
2017-01-01
Micro-scale optical components play a crucial role in imaging and display technology, biosensing, beam shaping, optical switching, wavefront-analysis, and device miniaturization. Herein, we demonstrate liquid compound micro-lenses with dynamically tunable focal lengths. We employ bi-phase emulsion droplets fabricated from immiscible hydrocarbon and fluorocarbon liquids to form responsive micro-lenses that can be reconfigured to focus or scatter light, form real or virtual images, and display variable focal lengths. Experimental demonstrations of dynamic refractive control are complemented by theoretical analysis and wave-optical modelling. Additionally, we provide evidence of the micro-lenses' functionality for two potential applications—integral micro-scale imaging devices and light field display technology—thereby demonstrating both the fundamental characteristics and the promising opportunities for fluid-based dynamic refractive micro-scale compound lenses. PMID:28266505
Lu, Min; Wang, Shengjia; Aulbach, Laura; Koch, Alexander W
2016-08-01
This paper suggests the use of adjustable aperture multiplexing (AAM), a method which is able to introduce multiple tunable carrier frequencies into a three-beam electronic speckle pattern interferometer to measure the out-of-plane displacement and its first-order derivative simultaneously. In the optical arrangement, two single apertures are located in the object and reference light paths, respectively. In cooperation with two adjustable mirrors, virtual images of the single apertures construct three pairs of virtual double apertures with variable aperture opening sizes and aperture distances. By setting the aperture parameter properly, three tunable spatial carrier frequencies are produced within the speckle pattern and completely separate the information of three interferograms in the frequency domain. By applying the inverse Fourier transform to a selected spectrum, its corresponding phase difference distribution can thus be evaluated. Therefore, we can obtain the phase map due to the deformation as well as its slope of the test surface from two speckle patterns which are recorded at different loading events. By this means, simultaneous and dynamic measurements are realized. AAM has greatly simplified the measurement system, which contributes to improving the system stability and increasing the system flexibility and adaptability to various measurement requirements. This paper presents the AAM working principle, the phase retrieval using spatial carrier frequency, and preliminary experimental results.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A; Kagemann, Larry; Schuman, Joel S
2016-01-01
Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and addition of amplitude deviation over 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects.
Chen, Chieh-Li; Ishikawa, Hiroshi; Wollstein, Gadi; Bilonick, Richard A.; Kagemann, Larry; Schuman, Joel S.
2016-01-01
Purpose Developing a novel image enhancement method so that nonframe-averaged optical coherence tomography (OCT) images become comparable to active eye-tracking frame-averaged OCT images. Methods Twenty-one eyes of 21 healthy volunteers were scanned with a noneye-tracking nonframe-averaged OCT device and an active eye-tracking frame-averaged OCT device. Virtual averaging was applied to nonframe-averaged images with voxel resampling and addition of amplitude deviation over 15 repetitions. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and the distance between the end of the visible nasal retinal nerve fiber layer (RNFL) and the foveola were assessed to evaluate the image enhancement effect and retinal layer visibility. Retinal thicknesses before and after processing were also measured. Results All virtual-averaged nonframe-averaged images showed notable improvement and clear resemblance to active eye-tracking frame-averaged images. SNR and CNR were significantly improved (SNR: 30.5 vs. 47.6 dB, CNR: 4.4 vs. 6.4 dB, original versus processed, P < 0.0001, paired t-test). The distance between the end of the visible nasal RNFL and the foveola was significantly different before (681.4 vs. 446.5 μm, Cirrus versus Spectralis, P < 0.0001) but not after processing (442.9 vs. 446.5 μm, P = 0.76). Sectoral macular total retinal and circumpapillary RNFL thicknesses showed systematic differences between Cirrus and Spectralis that were no longer significant after processing. Conclusion The virtual averaging method successfully improved nontracking nonframe-averaged OCT image quality and made the images comparable to active eye-tracking frame-averaged OCT images. Translational Relevance Virtual averaging may enable detailed retinal structure studies on images acquired using a mixture of nonframe-averaged and frame-averaged OCT devices without concern about systematic differences in both qualitative and quantitative aspects. PMID:26835180
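The following sketch illustrates one plausible reading of the virtual averaging procedure described above: synthesize several copies of a single OCT frame by sub-pixel voxel resampling and amplitude perturbation, then average them. The jitter and noise parameters are illustrative assumptions, not values taken from the study.

```python
# Hedged sketch of one plausible reading of "virtual averaging": synthesize
# several copies of a single OCT frame by sub-pixel resampling plus amplitude
# perturbation, then average. Parameter values are assumptions for illustration.
import numpy as np
from scipy.ndimage import shift

def virtual_average(frame, n_repeats=15, jitter_px=0.5, amp_sigma=0.05, seed=0):
    rng = np.random.default_rng(seed)
    acc = np.zeros_like(frame, dtype=float)
    for _ in range(n_repeats):
        dy, dx = rng.uniform(-jitter_px, jitter_px, size=2)
        resampled = shift(frame, (dy, dx), order=1, mode="nearest")   # voxel resampling
        acc += resampled * (1.0 + rng.normal(0.0, amp_sigma, frame.shape))
    return acc / n_repeats

demo = np.random.default_rng(1).random((64, 64))    # stand-in for a single OCT frame
print(virtual_average(demo).shape)
```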
Chao, Coline; Chalouhi, Gihad E; Bouhanna, Philippe; Ville, Yves; Dommergues, Marc
2015-09-01
To compare the impact of virtual reality simulation training and theoretical teaching on the ability of inexperienced trainees to produce adequate virtual transvaginal ultrasound images. We conducted a randomized controlled trial with parallel groups. Participants included inexperienced residents starting a training program in Paris. The intervention consisted of 40 minutes of virtual reality simulation training using a haptic transvaginal simulator versus 40 minutes of conventional teaching including a conference with slides and videos and answers to the students' questions. The outcome was a 19-point image quality score calculated from a set of 4 images (sagittal and coronal views of the uterus and left and right ovaries) produced by trainees immediately after the intervention, using the same simulator on which a new virtual patient had been uploaded. Experts assessed the outcome on stored images, presented in a random order, 2 months after the trial was completed. They were blinded to group assignment. The hypothesis was an improved outcome in the intervention group. Randomization was 1 to 1. The mean score was significantly greater in the simulation group (n = 16; mean score, 12; SEM, 0.8) than the control group (n = 18; mean score, 9; SEM, 1.0; P= .0302). The quality of virtual vaginal images produced by inexperienced trainees was greater immediately after a single virtual reality simulation training session than after a single theoretical teaching session. © 2015 by the American Institute of Ultrasound in Medicine.
Hologram-reconstruction signal enhancement
NASA Technical Reports Server (NTRS)
Mezrich, R. S.
1977-01-01
The principle of heterodyne detection is used to combine the object beam and the reconstructed virtual-image beam. All light valves in the page composer are opened, and the virtual-image beam is allowed to interfere with the light from the valves.
NASA Astrophysics Data System (ADS)
Tsoupikova, Daria
2006-02-01
This paper will explore how the aesthetics of the virtual world affects, transforms, and enhances the immersive emotional experience of the user. What we see and what we do upon entering the virtual environment influences our feelings, mental state, physiological changes and sensibility. To create a unique virtual experience the important component to design is the beauty of the virtual world based on the aesthetics of the graphical objects such as textures, models, animation, and special effects. The aesthetic potency of the images that comprise the virtual environment can make the immersive experience much stronger and more compelling. The aesthetic qualities of the virtual world as born out through images and graphics can influence the user's state of mind. Particular changes and effects on the user can be induced through the application of techniques derived from the research fields of psychology, anthropology, biology, color theory, education, art therapy, music, and art history. Many contemporary artists and developers derive much inspiration for their work from their experience with traditional arts such as painting, sculpture, design, architecture and music. This knowledge helps them create a higher quality of images and stereo graphics in the virtual world. The understanding of the close relation between the aesthetic quality of the virtual environment and the resulting human perception is the key to developing an impressive virtual experience.
Applications of virtual reality technology in pathology.
Grimes, G J; McClellan, S A; Goldman, J; Vaughn, G L; Conner, D A; Kujawski, E; McDonald, J; Winokur, T; Fleming, W
1997-01-01
TelePath(SM) is a telerobotic system utilizing virtual microscope concepts based on high quality still digital imaging and aimed at real-time support for surgery by remote diagnosis of frozen sections. Many hospitals and clinics have an application for the remote practice of pathology, particularly in the area of reading frozen sections in support of surgery, commonly called anatomic pathology. The goal is to project the expertise of the pathologist into the remote setting by giving the pathologist access to the microscope slides with an image quality and human interface comparable to what the pathologist would experience at a real rather than a virtual microscope. A working prototype of a virtual microscope has been defined and constructed which has the needed performance in both the image quality and human interface areas for a pathologist to work remotely. This is accomplished through the use of telerobotics and an image quality which provides the virtual microscope the same diagnostic capabilities as a real microscope. The examination of frozen sections is performed in a two-dimensional world. The remote pathologist is in a virtual world with the same capabilities as a "real" microscope, but response times may be slower depending on the specific computing and telecommunication environments. The TelePath system has capabilities far beyond a normal biological microscope, such as the ability to create a low power image of the entire sample using multiple images digitally matched together; the ability to digitally retrace a viewing trajectory; and the ability to archive images using CD ROM and other mass storage devices.
Electron holography—basics and applications
NASA Astrophysics Data System (ADS)
Lichte, Hannes; Lehmann, Michael
2008-01-01
Despite the huge progress achieved recently by means of the corrector for aberrations, allowing now a true atomic resolution of 0.1 nm, hence making it an unrivalled tool for nanoscience, transmission electron microscopy (TEM) suffers from a severe drawback: in a conventional electron micrograph only a poor phase contrast can be achieved, i.e. phase structures are virtually invisible. Therefore, conventional TEM is nearly blind for electric and magnetic fields, which are pure phase objects. Since such fields provoked by the atomic structure, e.g. of semiconductors and ferroelectrics, largely determine the solid state properties, hence the importance for high technology applications, substantial object information is missing. Electron holography in TEM offers the solution: by superposition with a coherent reference wave, a hologram is recorded, from which the image wave can be completely reconstructed by amplitude and phase. Now the object is displayed quantitatively in two separate images: one representing the amplitude, the other the phase. From the phase image, electric and magnetic fields can be determined quantitatively in the range from micrometre down to atomic dimensions by all wave optical methods that one can think of, both in real space and in Fourier space. Electron holography is pure wave optics. Therefore, we discuss the basics of coherence and interference, the implementation into a TEM, the path of rays for recording holograms as well as the limits in lateral and signal resolution. We outline the methods of reconstructing the wave by numerical image processing and procedures for extracting the object properties of interest. Furthermore, we present a broad spectrum of applications both at mesoscopic and atomic dimensions. This paper gives an overview of the state of the art pointing at the needs for further development. It is also meant as encouragement for those who refrain from holography, thinking that it can only be performed by specialists in highly specialized laboratories. In fact, a modern TEM built for atomic resolution and equipped with a field emitter or a Schottky emitter, well aligned by a skilled operator, can deliver good holograms. Running commercially available image processing software and mathematics programs on a laptop-computer is sufficient for reconstruction of the amplitude and phase images and extracting desirable object information.
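The numerical reconstruction step mentioned above is, in the generic off-axis case, a Fourier-space sideband extraction. The sketch below shows that standard procedure (not the authors' specific software): isolate one sideband of the hologram spectrum, re-centre it, and inverse-transform to obtain the complex image wave, whose modulus and argument give the amplitude and phase images.

```python
# Generic off-axis reconstruction sketch (an assumption, not the authors'
# software): isolate one sideband of the hologram spectrum, re-centre it, and
# inverse-transform; modulus and argument give amplitude and phase images.
import numpy as np

def reconstruct_wave(hologram, carrier, radius):
    """carrier: (row, col) of the sideband in the fftshifted spectrum."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    ky, kx = np.mgrid[:ny, :nx]
    mask = (ky - carrier[0]) ** 2 + (kx - carrier[1]) ** 2 < radius ** 2
    sideband = np.where(mask, F, 0.0)
    sideband = np.roll(sideband, (ny // 2 - carrier[0], nx // 2 - carrier[1]), axis=(0, 1))
    wave = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(wave), np.angle(wave)            # amplitude, (wrapped) phase

# Tiny synthetic check: a Gaussian phase object on a tilt-fringe carrier.
ny, nx = 256, 256
y, x = np.mgrid[:ny, :nx]
phase_obj = 2.0 * np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / (2.0 * 30.0 ** 2))
holo = 2.0 + 2.0 * np.cos(2.0 * np.pi * 40.0 * x / nx + phase_obj)
amp, phase = reconstruct_wave(holo, carrier=(ny // 2, nx // 2 + 40), radius=20)
```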
Ray Tracing with Virtual Objects.
ERIC Educational Resources Information Center
Leinoff, Stuart
1991-01-01
Introduces the method of ray tracing to analyze the refraction or reflection of real or virtual images from multiple optical devices. Discusses ray-tracing techniques for locating images using convex and concave lenses or mirrors. (MDH)
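As a small numeric companion to the ray-tracing discussion, the generic thin-lens/mirror relation 1/f = 1/do + 1/di locates the image analytically, with a negative image distance indicating a virtual image; the cases below are illustrative and not drawn from the article.

```python
# Thin-lens / mirror relation 1/f = 1/do + 1/di; di < 0 means a virtual image.
def image_distance(f, d_object):
    return 1.0 / (1.0 / f - 1.0 / d_object)

for f, do in [(10.0, 30.0),    # converging, object beyond f  -> real image
              (10.0, 5.0),     # converging, object inside f  -> virtual image
              (-10.0, 30.0)]:  # diverging                    -> virtual image
    di = image_distance(f, do)
    kind = "real" if di > 0 else "virtual"
    print(f"f = {f:+.0f} cm, do = {do:.0f} cm -> di = {di:+.1f} cm ({kind} image)")
```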
An Online Image Analysis Tool for Science Education
ERIC Educational Resources Information Center
Raeside, L.; Busschots, B.; Waddington, S.; Keating, J. G.
2008-01-01
This paper describes an online image analysis tool developed as part of an iterative, user-centered development of an online Virtual Learning Environment (VLE) called the Education through Virtual Experience (EVE) Portal. The VLE provides a Web portal through which schoolchildren and their teachers create scientific proposals, retrieve images and…
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan
2017-09-01
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology.
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan
2017-01-01
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules with minimal statistical significance in the (CVRHD) for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology. PMID:28786399
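For reference, the deformation metric named above, the Hausdorff distance between two surface point sets, can be computed as in the generic sketch below using SciPy's directed Hausdorff routine. Deriving the regional statistics (MeanRHD, STDRHD, CVRHD) used in the study would additionally require partitioning the surfaces into regions, which is not shown; the point sets here are synthetic placeholders.

```python
# Generic sketch of the symmetric Hausdorff distance between two point sets.
# Synthetic placeholder data; not the study's surfaces or regional statistics.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(points_a, points_b):
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

rng = np.random.default_rng(0)
a = rng.normal(size=(500, 3))
b = a + rng.normal(scale=0.05, size=a.shape)     # slightly deformed copy of a
print(f"Hausdorff distance = {hausdorff(a, b):.3f}")
```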
Fortmeier, Dirk; Mastmeyer, Andre; Schröder, Julian; Handels, Heinz
2016-01-01
This study presents a new visuo-haptic virtual reality (VR) training and planning system for percutaneous transhepatic cholangio-drainage (PTCD) based on partially segmented virtual patient models. We only use partially segmented image data instead of a full segmentation and circumvent the necessity of surface or volume mesh models. Haptic interaction with the virtual patient during virtual palpation, ultrasound probing and needle insertion is provided. Furthermore, the VR simulator includes X-ray and ultrasound simulation for image-guided training. The visualization techniques are GPU-accelerated by implementation in Cuda and include real-time volume deformations computed on the grid of the image data. Computation on the image grid enables straightforward integration of the deformed image data into the visualization components. To provide shorter rendering times, the performance of the volume deformation algorithm is improved by a multigrid approach. To evaluate the VR training system, a user evaluation has been performed and deformation algorithms are analyzed in terms of convergence speed with respect to a fully converged solution. The user evaluation shows positive results with increased user confidence after a training session. It is shown that using partially segmented patient data and direct volume rendering is suitable for the simulation of needle insertion procedures such as PTCD.
Zhu, Ming; Liu, Fei; Chai, Gang; Pan, Jun J.; Jiang, Taoran; Lin, Li; Xin, Yu; Zhang, Yan; Li, Qingfeng
2017-01-01
Augmented reality systems can combine virtual images with a real environment to ensure accurate surgery with lower risk. This study aimed to develop a novel registration and tracking technique to establish a navigation system based on augmented reality for maxillofacial surgery. Specifically, a virtual image is reconstructed from CT data using 3D software. The real environment is tracked by the augmented reality (AR) software. The novel registration strategy that we created uses an occlusal splint compounded with a fiducial marker (OSM) to establish a relationship between the virtual image and the real object. After the fiducial marker is recognized, the virtual image is superimposed onto the real environment, forming the “integrated image” on semi-transparent glass. Via the registration process, the integral image, which combines the virtual image with the real scene, is successfully presented on the semi-transparent helmet. The position error of this navigation system is 0.96 ± 0.51 mm. This augmented reality system was applied in the clinic and good surgical outcomes were obtained. The augmented reality system that we established for maxillofacial surgery has the advantages of easy manipulation and high accuracy, which can improve surgical outcomes. Thus, this system exhibits significant potential in clinical applications. PMID:28198442
Ni, Qian Qian; Tang, Chun Xiang; Zhao, Yan E; Zhou, Chang Sheng; Chen, Guo Zhong; Lu, Guang Ming; Zhang, Long Jiang
2016-05-25
Aneurysmal subarachnoid hemorrhages have extremely high case fatality in the clinic. Early and rapid identification of ruptured intracranial aneurysms therefore seems especially important. Here we evaluate the clinical value of single-phase contrast-enhanced dual-energy CT angiography (DE-CTA) as a one-stop-shop tool in detecting aneurysmal subarachnoid hemorrhage. One hundred and five patients who underwent true non-enhanced CT (TNCT), contrast-enhanced DE-CTA and digital subtraction angiography (DSA) were included. Image quality and detectability of intracranial hemorrhage were evaluated and compared between virtual non-enhanced CT (VNCT) images reconstructed from DE-CTA and TNCT. There was no statistical difference in image quality (P > 0.05) between VNCT and TNCT. The agreement of VNCT and TNCT in detecting intracranial hemorrhage reached 98.1% on a per-patient basis. With DSA as the reference standard, per-patient sensitivity and specificity were 98.3% and 97.9% for DE-CTA in intracranial aneurysm detection. The effective dose of DE-CTA was reduced by 75.0% compared to conventional digital subtraction CTA. Thus, single-phase contrast-enhanced DE-CTA is an optimal, reliable one-stop-shop tool for detecting intracranial hemorrhage with VNCT and intracranial aneurysms with DE-CTA, with substantial radiation dose reduction compared with conventional digital subtraction CTA.
Ni, Qian Qian; Tang, Chun Xiang; Zhao, Yan E; Zhou, Chang Sheng; Chen, Guo Zhong; Lu, Guang Ming; Zhang, Long Jiang
2016-01-01
Aneurysmal subarachnoid hemorrhages have extremely high case fatality in the clinic. Early and rapid identification of ruptured intracranial aneurysms therefore seems especially important. Here we evaluate the clinical value of single-phase contrast-enhanced dual-energy CT angiography (DE-CTA) as a one-stop-shop tool in detecting aneurysmal subarachnoid hemorrhage. One hundred and five patients who underwent true non-enhanced CT (TNCT), contrast-enhanced DE-CTA and digital subtraction angiography (DSA) were included. Image quality and detectability of intracranial hemorrhage were evaluated and compared between virtual non-enhanced CT (VNCT) images reconstructed from DE-CTA and TNCT. There was no statistical difference in image quality (P > 0.05) between VNCT and TNCT. The agreement of VNCT and TNCT in detecting intracranial hemorrhage reached 98.1% on a per-patient basis. With DSA as the reference standard, per-patient sensitivity and specificity were 98.3% and 97.9% for DE-CTA in intracranial aneurysm detection. The effective dose of DE-CTA was reduced by 75.0% compared to conventional digital subtraction CTA. Thus, single-phase contrast-enhanced DE-CTA is an optimal, reliable one-stop-shop tool for detecting intracranial hemorrhage with VNCT and intracranial aneurysms with DE-CTA, with substantial radiation dose reduction compared with conventional digital subtraction CTA. PMID:27222163
Johnson, Jeffrey P; Krupinski, Elizabeth A; Yan, Michelle; Roehrig, Hans; Graham, Anna R; Weinstein, Ronald S
2011-02-01
A major issue in telepathology is the extremely large and growing size of digitized "virtual" slides, which can require several gigabytes of storage and cause significant delays in data transmission for remote image interpretation and interactive visualization by pathologists. Compression can reduce this massive amount of virtual slide data, but reversible (lossless) methods limit data reduction to less than 50%, while lossy compression can degrade image quality and diagnostic accuracy. "Visually lossless" compression offers the potential for using higher compression levels without noticeable artifacts, but requires a rate-control strategy that adapts to image content and loss visibility. We investigated the utility of a visual discrimination model (VDM) and other distortion metrics for predicting JPEG 2000 bit rates corresponding to visually lossless compression of virtual slides for breast biopsy specimens. Threshold bit rates were determined experimentally with human observers for a variety of tissue regions cropped from virtual slides. For test images compressed to their visually lossless thresholds, just-noticeable difference (JND) metrics computed by the VDM were nearly constant at the 95th percentile level or higher, and were significantly less variable than peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) metrics. Our results suggest that VDM metrics could be used to guide the compression of virtual slides to achieve visually lossless compression while providing 5-12 times the data reduction of reversible methods.
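The two conventional distortion metrics the VDM was compared against, PSNR and SSIM, can be computed as in the brief sketch below; this is a generic illustration with synthetic 8-bit images, not the authors' evaluation pipeline.

```python
# Generic reference implementations of PSNR and SSIM; illustrative only,
# with synthetic 8-bit images, not the authors' evaluation code.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(reference, test, max_val=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0.0, 5.0, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(ref, noisy):.1f} dB")
print(f"SSIM = {structural_similarity(ref, noisy, data_range=255):.3f}")
```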
A versatile stereoscopic visual display system for vestibular and oculomotor research.
Kramer, P D; Roberts, D C; Shelhamer, M; Zee, D S
1998-01-01
Testing of the vestibular system requires a vestibular stimulus (motion) and/or a visual stimulus. We have developed a versatile, low cost, stereoscopic visual display system, using "virtual reality" (VR) technology. The display system can produce images for each eye that correspond to targets at any virtual distance relative to the subject, and so require the appropriate ocular vergence. We elicited smooth pursuit, "stare" optokinetic nystagmus (OKN) and after-nystagmus (OKAN), vergence for targets at various distances, and short-term adaptation of the vestibulo-ocular reflex (VOR), using both conventional methods and the stereoscopic display. Pursuit, OKN, and OKAN were comparable with both methods. When used with a vestibular stimulus, VR induced appropriate adaptive changes of the phase and gain of the angular VOR. In addition, using the VR display system and a human linear acceleration sled, we adapted the phase of the linear VOR. The VR-based stimulus system not only offers an alternative to more cumbersome means of stimulating the visual system in vestibular experiments, it also can produce visual stimuli that would otherwise be impractical or impossible. Our techniques provide images without the latencies encountered in most VR systems. Its inherent versatility allows it to be useful in several different types of experiments, and because it is software driven it can be quickly adapted to provide a new stimulus. These two factors allow VR to provide considerable savings in time and money, as well as flexibility in developing experimental paradigms.
Soot and liquid-phase fuel distributions in a newly designed optically accessible DI diesel engine
NASA Astrophysics Data System (ADS)
Dec, J. E.; Espey, C.
1993-10-01
Two-dimensional (2-D) laser-sheet imaging has been used to examine the soot and liquid-phase fuel distributions in a newly designed, optically accessible, direct-injection diesel engine of the heavy-duty size class. The design of this engine preserves the intake port geometry and basic dimensions of a Cummins N-series production engine. It also includes several unique features to provide considerable optical access. Liquid-phase fuel and soot distribution studies were conducted at a medium speed (1,200 rpm) using a Cummins closed-nozzle fuel injector. The scattering was used to obtain planar images of the liquid-phase fuel distribution. These images show that the leading edge of the liquid-phase portion of the fuel jet reaches a maximum length of 24 mm, which is about half the combustion bowl radius for this engine. Beyond this point virtually all the fuel has vaporized. Soot distribution measurements were made at a high load condition using three imaging diagnostics: natural flame luminosity, 2-D laser-induced incandescence, and 2-D elastic scattering. This investigation showed that the soot distribution in the combusting fuel jet develops through three stages. First, just after the onset of luminous combustion, soot particles are small and nearly uniformly distributed throughout the luminous region of the fuel jet. Second, after about 2 crank angle degrees a pattern develops of a higher soot concentration of larger sized particles in the head vortex region of the jet and a lower soot concentration of smaller sized particles upstream toward the injector. Third, after fuel injection ends, both the soot concentration and soot particle size increase rapidly in the upstream portion of the fuel jet.
Latash, M L
1992-07-01
In the framework of the equilibrium-point hypothesis, virtual trajectories and patterns of joint stiffness were reconstructed during voluntary single-joint oscillatory movements in the elbow joint at a variety of frequencies and against two inertial loads. At low frequencies, virtual trajectories were in-phase with the actual joint trajectories. Joint stiffness changed at a doubled frequency. An increase in movement frequency and/or inertial load led to an increase in the difference between the peaks of the actual and virtual trajectories and in both peak and averaged values of joint stiffness. At a certain, critical frequency, virtual trajectory was nearly flat. Further increase in movement frequency led to a 180 degree phase shift between the actual and virtual trajectories. The assessed values of the natural frequency of the system "limb + manipulandum" were close to the critical frequencies for both low and high inertial loads. Peak levels and integrals of the electromyograms of two flexor and two extensor muscles changed monotonically with movement frequency without any special behavior at the critical frequencies. Nearly flat virtual trajectories at the natural frequency make physical sense as hypothetical control signals, unlike the electromyographic recordings, since a system at its natural frequency requires minimal central interference. Modulation of joint stiffness is assumed to be an important adaptive mechanism attenuating difference between the system's natural frequency and desired movement frequency. Virtual trajectory is considered a behavioral observable. Phase transitions between the virtual and actual trajectories are illustrations of behavioral discontinuities introduced by slow changes in a higher level control parameter, movement frequency. Relative phase shift between these two trajectories may be considered an order parameter.
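A common formalisation used for such reconstructions, stated here as a generic second-order model rather than a quotation from the paper, drives the joint toward the virtual trajectory through a time-varying stiffness:

```latex
I\,\ddot{\theta}(t) + b\,\dot{\theta}(t) = K(t)\,\bigl[V(t) - \theta(t)\bigr]
```

where θ(t) is the measured joint angle, V(t) the virtual (equilibrium) trajectory, K(t) the joint stiffness, I the inertia of the limb plus manipulandum, and b a damping coefficient; under this kind of model, V(t) and K(t) are the unknowns recovered from the measured kinematics and load.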
Image-based 3D reconstruction and virtual environmental walk-through
NASA Astrophysics Data System (ADS)
Sun, Jifeng; Fang, Lixiong; Luo, Ying
2001-09-01
We present a 3D reconstruction method which combines geometry-based modeling, image-based modeling and rendering techniques. The first component is an interactive geometry modeling method which recovers the basic geometry of the photographed scene. The second component is a model-based stereo algorithm. We discuss the image processing problems and algorithms of walking through a virtual space, and then design and implement a high-performance multi-thread wandering algorithm. The applications range from architectural planning and archaeological reconstruction to virtual environments and cinematic special effects.
Evaluation of virtual microscopy in medical histology teaching.
Mione, Sylvia; Valcke, Martin; Cornelissen, Maria
2013-01-01
Histology stands as a major discipline in the life science curricula, and the practice of teaching it is based on theoretical didactic strategies along with practical training. Traditionally, students achieve practical competence in this subject by learning optical microscopy. Today, students can use newer information and communication technologies in the study of digital microscopic images. A virtual microscopy program was recently introduced at Ghent University. Since little empirical evidence is available concerning the impact of virtual microscopy (VM) versus optical microscopy (OM) on the acquisition of histology knowledge, this study was set up in the Faculty of Medicine and Health Sciences. A pretest-posttest, cross-over design was adopted. In the first phase, the experiment yielded two groups in a total population of 199 students, Group 1 performing the practical sessions with OM versus Group 2 performing the same sessions with VM. In the second phase, the research subjects switched conditions. The prior knowledge level of all research subjects was assessed with a pretest. Knowledge acquisition was measured with a posttest after each phase (T1 and T2). Analysis of covariance was carried out to study the differential gain in knowledge at T1 and T2, considering the possible differences in prior knowledge at the start of the study. The results pointed to non-significant differences at T1 and at T2. This supports the assumption that the acquisition of histology knowledge is independent of the microscopy representation mode (VM versus OM) of the learning material. The conclusion that VM is equivalent to OM offers new directions in view of ongoing innovations in medical education technology. Copyright © 2013 American Association of Anatomists.
[Whole slide imaging technology: from digitization to online applications].
Ameisen, David; Le Naour, Gilles; Daniel, Christel
2012-11-01
As e-health becomes essential to modern care, whole slide images (virtual slides) are now an important clinical, teaching and research tool in pathology. Virtual microscopy consists of digitizing a glass slide by acquiring hundreds of tiles of regions of interest at different zoom levels and assembling them into a structured file. This gigapixel image can then be remotely viewed over a terminal, exactly the way pathologists use a microscope. In this article, we will first describe the key elements of this technology, from the acquisition, using a scanner or a motorized microscope, to the broadcasting of virtual slides through a local or distant viewer over an intranet or Internet connection. As virtual slides are now commonly used in virtual classrooms, clinical data and research databases, we will highlight the main issues regarding its uses in modern pathology. Emphasis will be made on quality assurance policies, standardization and scaling. © 2012 médecine/sciences – Inserm / SRMS.
NASA Technical Reports Server (NTRS)
Randle, R. J.; Roscoe, S. N.; Petitt, J. C.
1980-01-01
Twenty professional pilots observed a computer-generated airport scene during simulated autopilot-coupled night landing approaches and at two points (20 sec and 10 sec before touchdown) judged whether the airplane would undershoot or overshoot the aimpoint. Visual accommodation was continuously measured using an automatic infrared optometer. Experimental variables included approach slope angle, display magnification, visual focus demand (using ophthalmic lenses), and presentation of the display as either a real (direct view) or a virtual (collimated) image. Aimpoint judgments shifted predictably with actual approach slope and display magnification. Both pilot judgments and measured accommodation interacted with focus demand with real-image displays but not with virtual-image displays. With either type of display, measured accommodation lagged far behind focus demand and was reliably less responsive to the virtual images. Pilot judgments shifted dramatically from an overwhelming perceived-overshoot bias 20 sec before touchdown to a reliable undershoot bias 10 sec later.
Bang, Yo-Soon; Son, Kyung Hyun; Kim, Hyun Jin
2016-11-01
[Purpose] The purpose of this study is to investigate the effects of virtual reality training using Nintendo Wii on balance and walking for stroke patients. [Subjects and Methods] Forty patients with stroke were randomly divided into two exercise program groups: virtual reality training (n=20) and treadmill (n=20). The subjects underwent their 40-minute exercise program three times a week for eight weeks. Their balance and walking were measured before and after the complete program. We measured the left/right weight-bearing and the anterior/posterior weight-bearing for balance, as well as stance phase, swing phase, and cadence for walking. [Results] For balance, both groups showed significant differences in the left/right and anterior/posterior weight-bearing, with significant post-program differences between the groups. For walking, there were significant differences in the stance phase, swing phase, and cadence of the virtual reality training group. [Conclusion] The results of this study suggest that virtual reality training providing visual feedback may enable stroke patients to directly adjust their incorrect weight center and shift visually. Virtual reality training may be appropriate for patients who need improved balance and walking ability, by stimulating their interest in performing planned exercises on a consistent basis.
Bang, Yo-Soon; Son, Kyung Hyun; Kim, Hyun Jin
2016-01-01
[Purpose] The purpose of this study is to investigate the effects of virtual reality training using Nintendo Wii on balance and walking for stroke patients. [Subjects and Methods] Forty patients with stroke were randomly divided into two exercise program groups: virtual reality training (n=20) and treadmill (n=20). The subjects underwent their 40-minute exercise program three times a week for eight weeks. Their balance and walking were measured before and after the complete program. We measured the left/right weight-bearing and the anterior/posterior weight-bearing for balance, as well as stance phase, swing phase, and cadence for walking. [Results] For balance, both groups showed significant differences in the left/right and anterior/posterior weight-bearing, with significant post-program differences between the groups. For walking, there were significant differences in the stance phase, swing phase, and cadence of the virtual reality training group. [Conclusion] The results of this study suggest that virtual reality training providing visual feedback may enable stroke patients to directly adjust their incorrect weight center and shift visually. Virtual reality training may be appropriate for patients who need improved balance and walking ability, by stimulating their interest in performing planned exercises on a consistent basis. PMID:27942130
Colman, Kerri L; Dobbe, Johannes G G; Stull, Kyra E; Ruijter, Jan M; Oostra, Roelof-Jan; van Rijn, Rick R; van der Merwe, Alie E; de Boer, Hans H; Streekstra, Geert J
2017-07-01
Almost all European countries lack contemporary skeletal collections for the development and validation of forensic anthropological methods. Furthermore, legal, ethical and practical considerations hinder the development of skeletal collections. A virtual skeletal database derived from clinical computed tomography (CT) scans provides a potential solution. However, clinical CT scans are typically generated with varying settings. This study investigates the effects of image segmentation and varying imaging conditions on the precision of virtual modelled pelves. An adult human cadaver was scanned using varying imaging conditions, such as scanner type and standard patient scanning protocol, slice thickness and exposure level. The pelvis was segmented from the various CT images resulting in virtually modelled pelves. The precision of the virtual modelling was determined per polygon mesh point. The fraction of mesh points resulting in point-to-point distance variations of 2 mm or less (95% confidence interval (CI)) was reported. Colour mapping was used to visualise modelling variability. At almost all (>97%) locations across the pelvis, the point-to-point distance variation is less than 2 mm (CI = 95%). In >91% of the locations, the point-to-point distance variation was less than 1 mm (CI = 95%). This indicates that the geometric variability of the virtual pelvis as a result of segmentation and imaging conditions rarely exceeds the generally accepted linear error of 2 mm. Colour mapping shows that areas with large variability are predominantly joint surfaces. Therefore, results indicate that segmented bone elements from patient-derived CT scans are a sufficiently precise source for creating a virtual skeletal database.
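A hedged sketch of the precision summary described above follows: given point-to-point distances between repeatedly modelled copies of the same pelvis, report the fraction of mesh points whose variation stays within a tolerance. The distances used below are synthetic placeholders, not study data, and the confidence-interval treatment from the paper is omitted.

```python
# Hedged sketch: fraction of mesh points whose point-to-point distance
# variation stays within a tolerance. Synthetic placeholder distances only.
import numpy as np

def fraction_within(point_distances, tol_mm):
    """point_distances: (n_points, n_comparisons) array of distances in mm."""
    return (point_distances <= tol_mm).all(axis=1).mean()

rng = np.random.default_rng(0)
d = np.abs(rng.normal(0.0, 0.6, size=(10000, 5)))
print(f"{100.0 * fraction_within(d, 2.0):.1f}% of points vary by <= 2 mm")
print(f"{100.0 * fraction_within(d, 1.0):.1f}% of points vary by <= 1 mm")
```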
ERIC Educational Resources Information Center
Cody, Jeremy A.; Craig, Paul A.; Loudermilk, Adam D.; Yacci, Paul M.; Frisco, Sarah L.; Milillo, Jennifer R.
2012-01-01
A novel stereochemistry lesson was prepared that incorporated both handheld molecular models and embedded virtual three-dimensional (3D) images. The images are fully interactive and eye-catching for the students; methods for preparing 3D molecular images in Adobe Acrobat are included. The lesson was designed and implemented to showcase the 3D…
NASA Astrophysics Data System (ADS)
Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry
2016-03-01
In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize the virtual models. Furthermore, overlaying virtual elements onto real-time image data can also obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.
Hawryluk, A.M.; Ceglio, N.M.
1993-01-12
Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position. Particle beams, including electrons, ions and neutral particles, may be used as well as electromagnetic radiation.
Hawryluk, Andrew M.; Ceglio, Natale M.
1993-01-01
Virtually distortion free large field high resolution imaging is performed using an imaging system which contains large field distortion or field curvature. A reticle is imaged in one direction through the optical system to form an encoded mask. The encoded mask is then imaged back through the imaging system onto a wafer positioned at the reticle position. Particle beams, including electrons, ions and neutral particles, may be used as well as electromagnetic radiation.
Fiorelli, Alfonso; Raucci, Antonio; Cascone, Roberto; Reginelli, Alfonso; Di Natale, Davide; Santoriello, Carlo; Capuozzo, Antonio; Grassi, Roberto; Serra, Nicola; Polverino, Mario; Santini, Mario
2017-04-01
We proposed a new virtual bronchoscopy tool to improve the accuracy of traditional transbronchial needle aspiration for mediastinal staging. Chest computed tomographic images (1 mm thickness) were reconstructed with Osirix software to produce a virtual bronchoscopic simulation. The target adenopathy was identified by measuring its distance from the carina on multiplanar reconstruction images. The static images were uploaded into iMovie software, which produced a virtual bronchoscopic movie from the images; the movie was then transferred to a tablet computer to provide real-time guidance during a biopsy. To test the validity of our tool, we retrospectively divided all consecutive patients undergoing transbronchial needle aspiration into two groups based on whether the biopsy was guided by virtual bronchoscopy (virtual bronchoscopy group) or not (traditional group). The intergroup diagnostic yields were statistically compared. Our analysis included 53 patients in the traditional and 53 in the virtual bronchoscopy group. The sensitivity, specificity, positive predictive value, negative predictive value and diagnostic accuracy for the traditional group were 66.6%, 100%, 100%, 10.53% and 67.92%, respectively, and for the virtual bronchoscopy group were 84.31%, 100%, 100%, 20% and 84.91%, respectively. The sensitivity (P = 0.011) and diagnostic accuracy (P = 0.011) of sampling the paratracheal station were better for the virtual bronchoscopy group than for the traditional group; no significant differences were found for the subcarinal lymph node. Our tool is simple, economical and available in all centres. It guided the needle insertion in real time, thereby improving the accuracy of traditional transbronchial needle aspiration, especially when target lesions are located in a difficult site like the paratracheal station. © The Author 2016. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
TU-A-17A-02: In Memoriam of Ben Galkin: Virtual Tools for Validation of X-Ray Breast Imaging Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myers, K; Bakic, P; Abbey, C
2014-06-15
This symposium will explore simulation methods for the preclinical evaluation of novel 3D and 4D x-ray breast imaging systems – the subject of AAPM taskgroup TG234. Given the complex design of modern imaging systems, simulations offer significant advantages over long and costly clinical studies in terms of reproducibility, reduced radiation exposures, a known reference standard, and the capability for studying patient and disease subpopulations through appropriate choice of simulation parameters. Our focus will be on testing the realism of software anthropomorphic phantoms and virtual clinical trials tools developed for the optimization and validation of breast imaging systems. The symposium will review the state-of-the-science, as well as the advantages and limitations of various approaches to testing realism of phantoms and simulated breast images. Approaches based upon the visual assessment of synthetic breast images by expert observers will be contrasted with approaches based upon comparing statistical properties between synthetic and clinical images. The role of observer models in the assessment of realism will be considered. Finally, an industry perspective will be presented, summarizing the role and importance of virtual tools and simulation methods in product development. The challenges and conditions that must be satisfied in order for computational modeling and simulation to play a significantly increased role in the design and evaluation of novel breast imaging systems will be addressed. Learning Objectives: Review the state-of-the-science in testing realism of software anthropomorphic phantoms and virtual clinical trials tools; Compare approaches based upon the visual assessment by expert observers vs. the analysis of statistical properties of synthetic images; Discuss the role of observer models in the assessment of realism; Summarize the industry perspective on virtual methods for breast imaging.
A digital atlas of breast histopathology: an application of web based virtual microscopy
Lundin, M; Lundin, J; Helin, H; Isola, J
2004-01-01
Aims: To develop an educationally useful atlas of breast histopathology, using advanced web based virtual microscopy technology. Methods: By using a robotic microscope and software adopted and modified from the aerial and satellite imaging industry, a virtual microscopy system was developed that allows fully automated slide scanning and image distribution via the internet. More than 150 slides were scanned at high resolution with an oil immersion ×40 objective (numerical aperture, 1.3) and archived on an image server residing in a high speed university network. Results: A publicly available website was constructed, http://www.webmicroscope.net/breastatlas, which features a comprehensive virtual slide atlas of breast histopathology according to the World Health Organisation 2003 classification. Users can view any part of an entire specimen at any magnification within a standard web browser. The virtual slides are supplemented with concise textual descriptions, but can also be viewed without diagnostic information for self assessment of histopathology skills. Conclusions: Using the technology described here, it is feasible to develop clinically and educationally useful virtual microscopy applications. Web based virtual microscopy will probably become widely used at all levels in pathology teaching. PMID:15563669
System and method for progressive band selection for hyperspectral images
NASA Technical Reports Server (NTRS)
Fisher, Kevin (Inventor)
2013-01-01
Disclosed herein are systems, methods, and non-transitory computer-readable storage media for progressive band selection for hyperspectral images. A system having a module configured to control a processor to practice the method calculates a virtual dimensionality of a hyperspectral image having multiple bands to determine a quantity Q of how many bands are needed for a threshold level of information, ranks each band based on a statistical measure, selects Q bands from the multiple bands to generate a subset of bands based on the virtual dimensionality, and generates a reduced image based on the subset of bands. This approach can create reduced datasets of full hyperspectral images tailored for individual applications. The system uses a metric specific to a target application to rank the image bands, and then selects the most useful bands. The number of bands selected can be specified manually or calculated from the hyperspectral image's virtual dimensionality.
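A minimal sketch of the band-selection step described above follows; it assumes the virtual dimensionality Q has already been estimated, and it uses per-band variance as a stand-in for the application-specific ranking metric, so both the function name and the metric are illustrative rather than taken from the patent.

```python
import numpy as np

def progressive_band_selection(cube, q):
    """Keep the q highest-ranked bands of a hyperspectral cube.

    cube : ndarray of shape (rows, cols, bands)
    q    : number of bands to retain, e.g. an estimate of virtual dimensionality
    Bands are ranked by per-band variance as a placeholder metric.
    """
    rows, cols, bands = cube.shape
    scores = cube.reshape(-1, bands).var(axis=0)   # one score per band
    ranked = np.argsort(scores)[::-1]              # best-scoring bands first
    selected = np.sort(ranked[:q])                 # keep original band order
    return cube[:, :, selected], selected

# Example: reduce a random 100 x 100 cube with 50 bands to its 10 "best" bands.
reduced, kept = progressive_band_selection(np.random.rand(100, 100, 50), q=10)
```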
A three phase optimization method for precopy based VM live migration.
Sharma, Sangeeta; Chawla, Meenu
2016-01-01
Virtual machine live migration is a method of moving a virtual machine across hosts within a virtualized datacenter. It provides significant benefits for administrators managing a datacenter efficiently. It reduces service interruption by transferring the virtual machine without stopping it at the source. Transfer of a large number of virtual machine memory pages results in long migration time as well as long downtime, which also affects overall system performance. This situation becomes unbearable when migration takes place over a slower network or across a long distance within a cloud. In this paper, the precopy-based virtual machine live migration method is thoroughly analyzed to trace the issues responsible for its performance drops. To address these issues, this paper proposes a three-phase optimization (TPO) method. It works in three phases: (i) reduce the transfer of memory pages in the first phase, (ii) reduce the transfer of duplicate pages by classifying frequently and non-frequently updated pages, and (iii) reduce the data sent in the last iteration of migration by applying a simple RLE compression technique. As a result, each phase significantly reduces total pages transferred, total migration time and downtime, respectively. The proposed TPO method is evaluated using different representative workloads on a Xen virtualized environment. Experimental results show that the TPO method reduces total pages transferred by 71%, total migration time by 70%, and downtime by 3% for higher workloads, and it does not impose significant overhead compared to the traditional precopy method. Comparison of the TPO method with other methods also supports its effectiveness. The TPO and precopy methods are also tested with different numbers of iterations; the TPO method gives better performance even with fewer iterations.
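As a rough illustration of the third TPO phase, the sketch below run-length encodes a memory page before the final transfer. The encoding format (count/value byte pairs with runs capped at 255) is an assumption made for illustration and is not specified by the paper.

```python
def rle_compress(page: bytes) -> bytes:
    """Run-length encode one memory page as (run length, byte value) pairs."""
    out = bytearray()
    i = 0
    while i < len(page):
        run = 1
        while i + run < len(page) and page[i + run] == page[i] and run < 255:
            run += 1
        out += bytes([run, page[i]])   # emit one (count, value) pair
        i += run
    return bytes(out)

# A mostly idle 4 KiB zero page shrinks to 34 bytes before being sent.
print(len(rle_compress(bytes(4096))))
```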
Observation of a pretransitional effect near a virtual smectic-A--smectic-C* transition.
Shibahara, S; Takanishi, Y; Yamamoto, J; Ogasawara, T; Ishikawa, K; Yokoyama, H; Takezoe, H
2001-06-01
Unusual softening of the layer compression modulus B has been observed near the phase boundary where the smectic-C* phase vanishes in a naphthalene-based liquid crystal mixture. From a systematic study of x-ray and layer compression measurements, this unusual effect is attributed to pretransitional softening near a virtual smectic-A-smectic-C* phase transition in the smectic-A phase, which no longer appears on the thermoequilibrium phase diagram. This phenomenon is similar but not equivalent to supercritical behavior.
Pukala, Jason; Meeks, Sanford L; Staton, Robert J; Bova, Frank J; Mañon, Rafael R; Langen, Katja M
2013-11-01
Deformable image registration (DIR) is being used increasingly in various clinical applications. However, the underlying uncertainties of DIR are not well-understood and a comprehensive methodology has not been developed for assessing a range of interfraction anatomic changes during head and neck cancer radiotherapy. This study describes the development of a library of clinically relevant virtual phantoms for the purpose of aiding clinicians in the QA of DIR software. These phantoms will also be available to the community for the independent study and comparison of other DIR algorithms and processes. Each phantom was derived from a pair of kVCT volumetric image sets. The first images were acquired of head and neck cancer patients prior to the start-of-treatment and the second were acquired near the end-of-treatment. A research algorithm was used to autosegment and deform the start-of-treatment (SOT) images according to a biomechanical model. This algorithm allowed the user to adjust the head position, mandible position, and weight loss in the neck region of the SOT images to resemble the end-of-treatment (EOT) images. A human-guided thin-plate splines algorithm was then used to iteratively apply further deformations to the images with the objective of matching the EOT anatomy as closely as possible. The deformations from each algorithm were combined into a single deformation vector field (DVF) and a simulated end-of-treatment (SEOT) image dataset was generated from that DVF. Artificial noise was added to the SEOT images and these images, along with the original SOT images, created a virtual phantom where the underlying "ground-truth" DVF is known. Images from ten patients were deformed in this fashion to create ten clinically relevant virtual phantoms. The virtual phantoms were evaluated to identify unrealistic DVFs using the normalized cross correlation (NCC) and the determinant of the Jacobian matrix. A commercial deformation algorithm was applied to the virtual phantoms to show how they may be used to generate estimates of DIR uncertainty. The NCC showed that the simulated phantom images had greater similarity to the actual EOT images than the images from which they were derived, supporting the clinical relevance of the synthetic deformation maps. Calculation of the Jacobian of the "ground-truth" DVFs resulted in only positive values. As an example, mean error statistics are presented for all phantoms for the brainstem, cord, mandible, left parotid, and right parotid. It is essential that DIR algorithms be evaluated using a range of possible clinical scenarios for each treatment site. This work introduces a library of virtual phantoms intended to resemble real cases for interfraction head and neck DIR that may be used to estimate and compare the uncertainty of any DIR algorithm.
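The two phantom checks mentioned above, normalized cross correlation against the end-of-treatment images and the determinant of the Jacobian of the ground-truth DVF, could be computed along the lines of the following sketch; the array layout, voxel spacing handling and function names are assumptions rather than details from the study.

```python
import numpy as np

def normalized_cross_correlation(a, b):
    """NCC between two image volumes of identical shape."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Determinant of the Jacobian of a dense deformation vector field.

    dvf : ndarray of shape (nz, ny, nx, 3) holding the displacement per voxel.
    Strictly positive values everywhere indicate a fold-free deformation.
    """
    grads = [np.gradient(dvf[..., c], *spacing) for c in range(3)]
    jac = np.zeros(dvf.shape[:3] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j]      # d(displacement_i)/d(axis_j)
    jac += np.eye(3)                          # transform Jacobian = I + gradient
    return np.linalg.det(jac)
```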
Virtual Reality as an Educational and Training Tool for Medicine.
Izard, Santiago González; Juanes, Juan A; García Peñalvo, Francisco J; Estella, Jesús Mª Gonçalvez; Ledesma, Mª José Sánchez; Ruisoto, Pablo
2018-02-01
Until very recently, we regarded Virtual Reality as something that was very close, yet still science fiction. Today, however, Virtual Reality is being integrated into many different areas of our lives, from videogames to different industrial use cases, and, of course, it is starting to be used in medicine. There are two broad classifications of Virtual Reality. The first is a Virtual Reality in which we visualize a world completely created by computer, three-dimensional, and in which we can tell that the world we are visualizing is not real, at least for the moment, as rendered images are improving very fast. The second is a Virtual Reality that essentially consists of a reflection of our reality. This type of Virtual Reality is created using spherical or 360° images and videos, so we lose three-dimensional visualization capacity (until 3D cameras are further developed), but on the other hand we gain in terms of realism of the images. We could also mention a third classification that merges the previous two, in which virtual elements created by computer coexist with 360° images and videos. In this article we present two systems that we have developed, each of which can be framed within one of the previous classifications, identifying the technologies used for their implementation as well as the advantages of each. We also analyze how these systems can improve the current methodologies used for medical training. The implications of these developments as tools for teaching, learning and training are discussed.
NASA Astrophysics Data System (ADS)
Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.
2017-09-01
Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.
Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander W.; Mavris, Dimitri N.
2014-01-01
A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.
Virtual Deformation Control of the X-56A Model with Simulated Fiber Optic Sensors
NASA Technical Reports Server (NTRS)
Suh, Peter M.; Chin, Alexander Wong
2013-01-01
A robust control law design methodology is presented to stabilize the X-56A model and command its wing shape. The X-56A was purposely designed to experience flutter modes in its flight envelope. The methodology introduces three phases: the controller design phase, the modal filter design phase, and the reference signal design phase. A mu-optimal controller is designed and made robust to speed and parameter variations. A conversion technique is presented for generating sensor strain modes from sensor deformation mode shapes. The sensor modes are utilized for modal filtering and simulating fiber optic sensors for feedback to the controller. To generate appropriate virtual deformation reference signals, rigid-body corrections are introduced to the deformation mode shapes. After successful completion of the phases, virtual deformation control is demonstrated. The wing is deformed and it is shown that angle-of-attack changes occur which could potentially be used to an advantage. The X-56A program must demonstrate active flutter suppression. It is shown that the virtual deformation controller can achieve active flutter suppression on the X-56A simulation model.
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-09-01
Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE.
Deficient gaze pattern during virtual multiparty conversation in patients with schizophrenia.
Han, Kiwan; Shin, Jungeun; Yoon, Sang Young; Jang, Dong-Pyo; Kim, Jae-Jin
2014-06-01
Virtual reality has been used to measure abnormal social characteristics, particularly in one-to-one situations. In real life, however, conversations with multiple companions are common and more complicated than two-party conversations. In this study, we explored the features of social behaviors in patients with schizophrenia during virtual multiparty conversations. Twenty-three patients with schizophrenia and 22 healthy controls performed the virtual three-party conversation task, which included leading and aiding avatars, positive- and negative-emotion-laden situations, and listening and speaking phases. Patients showed a significant negative correlation in the listening phase between the amount of gaze on the between-avatar space and reasoning ability, and demonstrated increased gaze on the between-avatar space in the speaking phase that was uncorrelated with attentional ability. These results suggest that patients with schizophrenia have active avoidance of eye contact during three-party conversations. Virtual reality may provide a useful way to measure abnormal social characteristics during multiparty conversations in schizophrenia. Copyright © 2014 Elsevier Ltd. All rights reserved.
Heuts, Samuel; Sardari Nia, Peyman; Maessen, Jos G
2016-01-01
Over the past decades, surgery has become more complex due to the increasing age of the patient population referred for thoracic surgery, more complex pathology, and the emergence of minimally invasive thoracic surgery. Together with the early detection of thoracic disease as a result of innovations in diagnostic possibilities and the paradigm shift to personalized medicine, preoperative planning is becoming an indispensable and crucial aspect of surgery. Several new techniques facilitating this paradigm shift have emerged. Preoperative marking and staining of lesions are already a widely accepted method of preoperative planning in thoracic surgery. However, three-dimensional (3D) image reconstructions, virtual simulation and rapid prototyping (RP) are still in the development phase. These new techniques are expected to become an important part of the standard work-up of patients undergoing thoracic surgery in the future. This review aims at graphically presenting and summarizing these new diagnostic and therapeutic tools.
Santos, Rodrigo Mologni Gonçalves Dos; De Martino, José Mario; Passeri, Luis Augusto; Attux, Romis Ribeiro de Faissol; Haiter Neto, Francisco
2017-09-01
To develop a computer-based method for automating the repositioning of jaw segments in the skull during three-dimensional virtual treatment planning of orthognathic surgery. The method speeds up the planning phase of the orthognathic procedure, releasing surgeons from laborious and time-consuming tasks. The method finds the optimal positions for the maxilla, mandibular body, and bony chin in the skull. Minimization of cephalometric differences between measured and standard values is considered. Cone-beam computed tomographic images acquired from four preoperative patients with skeletal malocclusion were used for evaluating the method. Dentofacial problems of the four patients were rectified, including skeletal malocclusion, facial asymmetry, and jaw discrepancies. The results show that the method is potentially able to be used in routine clinical practice as support for treatment-planning decisions in orthognathic surgery. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Fully Three-Dimensional Virtual-Reality System
NASA Technical Reports Server (NTRS)
Beckman, Brian C.
1994-01-01
Proposed virtual-reality system presents visual displays to simulate free flight in three-dimensional space. System, virtual space pod, is testbed for control and navigation schemes. Unlike most virtual-reality systems, virtual space pod would not depend for orientation on ground plane, which hinders free flight in three dimensions. Space pod provides comfortable seating, convenient controls, and dynamic virtual-space images for virtual traveler. Controls include buttons plus joysticks with six degrees of freedom.
NASA Astrophysics Data System (ADS)
Ding, Yea-Chung
2010-11-01
In recent years national parks worldwide have introduced online virtual tourism, through which potential visitors can search for tourist information. Most virtual tourism websites are a simulation of an existing location, usually composed of panoramic images, a sequence of hyperlinked still or video images, and/or virtual models of the actual location. As opposed to actual tourism, a virtual tour is typically accessed on a personal computer or an interactive kiosk. Using modern Digital Earth techniques such as high resolution satellite images, precise GPS coordinates and powerful 3D WebGIS, however, it's possible to create more realistic scenic models to present natural terrain and man-made constructions in greater detail. This article explains how to create an online scientific reality tourist guide for the Jinguashi Gold Ecological Park at Jinguashi in northern Taiwan, China. This project uses high-resolution Formosat 2 satellite images and digital aerial images in conjunction with DTM to create a highly realistic simulation of terrain, with the addition of 3DMAX to add man-made constructions and vegetation. Using this 3D Geodatabase model in conjunction with INET 3D WebGIS software, we have found Digital Earth concept can greatly improve and expand the presentation of traditional online virtual tours on the websites.
NASA Technical Reports Server (NTRS)
Sutliff, Daniel L.; Dougherty, Robert P.; Walker, Bruce E.
2010-01-01
An in-duct beamforming technique for imaging rotating broadband fan sources has been used to evaluate the acoustic characteristics of a Foam-Metal Liner installed over-the-rotor of a low-speed fan. The NASA Glenn Research Center's Advanced Noise Control Fan was used as a test bed. A duct wall-mounted phased array consisting of several rings of microphones was employed. The data are mathematically resampled in the fan rotating reference frame and subsequently used in a conventional beamforming technique. The steering vectors for the beamforming technique are derived from annular duct modes, so that effects of reflections from the duct walls are reduced.
... be detected by optical colonoscopy. Virtual colonoscopy uses virtual reality technology to produce three-dimensional images of the colon and rectum. However, the costs and benefits of virtual colonoscopy are still being investigated, and the technique ...
Narita, Akihiro; Ohkubo, Masaki; Murao, Kohei; Matsumoto, Toru; Wada, Shinichi
2017-10-01
The aim of this feasibility study using phantoms was to propose a novel method for obtaining computer-generated realistic virtual nodules in lung computed tomography (CT). In the proposed methodology, pulmonary nodule images obtained with a CT scanner are deconvolved with the point spread function (PSF) in the scan plane and slice sensitivity profile (SSP) measured for the scanner; the resultant images are referred to as nodule-like object functions. Next, by convolving the nodule-like object function with the PSF and SSP of another (target) scanner, the virtual nodule can be generated so that it has the characteristics of the spatial resolution of the target scanner. To validate the methodology, the authors applied physical nodules of 5-, 7- and 10-mm-diameter (uniform spheres) included in a commercial CT test phantom. The nodule-like object functions were calculated from the sphere images obtained with two scanners (Scanner A and Scanner B); these functions were referred to as nodule-like object functions A and B, respectively. From these, virtual nodules were generated based on the spatial resolution of another scanner (Scanner C). By investigating the agreement of the virtual nodules generated from the nodule-like object functions A and B, the equivalence of the nodule-like object functions obtained from different scanners could be assessed. In addition, these virtual nodules were compared with the real (true) sphere images obtained with Scanner C. As a practical validation, five types of laboratory-made physical nodules with various complicated shapes and heterogeneous densities, similar to real lesions, were used. The nodule-like object functions were calculated from the images of these laboratory-made nodules obtained with Scanner A. From them, virtual nodules were generated based on the spatial resolution of Scanner C and compared with the real images of laboratory-made nodules obtained with Scanner C. Good agreement of the virtual nodules generated from the nodule-like object functions A and B of the phantom spheres was found, suggesting the validity of the nodule-like object functions. The virtual nodules generated from the nodule-like object function A of the phantom spheres were similar to the real images obtained with Scanner C; the root mean square errors (RMSEs) between them were 10.8, 11.1, and 12.5 Hounsfield units (HU) for 5-, 7-, and 10-mm-diameter spheres, respectively. The equivalent results (RMSEs) using the nodule-like object function B were 15.9, 16.8, and 16.5 HU, respectively. These RMSEs were small considering the high contrast between the sphere density and background density (approximately 674 HU). The virtual nodules generated from the nodule-like object functions of the five laboratory-made nodules were similar to the real images obtained with Scanner C; the RMSEs between them ranged from 6.2 to 8.6 HU in five cases. The nodule-like object functions calculated from real nodule images would be effective to generate realistic virtual nodules. The proposed method would be feasible for generating virtual nodules that have the characteristics of the spatial resolution of the CT system used in each institution, allowing for site-specific nodule generation. © 2017 American Association of Physicists in Medicine.
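A simplified sketch of the deconvolution/reconvolution idea is shown below, using a regularized inverse filter in the Fourier domain; the regularization constant, the assumption of shift-invariant centred PSFs and the function name are illustrative choices, not details of the published method.

```python
import numpy as np
from numpy.fft import fftn, ifftn

def resolution_transfer(nodule_img, psf_source, psf_target, eps=1e-3):
    """Transfer a nodule image from one scanner's resolution to another's.

    The measured nodule image is deconvolved by the source-scanner PSF
    (regularized inverse filter) to approximate the nodule-like object
    function, then convolved with the target-scanner PSF. All arrays are
    assumed to share one shape, with the PSFs centred and normalized.
    """
    h_src = fftn(np.fft.ifftshift(psf_source))
    h_tgt = fftn(np.fft.ifftshift(psf_target))
    obj = fftn(nodule_img) * np.conj(h_src) / (np.abs(h_src) ** 2 + eps)
    return np.real(ifftn(obj * h_tgt))
```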
The Development of Virtual Laboratory Using ICT for Physics in Senior High School
NASA Astrophysics Data System (ADS)
Masril, M.; Hidayati, H.; Darvina, Y.
2018-04-01
One of the problems found in the implementation of the 2013 curriculum is that not all competency skills can be carried out well. Therefore, to overcome this problem, a virtual laboratory was designed to improve the mastery of physics concepts. One of the design objectives of the virtual laboratory is to improve the quality of physics education and learning in senior high school. The method used in this study is the four-D research and development model, with a definition phase, design phase, development phase, and dissemination phase. The research has reached the development stage and has been validated by specialists. The instrument used in the research is a questionnaire covering: 1) material substance; 2) visual communication display; 3) instructional design; 4) software use; and 5) language. The results show that validity is in general in the very good category (85.6), so the virtual laboratory design can already be used in senior high school.
NASA Astrophysics Data System (ADS)
Zhu, Y.; Jin, S.; Tian, Y.; Wang, M.
2017-09-01
To meet the requirement of high-accuracy and high-speed processing of wide-swath high-resolution optical satellite imagery in emergency situations, in both ground processing systems and on-board processing systems, this paper proposes an ROI-oriented sensor correction algorithm based on a virtual steady reimaging model. Firstly, the imaging time and spatial window of the ROI are determined by a dynamic search method. Then, the dynamic ROI sensor correction model based on the virtual steady reimaging model is constructed. Finally, the corrected image corresponding to the ROI is generated based on the coordinate mapping relationship established between the dynamic sensor correction model for the corrected image and the rigorous imaging model for the original image. Two experimental results show that image registration between panchromatic and multispectral images can be well achieved and that image distortion caused by satellite jitter can also be corrected efficiently.
Virtual endoscopic imaging of the spine.
Kotani, Toshiaki; Nagaya, Shigeyuki; Sonoda, Masaru; Akazawa, Tsutomu; Lumawig, Jose Miguel T; Nemoto, Tetsuharu; Koshi, Takana; Kamiya, Koshiro; Hirosawa, Naoya; Minami, Shohei
2012-05-20
Prospective trial of virtual endoscopy in spinal surgery. To investigate the utility of virtual endoscopy of the spine in conjunction with spinal surgery. Several studies have described clinical applications of virtual endoscopy to visualize the inside of the bronchi, paranasal sinus, stomach, small intestine, pancreatic duct, and bile duct, but, to date, no study has described the use of virtual endoscopy in the spine. Virtual endoscopy is a realistic 3-dimensional intraluminal simulation of tubular structures that is generated by postprocessing of computed tomographic data sets. Five patients with spinal disease were selected: 2 patients with degenerative disease, 2 patients with spinal deformity, and 1 patient with spinal injury. Virtual endoscopy software allows an observer to explore the spinal canal with a mouse, using multislice computed tomographic data. Our study found that virtual endoscopy of the spine has advantages compared with standard imaging methods because surgeons can noninvasively explore the spinal canal in all directions. Virtual endoscopy of the spine may be useful to surgeons for diagnosis, preoperative planning, and postoperative assessment by obviating the need to mentally construct a 3-dimensional picture of the spinal canal from 2-dimensional computed tomographic scans.
Method and Apparatus for Virtual Interactive Medical Imaging by Multiple Remotely-Located Users
NASA Technical Reports Server (NTRS)
Ross, Muriel D. (Inventor); Twombly, Ian Alexander (Inventor); Senger, Steven O. (Inventor)
2003-01-01
A virtual interactive imaging system allows the displaying of high-resolution, three-dimensional images of medical data to a user and allows the user to manipulate the images, including rotation of images in any of various axes. The system includes a mesh component that generates a mesh to represent a surface of an anatomical object, based on a set of data of the object, such as from a CT or MRI scan or the like. The mesh is generated so as to avoid tears, or holes, in the mesh, providing very high-quality representations of topographical features of the object, particularly at high resolution. The system further includes a virtual surgical cutting tool that enables the user to simulate the removal of a piece or layer of a displayed object, such as a piece of skin or bone, view the interior of the object, manipulate the removed piece, and reattach the removed piece if desired. The system further includes a virtual collaborative clinic component, which allows the users of multiple, remotely-located computer systems to collaboratively and simultaneously view and manipulate the high-resolution, three-dimensional images of the object in real-time.
Ferrer-García, Marta; Gutiérrez-Maldonado, José
2012-01-01
This article reviews research into the use of virtual reality in the study, assessment, and treatment of body image disturbances in eating disorders and nonclinical samples. During the last decade, virtual reality has emerged as a technology that is especially suitable not only for the assessment of body image disturbances but also for their treatment. Indeed, several virtual environment-based software systems have been developed for this purpose. Furthermore, virtual reality seems to be a good alternative to guided imagery and in vivo exposure, and is therefore very useful for studies that require exposure to life-like situations that are difficult to arrange in the real world. Nevertheless, this review highlights the lack of published controlled studies and the presence of methodological drawbacks that should be considered in future studies. This article also discusses the implications of the results obtained and proposes directions for future research. Copyright © 2011 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Roscoe, Stanley N.
1989-01-01
For better or worse, virtual imaging displays are with us in the form of narrow-angle combining-glass presentations, head-up displays (HUD), and head-mounted projections of wide-angle sensor-generated or computer-animated imagery (HMD). All military and civil aviation services and a large number of aerospace companies are involved in one way or another in a frantic competition to develop the best virtual imaging display system. The success or failure of major weapon systems hangs in the balance, and billions of dollars in potential business are at stake. Because of the degree to which national defense is committed to the perfection of virtual imaging displays, a brief consideration of their status, an investigation and analysis of their problems, and a search for realistic alternatives are long overdue.
Mirror-image-induced magnetic modes.
Xifré-Pérez, Elisabet; Shi, Lei; Tuzer, Umut; Fenollosa, Roberto; Ramiro-Manzano, Fernando; Quidant, Romain; Meseguer, Francisco
2013-01-22
Reflection in a mirror changes the handedness of the real world, and right-handed objects turn left-handed and vice versa (M. Gardner, The Ambidextrous Universe, Penguin Books, 1964). Also, we learn from electromagnetism textbooks that a flat metallic mirror transforms an electric charge into a virtual opposite charge. Consequently, the mirror image of a magnet is another parallel virtual magnet as the mirror image changes both the charge sign and the curl handedness. Here we report the dramatic modification in the optical response of a silicon nanocavity induced by the interaction with its image through a flat metallic mirror. The system of real and virtual dipoles can be interpreted as an effective magnetic dipole responsible for a strong enhancement of the cavity scattering cross section.
Evaluation of three-dimensional virtual perception of garments
NASA Astrophysics Data System (ADS)
Aydoğdu, G.; Yeşilpinar, S.; Erdem, D.
2017-10-01
In recent years, three-dimensional design, dressing and simulation programs have come into prominence in the textile industry. With these programs, the need to produce clothing samples for every design during the design process has been eliminated. Clothing fit, design, pattern, fabric and accessory details and fabric drape features can be evaluated easily. Also, the body size of the virtual mannequin can be adjusted, so more realistic simulations can be created. Moreover, three-dimensional virtual garment images created by these programs can be used when presenting the product to the end-user instead of two-dimensional photographs. In this study, a survey was carried out to investigate the visual perception of consumers. The survey was conducted separately for three different garment types. Participants were asked questions about gender, profession, etc., and were expected to compare real samples with artworks or three-dimensional virtual images of the garments. When the survey results were analyzed statistically, it was seen that the demographic characteristics of the participants do not affect visual perception and that three-dimensional virtual garment images reflect the characteristics of the real sample better than artworks for each garment type. It was also reported that there is no difference in perception between the t-shirt, sweatshirt and tracksuit bottom garment types.
NASA Technical Reports Server (NTRS)
Hall, Brendan (Inventor); Bonk, Ted (Inventor); Varadarajan, Srivatsan (Inventor); Smithgall, William Todd (Inventor); DeLay, Benjamin F. (Inventor)
2017-01-01
Systems and methods for systematic hybrid network scheduling for multiple traffic classes with host timing and phase constraints are provided. In certain embodiments, a method of scheduling communications in a network comprises scheduling transmission of virtual links pertaining to a first traffic class on a global schedule to coordinate transmission of the virtual links pertaining to the first traffic class across all transmitting end stations on the global schedule; and scheduling transmission of each virtual link pertaining to a second traffic class on a local schedule of the respective transmitting end station from which each respective virtual link pertaining to the second traffic class is transmitted such that transmission of each virtual link pertaining to the second traffic class is coordinated only at the respective end station from which each respective virtual link pertaining to the second traffic class is transmitted.
Functional imaging of hippocampal place cells at cellular resolution during virtual navigation
Dombeck, Daniel A.; Harvey, Christopher D.; Tian, Lin; Looger, Loren L.; Tank, David W.
2010-01-01
Spatial navigation is a widely employed behavior in rodent studies of neuronal circuits underlying cognition, learning and memory. In vivo microscopy combined with genetically-encoded indicators provides important new tools to study neuronal circuits, but has been technically difficult to apply during navigation. We describe methods to image the activity of hippocampal CA1 neurons with sub-cellular resolution in behaving mice. Neurons expressing the genetically encoded calcium indicator GCaMP3 were imaged through a chronic hippocampal window. Head-fixed mice performed spatial behaviors within a setup combining a virtual reality system and a custom built two-photon microscope. Populations of place cells were optically identified, and the correlation between the location of their place fields in the virtual environment and their anatomical location in the local circuit was measured. The combination of virtual reality and high-resolution functional imaging should allow for a new generation of studies to probe neuronal circuit dynamics during behavior. PMID:20890294
Qian, Zeng-Hui; Feng, Xu; Li, Yang; Tang, Ke
2018-01-01
Studying the three-dimensional (3D) anatomy of the cavernous sinus is essential for treating lesions in this region with skull base surgeries. Cadaver dissection is a conventional method that has insurmountable flaws with regard to understanding spatial anatomy. The authors' research aimed to build an image model of the cavernous sinus region in a virtual reality system to precisely, individually and objectively elucidate the complete and local stereo-anatomy. Computed tomography and magnetic resonance imaging scans were performed on 5 adult cadaver heads. Latex mixed with contrast agent was injected into the arterial system and then into the venous system. Computed tomography scans were performed again following the 2 injections. Magnetic resonance imaging scans were performed again after the cranial nerves were exposed. Image data were input into a virtual reality system to establish a model of the cavernous sinus. Observation results of the image models were compared with those of the cadaver heads. Visualization of the cavernous sinus region models built using the virtual reality system was good for all the cadavers. High resolutions were achieved for the images of different tissues. The observed results were consistent with those of the cadaver head. The spatial architecture and modality of the cavernous sinus were clearly displayed in the 3D model by rotating the model and conveniently changing its transparency. A 3D virtual reality model of the cavernous sinus region is helpful for globally and objectively understanding anatomy. The observation procedure was accurate, convenient, noninvasive, and time and specimen saving.
ConfocalVR: Immersive Visualization Applied to Confocal Microscopy.
Stefani, Caroline; Lacy-Hulbert, Adam; Skillman, Thomas
2018-06-24
ConfocalVR is a virtual reality (VR) application created to improve the ability of researchers to study the complexity of cell architecture. Confocal microscopes take pictures of fluorescently labeled proteins or molecules at different focal planes to create a stack of 2D images throughout the specimen. Current software applications reconstruct the 3D image and render it as a 2D projection onto a computer screen where users need to rotate the image to expose the full 3D structure. This process is mentally taxing, breaks down if you stop the rotation, and does not take advantage of the eye's full field of view. ConfocalVR exploits consumer-grade virtual reality (VR) systems to fully immerse the user in the 3D cellular image. In this virtual environment the user can: 1) adjust image viewing parameters without leaving the virtual space, 2) reach out and grab the image to quickly rotate and scale the image to focus on key features, and 3) interact with other users in a shared virtual space enabling real-time collaborative exploration and discussion. We found that immersive VR technology allows the user to rapidly understand cellular architecture and protein or molecule distribution. We note that it is impossible to understand the value of immersive visualization without experiencing it first hand, so we encourage readers to get access to a VR system, download this software, and evaluate it for yourself. The ConfocalVR software is available for download at http://www.confocalvr.com, and is free for nonprofits. Copyright © 2018. Published by Elsevier Ltd.
Development and comparison of projection and image space 3D nodule insertion techniques
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan
2016-04-01
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques. These techniques include projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference) and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
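The volume comparison described above, paired t-tests and R2 goodness of fit between physically and virtually inserted nodules, might be reproduced along these lines; computing R2 against the identity line is one plausible reading of the abstract, and the function name is illustrative.

```python
import numpy as np
from scipy import stats

def compare_volumes(real_vols, virtual_vols):
    """Paired t-test and R^2 between physically and virtually inserted nodule volumes."""
    real = np.asarray(real_vols, dtype=float)
    virt = np.asarray(virtual_vols, dtype=float)
    t_stat, p_value = stats.ttest_rel(real, virt)     # paired t-test
    ss_res = np.sum((virt - real) ** 2)                # residuals to the identity line
    ss_tot = np.sum((real - real.mean()) ** 2)
    r_squared = 1.0 - ss_res / ss_tot
    return t_stat, p_value, r_squared
```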
[Virtual bronchoscopy: the correlation between endoscopic simulation and bronchoscopic findings].
Salvolini, L; Gasparini, S; Baldelli, S; Bichi Secchi, E; Amici, F
1997-11-01
We carried out a preliminary clinical validation of 3D spiral CT virtual endoscopic reconstructions of the tracheobronchial tree, by comparing virtual bronchoscopic images with actual endoscopic findings. Twenty-two patients with tracheobronchial disease suspected on preliminary clinical, cytopathological and plain chest film findings were submitted to spiral CT of the chest and bronchoscopy. CT was repeated after endobronchial therapy in 2 cases. Virtual endoscopic shaded-surface-display views of the tracheobronchial tree were reconstructed from reformatted CT data with Advantage Navigator software. Virtual bronchoscopic images were preliminarily evaluated with a semi-quantitative quality score (excellent/good/fair/poor). The depiction of consecutive airway branches was then considered. Virtual bronchoscopies were finally submitted to double-blind comparison with actual endoscopies. Virtual image quality was considered excellent in 8 cases, good in 14 and fair in 2. Virtual exploration was stopped at the lobar bronchi in one case only; the origin of segmental bronchi was depicted in 23 cases and that of some subsegmental branches in 2 cases. Agreement between actual and virtual bronchoscopic findings was good in all cases but 3, where it was nevertheless considered satisfactory. The yield of clinically useful information differed in 8/24 cases: virtual reconstructions provided more information than bronchoscopy in 5 cases and vice versa in 3. Virtual reconstructions are limited in that the procedure is long and difficult and needs a strictly standardized threshold value so as not to alter the virtual findings. Moreover, the reconstructed surface lacks transparency, partial volume effects are present, and branches of 4 pixels or less in diameter and/or meandering branches are difficult to explore. Our preliminary data are encouraging. Segmental bronchi were depicted in nearly all cases, except for the branches involved by disease. Obstructing lesions could be bypassed in some cases, providing an indication for endoscopic laser therapy. Future didactic perspectives and applications to minimally invasive or virtual reality-assisted therapy seem promising, even though actual clinical applications require further studies.
NASA Astrophysics Data System (ADS)
McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.
2017-12-01
Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen an impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application, a virtual object is projected into the real world, with which researchers can interact. There are several limitations to a purely VR or AR application when taken within the context of remote planetary exploration. For example, in a purely VR environment, contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames will lack 3D visual information, i.e. depth information. In this paper, we present a technique to utilize a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique will blend the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video being presented in real time into the virtual environment. Notice the preservation of the object's shape, shadows, and depth information. The distortions shown in the image are due to the rendering of the stereoscopic data into a 2D image for the purposes of taking screenshots.
Three-dimensional face model reproduction method using multiview images
NASA Astrophysics Data System (ADS)
Nagashima, Yoshio; Agawa, Hiroshi; Kishino, Fumio
1991-11-01
This paper describes a method of reproducing three-dimensional face models using multi-view images for a virtual space teleconferencing system that achieves a realistic visual presence for teleconferencing. The goal of this research, as an integral component of a virtual space teleconferencing system, is to generate a three-dimensional face model from facial images, synthesize images of the model virtually viewed from different angles, and with natural shadow to suit the lighting conditions of the virtual space. The proposed method is as follows: first, front and side view images of the human face are taken by TV cameras. The 3D data of facial feature points are obtained from front- and side-views by an image processing technique based on the color, shape, and correlation of face components. Using these 3D data, the prepared base face models, representing typical Japanese male and female faces, are modified to approximate the input facial image. The personal face model, representing the individual character, is then reproduced. Next, an oblique view image is taken by TV camera. The feature points of the oblique view image are extracted using the same image processing technique. A more precise personal model is reproduced by fitting the boundary of the personal face model to the boundary of the oblique view image. The modified boundary of the personal face model is determined by using face direction, namely rotation angle, which is detected based on the extracted feature points. After the 3D model is established, the new images are synthesized by mapping facial texture onto the model.
Application of Virtual Navigation with Multimodality Image Fusion in Foramen Ovale Cannulation.
Qiu, Xixiong; Liu, Weizong; Zhang, Mingdong; Lin, Hengzhou; Zhou, Shoujun; Lei, Yi; Xia, Jun
2017-11-01
Idiopathic trigeminal neuralgia (ITN) can be effectively treated with radiofrequency thermocoagulation. However, this procedure requires cannulation of the foramen ovale, and conventional cannulation methods are associated with high failure rates. Multimodality imaging can improve the accuracy of cannulation because each imaging method can compensate for the drawbacks of the other. We aim to determine the feasibility and accuracy of percutaneous foramen ovale cannulation under the guidance of virtual navigation with multimodality image fusion in a self-designed anatomical model of human cadaveric heads. Five cadaveric head specimens were investigated in this study. Spiral computed tomography (CT) scanning clearly displayed the foramen ovale in all five specimens (10 foramina), which could not be visualized using two-dimensional ultrasound alone. The ultrasound and spiral CT images were fused, and percutaneous cannulation of the foramen ovale was performed under virtual navigation. After this, spiral CT scanning was immediately repeated to confirm the accuracy of the cannulation. Postprocedural spiral CT confirmed that the ultrasound and CT images had been successfully fused for all 10 foramina, which were accurately and successfully cannulated. The success rates of both image fusion and cannulation were 100%. Virtual navigation with multimodality image fusion can substantially facilitate foramen ovale cannulation and is worthy of clinical application. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Teistler, M; Breiman, R S; Lison, T; Bott, O J; Pretschner, D P; Aziz, A; Nowinski, W L
2008-10-01
Volumetric imaging (computed tomography and magnetic resonance imaging) provides increased diagnostic detail but is associated with the problem of navigation through large amounts of data. In an attempt to overcome this problem, a novel 3D navigation tool has been designed and developed that is based on an alternative input device. A 3D mouse allows for simultaneous definition of position and orientation of orthogonal or oblique multiplanar reformatted images or slabs, which are presented within a virtual 3D scene together with the volume-rendered data set and additionally as 2D images. Slabs are visualized with maximum intensity projection, average intensity projection, or a standard volume rendering technique. A prototype has been implemented based on PC technology and tested by several radiologists. It has been shown to be easily understandable and usable after a very short learning phase. Our solution may help to fully exploit the diagnostic potential of volumetric imaging by allowing for a more efficient reading process compared with currently deployed solutions based on a conventional mouse and keyboard.
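The slab rendering modes mentioned above (maximum intensity projection and average intensity projection) reduce a stack of slices to a single image. A minimal NumPy sketch, assuming an axis-aligned slab, illustrates the difference; it is not the tool's implementation.

    import numpy as np

    def project_slab(volume, start, thickness, axis=0, mode="mip"):
        """Collapse a slab of a 3-D volume into a 2-D image.
        mode='mip' keeps the maximum intensity along the slab,
        mode='aip' keeps the average intensity."""
        slab = np.take(volume, range(start, start + thickness), axis=axis)
        if mode == "mip":
            return slab.max(axis=axis)
        if mode == "aip":
            return slab.mean(axis=axis)
        raise ValueError("mode must be 'mip' or 'aip'")

    vol = np.random.rand(128, 256, 256)        # stand-in for a CT/MR volume
    mip_image = project_slab(vol, start=40, thickness=10, mode="mip")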
Siri, Sangeeta K; Latte, Mrityunjaya V
2017-11-01
Many different diseases can occur in the liver, including infections such as hepatitis, cirrhosis, cancer, and the adverse effects of medication or toxins. The foremost stage in computer-aided diagnosis of the liver is the identification of the liver region. Liver segmentation algorithms extract the liver image from scan images, which helps in virtual surgery simulation, speeds up diagnosis, and supports accurate investigation and surgery planning. Existing liver segmentation algorithms try to extract the exact liver image from abdominal Computed Tomography (CT) scan images. It is an open problem because of ambiguous boundaries, large variation in intensity distribution, variability of liver geometry from patient to patient, and the presence of noise. A novel approach is proposed to meet the challenges of extracting the exact liver image from abdominal CT scan images. The proposed approach consists of three phases: (1) pre-processing, (2) CT scan image transformation to a Neutrosophic Set (NS), and (3) post-processing. In pre-processing, noise is removed by a median filter. A "new structure" is designed to transform a CT scan image into the neutrosophic domain, which is expressed using three membership subsets: the True subset (T), the False subset (F) and the Indeterminacy subset (I). This transform approximately extracts the liver image structure. In the post-processing phase, a morphological operation is performed on the indeterminacy subset (I), and the Chan-Vese (C-V) model is applied with detection of an initial contour within the liver without user intervention. This results in liver boundary identification with high accuracy. Experiments show that the proposed method is effective, robust, and comparable with existing algorithms for liver segmentation of CT scan images. Copyright © 2017 Elsevier B.V. All rights reserved.
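As a rough illustration of the neutrosophic step, the sketch below maps a grayscale slice to True/Indeterminacy/False membership images using a common formulation (local mean for T, normalized deviation for I, F = 1 - T) and then applies a morphological operation to the indeterminacy subset. It follows generic neutrosophic image definitions, not the paper's own "new structure", and the window size and threshold are arbitrary.

    import numpy as np
    from scipy.ndimage import uniform_filter, binary_opening

    def neutrosophic_subsets(image, window=5):
        """Map a grayscale slice to (T, I, F) membership images
        using a common (not the paper's) formulation."""
        img = image.astype(np.float64)
        local_mean = uniform_filter(img, size=window)
        T = (local_mean - local_mean.min()) / (np.ptp(local_mean) + 1e-12)
        delta = np.abs(img - local_mean)
        I = (delta - delta.min()) / (np.ptp(delta) + 1e-12)
        F = 1.0 - T
        return T, I, F

    slice_ct = np.random.rand(256, 256)          # stand-in for a CT slice
    T, I, F = neutrosophic_subsets(slice_ct)
    # Post-processing idea from the abstract: morphology on I before contouring.
    homogeneous_mask = binary_opening(I < 0.2, iterations=2)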
Wang, Yu; Helminen, Emily; Jiang, Jingfeng
2015-01-01
Purpose: Quasistatic ultrasound elastography (QUE) is being used to augment in vivo characterization of breast lesions. Results from early clinical trials indicated that there was a lack of confidence in image interpretation. Such confidence can only be gained through rigorous imaging tests using complex, heterogeneous but known media. The objective of this study is to build a virtual breast QUE simulation platform in the public domain that can be used not only for innovative QUE research but also for rigorous imaging tests. Methods: The main thrust of this work is to streamline biomedical ultrasound simulations by leveraging existing open source software packages including Field II (ultrasound simulator), VTK (geometrical visualization and processing), FEBio [finite element (FE) analysis], and Tetgen (mesh generator). However, integration of these open source packages is nontrivial and requires interdisciplinary knowledge. In the first step, a virtual breast model containing complex anatomical geometries was created through a novel combination of image-based landmark structures and randomly distributed (small) structures. Image-based landmark structures were based on data from the NIH Visible Human Project. Subsequently, an unstructured FE-mesh was created by Tetgen. In the second step, randomly positioned point scatterers were placed within the meshed breast model through an octree-based algorithm to make a virtual breast ultrasound phantom. In the third step, an ultrasound simulator (Field II) was used to interrogate the virtual breast phantom to obtain simulated ultrasound echo data. Of note, tissue deformation generated using a FE-simulator (FEBio) was the basis of deforming the original virtual breast phantom in order to obtain the postdeformation breast phantom for subsequent ultrasound simulations. Using the procedures described above, a full cycle of QUE simulations involving complex and highly heterogeneous virtual breast phantoms can be accomplished for the first time. Results: Representative examples were used to demonstrate capabilities of this virtual simulation platform. In the first set of three ultrasound simulation examples, three heterogeneous volumes of interest were selected from a virtual breast ultrasound phantom to perform sophisticated ultrasound simulations. These resultant B-mode images realistically represented the underlying complex but known media. In the second set of three QUE examples, advanced applications in QUE were simulated. The first QUE example was to show breast tumors with complex shapes and/or compositions. The resultant strain images showed complex patterns that were normally seen in freehand clinical ultrasound data. The second and third QUE examples demonstrated (deformation-dependent) nonlinear strain imaging and time-dependent strain imaging, respectively. Conclusions: The proposed virtual QUE platform was implemented and successfully tested in this study. Through show-case examples, the proposed work has demonstrated its capabilities of creating sophisticated QUE data in a way that cannot be done through the manufacture of physical tissue-mimicking phantoms and other software. This open software architecture will soon be made available in the public domain and can be readily adapted to meet specific needs of different research groups to drive innovations in QUE. PMID:26328994
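One building block of such a phantom is the placement of random point scatterers with tissue-dependent amplitudes inside a labeled volume before the ultrasound simulation. The NumPy sketch below is a simplified stand-in for the paper's octree-based placement; densities, amplitudes and the label convention are hypothetical.

    import numpy as np

    def seed_scatterers(label_volume, voxel_mm, density_per_mm3, amp_std):
        """Place random point scatterers inside a labeled tissue volume.
        Returns (N, 3) positions in mm and (N,) Gaussian amplitudes."""
        rng = np.random.default_rng(0)
        positions, amplitudes = [], []
        voxel_vol = float(np.prod(voxel_mm))
        for label in np.unique(label_volume):
            if label == 0:                       # 0 = background, no scatterers
                continue
            idx = np.argwhere(label_volume == label)
            n = rng.poisson(density_per_mm3[label] * voxel_vol * len(idx))
            chosen = idx[rng.integers(0, len(idx), size=n)]
            jitter = rng.uniform(0.0, 1.0, size=chosen.shape)
            positions.append((chosen + jitter) * np.asarray(voxel_mm))
            amplitudes.append(rng.normal(0.0, amp_std[label], size=n))
        return np.vstack(positions), np.concatenate(amplitudes)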
Building an Open-source Simulation Platform of Acoustic Radiation Force-based Breast Elastography
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-01-01
Ultrasound-based elastography including strain elastography (SE), acoustic radiation force Impulse (ARFI) imaging, point shear wave elastography (pSWE) and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. “ground truth”) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity – one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments. PMID:28075330
Building an open-source simulation platform of acoustic radiation force-based breast elastography
NASA Astrophysics Data System (ADS)
Wang, Yu; Peng, Bo; Jiang, Jingfeng
2017-03-01
Ultrasound-based elastography including strain elastography, acoustic radiation force impulse (ARFI) imaging, point shear wave elastography and supersonic shear imaging (SSI) have been used to differentiate breast tumors among other clinical applications. The objective of this study is to extend a previously published virtual simulation platform built for ultrasound quasi-static breast elastography toward acoustic radiation force-based breast elastography. Consequently, the extended virtual breast elastography simulation platform can be used to validate image pixels with known underlying soft tissue properties (i.e. ‘ground truth’) in complex, heterogeneous media, enhancing confidence in elastographic image interpretations. The proposed virtual breast elastography system inherited four key components from the previously published virtual simulation platform: an ultrasound simulator (Field II), a mesh generator (Tetgen), a finite element solver (FEBio) and a visualization and data processing package (VTK). Using a simple message passing mechanism, functionalities have now been extended to acoustic radiation force-based elastography simulations. Examples involving three different numerical breast models with increasing complexity—one uniform model, one simple inclusion model and one virtual complex breast model derived from magnetic resonance imaging data, were used to demonstrate capabilities of this extended virtual platform. Overall, simulation results were compared with the published results. In the uniform model, the estimated shear wave speed (SWS) values were within 4% compared to the predetermined SWS values. In the simple inclusion and the complex breast models, SWS values of all hard inclusions in soft backgrounds were slightly underestimated, similar to what has been reported. The elastic contrast values and visual observation show that ARFI images have higher spatial resolution, while SSI images can provide higher inclusion-to-background contrast. In summary, our initial results were consistent with our expectations and what have been reported in the literature. The proposed (open-source) simulation platform can serve as a single gateway to perform many elastographic simulations in a transparent manner, thereby promoting collaborative developments.
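For readers unfamiliar with shear wave speed (SWS) estimation, a minimal time-of-flight sketch is given below: the arrival time of the radiation-force-induced shear wave is picked at two lateral positions and the speed follows from distance over delay, with shear modulus recovered under a linear-elastic assumption. This is a generic illustration, not the platform's estimator.

    import numpy as np

    def shear_wave_speed(displacement, lateral_mm, times_s):
        """Time-of-flight SWS estimate.
        displacement : (n_lateral, n_time) axial displacement traces
        lateral_mm   : lateral position of each trace in mm
        times_s      : sample times in s"""
        t_peak = times_s[np.argmax(displacement, axis=1)]   # arrival time per position
        dx_m = (lateral_mm[-1] - lateral_mm[0]) * 1e-3
        dt_s = t_peak[-1] - t_peak[0]
        return dx_m / dt_s                                   # m/s

    # Under a linear-elastic assumption, shear modulus mu = rho * c^2
    c = 2.5                   # example SWS in m/s
    mu_pa = 1000.0 * c ** 2   # assuming tissue density of ~1000 kg/m^3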
Transition of a dental histology course from light to virtual microscopy.
Weaker, Frank J; Herbert, Damon C
2009-10-01
The transition of the dental histology course at the University of Texas Health Science Center at San Antonio Dental School was completed gradually over a five-year period. A pilot project was initially conducted to study the feasibility of integrating virtual microscopy into a traditional light microscopic lecture and laboratory course. Because of the difficulty of procuring quality calcified and decalcified sections of teeth, slides from the student loan collection in the oral histology block of the course were outsourced for conversion to digital images and placed on DVDs along with a slide viewer. The slide viewer mimicked the light microscope, allowing horizontal and vertical movement and changing of magnification, and, in addition, offered a feature to capture static images. In a survey, students rated the ease of use of the software, the quality of the images, and the maneuverability of the images, and answered questions regarding use of the software and the effective use of laboratory and faculty time. Because of the positive support from the students, our entire student loan collection of 153 glass slides was subsequently converted to virtual images and distributed on an Apricorn pocket external hard drive. Students were asked to assess the virtual microscope over a four-year period. As a result of the surveys, light microscopes have been totally eliminated, and microscope exams have been replaced with projected slide examinations. In the future, we plan to expand our virtual slides and incorporate computer testing.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos, it tackles available instruments and technologies to propose an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. Data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, joining photogrammetric accuracy with the photorealistic performance of the shaped surfaces. Lastly, a new solution for virtual navigation is tested. Within the same environment, it offers the chance to interact with the high-resolution oriented spherical panoramas and the 3D reconstructed model at once.
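The stitching (mosaicing) step of such a workflow can be prototyped with OpenCV's high-level stitcher, as sketched below. This is only an illustrative shortcut under the assumption of overlapping photos taken from a single rotation point; the paper's workflow relies on Multi-Image Spherical Photogrammetry and Structure from Motion tools, and the file names here are hypothetical.

    import cv2

    frames = [cv2.imread(p) for p in ("img_01.jpg", "img_02.jpg", "img_03.jpg")]

    stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(frames)

    if status == cv2.Stitcher_OK:
        cv2.imwrite("panorama.jpg", panorama)   # mosaiced panoramic image
    else:
        print("stitching failed with status", status)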
Repeatability and Reproducibility of Virtual Subjective Refraction.
Perches, Sara; Collados, M Victoria; Ares, Jorge
2016-10-01
To establish the repeatability and reproducibility of a virtual refraction process using simulated retinal images. With simulation software, aberrated images corresponding to each step of the refraction process were calculated following the typical protocol of conventional subjective refraction. Fifty external examiners judged simulated retinal images until the best sphero-cylindrical refraction and the best visual acuity were achieved, starting from the aberrometry data of three patients. Data analyses were performed to assess repeatability and reproducibility of the virtual refraction as a function of pupil size and the aberrometric profile of different patients. SD values achieved in the three components of refraction (M, J0, and J45) were lower than 0.25 D in the repeatability analysis. Regarding reproducibility, we found SD values lower than 0.25 D in most cases. When the results of virtual refraction with different pupil diameters (4 and 6 mm) were compared, the mean of differences (MoD) obtained was not clinically significant (less than 0.25 D). Only one of the aberrometry profiles, with high uncorrected astigmatism, showed poor results for the M component in the reproducibility and pupil size dependence analyses. In all cases, the vision achieved was better than 0 logMAR. A comparison between the compensation obtained with virtual and conventional subjective refraction was made as an example of this application, showing good quality retinal images in both processes. The present study shows that virtual refraction has similar levels of precision as conventional subjective refraction. Moreover, virtual refraction has also shown that when high low-order astigmatism is present, the refraction result is less precise and highly dependent on pupil size.
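The refraction components M, J0 and J45 referred to above are the standard power-vector form of a sphero-cylindrical prescription. A small helper, shown below as a sketch (assuming the usual Thibos conventions), converts sphere/cylinder/axis values into these components.

    import numpy as np

    def power_vector(sphere_d, cyl_d, axis_deg):
        """Thibos power-vector components of a sphero-cylindrical refraction:
        M = S + C/2, J0 = -(C/2)cos(2*axis), J45 = -(C/2)sin(2*axis)."""
        a = np.deg2rad(axis_deg)
        M = sphere_d + cyl_d / 2.0
        J0 = -(cyl_d / 2.0) * np.cos(2.0 * a)
        J45 = -(cyl_d / 2.0) * np.sin(2.0 * a)
        return M, J0, J45

    # Example: -2.00 DS / -1.00 DC x 180  ->  M = -2.50 D, J0 = +0.50 D, J45 ~ 0 D
    print(power_vector(-2.00, -1.00, 180))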
Computer Vision Assisted Virtual Reality Calibration
NASA Technical Reports Server (NTRS)
Kim, W.
1999-01-01
A computer vision assisted semi-automatic virtual reality (VR) calibration technology has been developed that can accurately match a virtual environment of graphically simulated three-dimensional (3-D) models to the video images of the real task environment.
Liu, Xiujuan; Tao, Haiquan; Xiao, Xigang; Guo, Binbin; Xu, Shangcai; Sun, Na; Li, Maotong; Xie, Li; Wu, Changjun
2018-07-01
This study aimed to compare the diagnostic performance of the stereoscopic virtual reality display system with the conventional computed tomography (CT) workstation and three-dimensional rotational angiography (3DRA) for intracranial aneurysm detection and characterization, with a focus on small aneurysms and those near the bone. First, 42 patients with suspected intracranial aneurysms underwent both 256-row CT angiography (CTA) and 3DRA. Volume rendering (VR) images were captured using the conventional CT workstation. Next, VR images were transferred to the stereoscopic virtual reality display system. Two radiologists independently assessed the results obtained using the conventional CT workstation and the stereoscopic virtual reality display system. The 3DRA results were considered the ultimate reference standard. Based on 3DRA images, 38 aneurysms were confirmed in 42 patients. Two cases were misdiagnosed and 1 was missed when the traditional CT workstation was used. The sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of the conventional CT workstation were 94.7%, 85.7%, 97.3%, 75%, and 99.3%, respectively, on a per-aneurysm basis. The stereoscopic virtual reality display system missed a case. The sensitivity, specificity, PPV, NPV, and accuracy of the stereoscopic virtual reality display system were 100%, 85.7%, 97.4%, 100%, and 97.8%, respectively. No difference was observed in the accuracy of the traditional CT workstation, the stereoscopic virtual reality display system, and 3DRA in detecting aneurysms. The stereoscopic virtual reality display system has some advantages in detecting small aneurysms and those near the bone. The virtual reality stereoscopic vision obtained through the system was found to be a useful tool in intracranial aneurysm diagnosis and pre-operative 3D imaging. Copyright © 2018 Elsevier B.V. All rights reserved.
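The per-aneurysm performance figures quoted above follow directly from a 2x2 confusion matrix; the helper below shows the standard definitions. The example counts are illustrative only and are not taken from the study.

    def diagnostic_metrics(tp, fp, tn, fn):
        """Standard diagnostic performance measures from raw counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
            "accuracy": (tp + tn) / (tp + fp + tn + fn),
        }

    print(diagnostic_metrics(tp=36, fp=1, tn=6, fn=2))   # illustrative counts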
Head Mounted Displays for Virtual Reality
1993-02-01
[Front-matter fragments from the report's list of figures: HMD optics that produce an image at infinity; the Naval Ocean Systems Center HMD with front-mounted CRTs; the VR Group HMD with side-mounted CRTs; convergence angles.] SECTION 1, INTRODUCTION: One of the goals in the development of Virtual Reality (VR) is to achieve "total immersion," in which one becomes transported out of the real world and into the virtual world. The developers of VR have utilized the head mounted display (HMD) as a means of...
Virtual-reality-based system for controlled study of cataplexy
NASA Astrophysics Data System (ADS)
Augustine, Kurt E.; Cameron, Bruce M.; Camp, Jon J.; Krahn, Lois E.; Robb, Richard A.
2002-05-01
Cataplexy is a sudden loss of voluntary muscle control experienced by narcolepsy patients. It is usually triggered by strong, spontaneous emotions and is more common in times of stress. The Sleep Disorders Unit and the Biomedical Imaging Resource at Mayo Clinic are developing interactive display technology for reliably inducing cataplexy during clinical monitoring. The project is referred to as the Cataplexy/Narcolepsy Activation Program, or CatNAP. We have developed an automobile driving simulation that introduces humorous, surprising, and stress-inducing events and objects as the patient attempts to navigate a vehicle through a virtual town. The patient wears a head-mounted display and controls the vehicle via a driving simulator steering wheel and pedal cluster. As the patient attempts to drive through the town, various objects, sounds or conditions occur that distract, startle, frustrate or amuse. These responses may trigger a cataplectic episode, which can then be clinically evaluated. We believe CatNAP is a novel and innovative example of the effective application of virtual reality technology to study an important clinical problem that has resisted previous approaches. An evaluation phase with volunteer patients previously diagnosed with cataplexy has been completed. The prototype system is being prepared for a full clinical study.
A study of factors affecting the adoption of server virtualization technology
NASA Astrophysics Data System (ADS)
Lu, Hsin-Ke; Lin, Peng-Chun; Chiang, Chang-Heng; Cho, Chien-An
2018-04-01
It has become a trend for enterprises and organizations worldwide to apply new technologies to improve their operations; moreover, building and managing traditional servers entails higher cost and less flexibility, so the current mainstream is to use server virtualization technology. However, organizations will not necessarily obtain the expected benefits from these new technologies, because each organization has its own level of complexity and capacity to accept change. The researcher investigated key factors affecting the adoption of virtualization technology through two phases. In phase I, the researcher reviewed the literature and then applied the dimensions of the Information Systems Success Model (ISSM) to generalize the factors affecting the adoption of virtualization technology into a preliminary theoretical framework and develop a questionnaire. In phase II, a three-round Delphi Method was used to integrate the opinions of experts from related fields, which were gradually converged in order to obtain a stable and objective questionnaire of key factors. The results are expected to provide references for organizations adopting server virtualization technology and for future studies.
Real-time fusion of endoscopic views with dynamic 3-D cardiac images: a phantom study.
Szpala, Stanislaw; Wierzbicki, Marcin; Guiraudon, Gerard; Peters, Terry M
2005-09-01
Minimally invasive robotically assisted cardiac surgical systems currently do not routinely employ 3-D image guidance. However, preoperative magnetic resonance and computed tomography (CT) images have the potential to be used in this role, if appropriately registered with the patient anatomy and animated synchronously with the motion of the actual heart. This paper discusses the fusion of optical images of a beating heart phantom obtained from an optically tracked endoscope, with volumetric images of the phantom created from a dynamic CT dataset. High quality preoperative dynamic CT images are created by first extracting the motion parameters of the heart from the series of temporal frames, and then applying this information to animate a high-quality heart image acquired at end systole. Temporal synchronization of the endoscopic and CT model is achieved by selecting the appropriate CT image from the dynamic set, based on an electrocardiographic trigger signal. The spatial error between the optical and virtual images is 1.4 ± 1.1 mm, while the time discrepancy is typically 50-100 ms. Index Terms: Image guidance, image warping, minimally invasive cardiac surgery, virtual endoscopy, virtual reality.
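The temporal synchronization described above amounts to mapping the elapsed time since the last ECG trigger onto one of the reconstructed cardiac phases. A minimal sketch of that mapping follows; the actual system's timing details may differ.

    def select_ct_phase(time_since_r_wave_s, rr_interval_s, n_phases):
        """Pick the reconstructed CT phase matching the current point
        in the cardiac cycle, given the last ECG R-wave trigger."""
        fraction = (time_since_r_wave_s % rr_interval_s) / rr_interval_s
        return int(fraction * n_phases) % n_phases

    # e.g. 10 reconstructed phases, RR interval of 1.0 s, 0.35 s after the R wave
    print(select_ct_phase(0.35, 1.0, 10))   # -> phase 3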
ERIC Educational Resources Information Center
Bergren, Martha Dewey
2005-01-01
Frequently, a nurse's first and only contact with a graduate school, legislator, public health official, professional organization, or school nursing colleague is made through e-mail. The format, the content, and the appearance of the e-mail create a virtual first impression. Nurses can manage their image and the image of the profession by…
The Multimission Image Processing Laboratory's virtual frame buffer interface
NASA Technical Reports Server (NTRS)
Wolfe, T.
1984-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor-supplied interfaces. This variety of architectures and interfaces creates software development, maintenance and portability problems for application programs. Several machine-independent graphics standards such as ANSI Core and GKS are available, but none of them are adequate for image processing. Therefore, the Multimission Image Processing Laboratory project has implemented a programmer-level virtual frame buffer interface. This interface makes all frame buffers appear as a generic frame buffer with a specified set of characteristics. This document defines the virtual frame buffer interface and provides information such as FORTRAN subroutine definitions, frame buffer characteristics, sample programs, etc. It is intended to be used by application programmers and system programmers who are adding new frame buffers to a system.
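The actual MIPL interface is a set of FORTRAN subroutines; purely to illustrate the idea of coding against a generic frame buffer while device-specific details live behind the interface, here is a hypothetical sketch in Python.

    from abc import ABC, abstractmethod

    class VirtualFrameBuffer(ABC):
        """Generic frame-buffer interface: applications call these methods,
        and each concrete subclass wraps one vendor's device."""

        @abstractmethod
        def size(self) -> tuple[int, int]:
            """Return (width, height) of the generic frame buffer."""

        @abstractmethod
        def write_region(self, x: int, y: int, pixels) -> None:
            """Write a rectangular block of pixel values at (x, y)."""

        @abstractmethod
        def read_region(self, x: int, y: int, w: int, h: int):
            """Read back a rectangular block of pixels."""

    class InMemoryFrameBuffer(VirtualFrameBuffer):
        """Trivial concrete 'device' useful for testing application code."""
        def __init__(self, w, h):
            self._pixels = [[0] * w for _ in range(h)]
        def size(self):
            return len(self._pixels[0]), len(self._pixels)
        def write_region(self, x, y, pixels):
            for dy, row in enumerate(pixels):
                self._pixels[y + dy][x:x + len(row)] = row
        def read_region(self, x, y, w, h):
            return [row[x:x + w] for row in self._pixels[y:y + h]]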
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org. PMID:26401099
BioImg.org: A Catalog of Virtual Machine Images for the Life Sciences.
Dahlö, Martin; Haziza, Frédéric; Kallio, Aleksi; Korpelainen, Eija; Bongcam-Rudloff, Erik; Spjuth, Ola
2015-01-01
Virtualization is becoming increasingly important in bioscience, enabling assembly and provisioning of complete computer setups, including operating system, data, software, and services packaged as virtual machine images (VMIs). We present an open catalog of VMIs for the life sciences, where scientists can share information about images and optionally upload them to a server equipped with a large file system and fast Internet connection. Other scientists can then search for and download images that can be run on the local computer or in a cloud computing environment, providing easy access to bioinformatics environments. We also describe applications where VMIs aid life science research, including distributing tools and data, supporting reproducible analysis, and facilitating education. BioImg.org is freely available at: https://bioimg.org.
Bowman, Ellen Lambert; Liu, Lei
2017-01-01
Virtual reality has great potential for training road safety skills in individuals with low vision, but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use the pedestrian signals, were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory signals made by the start of previously stopped cars at a traffic-light-controlled street intersection. Four participants were trained in real streets and eight in virtual streets presented on 3 projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say "GO" at the time when they felt it was safest to cross the street. A safety score was derived to quantify the GO calls based on their occurrence in the pedestrian phase (when the pedestrian sign did not show DON'T WALK). Before training, more than 50% of the GO calls from all participants fell in the DON'T WALK phase of the traffic cycle and thus were totally unsafe. 20% of the GO calls fell in the latter half of the pedestrian phase; these calls were unsafe because a pedestrian initiating crossing this late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase. These calls were safer because crossing was initiated within the pedestrian phase, with at least half of the phase remaining for walking across. Similar safety changes occurred in both virtual street and real street trained participants. An ANOVA showed a significant increase in the safety scores after training, and there was no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training can be as efficient as real street training in improving street safety in individuals with severely impaired vision.
NASA Technical Reports Server (NTRS)
1995-01-01
The 1100C Virtual Window is based on technology developed under NASA Small Business Innovation Research (SBIR) contracts to Ames Research Center. For example, under one contract, Dimension Technologies, Inc. developed a large autostereoscopic display for scientific visualization applications. The Virtual Window employs an innovative illumination system to deliver the depth and color of true 3D imaging. Its applications include surgery and Magnetic Resonance Imaging scans, viewing for teleoperated robots, training, and aviation cockpit displays.
Ultrasonic imaging of material flaws exploiting multipath information
NASA Astrophysics Data System (ADS)
Shen, Xizhong; Zhang, Yimin D.; Demirli, Ramazan; Amin, Moeness G.
2011-05-01
In this paper, we consider ultrasonic imaging for the visualization of flaws in a material. Ultrasonic imaging is a powerful nondestructive testing (NDT) tool which assesses material conditions via the detection, localization, and classification of flaws inside a structure. Multipath exploitations provide extended virtual array apertures and, in turn, enhance imaging capability beyond the limitation of traditional multisensor approaches. We utilize reflections of ultrasonic signals which occur when encountering different media and interior discontinuities. The waveforms observed at the physical as well as virtual sensors yield additional measurements corresponding to different aspect angles. Exploitation of multipath information addresses unique issues observed in ultrasonic imaging. (1) Utilization of physical and virtual sensors significantly extends the array aperture for image enhancement. (2) Multipath signals extend the angle of view of the narrow beamwidth of the ultrasound transducers, allowing improved visibility and array design flexibility. (3) Ultrasonic signals experience difficulty in penetrating a flaw, thus the aspect angle of the observation is limited unless access to other sides is available. The significant extension of the aperture makes it possible to yield flaw observation from multiple aspect angles. We show that data fusion of physical and virtual sensor data significantly improves the detection and localization performance. The effectiveness of the proposed multipath exploitation approach is demonstrated through experimental studies.
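A simplified stand-in for the imaging step is sketched below: delay-and-sum over receiver traces, where multipath exploitation can be emulated by appending mirror-image ('virtual') receiver positions about a known reflecting boundary to the receiver list. Geometry, sampling and waveform handling are assumptions, not the authors' algorithm.

    import numpy as np

    def das_image(rf, t0, fs, c, tx_pos, rx_pos, grid_x, grid_z):
        """Delay-and-sum image from per-receiver RF traces.
        rf     : (n_rx, n_samples) received waveforms
        tx_pos : (2,) transmitter position
        rx_pos : (n_rx, 2) receiver positions; mirror images of the physical
                 sensors about a reflector can be appended as extra rows."""
        img = np.zeros((len(grid_z), len(grid_x)))
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                p = np.array([x, z])
                t = (np.linalg.norm(p - tx_pos) +
                     np.linalg.norm(rx_pos - p, axis=1)) / c   # round-trip delays
                idx = np.round((t - t0) * fs).astype(int)
                ok = (idx >= 0) & (idx < rf.shape[1])
                img[iz, ix] = np.abs(rf[ok, idx[ok]].sum())
        return img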
Haji-Momenian, S; Parkinson, W; Khati, N; Brindle, K; Earls, J; Zeman, R K
2018-06-01
To determine the sensitivity, specificity, and predictive values of single-energy non-contrast hepatic steatosis criteria on dual-energy virtual non-contrast (VNC) images. Forty-eight computed tomography (CT) examinations, which included single-energy non-contrast (TNC) and contrast-enhanced dual-energy CT angiography (CTA) of the abdomen, were included. VNC images were reconstructed from the CTA. Region of interest (ROI) attenuations were measured in the right and left hepatic lobes, spleen, and aorta on TNC and VNC images. The right and left hepatic lobes were treated as separate samples. Steatosis was diagnosed based on TNC liver attenuation of ≤40 HU or a liver attenuation index (LAI) of ≤-10 HU, which are extremely specific and predictive for moderate to severe steatosis. The sensitivity, specificity, and predictive values of VNC images for steatosis were calculated. VNC-TNC deviations were correlated with aortic enhancement and patient water equivalent diameter (PWED). Thirty-two liver ROIs met steatosis criteria based on TNC attenuation; VNC attenuation had a sensitivity, specificity, and positive predictive value of 66.7%, 100%, and 100%, respectively. Twenty-one liver ROIs met steatosis criteria based on TNC LAI. VNC LAI had sensitivity, specificity, and positive predictive values of 61.9%, 90.7%, and 65%, respectively. Hepatic and splenic VNC-TNC deviations did not correlate with one another (R² = 0.08), aortic enhancement (R² < 0.06) or PWED (R² < 0.09). Non-contrast hepatic attenuation criteria are extremely specific and positively predictive for moderate to severe steatosis on VNC reconstructions from the arterial phase. Hepatic attenuation performs better than the LAI criteria. VNC deviations are independent of aortic enhancement and PWED. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
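The decision rule stated in the abstract can be written down directly; the helper below applies it, with the liver attenuation index taken as liver minus spleen attenuation (the usual definition, assumed here since the abstract does not spell it out).

    def meets_steatosis_criteria(liver_hu, spleen_hu):
        """Single-energy non-contrast criteria used in the study:
        liver attenuation <= 40 HU, or LAI = liver HU - spleen HU <= -10 HU."""
        lai = liver_hu - spleen_hu
        return liver_hu <= 40.0 or lai <= -10.0

    print(meets_steatosis_criteria(liver_hu=38.0, spleen_hu=52.0))   # True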
Dual-polarity plasmonic metalens for visible light
NASA Astrophysics Data System (ADS)
Chen, Xianzhong; Huang, Lingling; Mühlenbernd, Holger; Li, Guixin; Bai, Benfeng; Tan, Qiaofeng; Jin, Guofan; Qiu, Cheng-Wei; Zhang, Shuang; Zentgraf, Thomas
2012-11-01
Surface topography and refractive index profile dictate the deterministic functionality of a lens. The polarity of most lenses reported so far, that is, either positive (convex) or negative (concave), depends on the curvatures of the interfaces. Here we experimentally demonstrate a counter-intuitive dual-polarity flat lens based on helicity-dependent phase discontinuities for circularly polarized light. Specifically, by controlling the helicity of the input light, the positive and negative polarity are interchangeable in one identical flat lens. Helicity-controllable real and virtual focal planes, as well as magnified and demagnified imaging, are observed on the same plasmonic lens at visible and near-infrared wavelengths. The plasmonic metalens with dual polarity may empower advanced research and applications in helicity-dependent focusing and imaging devices, angular-momentum-based quantum information processing and integrated nano-optoelectronics.
Human-machine interface for a VR-based medical imaging environment
NASA Astrophysics Data System (ADS)
Krapichler, Christian; Haubner, Michael; Loesch, Andreas; Lang, Manfred K.; Englmeier, Karl-Hans
1997-05-01
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and to analyze those tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, and diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficacious to avoid long adaptation times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow utilization of our natural experiences. In this paper the integration of hand gestures and visual focus, two important aspects in modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality display and interaction techniques, radiologists can be supported in their work. Further, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.
Koizumi, Yohei; Hirooka, Masashi; Ochi, Hironori; Tokumoto, Yoshio; Takechi, Megumi; Hiraoka, Atsushi; Ikeda, Yoshio; Kumagi, Teru; Matsuura, Bunzo; Abe, Masanori; Hiasa, Yoichi
2015-04-01
This study aimed at prospectively evaluating bile duct anatomy on ultrasonography and evaluating the safety and utility of radiofrequency ablation (RFA) assisted by virtual ultrasonography from gadolinium ethoxybenzyl diethylenetriamine pentaacetic acid (Gd-EOB-DTPA)-enhanced magnetic resonance imaging (MRI). The institutional review board approved this study, and patients provided written informed consent prior to entry into the study. Bile duct anatomy was assessed in 201 patients who underwent Gd-EOB-DTPA-enhanced MRI for the evaluation of hepatic tumor. Eighty-one of these patients subsequently underwent RFA assisted by ultrasound imaging. In 23 patients, the tumor was located within 5 mm of the central bile duct, as demonstrated by MRI. Virtual ultrasonography constructed by Gd-EOB-enhanced MRI was able to visualize the common bile duct, left hepatic duct, and right hepatic duct in 96.5, 94.0, and 89.6 % of cases, respectively. The target hepatic tumor nodule and biliary duct could be detected with virtual ultrasonography in all patients, and no severe complications occurred. The running pattern of the bile ducts could be recognized on conventional ultrasound by referencing virtual ultrasonography constructed by Gd-EOB-DTPA-enhanced MRI. RFA assisted by this imaging strategy did not result in bile duct injury.
Lohöfer, Fabian K; Kaissis, Georgios A; Köster, Frances L; Ziegelmayer, Sebastian; Einspieler, Ingo; Gerngross, Carlos; Rasper, Michael; Noel, Peter B; Koerdt, Steffen; Fichter, Andreas; Rummeny, Ernst J; Braren, Rickmer F
2018-05-28
The aim of this study was to evaluate the advantages of dual-layer spectral CT (DLSCT) in the detection and staging of head and neck cancer (HNC), as well as the imaging of tumour margins and infiltration depth, compared with conventional contrast-enhanced CT (CECT). Thirty-nine patients with a proven diagnosis of HNC were examined with a DLSCT scanner and retrospectively analysed. An age-matched healthy control group of the same size was used. Images were acquired in the venous phase. Virtual monoenergetic 40 keV-equivalent (MonoE40) images were compared to CECT images. Diagnostic confidence for tumour identification and margin detection was rated independently by four experienced observers. The steepness of the Hounsfield unit (HU) increase at the tumour margin was analysed. External carotid artery branch image reconstructions were performed and their contrast compared to conventional arterial phase imaging. Means were compared using a Student's t-test. ANOVA was used for multiple comparisons. MonoE40 images were superior to CECT images in tumour detection and margin delineation. MonoE40 showed significantly higher attenuation differences between tumour and healthy tissue compared to CECT images (p < 0.001). The HU increase at the boundary of the tumour was significantly steeper in MonoE40 images compared to CECT images (p < 0.001). Iodine uptake in the tumour was significantly higher compared to healthy tissue (p < 0.001). MonoE40 compared to conventional images allowed visualisation of external carotid artery branches from the venous phase in a higher number of cases (87% vs. 67%). DLSCT enables improved detection of primary and recurrent head and neck cancer and quantification of tumour iodine uptake. The improved contrast of MonoE40 compared to conventional reconstructions enables higher diagnostic confidence concerning tumour margin detection and vessel identification. • Sensitivity concerning tumour detection is higher using dual-layer spectral CT than conventional CT. • Lesion-to-background contrast in DLSCT is significantly higher than in CECT. • DLSCT provides sufficient contrast for evaluation of external carotid artery branches.
Moschetta, Marco; Telegrafo, Michele; Capuano, Giulia; Rella, Leonarda; Scardapane, Arnaldo; Angelelli, Giuseppe; Stabile Ianora, Amato Antonio
2013-10-01
To assess the contribution of intra-prosthetic MRI virtual navigation for evaluating breast implants and detecting implant ruptures. Forty-five breast implants were evaluated by MR examination. Only patients with a clinical indication were assessed. A 1.5-T device equipped with a 4-channel breast coil was used, acquiring axial TSE-T2, axial silicone-only, axial silicone-suppression and sagittal STIR images. The obtained DICOM files were also analyzed using virtual navigation software. Two blinded radiologists evaluated all MR and virtual images. Eight patients, for a total of 13 implants, underwent surgical replacement. Sensitivity, specificity, accuracy, positive predictive value (PPV) and negative predictive value (NPV) were calculated for both imaging strategies. Intra-capsular rupture was diagnosed in 13 out of 45 (29%) implants using MRI. Based on virtual navigation, 9 (20%) cases of intra-capsular rupture were diagnosed. Sensitivity, specificity, accuracy, PPV and NPV values of 100%, 86%, 89%, 62% and 100%, respectively, were found for MRI. Virtual navigation increased these values to 100%, 97%, 98%, 89% and 100%, respectively. Intra-prosthetic breast MR virtual navigation can represent an additional promising tool for the evaluation of breast implants, being able to reduce false positives and to provide a more accurate detection of intra-capsular implant rupture signs. Copyright © 2013 Elsevier Inc. All rights reserved.
Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.
Beyl, T; Nicolai, P; Mönnich, H; Raczkowksy, J; Wörn, H
2012-01-01
In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only consider the mapping of the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated in our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore enables enriching the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest and helps to prevent the risk of damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.
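A forbidden-region virtual fixture of the kind described can be reduced, in its simplest form, to a spring force that pushes the instrument tip back out of a segmented structure and is added to the measured instrument forces before they reach the haptic device. The sketch below is a simplified spring model under assumed units and stiffness, not the OP:Sense haptic renderer.

    import numpy as np

    def fixture_force(tip, surface_point, surface_normal, stiffness=300.0):
        """Repulsive force of a planar forbidden-region virtual fixture.
        Returns a zero vector while the tip is outside the virtual wall."""
        n = surface_normal / np.linalg.norm(surface_normal)
        penetration = np.dot(surface_point - tip, n)   # > 0 when tip is inside
        if penetration <= 0.0:
            return np.zeros(3)
        return stiffness * penetration * n             # N, pointing back out

    # Tip 2 mm inside a wall whose outward normal is +z -> force of 0.6 N along +z
    print(fixture_force(np.array([0.0, 0.0, -0.002]),
                        np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0])))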
Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H
2015-10-01
Three-dimensional (3D) virtual planning of reconstructive surgery after resection is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, planning the oncologic resection margins is difficult due to the limited visualisation of tumours in current 3D planning. Embedding tumour delineation on a magnetic resonance image, similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding tumour delineation on magnetic resonance imaging (MRI) within the 3D virtual surgical planning. The software pathway was validated using five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm. Tumour volumes and localization were determined in both software stages for comparison analysis. The approach was applied to three clinical cases. A conversion algorithm was developed to translate the tumour delineation data to the 3D virtual plan environment. The average difference in tumour volume was 1.7%. This study reports a validated software pathway providing multi-modality image fusion for 3D virtual surgical planning. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Krueger, Evan; Messier, Erik; Linte, Cristian A.; Diaz, Gabriel
2017-03-01
Recent advances in medical image acquisition allow for the reconstruction of anatomies with 3D, 4D, and 5D renderings. Nevertheless, standard anatomical and medical data visualization still relies heavily on the use of traditional 2D didactic tools (i.e., textbooks and slides), which restrict the presentation of image data to a 2D slice format. While these approaches have their merits beyond being cost effective and easy to disseminate, anatomy is inherently three-dimensional. By using 2D visualizations to illustrate more complex morphologies, important interactions between structures can be missed. In practice, such as in the planning and execution of surgical interventions, professionals require intricate knowledge of anatomical complexities, which can be more clearly communicated and understood through intuitive interaction with 3D volumetric datasets, such as those extracted from high-resolution CT or MRI scans. Open source, high quality, 3D medical imaging datasets are freely available, and with the emerging popularity of 3D display technologies, affordable and consistent 3D anatomical visualizations can be created. In this study we describe the design, implementation, and evaluation of one such interactive, stereoscopic visualization paradigm for human anatomy extracted from 3D medical images. A stereoscopic display was created by projecting the scene onto the lab floor using sequential-frame stereo projection, viewed through active shutter glasses. By incorporating a PhaseSpace motion tracking system, a single viewer can navigate an augmented reality environment and directly manipulate virtual objects in 3D. While this paradigm is sufficiently versatile to enable a wide variety of applications in need of 3D visualization, we designed our study to work as an interactive game that allows users to explore the anatomy of various organs and systems. The system presents medical imaging data in three dimensions and allows direct physical interaction and manipulation by the viewer, which should provide numerous benefits over traditional 2D display and interaction modalities. In our analysis, we aim to quantify and qualify users' visual and motor interactions with the virtual environment when employing this interactive display as a 3D didactic tool.
SU-F-303-12: Implementation of MR-Only Simulation for Brain Cancer: A Virtual Clinical Trial
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glide-Hurst, C; Zheng, W; Kim, J
2015-06-15
Purpose: To perform a retrospective virtual clinical trial using an MR-only workflow for a variety of brain cancer cases by incorporating novel imaging sequences, tissue segmentation using phase images, and an innovative synthetic CT (synCT) solution. Methods: Ten patients (16 lesions) were evaluated using a 1.0T MR-SIM including UTE-DIXON imaging (TE = 0.144/3.4/6.9ms). Bone-enhanced images were generated from DIXON-water/fat and inverted UTE. Automated air segmentation was performed using unwrapped UTE phase maps. Segmentation accuracy was assessed by calculating intersection and Dice similarity coefficients (DSC) using CT-SIM as ground truth. SynCTs were generated using voxel-based weighted summation incorporating T2, FLAIR, UTE1, and bone-enhanced images. Mean absolute error (MAE) characterized HU differences between synCT and CT-SIM. Dose was recalculated on synCTs; differences were quantified using planar gamma analysis (2%/2 mm dose difference/distance to agreement) at isocenter. Digitally reconstructed radiographs (DRRs) were compared. Results: On average, air maps intersected 80.8 ±5.5% (range: 71.8–88.8%) between MR-SIM and CT-SIM yielding DSCs of 0.78 ± 0.04 (range: 0.70–0.83). Whole-brain MAE between synCT and CT-SIM was 160.7±8.8 HU, with the largest uncertainty arising from bone (MAE = 423.3±33.2 HU). Gamma analysis revealed pass rates of 99.4 ± 0.04% between synCT and CT-SIM for the cohort. Dose volume histogram analysis revealed that synCT tended to yield slightly higher doses. Organs at risk such as the chiasm and optic nerves were most sensitive due to their proximities to air/bone interfaces. DRRs generated via synCT and CT-SIM were within clinical tolerances. Conclusion: Our approach for MR-only simulation for brain cancer treatment planning yielded clinically acceptable results relative to the CT-based benchmark. While slight dose differences were observed, reoptimization of treatment plans and improved image registration can address this limitation. Future work will incorporate automated registration between setup images (cone-beam CT and kilovoltage images) for synCT and CT-SIM. Submitting institution holds research agreements with Philips HealthCare, Best, Netherlands and Varian Medical Systems, Palo Alto, CA. Research partially sponsored via an Internal Mentored Research Grant.
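The two quantitative steps named in the abstract, voxel-based weighted summation into a synCT and the mean absolute error against CT-SIM, can be sketched as below. The weights are placeholders, not the values used in the study.

    import numpy as np

    def synthetic_ct(t2, flair, ute1, bone_enhanced, weights=(0.2, 0.2, 0.3, 0.3)):
        """Voxel-wise weighted summation of MR-derived images into a synthetic CT.
        The weights here are hypothetical placeholders."""
        w = np.asarray(weights, dtype=np.float64)
        stack = np.stack([t2, flair, ute1, bone_enhanced]).astype(np.float64)
        return np.tensordot(w, stack, axes=1)

    def mean_absolute_error(syn_ct, ct_sim, mask=None):
        """MAE in HU between synCT and CT-SIM, optionally within a mask
        (e.g. whole brain or bone only)."""
        diff = np.abs(syn_ct - ct_sim)
        return diff[mask].mean() if mask is not None else diff.mean()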
Analysis towards VMEM File of a Suspended Virtual Machine
NASA Astrophysics Data System (ADS)
Song, Zheng; Jin, Bo; Sun, Yongqing
With the popularity of virtual machines, forensic investigators are challenged with more complicated situations, among which discovering evidence in virtualized environments is of significant importance. This paper mainly analyzes the file suffixed with .vmem in VMware Workstation, which stores all of the guest's pseudo-physical memory as an image. The internal structure of the .vmem file is studied and disclosed. Key information about processes and threads of a suspended virtual machine is revealed. Further investigation into the Windows XP SP3 heap contents is conducted and a proof-of-concept tool is provided. Different methods to obtain forensic memory images are introduced, with their advantages and limitations analyzed. We conclude with an outlook.
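Because a .vmem file is essentially a raw image of the guest's memory, even a simple byte-pattern scan can surface artifacts such as process image names. The sketch below uses Python's mmap for that first-pass scan; the file name and search string are hypothetical, and mapping file offsets to guest-physical addresses is left to a real forensic tool.

    import mmap

    def find_pattern(vmem_path, pattern: bytes, limit=10):
        """Return file offsets of the first occurrences of a byte pattern
        in a raw memory image (e.g. a VMware .vmem file)."""
        hits = []
        with open(vmem_path, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mem:
                pos = mem.find(pattern)
                while pos != -1 and len(hits) < limit:
                    hits.append(pos)
                    pos = mem.find(pattern, pos + 1)
        return hits

    print(find_pattern("suspended_vm.vmem", b"notepad.exe"))   # hypothetical inputs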
ERIC Educational Resources Information Center
Gunn, Therese; Jones, Lee; Bridge, Pete; Rowntree, Pam; Nissen, Lisa
2018-01-01
In recent years, simulation has increasingly underpinned the acquisition of pre-clinical skills by undergraduate medical imaging (diagnostic radiography) students. This project aimed to evaluate the impact of an innovative virtual reality (VR) learning environment on the development of technical proficiency by students. The study assessed the…
Shinbane, Jerold S; Saxon, Leslie A
Advances in imaging technology have led to a paradigm shift from planning of cardiovascular procedures and surgeries requiring the actual patient in a "brick and mortar" hospital to utilization of the digitalized patient in the virtual hospital. The digitalized 3-D patient representation of individual anatomy and physiology from cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) serves as an avatar, allowing virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology, previously only accessible during the actual procedure, could potentially limit the intrinsic risks related to time in the operating room, cardiac procedural laboratory and overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of the technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of the predicted therapeutic result, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for the creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system. Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
Clinical applications of virtual navigation bronchial intervention.
Kajiwara, Naohiro; Maehara, Sachio; Maeda, Junichi; Hagiwara, Masaru; Okano, Tetsuya; Kakihana, Masatoshi; Ohira, Tatsuo; Kawate, Norihiko; Ikeda, Norihiko
2018-01-01
In patients with bronchial tumors, we frequently consider endoscopic treatment as the first treatment of choice. All computed tomography (CT) data must satisfy several conditions necessary for image analysis with Synapse Vincent. To select safer and more precise approaches for patients with bronchial tumors, we determined the indications and efficacy of virtual navigation intervention for the treatment of bronchial tumors. We examined the efficacy of virtual navigation bronchial intervention for the treatment of bronchial tumors located at a variety of sites in the tracheobronchial tree using a high-speed three-dimensional (3D) image analysis system, Synapse Vincent. The constructed images can be used to decide on the simulation and interventional strategy, as well as for navigation during interventional manipulation, as illustrated in two cases. Synapse Vincent was used to determine the optimal planning of virtual navigation bronchial intervention. Moreover, this system can detect tumor location and also depict surrounding tissues quickly, accurately, and safely. The feasibility and safety of Synapse Vincent in performing useful preoperative simulation and navigation of surgical procedures can lead to safer, more precise, and less invasive procedures for the patient, and an image can easily be constructed, depending on the purpose, in 5-10 minutes using Synapse Vincent. Moreover, if the lesion is in the parenchyma or sub-bronchial lumen, the system helps to perform simulation with virtual skeletal subtraction to estimate potential lesion movement. By using the virtual navigation system for simulation, bronchial intervention was performed safely and precisely, with no complications. Preoperative simulation using virtual navigation bronchial intervention reduces the surgeon's stress levels, particularly when highly skilled techniques are needed to operate on lesions. This task, including both preoperative simulation and intraoperative navigation, leads to greater safety and precision. These technological instruments are helpful for bronchial intervention procedures, and are also excellent devices for educational training.
NASA Astrophysics Data System (ADS)
Yang, Y.
2013-12-01
Since the emergence of ambient noise tomography in 2005, it has become a well-established method and has been applied worldwide to image crustal and uppermost mantle structures because of its exclusive capability to extract short period surface waves. Most studies of ambient noise tomography performed so far use surface waves at periods shorter than 40-50 sec. There are a few studies of long period surface wave tomography from ambient noise (longer than 50 sec) at continental and global scales. To our knowledge, almost no tomography studies have been performed using long period surface waves (~50-200 sec) from ambient noise at regional scales with an aperture of several hundred kilometres. In this study, we demonstrate the capability of using long period surface waves from ambient noise in regional surface wave tomography by showing a case study of the western USA using the USArray Transportable Array (TA). We select about 150 TA stations located in a region including northern California, northern Nevada and Oregon as the 'base' stations and about 200 stations from the Global Seismographic Network (GSN) and the International Federation of Digital Seismograph Networks (FDSN) as the 'remote' stations. We perform monthly cross-correlations of continuous ambient noise data recorded in 2006-2008 between the 'base' stations and the 'remote' stations and then use a stacking method based on instantaneous phase coherence to stack the monthly cross-correlations into the final cross-correlations. The results show that high signal-to-noise ratio long period Rayleigh waves are obtained between the 'base' stations and 'remote' stations located several thousand or even more than ten thousand kilometres away from the 'base' stations. By treating each of the 'remote' stations as a 'virtual' teleseismic earthquake and measuring surface wave phases at the 'base' stations, we generate phase velocity maps at 50-200 sec periods in the region covered by the 'base' stations using an array-based two-plane-wave tomography method. To evaluate the reliability of the resulting phase velocity maps, we compare them with published phase velocity maps obtained using the same tomography method but based on teleseismic data. The comparison shows that long period surface wave phase velocity maps based on 'virtual' events from ambient noise and those based on natural earthquakes are very similar, with differences within the range of uncertainties. The similarity of the phase velocity maps justifies the application of long period surface waves from ambient noise in regional lithosphere imaging. The successful extraction of long period surface waves between station pairs with distances as long as several thousand or ten thousand kilometres can link seismic arrays located in different continents, such as CEArray in China and USArray in the USA. With the rapid development of large scale seismic arrays in different continents, these inter-continental surface waves from ambient noise can be incorporated in both regional- and global-scale surface wave tomography to significantly increase the path coverage in both the lateral and azimuthal senses, which is essential to improving the imaging of high resolution heterogeneities and azimuthal anisotropy, especially in regions with gaps in the azimuthal distribution of earthquakes.
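As an illustration of the stacking step described above, the sketch below (a minimal example, not the authors' code; it assumes monthly cross-correlation functions stored as rows of a NumPy array) weights a linear stack by the instantaneous phase coherence computed from the analytic signal:

```python
# Minimal sketch: phase-coherence-weighted stacking of monthly noise
# cross-correlations. Assumes `monthly_ccfs` has shape (n_months, n_lags).
import numpy as np
from scipy.signal import hilbert

def phase_coherence_stack(monthly_ccfs, nu=2.0):
    """Stack cross-correlations, down-weighting lags where monthly phases disagree."""
    analytic = hilbert(monthly_ccfs, axis=1)            # analytic signal per month
    phase_factors = analytic / (np.abs(analytic) + 1e-12)
    coherence = np.abs(phase_factors.mean(axis=0))      # ~1 where phases align
    linear_stack = monthly_ccfs.mean(axis=0)
    return linear_stack * coherence**nu                  # phase-weighted stack

# Synthetic example: a common long-period arrival buried in random noise.
rng = np.random.default_rng(0)
t = np.linspace(-500, 500, 2001)
arrival = np.exp(-((np.abs(t) - 300) / 20.0) ** 2)
monthly = arrival + 0.5 * rng.standard_normal((24, t.size))
stacked = phase_coherence_stack(monthly)
```

Lags where the monthly phases agree keep their amplitude while incoherent noise is suppressed, which is the property exploited to recover weak long-period arrivals.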
Liu, Kaijun; Fang, Binji; Wu, Yi; Li, Ying; Jin, Jun; Tan, Liwen; Zhang, Shaoxiang
2013-09-01
Anatomical knowledge of the larynx region is critical for understanding laryngeal disease and performing required interventions. Virtual reality is a useful method for surgical education and simulation. Here, we assembled segmented cross-section slices of the larynx region from the Chinese Visible Human dataset. The laryngeal structures were precisely segmented manually as 2D images, then reconstructed and displayed as 3D images in the virtual reality Dextrobeam system. Using visualization of and interaction with the virtual reality modeling language model, a digital laryngeal anatomy teaching module was constructed using HTML and JavaScript. The volumetric larynx model can thus display an arbitrary section of the model and provide a virtual dissection function. This networked teaching system for digital laryngeal anatomy can be read remotely, displayed locally, and manipulated interactively.
Uchida, Masafumi
2014-04-01
A few years ago it could take several hours to complete a 3D image using a 3D workstation. Thanks to advances in computer science, obtaining results of interest now requires only a few minutes. Many recent 3D workstations or multimedia computers are equipped with onboard 3D virtual patient modeling software, which enables patient-specific preoperative assessment and virtual planning, navigation, and tool positioning. Although medical 3D imaging can now be conducted using various modalities, including computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and ultrasonography (US) among others, the highest quality images are obtained using CT data, and CT images are now the most commonly used source of data for 3D simulation and navigation imaging. If the 2D source images are poor, no amount of 3D image manipulation in software will provide a quality 3D image. In this exhibit, recent advances in CT imaging technique and 3D visualization of hepatobiliary and pancreatic abnormalities are featured, including scan and image reconstruction techniques, contrast-enhanced techniques, new applications of advanced CT scan techniques, and new virtual reality simulation and navigation imaging. © 2014 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Virtual Raters for Reproducible and Objective Assessments in Radiology
NASA Astrophysics Data System (ADS)
Kleesiek, Jens; Petersen, Jens; Döring, Markus; Maier-Hein, Klaus; Köthe, Ullrich; Wick, Wolfgang; Hamprecht, Fred A.; Bendszus, Martin; Biller, Armin
2016-04-01
Volumetric measurements in radiologic images are important for monitoring tumor growth and treatment response. To make these more reproducible and objective we introduce the concept of virtual raters (VRs). A virtual rater is obtained by combining the knowledge of machine-learning algorithms trained with past annotations of multiple human raters with the instantaneous rating of one human expert. Thus, the single human rater is virtually guided by several experts. To evaluate the approach we perform experiments with multi-channel magnetic resonance imaging (MRI) data sets. In addition to gross tumor volume (GTV) we also investigate subcategories such as edema, contrast-enhancing and non-enhancing tumor. The first data set consists of N = 71 longitudinal follow-up scans of 15 patients suffering from glioblastoma (GB). The second data set comprises N = 30 scans of low- and high-grade gliomas. For comparison we computed the Pearson Correlation, the Intra-class Correlation Coefficient (ICC) and the Dice score. Virtual raters always lead to an improvement with respect to inter- and intra-rater agreement. Comparing the 2D Response Assessment in Neuro-Oncology (RANO) measurements to the volumetric measurements of the virtual raters results in a deviating rating in one-third of the cases. Hence, we believe that our approach will have an impact on the evaluation of clinical studies as well as on routine imaging diagnostics.
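A minimal sketch of the Dice overlap used as one of the agreement measures above (the masks and their values here are illustrative toys, not data from the study):

```python
# Minimal sketch: Dice coefficient between two binary segmentation masks.
import numpy as np

def dice_score(mask_a, mask_b):
    """Dice overlap of two binary masks (1.0 = identical, 0.0 = disjoint)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example: two toy tumour masks on a 4x4 grid.
human = np.zeros((4, 4), bool); human[1:3, 1:3] = True
virtual_rater = np.zeros((4, 4), bool); virtual_rater[1:3, 1:4] = True
print(dice_score(human, virtual_rater))   # 2*4 / (4+6) = 0.8
```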
Lee, Whal; Kim, Ho Sung; Kim, Seok Jung; Kim, Hyung Ho; Chung, Jin Wook; Kang, Heung Sik; Choi, Ja-Young
2004-01-01
Objective: To determine the diagnostic accuracy of CT arthrography and virtual arthroscopy in the diagnosis of anterior cruciate ligament and meniscus pathology. Materials and Methods: Thirty-eight consecutive patients who underwent CT arthrography and arthroscopy of the knee were included in this study. The ages of the patients ranged from 19 to 52 years and all of the patients were male. Sagittal, coronal, transverse and oblique coronal multiplanar reconstruction (MPR) images were reformatted from CT arthrography. Virtual arthroscopy was performed from 6 standard views using a volume rendering technique. Three radiologists analyzed the MPR images and two orthopedic surgeons analyzed the virtual arthroscopic images. Results: The sensitivity and specificity of CT arthrography for the diagnosis of anterior cruciate ligament abnormalities were 87.5%-100% and 93.3%-96.7%, respectively, and those for meniscus abnormalities were 91.7%-100% and 98.1%, respectively. The sensitivity and specificity of virtual arthroscopy for the diagnosis of anterior cruciate ligament abnormalities were 87.5% and 83.3%-90%, respectively, and those for meniscus abnormalities were 83.3%-87.5% and 96.1%-98.1%, respectively. Conclusion: CT arthrography and virtual arthroscopy showed good diagnostic accuracy for anterior cruciate ligament and meniscal abnormalities. PMID:15064559
Direct Visuo-Haptic 4D Volume Rendering Using Respiratory Motion Models.
Fortmeier, Dirk; Wilms, Matthias; Mastmeyer, Andre; Handels, Heinz
2015-01-01
This article presents methods for direct visuo-haptic 4D volume rendering of virtual patient models under respiratory motion. Breathing models are computed based on patient-specific 4D CT image data sequences. Virtual patient models are visualized in real-time by ray casting based rendering of a reference CT image warped by a time-variant displacement field, which is computed using the motion models at run-time. Furthermore, haptic interaction with the animated virtual patient models is provided by using the displacements computed at high rendering rates to translate the position of the haptic device into the space of the reference CT image. This concept is applied to virtual palpation and the haptic simulation of insertion of a virtual bendable needle. To this aim, different motion models that are applicable in real-time are presented and the methods are integrated into a needle puncture training simulation framework, which can be used for simulated biopsy or vessel puncture in the liver. To confirm real-time applicability, a performance analysis of the resulting framework is given. It is shown that the presented methods achieve mean update rates around 2,000 Hz for haptic simulation and interactive frame rates for volume rendering and thus are well suited for visuo-haptic rendering of virtual patients under respiratory motion.
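The warping of the reference CT by a time-variant displacement field can be sketched as follows (a minimal example assuming a NumPy volume and a displacement field given in voxel units; it is not the authors' implementation):

```python
# Minimal sketch: warp a reference CT volume by a displacement field, the
# operation used above to animate a breathing virtual patient.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_volume(reference_ct, displacement):
    """Warp a 3-D volume; displacement has shape (3, nz, ny, nx) in voxels."""
    grid = np.indices(reference_ct.shape).astype(np.float64)
    sample_coords = grid + displacement        # where each output voxel samples the reference
    return map_coordinates(reference_ct, sample_coords, order=1, mode='nearest')

# Example: shift a toy volume by one voxel along z at an "inhale" time point.
ct = np.random.rand(8, 8, 8)
disp = np.zeros((3, 8, 8, 8)); disp[0] = 1.0
inhale = warp_volume(ct, disp)
```

The same displacement field, evaluated at the haptic device position, is what lets the interaction be mapped back into the space of the static reference image.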
Renaud, Patrice; Joyal, Christian; Stoleru, Serge; Goyette, Mathieu; Weiskopf, Nikolaus; Birbaumer, Niels
2011-01-01
This chapter proposes a prospective view on using a real-time functional magnetic resonance imaging (rt-fMRI) brain-computer interface (BCI) application as a new treatment for pedophilia. Neurofeedback mediated by interactive virtual stimuli is presented as the key process in this new BCI application. Results on the diagnostic discriminant power of virtual characters depicting sexual stimuli relevant to pedophilia are given. Finally, practical and ethical implications are briefly addressed. Copyright © 2011 Elsevier B.V. All rights reserved.
Phase unwrapping with a virtual Hartmann-Shack wavefront sensor.
Akondi, Vyas; Falldorf, Claas; Marcos, Susana; Vohnsen, Brian
2015-10-05
The use of a spatial light modulator for implementing a digital phase-shifting (PS) point diffraction interferometer (PDI) allows tunability in fringe spacing and in achieving PS without the need for mechanically moving parts. However, a small amount of detector or scatter noise could affect the accuracy of wavefront sensing. Here, a novel method of wavefront reconstruction incorporating a virtual Hartmann-Shack (HS) wavefront sensor is proposed that allows easy tuning of several wavefront sensor parameters. The proposed method was tested and compared with a Fourier unwrapping method implemented on a digital PS PDI. Rewrapping the Fourier-reconstructed wavefronts resulted in phase maps that matched the original wrapped phase well, and the performance was found to be more stable and accurate than that of conventional methods. Through simulation studies, the superiority of the proposed virtual HS phase unwrapping method is shown in comparison with the Fourier unwrapping method in the presence of noise. Further, combining the two methods could improve accuracy when the signal-to-noise ratio is sufficiently high.
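The consistency check mentioned above, rewrapping a reconstructed wavefront and comparing it with the original wrapped phase, can be sketched as follows (phases in radians; the quadratic test phase is an assumed example, not data from the study):

```python
# Minimal sketch: rewrap an unwrapped phase estimate and compare it with the
# original wrapped phase, insensitive to whole multiples of 2*pi.
import numpy as np

def rewrap(phase):
    """Wrap a phase map back into (-pi, pi]."""
    return np.angle(np.exp(1j * phase))

def wrapped_rms_error(wrapped_original, unwrapped_reconstruction):
    """RMS of the wrapped difference between reconstruction and original data."""
    diff = rewrap(unwrapped_reconstruction - wrapped_original)
    return np.sqrt(np.mean(diff ** 2))

# Example: a quadratic (defocus-like) phase, wrapped and then checked.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
true_phase = 8.0 * (x ** 2 + y ** 2)
wrapped = rewrap(true_phase)
print(wrapped_rms_error(wrapped, true_phase))   # ~0 for a perfect reconstruction
```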
A Versatile High Speed 250 MHz Pulse Imager for Biomedical Applications
Epel, Boris; Sundramoorthy, Subramanian V.; Mailer, Colin; Halpern, Howard J.
2009-01-01
A versatile 250 MHz pulse electron paramagnetic resonance (EPR) instrument for imaging of small animals is presented. Flexible design of the imager hardware and software makes it possible to use virtually any pulse EPR imaging modality. A fast pulse generation and data acquisition system based on general purpose PCI boards performs measurements with minimal additional delays. Careful design of receiver protection circuitry allowed us to achieve very high sensitivity of the instrument. In this article we demonstrate the ability of the instrument to obtain three dimensional images using the electron spin echo (ESE) and single point imaging (SPI) methods. In a phantom that contains a 1 mM solution of narrow line (16 μT, peak-to-peak) paramagnetic spin probe we achieved an acquisition time of 32 seconds per image with a fast 3D ESE imaging protocol. Using an 18 minute 3D phase relaxation (T2e) ESE imaging protocol in a homogeneous sample a spatial resolution of 1.4 mm and a standard deviation of T2e of 8.5% were achieved. When applied to in vivo imaging this precision of T2e determination would be equivalent to 2 torr resolution of oxygen partial pressure in animal tissues. PMID:19924261
Virtual reality in rhinology-a new dimension of clinical experience.
Klapan, Ivica; Raos, Pero; Galeta, Tomislav; Kubat, Goranka
2016-07-01
There is often a need to more precisely identify the extent of pathology and the fine elements of intracranial anatomic features during the diagnostic process and during many operations in the nose, sinus, orbit, and skull base region. In two case reports, we describe the methods used in the diagnostic workup and surgical therapy in the nose and paranasal sinus region. Besides baseline x-ray, multislice computed tomography, and magnetic resonance imaging, operative field imaging was performed via a rapid prototyping model, virtual endoscopy, and 3-D imaging. Different head tissues were visualized in different colors, showing their anatomic interrelations and the extent of pathologic tissue within the operative field. This approach has not yet been used as a standard preoperative or intraoperative procedure in otorhinolaryngology. In this way, we tried to understand the new, visualized "world of anatomic relations within the patient's head" by creating an impression of perception (virtual perception) of the given position of all elements in a particular anatomic region of the head, which does not exist in the real world (virtual world). This approach was aimed at upgrading the diagnostic workup and surgical therapy by ensuring a faster, safer and, above all, simpler operative procedure. In conclusion, any ENT specialist can provide virtual reality support in implementing surgical procedures, with additional control of risks and within the limits of normal tissue, without additional trauma to the surrounding tissue in the anatomic region. At the same time, the virtual reality support provides an impression of the virtual world as the specialist navigates through it and manipulates virtual objects.
NASA Astrophysics Data System (ADS)
Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló
2018-04-01
The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
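The first step, turning a labelled grain segmentation into a grain size distribution that can seed the sphere packing, might look like the following minimal sketch (equivalent-sphere radii from a labelled voxel volume; an assumed illustration, not the authors' implementation):

```python
# Minimal sketch: equivalent-sphere radius of every labelled grain in a
# segmented volume (label 0 = background). These radii can then be fed to an
# advancing-front sphere packing stage.
import numpy as np

def equivalent_radii(labelled_volume, voxel_size=1.0):
    """Radius of the sphere with the same volume as each labelled grain."""
    labels, counts = np.unique(labelled_volume, return_counts=True)
    grain_volumes = counts[labels != 0] * voxel_size ** 3
    return (3.0 * grain_volumes / (4.0 * np.pi)) ** (1.0 / 3.0)

# Example with a toy two-grain volume.
vol = np.zeros((10, 10, 10), int)
vol[:5, :5, :5] = 1
vol[5:, 5:, 5:] = 2
print(equivalent_radii(vol))
```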
Van Herzeele, Isabelle; O'Donoghue, Kevin G L; Aggarwal, Rajesh; Vermassen, Frank; Darzi, Ara; Cheshire, Nicholas J W
2010-04-01
This study evaluated virtual reality (VR) simulation for endovascular training of medical students to determine whether innate perceptual, visuospatial, and psychomotor aptitude (VSA) can predict the initial and plateau phases of technical endovascular skills acquisition. Twenty medical students received didactic and endovascular training on a commercially available VR simulator. Each student treated a series of 10 identical noncomplex renal artery stenoses endovascularly. The simulator recorded performance data instantly and objectively. An experienced interventionalist rated the performance at the initial and final sessions using generic (out of 40) and procedure-specific (out of 30) rating scales. VSA were tested with fine motor dexterity (FMD, Purdue Pegboard), psychomotor ability (minimally invasive virtual reality surgical trainer [MIST-VR]), image recall (Rey-Osterrieth), and organizational aptitude (map-planning). VSA performance scores were correlated with the assessment parameters of endovascular skills at commencement and completion of training. Medical students exhibited statistically significant learning curves from the initial to the plateau performance for contrast usage (medians, 28 vs 17 mL, P < .001), total procedure time (2120 vs 867 seconds, P < .001), and fluoroscopy time (993 vs 507 seconds, P < .001). Scores on generic and procedure-specific rating scales improved significantly (10 vs 25, P < .001; 8 vs 17, P < .001). Significant correlations were noted for FMD with the initial and plateau sessions for fluoroscopy time (r(s) = -0.564, P = .010; r(s) = -0.449, P = .047). FMD correlated with procedure-specific scores at the initial session (r(s) = 0.607, P = .006). Image recall correlated with generic skills at the end of training (r(s) = 0.587, P = .006). Simulator-based training in endovascular skills improved performance in medical students. There were significant correlations between initial endovascular skill and fine motor dexterity, as well as with image recall at the end of the training period. In addition to current recruitment strategies, VSA may be a useful tool for predictive validity studies.
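The correlation analysis reported above is a Spearman rank correlation; a minimal sketch with hypothetical scores (not the study's data) looks like this:

```python
# Minimal sketch: Spearman rank correlation between a visuospatial aptitude
# score and an endovascular performance metric such as fluoroscopy time.
from scipy.stats import spearmanr

fine_motor_dexterity = [14, 17, 12, 19, 15, 16, 13, 18]        # hypothetical Purdue Pegboard scores
fluoroscopy_time_s = [1020, 850, 1150, 700, 980, 900, 1100, 760]  # hypothetical times (s)

rho, p_value = spearmanr(fine_motor_dexterity, fluoroscopy_time_s)
print(f"r_s = {rho:.3f}, p = {p_value:.3f}")   # negative rho: better dexterity, less fluoroscopy
```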
ERIC Educational Resources Information Center
Yang, Mau-Tsuen; Liao, Wan-Che
2014-01-01
The physical-virtual immersion and real-time interaction play an essential role in cultural and language learning. Augmented reality (AR) technology can be used to seamlessly merge virtual objects with real-world images to realize immersions. Additionally, computer vision (CV) technology can recognize free-hand gestures from live images to enable…
NASA Astrophysics Data System (ADS)
Feodorova, Valentina A.; Saltykov, Yury V.; Zaytsev, Sergey S.; Ulyanov, Sergey S.; Ulianova, Onega V.
2018-04-01
The method of phase-shifting speckle-interferometry has been used as a promising new tool for modern bioinformatics. Virtual phase-shifting speckle-interferometry has been applied to the detection of polymorphism in the Chlamydia trachomatis omp1 gene. It has been shown that the suggested method is very sensitive to natural genetic mutations such as single nucleotide polymorphisms (SNPs). The effectiveness of the proposed method has been compared with that of the newest bioinformatic tools based on nucleotide sequence alignment.
Virtual Reference, Real Money: Modeling Costs in Virtual Reference Services
ERIC Educational Resources Information Center
Eakin, Lori; Pomerantz, Jeffrey
2009-01-01
Libraries nationwide are in yet another phase of belt tightening. Without an understanding of the economic factors that influence library operations, however, controlling costs and performing cost-benefit analyses on services is difficult. This paper describes a project to develop a cost model for collaborative virtual reference services. This…
Theoretical Bases for Using Virtual Reality in Education
ERIC Educational Resources Information Center
Chen, Chwen Jen
2009-01-01
This article elaborates on how the technical capabilities of virtual reality support the constructivist learning principles. It introduces VRID, a model for instructional design and development that offers explicit guidance on how to produce an educational virtual environment. The define phase of VRID consists of three main tasks: forming a…
Kong, Seong-Ho; Haouchine, Nazim; Soares, Renato; Klymchenko, Andrey; Andreiuk, Bohdan; Marques, Bruno; Shabat, Galyna; Piechaud, Thierry; Diana, Michele; Cotin, Stéphane; Marescaux, Jacques
2017-07-01
Augmented reality (AR) is the fusion of computer-generated and real-time images. AR can be used in surgery as a navigation tool, by creating a patient-specific virtual model through 3D software manipulation of DICOM imaging (e.g., CT scan). The virtual model can be superimposed on real-time images, enabling transparent visualization of internal anatomy and accurate localization of tumors. However, the 3D model is rigid and does not take into account deformations of inner structures. We present a concept of automated AR registration, while the organs undergo deformation during surgical manipulation, based on finite element modeling (FEM) coupled with optical imaging of fluorescent surface fiducials. Two 10 × 1 mm wires (pseudo-tumors) and six 10 × 0.9 mm fluorescent fiducials were placed in ex vivo porcine kidneys (n = 10). Biomechanical FEM-based models were generated from CT scan. Kidneys were deformed and the shape changes were identified by tracking the fiducials, using a near-infrared optical system. The changes were registered automatically with the virtual model, which was deformed accordingly. Accuracy of prediction of the pseudo-tumors' location was evaluated with a CT scan in the deformed state (ground truth). In vivo: fluorescent fiducials were inserted under ultrasound guidance in the kidney of one pig, followed by a CT scan. The FEM-based virtual model was superimposed on laparoscopic images by automatic registration of the fiducials. Biomechanical models were successfully generated and accurately superimposed on optical images. The mean measured distance between the tumor estimated by biomechanical propagation and the scanned tumor (ground truth) was 0.84 ± 0.42 mm. All fiducials were successfully placed in the in vivo kidney and were well visualized in near-infrared mode, enabling accurate automatic registration of the virtual model on the laparoscopic images. Our preliminary experiments showed the potential of a biomechanical model with fluorescent fiducials to propagate the deformation of solid organs' surfaces to their inner structures, including tumors, with good accuracy and automated robust tracking.
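For illustration only, the sketch below shows a rigid (Kabsch) alignment of tracked fiducials to their CT positions; the study itself drives a deformable FEM model from such correspondences, so this is a simplified stand-in for the fiducial-based registration step, not the authors' method:

```python
# Minimal sketch: best-fit rigid transform (Kabsch algorithm) mapping CT-space
# fiducial positions onto their optically tracked positions.
import numpy as np

def kabsch(source, target):
    """Rotation R and translation t such that target ~= R @ source + t."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Example: six fiducials, rotated by 10 degrees and shifted.
rng = np.random.default_rng(1)
ct_fiducials = rng.uniform(0, 50, (6, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
tracked = ct_fiducials @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = kabsch(ct_fiducials, tracked)
```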
Heuts, Samuel; Maessen, Jos G.
2016-01-01
For the past decades, surgeries have become more complex, due to the increasing age of the patient population referred for thoracic surgery, more complex pathology and the emergence of minimally invasive thoracic surgery. Together with the early detection of thoracic disease as a result of innovations in diagnostic possibilities and the paradigm shift to personalized medicine, preoperative planning is becoming an indispensable aspect of surgery. Several new techniques facilitating this paradigm shift have emerged. Preoperative marking and staining of lesions are already a widely accepted method of preoperative planning in thoracic surgery. However, three-dimensional (3D) image reconstructions, virtual simulation and rapid prototyping (RP) are still in the development phase. These new techniques are expected to become an important part of the standard work-up of patients undergoing thoracic surgery in the future. This review aims to graphically present and summarize these new diagnostic and therapeutic tools. PMID:29078505
Spatially and spectrally engineered spin-orbit interaction for achromatic virtual shaping
Pu, Mingbo; Zhao, Zeyu; Wang, Yanqin; Li, Xiong; Ma, Xiaoliang; Hu, Chenggang; Wang, Changtao; Huang, Cheng; Luo, Xiangang
2015-01-01
The geometries of objects determine electromagnetic phenomena in all aspects of our world, ranging from imaging with spherical eyes to stealth aircraft with bizarre shapes. Nevertheless, shaping the physical geometry is often undesirable owing to other physical constraints, such as aero- and hydro-dynamics in stealth technology. Here we demonstrate that it is possible to change the traditional law of reflection as well as the electromagnetic characteristics without altering the physical shape, by utilizing the achromatic phase shift stemming from spin-orbit interaction in ultrathin space-variant and spectrally engineered metasurfaces. The proposal is validated by full-wave simulations and experimental characterization at optical wavelengths ranging from 600 nm to 2800 nm and microwave frequencies of 8-16 GHz, with echo reflectance less than 10% over the whole range. The virtual shaping as well as the revised law of reflection may serve as a versatile tool in many realms, including broadband and conformal camouflage and Kinoform holography, to name just a few. PMID:25959663
Anil, S M; Kato, Y; Hayakawa, M; Yoshida, K; Nagahisha, S; Kanno, T
2007-04-01
Advances in computer imaging and technology have enhanced surgical planning: advanced visualization tools allow a 3-dimensional model of the surgical plan of action to be built and individual operations to be planned interactively with the aid of the dextroscope. This provides proper 3-dimensional imaging insight into the pathological anatomy and sets a new dimension in collaboration for training and education. We present the case of a seventeen-year-old female operated on with the aid of preoperative 3-dimensional virtual reality planning, and the practical application of this planning to the neurosurgical operation. This young lady presented with a two-year history of recurrent episodes of severe, global, throbbing headache with episodes of projectile vomiting associated with shoulder pain, which progressively worsened. She had no obvious neurological deficits on clinical examination. CT and MRI showed a contrast-enhancing midline posterior fossa space-occupying lesion. Using virtual imaging technology with the dextroscope, which generates stereoscopic images, a 3-dimensional image was produced from the CT and MRI data. Preoperative planning for excision of the lesion was performed with the dextroscope on a real-time 3-dimensional volume, and the lesion was excised. Virtual reality has brought new dimensions to 3-dimensional planning and management of various complex neuroanatomical problems that are faced during various operations. Integration of 3-dimensional imaging with stereoscopic vision makes understanding the complex anatomy easier and helps improve decision making in patient management.
Dual-Energy CT: New Horizon in Medical Imaging
Goo, Jin Mo
2017-01-01
Dual-energy CT has remained underutilized over the past decade probably due to a cumbersome workflow issue and current technical limitations. Clinical radiologists should be made aware of the potential clinical benefits of dual-energy CT over single-energy CT. To accomplish this aim, the basic principle, current acquisition methods with advantages and disadvantages, and various material-specific imaging methods as clinical applications of dual-energy CT should be addressed in detail. Current dual-energy CT acquisition methods include dual tubes with or without beam filtration, rapid voltage switching, dual-layer detector, split filter technique, and sequential scanning. Dual-energy material-specific imaging methods include virtual monoenergetic or monochromatic imaging, effective atomic number map, virtual non-contrast or unenhanced imaging, virtual non-calcium imaging, iodine map, inhaled xenon map, uric acid imaging, automatic bone removal, and lung vessels analysis. In this review, we focus on dual-energy CT imaging including related issues of radiation exposure to patients, scanning and post-processing options, and potential clinical benefits mainly to improve the understanding of clinical radiologists and thus, expand the clinical use of dual-energy CT; in addition, we briefly describe the current technical limitations of dual-energy CT and the current developments of photon-counting detector. PMID:28670151
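As a rough illustration of how virtual monoenergetic images arise from dual-energy data, the sketch below performs a simplified two-material decomposition with placeholder attenuation coefficients; real systems use calibrated spectra and considerably more elaborate models, so this is only a conceptual example:

```python
# Minimal sketch: per-pixel two-material (water/iodine) decomposition of
# low- and high-energy attenuation images, recombined at a target energy.
import numpy as np

# Placeholder attenuation coefficients; actual values come from calibration.
MU = {
    "low":   {"water": 0.25, "iodine": 6.0},
    "high":  {"water": 0.20, "iodine": 3.0},
    "70keV": {"water": 0.19, "iodine": 2.0},
}

def virtual_monoenergetic(img_low, img_high, target="70keV"):
    """Solve for basis-material densities, then resynthesize at the target energy."""
    A = np.array([[MU["low"]["water"],  MU["low"]["iodine"]],
                  [MU["high"]["water"], MU["high"]["iodine"]]])
    stacked = np.stack([img_low, img_high]).reshape(2, -1)   # two attenuation images
    densities = np.linalg.solve(A, stacked)                  # (2, n_pixels)
    target_mu = np.array([MU[target]["water"], MU[target]["iodine"]])
    return (target_mu @ densities).reshape(img_low.shape)

# Example on toy 2x2 "attenuation" images.
low = np.array([[0.30, 0.25], [0.26, 0.40]])
high = np.array([[0.22, 0.20], [0.21, 0.28]])
vmi = virtual_monoenergetic(low, high)
```

The same basis-material densities underlie iodine maps and virtual non-contrast images, which differ only in how the two components are recombined.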
ERIC Educational Resources Information Center
Gutierrez-Santiuste, Elba; Gallego-Arrufat, Maria-Jesus
2015-01-01
This study investigates the phases of development of synchronous and asynchronous virtual communication produced in a community of inquiry (CoI) by analyzing the internal structure of each intervention in the forum and each chat session to determine the evolution of their social, cognitive and teaching character. It also analyzes the participating…
Immersive virtual reality for visualization of abdominal CT
NASA Astrophysics Data System (ADS)
Lin, Qiufeng; Xu, Zhoubing; Li, Bo; Baucom, Rebeccah; Poulose, Benjamin; Landman, Bennett A.; Bodenheimer, Robert E.
2013-03-01
Immersive virtual environments use a stereoscopic head-mounted display and data glove to create high fidelity virtual experiences in which users can interact with three-dimensional models and perceive relationships at their true scale. This stands in stark contrast to traditional PACS-based infrastructure in which images are viewed as stacks of two-dimensional slices, or, at best, disembodied renderings. Although there has been substantial innovation in immersive virtual environments for entertainment and consumer media, these technologies have not been widely applied in clinical applications. Here, we consider potential applications of immersive virtual environments for ventral hernia patients with abdominal computed tomography imaging data. Nearly a half million ventral hernias occur in the United States each year, and hernia repair is the most commonly performed general surgery operation worldwide. A significant problem in these conditions is communicating the urgency, degree of severity, and impact of a hernia (and potential repair) on patient quality of life. Hernias are defined by ruptures in the abdominal wall (i.e., the absence of healthy tissues) rather than a growth (e.g., cancer); therefore, understanding a hernia necessitates understanding the entire abdomen. Our environment allows surgeons and patients to view body scans at scale and interact with these virtual models using a data glove. This visualization and interaction allows users to perceive the relationship between physical structures and medical imaging data. The system provides close integration of PACS-based CT data with immersive virtual environments and creates opportunities to study and optimize interfaces for patient communication, operative planning, and medical education.
Statistical virtual eye model based on wavefront aberration
Wang, Jie-Mei; Liu, Chun-Ling; Luo, Yi-Ning; Liu, Yi-Guang; Hu, Bing-Jie
2012-01-01
Wavefront aberration affects the quality of retinal image directly. This paper reviews the representation and reconstruction of wavefront aberration, as well as the construction of virtual eye model based on Zernike polynomial coefficients. In addition, the promising prospect of virtual eye model is emphasized. PMID:23173112
Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars
2017-10-01
3D reconstructions of motor vehicle collisions are used to identify the causes of these events and to identify potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since reconstructions are often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured light scans of a mirror improve the accuracy of simulating the field of view of mirrors. We analyzed the performance of virtual mirror surfaces based on structured light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discussed the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured light scans of mirror surfaces can be used to simulate virtual mirror surfaces for 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.
SPECT System Optimization Against A Discrete Parameter Space
Meng, L. J.; Li, N.
2013-01-01
In this paper, we present an analytical approach for optimizing the design of a static SPECT system or optimizing the sampling strategy of variable/adaptive SPECT imaging hardware against an arbitrarily given set of system parameters. This approach has three key aspects. First, it is designed to operate over a discretized system parameter space. Second, we have introduced the artificial concept of a virtual detector as the basic building block of an imaging system. With a SPECT system described as a collection of virtual detectors, one can convert the task of system optimization into a process of finding the optimum imaging time distribution (ITD) across all virtual detectors. Third, the optimization problem (finding the optimum ITD) can be solved with a block-iterative approach or other non-linear optimization algorithms. In essence, the resultant optimum ITD provides a quantitative measure of the relative importance (or effectiveness) of the virtual detectors and helps to identify the system configuration or sampling strategy that leads to optimum imaging performance. Although we use SPECT imaging as a platform to demonstrate the system optimization strategy, this development also provides a useful framework for system optimization problems in other modalities, such as positron emission tomography (PET) and X-ray computed tomography (CT) [1, 2]. PMID:23587609
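A minimal sketch of the idea of optimizing an imaging time distribution over virtual detectors is given below; it maximizes a log-determinant figure of merit by gradient ascent under a total-time budget and is only illustrative, not the block-iterative algorithm of the paper (the per-detector Fisher-information blocks are assumed inputs):

```python
# Minimal sketch: search for an imaging time distribution (ITD) over virtual
# detectors. Each detector i contributes t_i * F_i to the total information;
# the merit function here is log det of that total.
import numpy as np

def optimize_itd(fisher_blocks, total_time, n_iter=200, step=0.1):
    n = len(fisher_blocks)
    t = np.full(n, total_time / n)                    # start from uniform times
    for _ in range(n_iter):
        M = sum(ti * Fi for ti, Fi in zip(t, fisher_blocks))
        Minv = np.linalg.inv(M)
        grad = np.array([np.trace(Minv @ Fi) for Fi in fisher_blocks])
        t = np.clip(t + step * grad, 0.0, None)       # ascend, keep times non-negative
        t *= total_time / t.sum()                     # crude renormalization to the time budget
    return t

# Example: three virtual detectors with random positive-definite information blocks.
rng = np.random.default_rng(0)
blocks = []
for _ in range(3):
    A = rng.standard_normal((4, 4))
    blocks.append(A @ A.T + 0.1 * np.eye(4))
itd = optimize_itd(blocks, total_time=600.0)
```

Detectors that receive more time in the optimum ITD are, in this sense, the more informative ones, which is the interpretation the paper attaches to the optimized distribution.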
[The virtual university in medicine. Context, concepts, specifications, users' manual].
Duvauferrier, R; Séka, L P; Rolland, Y; Rambeau, M; Le Beux, P; Morcet, N
1998-09-01
The widespread use of Web servers, with the emergence of interactive functions and the possibility of credit card payment via the Internet, together with the requirement for continuing education and the subsequent need for a computer to link into the health care network, has prompted the development of a virtual university scheme on the Internet. The Virtual University of Radiology is not only a computer-assisted teaching tool with a set of attractive features, but also a powerful engine allowing the organization, distribution and control of the medical knowledge available on the Web server. The scheme provides access to general information, a secretary's office for enrollment and the Virtual University itself, with its library, image database, a forum for subspecialties and clinical case reports, an evaluation module and various guides and help tools for diagnosis, prescription and indexing. Currently the Virtual University of Radiology offers diagnostic imaging, but it can also be used by other specialties and for general practice.
Two-photon calcium imaging during fictive navigation in virtual environments
Ahrens, Misha B.; Huang, Kuo Hua; Narayan, Sujatha; Mensh, Brett D.; Engert, Florian
2013-01-01
A full understanding of nervous system function requires recording from large populations of neurons during naturalistic behaviors. Here we enable paralyzed larval zebrafish to fictively navigate two-dimensional virtual environments while we record optically from many neurons with two-photon imaging. Electrical recordings from motor nerves in the tail are decoded into intended forward swims and turns, which are used to update a virtual environment displayed underneath the fish. Several behavioral features—such as turning responses to whole-field motion and dark avoidance—are well-replicated in this virtual setting. We readily observed neuronal populations in the hindbrain with laterally selective responses that correlated with right or left optomotor behavior. We also observed neurons in the habenula, pallium, and midbrain with response properties specific to environmental features. Beyond single-cell correlations, the classification of network activity in such virtual settings promises to reveal principles of brainwide neural dynamics during behavior. PMID:23761738
Gravity influences top-down signals in visual processing.
Cheron, Guy; Leroy, Axelle; Palmero-Soler, Ernesto; De Saedeleer, Caty; Bengoetxea, Ana; Cebolla, Ana-Maria; Vidal, Manuel; Dan, Bernard; Berthoz, Alain; McIntyre, Joseph
2014-01-01
Visual perception is not only based on incoming visual signals but also on information about a multimodal reference frame that incorporates vestibulo-proprioceptive input and motor signals. In addition, top-down modulation of visual processing has previously been demonstrated during cognitive operations including selective attention and working memory tasks. In the absence of a stable gravitational reference, the updating of salient stimuli becomes crucial for successful visuo-spatial behavior by humans in weightlessness. Here we found that visually-evoked potentials triggered by the image of a tunnel just prior to an impending 3D movement in a virtual navigation task were altered in weightlessness aboard the International Space Station, while those evoked by a classical 2D-checkerboard were not. Specifically, the analysis of event-related spectral perturbations and inter-trial phase coherency of these EEG signals recorded in the frontal and occipital areas showed that phase-locking of theta-alpha oscillations was suppressed in weightlessness, but only for the 3D tunnel image. Moreover, analysis of the phase of the coherency demonstrated the existence on Earth of a directional flux in the EEG signals from the frontal to the occipital areas mediating a top-down modulation during the presentation of the image of the 3D tunnel. In weightlessness, this fronto-occipital, top-down control was transformed into a diverging flux from the central areas toward the frontal and occipital areas. These results demonstrate that gravity-related sensory inputs modulate primary visual areas depending on the affordances of the visual scene.
Construction of a Virtual Scanning Electron Microscope (VSEM)
NASA Technical Reports Server (NTRS)
Fried, Glenn; Grosser, Benjamin
2004-01-01
The Imaging Technology Group (ITG) proposed to develop a Virtual SEM (VSEM) application and supporting materials as the first installed instrument in NASA s Virtual Laboratory Project. The instrument was to be a simulator modeled after an existing SEM, and was to mimic that real instrument as closely as possible. Virtual samples would be developed and provided along with the instrument, which would be written in Java.
The virtual mirror: a new interaction paradigm for augmented reality environments.
Bichlmeier, Christoph; Heining, Sandro Michael; Feuerstein, Marco; Navab, Nassir
2009-09-01
Medical augmented reality (AR) has been widely discussed within the medical imaging as well as computer aided surgery communities. Different systems for exemplary medical applications have been proposed. Some of them produced promising results. One major issue still hindering AR technology from being regularly used in medical applications is the interaction between the physician and the superimposed 3-D virtual data. Classical interaction paradigms, for instance with keyboard and mouse, to interact with visualized medical 3-D imaging data are not adequate for an AR environment. This paper introduces the concept of a tangible/controllable Virtual Mirror for medical AR applications. This concept intuitively augments the direct view of the surgeon with all desired views on volumetric medical imaging data registered with the operation site, without moving around the operating table or displacing the patient. We selected two medical procedures to demonstrate and evaluate the potential of the Virtual Mirror for the surgical workflow. Results confirm the intuitiveness of this new paradigm and its perceptive advantages for AR-based computer aided interventions.
Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.
Villarrubia, J S; Tondare, V N; Vladár, A E
2016-01-01
The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within about 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
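Generating a rough skin with a prescribed power spectral density, the ingredient used to build the virtual sample, can be sketched as follows (1-D profile with a Gaussian-shaped PSD assumed for illustration; this is not the implementation used in the study):

```python
# Minimal sketch: random rough profile with a prescribed PSD, generated by
# assigning random phases in the Fourier domain and transforming back.
import numpy as np

def rough_profile(n=1024, dx=1.0, corr_length=20.0, rms=2.0, seed=0):
    """Rough height profile with a Gaussian-shaped PSD, rescaled to a target RMS."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=dx)
    psd = np.exp(-(np.pi * corr_length * freqs) ** 2)          # assumed PSD shape
    spectrum = np.sqrt(psd) * np.exp(2j * np.pi * rng.random(freqs.size))
    profile = np.fft.irfft(spectrum, n=n)
    profile -= profile.mean()
    return profile * (rms / profile.std())

skin = rough_profile()   # heights along the line edge, in the same units as rms
```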
Working memory in wayfinding-a dual task experiment in a virtual city.
Meilinger, Tobias; Knauff, Markus; Bülthoff, Heinrich H
2008-06-01
This study examines the working memory systems involved in human wayfinding. In the learning phase, 24 participants learned two routes in a novel photorealistic virtual environment displayed on a 220° screen while they were disrupted by a visual, a spatial, a verbal, or, in a control group, no secondary task. In the following wayfinding phase, the participants had to find and to "virtually walk" the two routes again. During this wayfinding phase, a number of dependent measures were recorded. This research shows that encoding wayfinding knowledge interfered with the verbal and with the spatial secondary task. These interferences were even stronger than the interference of wayfinding knowledge with the visual secondary task. These findings are consistent with a dual-coding approach of wayfinding knowledge. 2008 Cognitive Science Society, Inc.
Evanescent excitation and emission in fluorescence microscopy.
Axelrod, Daniel
2013-04-02
Evanescent light, light that does not propagate but instead decays in intensity over a subwavelength distance, appears in both excitation (as in total internal reflection) and emission (as in near-field imaging) forms in fluorescence microscopy. This review describes the physical connection between these two forms as a consequence of geometrical squeezing of wavefronts, and describes newly established or speculative applications and combinations of the two. In particular, each can be used in analogous ways to produce surface-selective images, to examine the thickness and refractive index of films (such as lipid multilayers or protein layers) on solid supports, and to measure the absolute distance of a fluorophore to a surface. In combination, the two forms can further increase selectivity and reduce background scattering in surface images. The polarization properties of each lead to more sensitive and accurate measures of fluorophore orientation and membrane micromorphology. The phase properties of the evanescent excitation lead to a method of creating a submicroscopic area of total internal reflection illumination or enhanced-resolution structured illumination. Analogously, the phase properties of evanescent emission lead to a method of producing a smaller point spread function, in a technique called virtual supercritical angle fluorescence. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Niu, Qiang; Chi, Xiaoyi; Leu, Ming C; Ochoa, Jorge
2008-01-01
This paper describes image processing, geometric modeling and data management techniques for the development of a virtual bone surgery system. Image segmentation is used to divide CT scan data into different segments representing various regions of the bone. A region-growing algorithm is used to extract cortical bone and trabecular bone structures systematically and efficiently. Volume modeling is then used to represent the bone geometry based on the CT scan data. Material removal simulation is achieved by continuously performing Boolean subtraction of the surgical tool model from the bone model. A quadtree-based adaptive subdivision technique is developed to handle the large set of data in order to achieve the real-time simulation and visualization required for virtual bone surgery. A Marching Cubes algorithm is used to generate polygonal faces from the volumetric data. Rendering of the generated polygons is performed with the publicly available VTK (Visualization Tool Kit) software. Implementation of the developed techniques consists of developing a virtual bone-drilling software program, which allows the user to manipulate a virtual drill to make holes with the use of a PHANToM device on a bone model derived from real CT scan data.
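The material-removal step, Boolean subtraction of the tool from the bone volume at each simulation tick, can be sketched on binary voxel arrays as follows (a minimal example, not the system's optimized quadtree implementation):

```python
# Minimal sketch: carve the voxels occupied by a spherical drill tip out of a
# binary bone volume before re-rendering, the Boolean subtraction step above.
import numpy as np

def remove_material(bone_voxels, tool_voxels):
    """Return the bone volume with the tool-occupied voxels removed."""
    return np.logical_and(bone_voxels, np.logical_not(tool_voxels))

def sphere_mask(shape, center, radius):
    """Spherical drill-tip footprint on the voxel grid."""
    zz, yy, xx = np.indices(shape)
    return (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2 <= radius ** 2

# Example: drill a 5-voxel-radius bur into a 64^3 bone block.
bone = np.ones((64, 64, 64), dtype=bool)
drill = sphere_mask(bone.shape, center=(32, 32, 10), radius=5)
bone = remove_material(bone, drill)
```

In an interactive system the same subtraction would be restricted to the voxels near the tool and the surface mesh updated locally (e.g., by re-running Marching Cubes on the touched region) to keep the update within a single frame.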
Liu, Lei
2017-01-01
Virtual reality has great potential for training road safety skills to individuals with low vision, but the feasibility of such training has not been demonstrated. We tested the hypotheses that low vision individuals could learn useful skills in virtual streets and could apply them to improve real street safety. Twelve participants, whose vision was too poor to use the pedestrian signals, were taught by a certified orientation and mobility specialist to determine the safest time to cross the street using the visual and auditory signals made by the start of previously stopped cars at a traffic-light controlled street intersection. Four participants were trained in real streets and eight in virtual streets presented on 3 projection screens. The crossing timing of all participants was evaluated in real streets before and after training. The participants were instructed to say "GO" at the time when they felt it was safest to cross the street. A safety score was derived to quantify the GO calls based on their occurrence in the pedestrian phase (when the pedestrian sign did not show DON'T WALK). Before training, > 50% of the GO calls from all participants fell in the DON'T WALK phase of the traffic cycle and thus were totally unsafe. 20% of the GO calls fell in the latter half of the pedestrian phase. These calls were unsafe because a pedestrian initiating crossing this late might not have sufficient time to walk across the street. After training, 90% of the GO calls fell in the early half of the pedestrian phase. These calls were safer because crossing was initiated early in the pedestrian phase, leaving at least half of the phase for walking across. Similar safety changes occurred in both virtual street and real street trained participants. An ANOVA showed a significant increase of the safety scores after training, and there was no difference in this safety improvement between the virtual street and real street trained participants. This study demonstrated that virtual reality-based orientation and mobility training can be as efficient as real street training in improving street safety in individuals with severely impaired vision. PMID:28445540
Real-Time Classification of Hand Motions Using Ultrasound Imaging of Forearm Muscles.
Akhlaghi, Nima; Baker, Clayton A; Lahlou, Mohamed; Zafar, Hozaifah; Murthy, Karthik G; Rangwala, Huzefa S; Kosecka, Jana; Joiner, Wilsaan M; Pancrazio, Joseph J; Sikdar, Siddhartha
2016-08-01
Surface electromyography (sEMG) has been the predominant method for sensing electrical activity for a number of applications involving muscle-computer interfaces, including myoelectric control of prostheses and rehabilitation robots. Ultrasound imaging for sensing mechanical deformation of functional muscle compartments can overcome several limitations of sEMG, including the inability to differentiate between deep contiguous muscle compartments, low signal-to-noise ratio, and lack of a robust graded signal. The objective of this study was to evaluate the feasibility of real-time graded control using a computationally efficient method to differentiate between complex hand motions based on ultrasound imaging of forearm muscles. Dynamic ultrasound images of the forearm muscles were obtained from six able-bodied volunteers and analyzed to map muscle activity based on the deformation of the contracting muscles during different hand motions. Each participant performed 15 different hand motions, including digit flexion, different grips (i.e., power grasp and pinch grip), and grips in combination with wrist pronation. During the training phase, we generated a database of activity patterns corresponding to different hand motions for each participant. During the testing phase, novel activity patterns were classified using a nearest neighbor classification algorithm based on that database. The average classification accuracy was 91%. Real-time image-based control of a virtual hand showed an average classification accuracy of 92%. Our results demonstrate the feasibility of using ultrasound imaging as a robust muscle-computer interface. Potential clinical applications include control of multiarticulated prosthetic hands, stroke rehabilitation, and fundamental investigations of motor control and biomechanics.
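The nearest-neighbor classification of activity patterns described above can be sketched as follows (toy 8 x 8 activity images and hypothetical class names; not the study's data or code):

```python
# Minimal sketch: label a novel ultrasound activity pattern with the class of
# its nearest training pattern (Euclidean distance on flattened images).
import numpy as np

def nearest_neighbor_classify(test_pattern, train_patterns, train_labels):
    """Return the label of the closest training activity pattern."""
    diffs = train_patterns - test_pattern.ravel()       # patterns stored as flat vectors
    distances = np.linalg.norm(diffs, axis=1)
    return train_labels[int(np.argmin(distances))]

# Example with toy 8x8 "activity images" for three motions.
rng = np.random.default_rng(0)
prototypes = rng.random((3, 64))                        # power grasp, pinch, digit flexion
train = np.vstack([p + 0.05 * rng.standard_normal((10, 64)) for p in prototypes])
labels = np.repeat(["power", "pinch", "flexion"], 10)
test = prototypes[1].reshape(8, 8) + 0.05 * rng.standard_normal((8, 8))
print(nearest_neighbor_classify(test, train, labels))   # expect "pinch"
```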
Tondare, Vipin N; Villarrubia, John S; Vladár, András E
2017-10-01
Three-dimensional (3D) reconstruction of a sample surface from scanning electron microscope (SEM) images taken at two perspectives has been known for decades. Nowadays, there exist several commercially available stereophotogrammetry software packages. For testing these software packages, in this study we used Monte Carlo simulated SEM images of virtual samples. A virtual sample is a model in a computer, and its true dimensions are known exactly, which is impossible for real SEM samples due to measurement uncertainty. The simulated SEM images can be used for algorithm testing, development, and validation. We tested two stereophotogrammetry software packages and compared their reconstructed 3D models with the known geometry of the virtual samples used to create the simulated SEM images. Both packages performed relatively well with simulated SEM images of a sample with a rough surface. However, in a sample containing nearly uniform and therefore low-contrast zones, the height reconstruction error was ≈46%. The present stereophotogrammetry software packages need further improvement before they can be used reliably with SEM images with uniform zones.
Virtual microscopy in virtual tumor banking.
Isabelle, M; Teodorovic, I; Oosterhuis, J W; Riegman, P H J; Passioukov, A; Lejeune, S; Therasse, P; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; Van Damme, B; Van de Vijver, M; Van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; López-Guerrero, J A; Llombart-Bosch, A; Carbone, A; Gloghini, A; Van Veen, E B
2006-01-01
Many systems have already been designed and successfully used for sharing histology images over large distances, without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated virtual microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and compress and store the large images on disc, which subsequently can be consulted through the Internet. The images are stored on an image server, which can give simple, easy to transfer pictures to the user specifying a certain magnification on any position in the scan. This offers new opportunities in histology review, overcoming the necessity of the dynamic telepathology systems to have compatible software systems and microscopes and in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high quality monitor. A system of complete pathology review supporting biorepositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
TuBaFrost 6: virtual microscopy in virtual tumour banking.
Teodorovic, I; Isabelle, M; Carbone, A; Passioukov, A; Lejeune, S; Jaminé, D; Therasse, P; Gloghini, A; Dinjens, W N M; Lam, K H; Oomen, M H A; Spatz, A; Ratcliffe, C; Knox, K; Mager, R; Kerr, D; Pezzella, F; van Damme, B; van de Vijver, M; van Boven, H; Morente, M M; Alonso, S; Kerjaschki, D; Pammer, J; Lopez-Guerrero, J A; Llombart Bosch, A; van Veen, E-B; Oosterhuis, J W; Riegman, P H J
2006-12-01
Many systems have already been designed and successfully used for sharing histology images over large distances, without transfer of the original glass slides. Rapid evolution was seen when digital images could be transferred over the Internet. Nowadays, sophisticated Virtual Microscope systems can be acquired, with the capability to quickly scan large batches of glass slides at high magnification and compress and store the large images on disc, which subsequently can be consulted through the Internet. The images are stored on an image server, which can give simple, easy to transfer pictures to the user specifying a certain magnification on any position in the scan. This offers new opportunities in histology review, overcoming the necessity of the dynamic telepathology systems to have compatible software systems and microscopes and in addition, an adequate connection of sufficient bandwidth. Consulting the images now only requires an Internet connection and a computer with a high quality monitor. A system of complete pathology review supporting bio-repositories is described, based on the implementation of this technique in the European Human Frozen Tumor Tissue Bank (TuBaFrost).
Software for simulation of a computed tomography imaging spectrometer using optical design software
NASA Astrophysics Data System (ADS)
Spuhler, Peter T.; Willer, Mark R.; Volin, Curtis E.; Descour, Michael R.; Dereniak, Eustace L.
2000-11-01
Our Imaging Spectrometer Simulation Software known under the name Eikon should improve and speed up the design of a Computed Tomography Imaging Spectrometer (CTIS). Eikon uses existing raytracing software to simulate a virtual instrument. Eikon enables designers to virtually run through the design, calibration and data acquisition, saving significant cost and time when designing an instrument. We anticipate that Eikon simulations will improve future designs of CTIS by allowing engineers to explore more instrument options.
Mobile viewer system for virtual 3D space using infrared LED point markers and camera
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-09-01
The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is a display system in which, when users see the real world through the mobile viewer, the system gives them virtual 3D images floating in the air, and the observers can touch and interact with these floating images, for example shaping them like modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors present the geometric analysis of the proposed measuring method, which is the simplest approach in that it uses a single camera rather than a stereo camera, and report results from our viewer system.
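A minimal sketch of single-camera pose estimation from point markers with a known layout, in the spirit of the measuring method described above; the marker coordinates, camera intrinsics, and the use of OpenCV's solvePnP are assumptions for illustration, not the authors' implementation:

```python
import numpy as np
import cv2

# Known 3D positions of the infrared LED markers in the workspace frame (metres).
marker_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.1, 0.0],
                          [0.0, 0.1, 0.0]], dtype=np.float64)

# Pixel coordinates of the same markers detected in the viewer's camera image.
image_points = np.array([[312.0, 240.0],
                         [402.0, 238.0],
                         [405.0, 290.0],
                         [310.0, 292.0]], dtype=np.float64)

# Pinhole intrinsics of the camera (focal length and principal point, in pixels).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(marker_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)
viewer_position = (-R.T @ tvec).ravel()   # camera (viewer) position in the workspace frame
print(viewer_position)
```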
Designing a Virtual Item Bank Based on the Techniques of Image Processing
ERIC Educational Resources Information Center
Liao, Wen-Wei; Ho, Rong-Guey
2011-01-01
One of the major weaknesses of the item exposure rates of figural items in Intelligence Quotient (IQ) tests lies in its inaccuracies. In this study, a new approach is proposed and a useful test tool known as the Virtual Item Bank (VIB) is introduced. The VIB combine Automatic Item Generation theory and image processing theory with the concepts of…
Clemente, Miriam; Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar
2014-06-27
To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small-animal phobia. The objective of our study was to evaluate the brain activations associated with small-animal phobia through the use of virtual environments. This context has the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. We analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. We found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), an area that has previously been related to the feeling of self-awareness. In our opinion, these results demonstrate that virtual stimuli can evoke brain activations consistent with previous studies using still images, but in an environment closer to the real situation the subject would face in daily life.
NASA Astrophysics Data System (ADS)
Berzano, D.; Blomer, J.; Buncic, P.; Charalampidis, I.; Ganis, G.; Meusel, R.
2015-12-01
Cloud resources nowadays contribute an essential share of resources for computing in high-energy physics. Such resources can be provided either by private or public IaaS clouds (e.g. OpenStack, Amazon EC2, Google Compute Engine) or by volunteer computers (e.g. LHC@Home 2.0). In any case, experiments need to prepare a virtual machine image that provides the execution environment for the physics application at hand. Since version 3, the CernVM virtual machine is a minimal and versatile virtual machine image capable of booting different operating systems. The virtual machine image is less than 20 megabytes in size. The actual operating system is delivered on demand by the CernVM File System. CernVM 3 has matured from a prototype to a production environment. It is used, for instance, to run LHC applications in the cloud, to tune event generators using a network of volunteer computers, and as a container for the historic Scientific Linux 5 and Scientific Linux 4 based software environments in the course of long-term data preservation efforts of the ALICE, CMS, and ALEPH experiments. We present experience and lessons learned from the use of CernVM at scale. We also provide an outlook on the upcoming developments. These developments include adding support for Scientific Linux 7, the use of container virtualization, such as provided by Docker, and the streamlining of virtual machine contextualization towards the cloud-init industry standard.
NASA Astrophysics Data System (ADS)
Michaelis, A.; Nemani, R. R.; Wang, W.; Votava, P.; Hashimoto, H.
2010-12-01
Given the increasing complexity of climate modeling and analysis tools, it is often difficult and expensive to build or recreate an exact replica of the software compute environment used in past experiments. With the recent development of new technologies for hardware virtualization, an opportunity exists to create full modeling, analysis and compute environments that are “archivable”, transferable and may be easily shared amongst a scientific community or presented to a bureaucratic body if the need arises. By encapsulating an entire modeling and analysis environment in a virtual machine image, others may quickly gain access to the fully built system used in past experiments, potentially easing the task and reducing the costs of reproducing and verifying past results produced by other researchers. Moreover, these virtual machine images may be used as a pedagogical tool for others who are interested in performing an academic exercise but don't yet possess the broad expertise required. We built two virtual machine images, one with the Community Earth System Model (CESM) and one with the Weather Research and Forecasting (WRF) model, then ran several small experiments to assess the feasibility, performance overhead costs, reusability, and transferability. We present a list of the pros and cons as well as lessons learned from utilizing virtualization technology in the climate and earth systems modeling domain.
Knowledge Sharing and Creation in a Teachers' Professional Virtual Community
ERIC Educational Resources Information Center
Lin, Fu-ren; Lin, Sheng-cheng; Huang, Tzu-ping
2008-01-01
By virtue of the non-profit nature of school education, a professional virtual community composed of teachers provides precious data to understand the processes of knowledge sharing and creation. Guided by grounded theory, the authors conducted a three-phased study on a teachers' virtual community in order to understand the knowledge flows among…
Immersive Virtual Moon Scene System Based on Panoramic Camera Data of Chang'E-3
NASA Astrophysics Data System (ADS)
Gao, X.; Liu, J.; Mu, L.; Yan, W.; Zeng, X.; Zhang, X.; Li, C.
2014-12-01
The system "Immersive Virtual Moon Scene" is used to show the virtual environment of Moon surface in immersive environment. Utilizing stereo 360-degree imagery from panoramic camera of Yutu rover, the system enables the operator to visualize the terrain and the celestial background from the rover's point of view in 3D. To avoid image distortion, stereo 360-degree panorama stitched by 112 images is projected onto inside surface of sphere according to panorama orientation coordinates and camera parameters to build the virtual scene. Stars can be seen from the Moon at any time. So we render the sun, planets and stars according to time and rover's location based on Hipparcos catalogue as the background on the sphere. Immersing in the stereo virtual environment created by this imaged-based rendering technique, the operator can zoom, pan to interact with the virtual Moon scene and mark interesting objects. Hardware of the immersive virtual Moon system is made up of four high lumen projectors and a huge curve screen which is 31 meters long and 5.5 meters high. This system which take all panoramic camera data available and use it to create an immersive environment, enable operator to interact with the environment and mark interesting objects contributed heavily to establishment of science mission goals in Chang'E-3 mission. After Chang'E-3 mission, the lab with this system will be open to public. Besides this application, Moon terrain stereo animations based on Chang'E-1 and Chang'E-2 data will be showed to public on the huge screen in the lab. Based on the data of lunar exploration,we will made more immersive virtual moon scenes and animations to help the public understand more about the Moon in the future.
Chimeric Plastics : a new class of thermoplastic
NASA Astrophysics Data System (ADS)
Sonnenschein, Mark
A new class of thermoplastics (dubbed ``Chimerics'') is described that exhibits a high temperature glass transition followed by high performance elastomer properties, prior to melting. These transparent materials are comprised of co-continuous phase-separated block copolymers. One block is an amorphous glass with a high glass transition temperature, and the second is a higher temperature phase transition block creating virtual thermoreversible crosslinks. The material properties are highly influenced by phase separation on the order of 10-30 nanometers. At lower temperatures the polymer reflects the sum of the block copolymer properties. As the amorphous phase glass transition is exceeded, the virtual crosslinks of the higher temperature second phase dominate the plastic properties, resulting in rubber-like elasticity.
Combination of surface and borehole seismic data for robust target-oriented imaging
NASA Astrophysics Data System (ADS)
Liu, Yi; van der Neut, Joost; Arntsen, Børge; Wapenaar, Kees
2016-05-01
A novel application of seismic interferometry (SI) and Marchenko imaging using both surface and borehole data is presented. A series of redatuming schemes is proposed to combine both data sets for robust deep local imaging in the presence of velocity uncertainties. The redatuming schemes create a virtual acquisition geometry where both sources and receivers lie at the horizontal borehole level, thus only a local velocity model near the borehole is needed for imaging, and erroneous velocities in the shallow area have no effect on imaging around the borehole level. By joining the advantages of SI and Marchenko imaging, a macrovelocity model is no longer required and the proposed schemes use only single-component data. Furthermore, the schemes result in a set of virtual data that have fewer spurious events and internal multiples than previous virtual source redatuming methods. Two numerical examples are shown to illustrate the workflow and to demonstrate the benefits of the method. One is a synthetic model and the other is a realistic model of a field in the North Sea. In both tests, improved local images near the boreholes are obtained using the redatumed data without accurate velocities, because the redatumed data are close to the target.
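A minimal sketch of the seismic-interferometry step that turns surface sources into a virtual source at the borehole level: cross-correlating the recordings of two borehole receivers and stacking over surface shots. The trace layout, sampling, and the plain correlation stack are simplifying assumptions relative to the full SI and Marchenko schemes proposed in the paper:

```python
import numpy as np

def virtual_source_trace(data_a, data_b):
    """Cross-correlate traces recorded at borehole receivers A and B for every
    surface shot and stack, yielding an estimate of the response at B to a
    virtual source at A. data_a, data_b: (n_shots, n_samples) arrays."""
    n_shots, n_samples = data_a.shape
    stacked = np.zeros(2 * n_samples - 1)
    for s in range(n_shots):
        stacked += np.correlate(data_b[s], data_a[s], mode="full")
    return stacked / n_shots

# Toy usage: receiver B sees the same wavefield as A, delayed by 30 samples.
rng = np.random.default_rng(1)
src = rng.normal(size=(50, 200))
delayed = np.roll(src, 30, axis=1)
vs = virtual_source_trace(src, delayed)
print(np.argmax(vs) - (200 - 1))   # ~30, the traveltime between A and B in samples
```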
Seung, Sungmin; Choi, Hongseok; Jang, Jongseong; Kim, Young Soo; Park, Jong-Oh; Park, Sukho; Ko, Seong Young
2017-01-01
This article presents haptic-guided teleoperation for a tumor removal surgical robotic system, the so-called SIROMAN system. The system was developed in our previous work to make it possible to access tumor tissue, even tissue seated deep inside the brain, and to remove it with full maneuverability. For a safe and accurate operation that removes only tumor tissue completely while minimizing damage to normal tissue, virtual wall-based haptic guidance together with medical image-guided control is proposed and developed. The virtual wall is extracted from preoperative medical images, and the robot is controlled to restrict its motion within the virtual wall using haptic feedback. Coordinate transformation between sub-systems, a collision detection algorithm, and haptic-guided teleoperation using a virtual wall are described in the context of using SIROMAN. A series of experiments using a simplified virtual wall is performed to evaluate the performance of virtual wall-based haptic-guided teleoperation. With haptic guidance, the accuracy of the robotic manipulator's trajectory is improved by 57% compared to teleoperation without guidance. The tissue removal performance is also improved by 21% (p < 0.05). The experiments show that virtual wall-based haptic guidance provides safer and more accurate tissue removal for single-port brain surgery.
NASA Technical Reports Server (NTRS)
Schnase, John L.; Tamkin, Glenn S.; Ripley, W. David III; Stong, Savannah; Gill, Roger; Duffy, Daniel Q.
2012-01-01
Scientific data services are becoming an important part of the NASA Center for Climate Simulation's mission. Our technological response to this expanding role is built around the concept of a Virtual Climate Data Server (vCDS), repetitive provisioning, image-based deployment and distribution, and virtualization-as-a-service. The vCDS is an iRODS-based data server specialized to the needs of a particular data-centric application. We use RPM scripts to build vCDS images in our local computing environment, our local Virtual Machine Environment, NASA's Nebula Cloud Services, and Amazon's Elastic Compute Cloud. Once provisioned into one or more of these virtualized resource classes, vCDSs can use iRODS's federation capabilities to create an integrated ecosystem of managed collections that is scalable and adaptable to changing resource requirements. This approach enables platform- or software-as-a-service deployment of vCDS and allows the NCCS to offer virtualization-as-a-service: a capacity to respond in an agile way to new customer requests for data services.
Experimental demonstration of tri-aperture Differential Synthetic Aperture Ladar
NASA Astrophysics Data System (ADS)
Zhao, Zhilong; Huang, Jianyu; Wu, Shudong; Wang, Kunpeng; Bai, Tao; Dai, Ze; Kong, Xinyi; Wu, Jin
2017-04-01
A tri-aperture Differential Synthetic Aperture Ladar (DSAL) is demonstrated in laboratory, which is configured by using one common aperture to transmit the illuminating laser and another two along-track receiving apertures to collect back-scattered laser signal for optical heterodyne detection. The image formation theory on this tri-aperture DSAL shows that there are two possible methods to reconstruct the azimuth Phase History Data (PHD) for aperture synthesis by following standard DSAL principle, either method resulting in a different matched filter as well as an azimuth image resolution. The experimental setup of the tri-aperture DSAL adopts a frequency chirped laser of about 40 mW in 1550 nm wavelength range as the illuminating source and an optical isolator composed of a polarizing beam-splitter and a quarter wave plate to virtually line the three apertures in the along-track direction. Various DSAL images up to target distance of 12.9 m are demonstrated using both PHD reconstructing methods.
NASA Astrophysics Data System (ADS)
Qu, Junbo; Yan, Tie; Sun, Xiaofeng; Chen, Ye; Pan, Yi
2017-10-01
As drilling technology reaches deeper strata, overflow, and especially gas cut, occurs frequently, and the flow regime in the wellbore annulus then changes from single-phase drilling-fluid flow into gas-liquid two-phase flow. Averaged two-fluid model equations and basic principles of fluid mechanics are used to establish the continuity and momentum conservation equations of the gas and liquid phases, respectively. The relationship between pressure and density of the gas and liquid is introduced to obtain a hyperbolic system; the expression for the dimensionless eigenvalue of the system is derived using the method of characteristics, and the wellbore flow regime is analyzed to obtain the critical gas content under different virtual mass force coefficients. Results show that the range of the eigenvalues becomes smaller and smaller with increasing gas content. When the gas content reaches the critical point, the dimensionless eigenvalue has no real solution, and the wellbore flow regime changes from bubble flow to slug flow. When the virtual mass force coefficients are 0.50, 0.60, 0.70 and 0.80, the critical gas contents are 0.32, 0.34, 0.37 and 0.39, respectively. The higher the virtual mass force coefficient, the higher the wellbore gas content corresponding to the critical point of the flow regime transition, in good agreement with previous experimental results. Therefore, whether the dimensionless eigenvalue has a real solution can be determined from the virtual mass force coefficient and the wellbore gas content, from which the critical condition for the wellbore flow regime transition can be obtained. This can provide theoretical support for accurate judgment of the annular flow regime.
Simplified Virtualization in a HEP/NP Environment with Condor
NASA Astrophysics Data System (ADS)
Strecker-Kellogg, W.; Caramarcu, C.; Hollowell, C.; Wong, T.
2012-12-01
In this work we will address the development of a simple prototype virtualized worker node cluster, using Scientific Linux 6.x as a base OS, KVM and the libvirt API for virtualization, and the Condor batch software to manage virtual machines. The discussion in this paper provides details on our experience with building, configuring, and deploying the various components from bare metal, including the base OS, creation and distribution of the virtualized OS images and the integration of batch services with the virtual machines. Our focus was on simplicity and interoperability with our existing architecture.
Bongianni, Wayne L.
1984-01-01
A method and apparatus for electronically focusing and electronically scanning microscopic specimens are given. In the invention, visual images of even moving, living, opaque specimens can be acoustically obtained and viewed with virtually no time needed for processing (i.e., real time processing is used). And planar samples are not required. The specimens (if planar) need not be moved during scanning, although it will be desirable and possible to move or rotate nonplanar specimens (e.g., laser fusion targets) against the lens of the apparatus. No coupling fluid is needed, so specimens need not be wetted. A phase acoustic microscope is also made from the basic microscope components together with electronic mixers.
Bongianni, W.L.
1984-04-17
A method and apparatus for electronically focusing and electronically scanning microscopic specimens are given. In the invention, visual images of even moving, living, opaque specimens can be acoustically obtained and viewed with virtually no time needed for processing (i.e., real time processing is used). And planar samples are not required. The specimens (if planar) need not be moved during scanning, although it will be desirable and possible to move or rotate nonplanar specimens (e.g., laser fusion targets) against the lens of the apparatus. No coupling fluid is needed, so specimens need not be wetted. A phase acoustic microscope is also made from the basic microscope components together with electronic mixers. 7 figs.
Virtual Ed. Faces Sharp Criticism
ERIC Educational Resources Information Center
Quillen, Ian
2011-01-01
It's been a rough time for the image of K-12 virtual education. Studies in Colorado and Minnesota have suggested that full-time online students are struggling to match the achievement levels of their peers in brick-and-mortar schools. Articles in "The New York Times" questioned not only the academic results for students in virtual schools, but…
Research on 3D virtual campus scene modeling based on 3ds Max and VRML
NASA Astrophysics Data System (ADS)
Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue
2015-12-01
With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A 3D virtual campus model can not only represent real-world objects in a natural, realistic and vivid way, but also extend the campus across the time and space dimensions, combining the school environment with its information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land parcels, etc. Dynamic interactive functions are then realized by programming the object models created in 3ds Max with VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, and on optimization strategies for a variety of real-time processing techniques in the scene design process, preserving texture map image quality while improving the running speed of texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.
Kin, Taichi; Nakatomi, Hirofumi; Shono, Naoyuki; Nomura, Seiji; Saito, Toki; Oyama, Hiroshi; Saito, Nobuhito
2017-10-15
Simulation and planning of surgery using a virtual reality model are becoming common with advances in computer technology. In this study, we conducted a literature search to find trends in virtual simulation of surgery for brain tumors. A MEDLINE search for "neurosurgery AND (simulation OR virtual reality)" retrieved a total of 1,298 articles published in the past 10 years. After eliminating studies designed solely for education and training purposes, 28 articles about clinical applications remained. The finding that the vast majority of the articles were about education and training rather than clinical applications suggests that several issues need to be addressed before surgical simulation can be applied clinically. In addition, 10 of the 28 articles were from Japanese groups. In general, the 28 articles demonstrated clinical benefits of virtual surgical simulation. Simulation was particularly useful for better understanding complicated spatial relations of anatomical landmarks and for examining surgical approaches. In some studies, virtual reality models were used with either a surgical navigation system or augmented reality technology, which projects virtual reality images onto the operating field. Reported problems were difficulties in standardized, objective evaluation of surgical simulation systems; inability to respond to tissue deformation caused by surgical maneuvers; absence of system functionality to reflect features of tissue (e.g., hardness and adhesion); and many problems with image processing. Descriptions of image processing tended to be insufficient, indicating that the level of evidence, risk of bias, precision, and reproducibility need to be addressed for further advances and ultimately for full clinical application.
Ferng, Alice S; Oliva, Isabel; Jokerst, Clinton; Avery, Ryan; Connell, Alana M; Tran, Phat L; Smith, Richard G; Khalpey, Zain
2017-08-01
Since the creation of SynCardia's 50 cc Total Artificial Hearts (TAHs), patients with irreversible biventricular failure now have two sizing options. Herein, a case series of three patients who have undergone successful 50 and 70 cc TAH implantation with complete closure of the chest cavity utilizing preoperative "virtual implantation" of different sized devices for surgical planning are presented. Computed tomography (CT) images were used for preoperative planning prior to TAH implantation. Three-dimensional (3D) reconstructions of preoperative chest CT images were generated and both 50 and 70 cc TAHs were virtually implanted into patients' thoracic cavities. During the simulation, the TAHs were projected over the native hearts in a similar position to the actual implantation, and the relationship between the devices and the atria, ventricles, chest wall, and diaphragm were assessed. The 3D reconstructed images and virtual modeling were used to simulate and determine for each patient if the 50 or 70 cc TAH would have a higher likelihood of successful implantation without complications. Subsequently, all three patients received clinical implants of the properly sized TAH based on virtual modeling, and their chest cavities were fully closed. This virtual implantation increases our confidence that the selected TAH will better fit within the thoracic cavity allowing for improved surgical outcome. Clinical implantation of the TAHs showed that our virtual modeling was an effective method for determining the correct fit and sizing of 50 and 70 cc TAHs. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Pérez Ramos, A.; Robleda Prieto, G.
2016-06-01
An indoor Gothic apse provides a complex environment for virtualization using imaging techniques due to its light conditions and architecture. Light entering through large windows, in combination with the apse shape, makes it difficult to find proper conditions for photo capture for reconstruction purposes. Thus, documentation techniques based on images are usually replaced by scanning techniques inside churches. Nevertheless, the need to use Terrestrial Laser Scanning (TLS) for indoor virtualization means a significant increase in the final surveying cost. So, in most cases, scanning techniques are used to generate dense point clouds. However, many TLS internal cameras are not able to provide colour images or cannot reach the image quality that can be obtained using an external camera. Therefore, external high-quality images are often used to build high resolution textures for these models. This paper aims to solve the problem posed by virtualizing indoor Gothic churches, making the task more affordable using exclusively image-based techniques. It reviews a previously proposed methodology using a DSLR camera with an 18-135 lens commonly used for close range photogrammetry and adds another one using an HDR 360° camera with four lenses that makes the task easier and faster in comparison with the previous one. Fieldwork and office work are simplified. The proposed methodology provides photographs in conditions good enough for building point clouds and textured meshes. Furthermore, the same imaging resources can be used to generate further deliverables, for instance immersive virtual tours, without extra time spent in the field. In order to verify the usefulness of the method, it was applied to the apse, since the apse is considered one of the most complex elements of Gothic churches, and the approach could be extended to the whole building.
Monocular display unit for 3D display with correct depth perception
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Hosomi, Takashi
2009-11-01
Research on virtual-reality systems has become popular, and the technology has been applied to medical engineering, educational engineering, CAD/CAM systems and so on. 3D imaging display systems fall into two types by presentation method: 3D display systems using special glasses and monitor systems requiring no special glasses. Liquid crystal displays (LCDs) have recently come into common use, and such a display unit can provide a displaying area the same size as the image screen on the panel. A display system requiring no special glasses is useful for a 3D TV monitor, but it has the drawback that the size of the monitor restricts the visual field for displaying images. Thus a conventional display can show only one screen, and it is impossible to enlarge the screen, for example to twice its size. To enlarge the display area, the authors have developed an enlarging method using a mirror. Our extension method enables observers to see a virtual image plane and enlarges the screen area twofold. In the developed display unit, we made use of an image separating technique using polarized glasses, a parallax barrier or a lenticular lens screen for 3D imaging. The mirror generates the virtual image plane and enlarges the screen area twofold, while the 3D display system using special glasses can also display virtual images over a wide area. In this paper, we present a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.
Sayseng, Vincent; Grondin, Julien; Konofagou, Elisa E
2018-05-01
Coherent compounding methods using the full or partial transmit aperture have been investigated as a possible means of increasing strain measurement accuracy in cardiac strain imaging; however, the optimal transmit parameters in either compounding approach have yet to be determined. The relationship between strain estimation accuracy and transmit parameters-specifically the subaperture, angular aperture, tilt angle, number of virtual sources, and frame rate-in partial aperture (subaperture compounding) and full aperture (steered compounding) fundamental mode cardiac imaging was thus investigated and compared. Field II simulation of a 3-D cylindrical annulus undergoing deformation and twist was developed to evaluate accuracy of 2-D strain estimation in cross-sectional views. The tradeoff between frame rate and number of virtual sources was then investigated via transthoracic imaging in the parasternal short-axis view of five healthy human subjects, using the strain filter to quantify estimation precision. Finally, the optimized subaperture compounding sequence (25-element subperture, 90° angular aperture, 10 virtual sources, 300-Hz frame rate) was compared to the optimized steered compounding sequence (60° angular aperture, 15° tilt, 10 virtual sources, 300-Hz frame rate) via transthoracic imaging of five healthy subjects. Both approaches were determined to estimate cumulative radial strain with statistically equivalent precision (subaperture compounding E(SNRe %) = 3.56, and steered compounding E(SNRe %) = 4.26).
Creating photorealistic virtual model with polarization-based vision system
NASA Astrophysics Data System (ADS)
Shibata, Takushi; Takahashi, Toru; Miyazaki, Daisuke; Sato, Yoichi; Ikeuchi, Katsushi
2005-08-01
Recently, 3D models have been used in many fields such as education, medical services, entertainment, art and digital archiving, owing to advances in computing power, and the demand for photorealistic virtual models keeps increasing for greater realism. In the computer vision field, a number of techniques have been developed for creating a virtual model by observing the real object. In this paper, we propose a method for creating a photorealistic virtual model by using a laser range sensor and a polarization-based image capture system. We capture range and color images of the object, which is rotated on a rotary table. Using the reconstructed object shape and the sequence of color images of the object, the parameters of a reflection model are estimated in a robust manner. As a result, we can build a photorealistic 3D model that accounts for surface reflection. The key point of the proposed method is that, first, the diffuse and specular reflection components are separated from the color image sequence, and then, the reflectance parameters of each reflection component are estimated separately. In separating the reflection components, we use a polarization filter. This approach enables estimation of the reflectance properties of real objects whose surfaces show specularity as well as diffusely reflected light. The recovered object shape and reflectance properties are then used for synthesizing object images with realistic shading effects under arbitrary illumination conditions.
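A minimal sketch of the reflection-component separation underlying the system above, assuming images captured through a linear polarizer at several angles, with the diffuse component unpolarized and the specular component varying sinusoidally with the polarizer angle; this is a common simplification, not the authors' full estimation procedure:

```python
import numpy as np

def separate_reflection(images):
    """images: (n_angles, H, W) intensities taken through a rotating linear
    polarizer. Returns per-pixel diffuse and specular component estimates."""
    i_min = images.min(axis=0)        # polarizer crossed with the specular polarization
    i_max = images.max(axis=0)        # polarizer aligned with the specular polarization
    diffuse = 2.0 * i_min             # unpolarized light is halved by the polarizer
    specular = i_max - i_min          # polarized (specular) contribution
    return diffuse, specular

# Toy usage: constant diffuse term plus a cos^2 specular term over 8 angles.
angles = np.linspace(0, np.pi, 8, endpoint=False)[:, None, None]
imgs = 0.3 + 0.5 * np.cos(angles - 0.7) ** 2
diffuse, specular = separate_reflection(imgs)
print(float(diffuse.mean()), float(specular.mean()))   # ~0.6 and ~0.5
```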
Image-based path planning for automated virtual colonoscopy navigation
NASA Astrophysics Data System (ADS)
Hong, Wei
2008-03-01
Virtual colonoscopy (VC) is a noninvasive method for colonic polyp screening, by reconstructing three-dimensional models of the colon using computerized tomography (CT). In virtual colonoscopy fly-through navigation, it is crucial to generate an optimal camera path for efficient clinical examination. In conventional methods, the centerline of the colon lumen is usually used as the camera path. In order to extract colon centerline, some time consuming pre-processing algorithms must be performed before the fly-through navigation, such as colon segmentation, distance transformation, or topological thinning. In this paper, we present an efficient image-based path planning algorithm for automated virtual colonoscopy fly-through navigation without the requirement of any pre-processing. Our algorithm only needs the physician to provide a seed point as the starting camera position using 2D axial CT images. A wide angle fisheye camera model is used to generate a depth image from the current camera position. Two types of navigational landmarks, safe regions and target regions are extracted from the depth images. Camera position and its corresponding view direction are then determined using these landmarks. The experimental results show that the generated paths are accurate and increase the user comfort during the fly-through navigation. Moreover, because of the efficiency of our path planning algorithm and rendering algorithm, our VC fly-through navigation system can still guarantee 30 FPS.
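A minimal sketch of the image-based planning idea: from a depth image rendered at the current camera position, pick the direction with the largest free depth (toward the target region) and advance with a safety margin. The simple angular mapping, thresholds, and step size are illustrative assumptions rather than the paper's fisheye model and landmark extraction:

```python
import numpy as np

def next_view_direction(depth, fov=np.pi):
    """depth: (H, W) depth image from the current camera position.
    Returns a unit direction toward the deepest (farthest) region, i.e. the
    direction the camera should advance along the colon lumen."""
    h, w = depth.shape
    r, c = np.unravel_index(np.argmax(depth), depth.shape)
    # Map the pixel to azimuth/elevation angles inside the field of view.
    az = (c / (w - 1) - 0.5) * fov
    el = (0.5 - r / (h - 1)) * fov * h / w
    d = np.array([np.cos(el) * np.sin(az), np.sin(el), np.cos(el) * np.cos(az)])
    return d / np.linalg.norm(d)

def advance_camera(position, depth, step=2.0):
    """Move the camera a fixed step toward the farthest visible region,
    clipped so it never travels more than half the available free depth."""
    direction = next_view_direction(depth)
    free = float(depth.max())
    return position + direction * min(step, 0.5 * free)

# Toy usage: a depth image whose deepest point sits right of centre.
depth = np.ones((120, 160)); depth[60, 120] = 40.0
print(advance_camera(np.zeros(3), depth))
```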
High-immersion three-dimensional display of the numerical computer model
NASA Astrophysics Data System (ADS)
Xing, Shujun; Yu, Xunbo; Zhao, Tianqi; Cai, Yuanfa; Chen, Duo; Chen, Zhidong; Sang, Xinzhu
2013-08-01
High-immersion three-dimensional (3D) displays are valuable tools for many applications, such as designing and constructing buildings and houses, industrial architecture design, aeronautics, scientific research, entertainment, media advertisement, military areas and so on. However, most technologies provide the 3D display in front of screens that are parallel to the walls, and the sense of immersion is decreased. To get correct multi-view stereo ground images, the cameras' photosensitive surfaces should be parallel to the common focus plane and the cameras' optical axes should be offset toward the center of the common focus plane in both the vertical and horizontal directions. It is very common to use virtual cameras, i.e., ideal pinhole cameras, to display 3D models in a computer system. We can use virtual cameras to simulate the shooting method for multi-view ground-based stereo images. Here, two virtual shooting methods for ground-based high-immersion 3D display are presented. The position of each virtual camera is determined by the viewer's eye position in the real world. When the observer stands within the circumcircle of the 3D ground display, offset perspective projection virtual cameras are used. If the observer stands outside the circumcircle of the 3D ground display, offset perspective projection virtual cameras and orthogonal projection virtual cameras are adopted. In this paper, we mainly discuss the parameter settings of the virtual cameras. The near clip plane parameter setting is the main point in the first method, while the rotation angle of the virtual cameras is the main point in the second method. In order to validate the results, we use Direct3D and OpenGL to render scenes from different viewpoints and generate a stereoscopic image. A realistic visualization system for 3D models is constructed and demonstrated for horizontal viewing, which provides high-immersion 3D visualization. The displayed 3D scenes are compared with the real objects in the real world.
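A minimal sketch of the offset (asymmetric-frustum) perspective projection mentioned above, which builds an OpenGL-style projection matrix from the viewer's eye position relative to a fixed display plane; the screen corner coordinates and clip distances are illustrative assumptions:

```python
import numpy as np

def offset_perspective(eye, screen_min, screen_max, near=0.1, far=100.0):
    """Asymmetric-frustum projection for a viewer at `eye` looking at an
    axis-aligned screen rectangle lying in the z = 0 plane.
    screen_min/screen_max: (x, y) corners of the display surface."""
    ex, ey, ez = eye                       # ez > 0: distance from eye to screen plane
    l = (screen_min[0] - ex) * near / ez   # frustum extents on the near plane
    r = (screen_max[0] - ex) * near / ez
    b = (screen_min[1] - ey) * near / ez
    t = (screen_max[1] - ey) * near / ez
    return np.array([
        [2 * near / (r - l), 0, (r + l) / (r - l), 0],
        [0, 2 * near / (t - b), (t + b) / (t - b), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0]])

# Toy usage: a viewer standing above and to the side of a floor screen.
P = offset_perspective(eye=(0.4, 0.2, 1.5), screen_min=(-1.0, -0.5), screen_max=(1.0, 0.5))
print(P.round(3))
```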
[Virtual reality in video-assisted thoracoscopic lung segmentectomy].
Onuki, Takamasa
2009-07-01
The branching patterns of pulmonary arteries and veins vary greatly in the pulmonary hilar region and are very complicated. We attempted to reconstruct anatomically correct images using a freeware program. After uploading the images to a personal computer, the bronchi, pulmonary arteries and veins were traced by moving up and down through the images, and the location and thickness of the bronchi and pulmonary vasculature were indicated as different-sized cylinders. Next, based on the resulting numerical data, a 3D image was reconstructed using the Metasequoia shareware. The reconstructed images can be manipulated by virtual surgical procedures such as reshaping, cutting and moving. This system would be very helpful in complicated video-assisted thoracic surgery such as lung segmentectomy.
Five years of experience teaching pathology to dental students using the WebMicroscope
2011-01-01
Background We describe the development and evaluation of the user-friendly, web-based virtual microscopy system WebMicroscope for teaching dental students basic and oral pathology. Traditional student microscopes were replaced by computer workstations. Methods The transition of the basic and oral pathology courses from light to virtual microscopy was completed gradually over a five-year period. A pilot study was conducted in academic year 2005/2006 to estimate the feasibility of integrating virtual microscopy into a traditional light microscopy-based pathology course. The entire training set of glass slides was subsequently converted to virtual slides and placed on the WebMicroscope server. With access to fully digitized slides on the web through a browser and a viewer plug-in, the computer has become a perfect companion for the student. Results The study material now consists of over 400 fully digitized slides covering 15 entities in basic and systemic pathology and 15 entities in oral pathology. Digitized slides are linked with still macro- and microscopic images, organized with clinical information into virtual cases and supplemented with text files, a syllabus, PowerPoint presentations and animations on the web, serving additionally as material for individual studies. After their examinations, the students rated the use of the software, the quality of the images, the ease of handling the images, and the effective use of virtual slides during the laboratory practicals. Responses were evaluated on a standardized scale. Thanks to the positive opinions and support from the students, the satisfaction surveys have shown progressive improvement over the past 5 years. The WebMicroscope as a didactic tool for laboratory practicals was rated over 8 on a 1-10 scale for basic and systemic pathology and 9/10 for oral pathology, especially as various students' suggestions were implemented. Overall, the quality of the images was rated as very good. Conclusions An overwhelming majority of our students regarded the possibility of using virtual slides at their convenience as highly desirable. Our students and faculty consider the use of the virtual microscope for the study of basic as well as oral pathology a significant improvement over the light microscope. PMID:21489183
Virtual Reality Exploration and Planning for Precision Colorectal Surgery.
Guerriero, Ludovica; Quero, Giuseppe; Diana, Michele; Soler, Luc; Agnus, Vincent; Marescaux, Jacques; Corcione, Francesco
2018-06-01
Medical software can build a digital clone of the patient with 3-dimensional reconstruction of Digital Imaging and Communication in Medicine images. The virtual clone can be manipulated (rotations, zooms, etc), and the various organs can be selectively displayed or hidden to facilitate a virtual reality preoperative surgical exploration and planning. We present preliminary cases showing the potential interest of virtual reality in colorectal surgery for both cases of diverticular disease and colonic neoplasms. This was a single-center feasibility study. The study was conducted at a tertiary care institution. Two patients underwent a laparoscopic left hemicolectomy for diverticular disease, and 1 patient underwent a laparoscopic right hemicolectomy for cancer. The 3-dimensional virtual models were obtained from preoperative CT scans. The virtual model was used to perform preoperative exploration and planning. Intraoperatively, one of the surgeons was manipulating the virtual reality model, using the touch screen of a tablet, which was interactively displayed to the surgical team. The main outcome was evaluation of the precision of virtual reality in colorectal surgery planning and exploration. In 1 patient undergoing laparoscopic left hemicolectomy, an abnormal origin of the left colic artery beginning as an extremely short common trunk from the inferior mesenteric artery was clearly seen in the virtual reality model. This finding was missed by the radiologist on CT scan. The precise identification of this vascular variant granted a safe and adequate surgery. In the remaining cases, the virtual reality model helped to precisely estimate the vascular anatomy, providing key landmarks for a safer dissection. A larger sample size would be necessary to definitively assess the efficacy of virtual reality in colorectal surgery. Virtual reality can provide an enhanced understanding of crucial anatomical details, both preoperatively and intraoperatively, which could contribute to improve safety in colorectal surgery.
Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan
2017-03-01
The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in Duke Lesion Tool (Duke University), and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc., Syngo.via, Siemens Healthcare, and IntelliSpace, Philips Healthcare). Consensus based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) shows -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (< 4% difference) with p >.05 in most cases. Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.
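A minimal sketch of the bias metric used in the comparison above: the signed percent difference between a tool's volume estimate and the reference volume, averaged over lesions together with its standard error (variable names and the toy numbers are illustrative):

```python
import numpy as np

def percent_bias(measured, reference):
    """Signed percent volume error for each lesion."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * (measured - reference) / reference

def mean_bias_with_se(measured, reference):
    """Average percent bias and its standard error over a set of lesions."""
    pb = percent_bias(measured, reference)
    return pb.mean(), pb.std(ddof=1) / np.sqrt(pb.size)

# Toy usage: a tool that tends to underestimate lesion volume.
ref = np.array([500.0, 1200.0, 800.0, 950.0])        # mm^3, reference volumes
est = np.array([455.0, 1110.0, 760.0, 900.0])        # mm^3, segmentation output
print(mean_bias_with_se(est, ref))                   # roughly (-7%, small SE)
```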
Brewer, LaPrincess C; Kaihoi, Brian; Zarling, Kathleen K; Squires, Ray W; Thomas, Randal; Kopecky, Stephen
2015-04-08
Despite proven benefits through the secondary prevention of cardiovascular disease (CVD) and reduction of mortality, cardiac rehabilitation (CR) remains underutilized in cardiac patients. Underserved populations most affected by CVD, including rural residents, low socioeconomic status patients, and racial/ethnic minorities, have the lowest participation rates due to access barriers. Internet- and mobile-based lifestyle interventions have emerged as potential modalities to complement and increase accessibility to CR. An outpatient CR program using virtual world technology may provide an effective alternative to conventional CR by overcoming patient access limitations such as geographic distance, work schedule constraints, and transportation. The objective of this paper is to describe the research protocol of a two-phased pilot study that will assess the feasibility (Phase 1) and comparative effectiveness (Phase 2) of a virtual world-based (Second Life) CR program as an extension of a conventional CR program in achieving healthy behavioral change among post-acute coronary syndrome (ACS) and post-percutaneous coronary intervention (PCI) patients. We hypothesize that virtual world CR users will improve behaviors (physical activity, diet, and smoking) to a greater degree than conventional CR participants. In Phase 1, we will recruit at least 10 patients enrolled in outpatient CR who were recently hospitalized for an ACS (unstable angina, ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction) or who recently underwent elective PCI at Mayo Clinic Hospital, Rochester Campus in Rochester, MN, with at least one modifiable lifestyle risk factor target (sedentary lifestyle, unhealthy diet, or current smoking). Recruited patients will participate in a 12-week, virtual world health education program which will provide feedback on the feasibility, usability, and design of the intervention. During Phase 2, we will conduct a 2-arm, parallel group, single-center, randomized controlled trial (RCT). Patients will be randomized at a 1:1 ratio to adjunct virtual world-based CR with conventional CR or conventional CR only. The primary outcome is a composite including at least one of the following: (1) at least 150 minutes of physical activity per week, (2) daily consumption of five or more fruits and vegetables, and (3) smoking cessation. Patients will be assessed at 3, 6, and 12 months. The Phase 1 feasibility study is currently open for recruitment, which will be followed by the Phase 2 RCT. The anticipated completion date for the study is May 2016. While research on the use of virtual world technology in health programs is in its infancy, it offers unique advantages over current Web-based health interventions, including social interactivity and active learning. It also increases accessibility for vulnerable populations who have higher burdens of CVD. This study will yield results on the effectiveness of a virtual world-based CR program as an innovative platform to influence healthy lifestyle behavior and self-efficacy.
Single-phase dual-energy CT allows for characterization of renal masses as benign or malignant.
Graser, Anno; Becker, Christoph R; Staehler, Michael; Clevert, Dirk A; Macari, Michael; Arndt, Niko; Nikolaou, Konstantin; Sommer, Wieland; Stief, Christian; Reiser, Maximilian F; Johnson, Thorsten R C
2010-07-01
To evaluate the diagnostic accuracy of dual-energy CT (DECT) in renal mass characterization using a single-phase acquisition. A total of 202 patients (148 males, 54 females; 63 +/- 13 years) with ultrasound-based suspicion of a renal mass underwent unenhanced single-energy and nephrographic phase DECT on a dual source scanner (Siemens Somatom Definition Dual Source, n = 174; Somatom Definition Flash, n = 28). Scan parameters for DECT were: tube potential, 80/100 and 100/Sn140 kVp; exposure, 404/300 and 96/232 effective mAs; collimation, 14 x 1.2/32 x 0.6 mm. Two abdominal radiologists assessed DECT and SECT image quality and noise on a 5-point visual analogue scale. Using solely the DE acquisition, including virtual nonenhanced (VNE) and color coded iodine images that enable direct visualization of iodine, masses were characterized as benign or malignant. In a second reading session after 34 to 72 (average: 55) days, the same assessment was again performed using both the true nonenhanced (TNE) and nephrographic phase scans, thereby simulating conventional single-energy CT. Sensitivities, specificities, diagnostic accuracies, and interpretation times were recorded for both reading paradigms. Dose reduction of a single-phase over a dual-phase protocol was calculated. Results were tested for statistical significance using the paired Wilcoxon signed rank test and the Student t test. Differences in sensitivities were tested for significance using the McNemar test. Of the 202 patients, 115 (56.9%) underwent surgical resection of renal masses. Histopathology showed malignancy in 99 and benign tumors in 18 patients; in 48 patients (23.7%), follow-up imaging showed size stability of lesions diagnosed as benign; and 37 patients (18.3%) had no mass. Based on DECT only, 95/99 (96.0%) patients with malignancy and 96/103 (93.2%) patients without malignancy were correctly identified, for an overall accuracy of 94.6%. The dual-phase approach identified 96/99 (97.0%) and 98/103 (95.1%), accuracy 96.0%, P > 0.05 for both. Mean interpretation time was 2.2 +/- 0.8 minutes for DECT, and 3.5 +/- 1.0 minutes for the dual-phase protocol, P < 0.001. Mean VNE/TNE image quality was 1.68 +/- 0.65/1.30 +/- 0.59, noise was 2.03 +/- 0.57/1.18 +/- 0.29, P < 0.001 for both. Omission of the true unenhanced phase led to a 48.9 +/- 7.0% dose reduction. DECT allows for fast and accurate characterization of renal masses in a single-phase acquisition. Interpretation of color coded images significantly reduces interpretation time. Omission of a nonenhanced acquisition can reduce radiation exposure by almost 50%.
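A minimal sketch of the single-phase decision logic described above: a virtual nonenhanced value and a nephrographic-phase value are compared per lesion, and the mass is called enhancing (suspicious) when the iodine-related uptake exceeds a threshold. The 20 HU cutoff and the plain subtraction are illustrative assumptions, not the authors' exact criteria:

```python
def characterize_renal_mass(nephrographic_hu, virtual_nonenhanced_hu,
                            enhancement_threshold_hu=20.0):
    """Classify a renal mass from a single contrast-enhanced dual-energy
    acquisition: enhancement is estimated as the difference between the
    nephrographic-phase attenuation and the virtual nonenhanced attenuation."""
    enhancement = nephrographic_hu - virtual_nonenhanced_hu
    return "suspicious (enhancing)" if enhancement >= enhancement_threshold_hu \
        else "likely benign (non-enhancing)"

# Toy usage: a hyperdense cyst (no iodine uptake) vs. an enhancing tumor.
print(characterize_renal_mass(62.0, 58.0))   # likely benign (non-enhancing)
print(characterize_renal_mass(95.0, 34.0))   # suspicious (enhancing)
```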
Locally linear regression for pose-invariant face recognition.
Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen
2007-07-01
The variation of facial appearance due to the viewpoint (/pose) degrades face recognition systems considerably, which is one of the bottlenecks in face recognition. One of the possible solutions is generating virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple, but efficient, novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapped local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show distinct advantage of the proposed method over Eigen light-field method.
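A minimal sketch of the LLR idea: for each local patch position, learn a least-squares linear map from non-frontal patches to frontal patches on training pairs, then synthesize the virtual frontal view patch by patch and average the overlaps. The patch size, stride, and ridge regularization are illustrative choices, not the paper's exact settings:

```python
import numpy as np

def extract_patches(img, size=8, stride=4):
    """Return (n_patches, size*size) overlapped patches and their top-left corners."""
    h, w = img.shape
    patches, corners = [], []
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            patches.append(img[r:r + size, c:c + size].ravel())
            corners.append((r, c))
    return np.array(patches), corners

def fit_llr(nonfrontal_imgs, frontal_imgs, size=8, stride=4, lam=1e-3):
    """For every patch position, fit a ridge-regularized linear map W_i such
    that frontal_patch ~= [nonfrontal_patch, 1] @ W_i."""
    maps = []
    n_pos = len(extract_patches(nonfrontal_imgs[0], size, stride)[1])
    for i in range(n_pos):
        X = np.array([np.append(extract_patches(im, size, stride)[0][i], 1.0)
                      for im in nonfrontal_imgs])               # (n_train, d+1)
        Y = np.array([extract_patches(im, size, stride)[0][i]
                      for im in frontal_imgs])                  # (n_train, d)
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
        maps.append(W)
    return maps

def synthesize_frontal(nonfrontal_img, maps, size=8, stride=4):
    """Predict each frontal patch and average overlapping predictions."""
    patches, corners = extract_patches(nonfrontal_img, size, stride)
    out = np.zeros_like(nonfrontal_img, dtype=float)
    weight = np.zeros_like(out)
    for (r, c), p, W in zip(corners, patches, maps):
        pred = np.append(p, 1.0) @ W
        out[r:r + size, c:c + size] += pred.reshape(size, size)
        weight[r:r + size, c:c + size] += 1.0
    return out / np.maximum(weight, 1.0)

# Toy usage: 20 random training pairs of 32x32 images.
rng = np.random.default_rng(0)
nf = [rng.random((32, 32)) for _ in range(20)]
fr = [np.fliplr(im) for im in nf]                 # stand-in "frontal" counterparts
maps = fit_llr(nf, fr)
virtual_frontal = synthesize_frontal(rng.random((32, 32)), maps)
print(virtual_frontal.shape)                      # (32, 32)
```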
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.
1995-01-01
This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.
Estimation of the state of solar activity type stars by virtual observations of CrAVO
NASA Astrophysics Data System (ADS)
Dolgov, A. A.; Shlyapnikov, A. A.
2012-05-01
This work presents the results of preprocessing negatives with direct images of the sky from the CrAO glass library, which became part of the on-line archive of the Crimean Astronomical Virtual Observatory (CrAVO). Based on the obtained data, parameters of the dwarf stars included in the catalogue "Stars with solar-type activity" (GTSh10) have been estimated. The following matters are considered: the methodology for searching negatives containing the positions of the studied stars and calculating the limiting magnitude; image viewing and reduction with the facilities of the International Virtual Observatory; and the preliminary results of the photometry of the studied objects.
Brennan, Darren D; Zamboni, Giulia; Sosna, Jacob; Callery, Mark P; Vollmer, Charles M V; Raptopoulos, Vassilios D; Kruskal, Jonathan B
2007-05-01
The purposes of this study were to combine a thorough understanding of the technical aspects of the Whipple procedure with advanced rendering techniques by introducing a virtual Whipple procedure and to evaluate the utility of this new rendering technique in prediction of the arterial variants that cross the anticipated surgical resection plane. The virtual Whipple is a novel technique that follows the complex surgical steps in a Whipple procedure. Three-dimensional reconstructed angiographic images are used to identify arterial variants for the surgeon as part of the preoperative radiologic assessment of pancreatic and ampullary tumors.
UkrVO astronomical WEB services
NASA Astrophysics Data System (ADS)
Mazhaev, A.
2017-02-01
The Ukraine Virtual Observatory (UkrVO) has been a member of the International Virtual Observatory Alliance (IVOA) since 2011. The virtual observatory (VO) is not a magic solution to all problems of data storage and processing, but it provides certain standards for building the infrastructure of an astronomical data center. Astronomical databases facilitate data mining and offer users easy access to observation metadata, images of the celestial sphere and results of image processing. The astronomical web services (AWS) of UkrVO give users handy tools for selecting data from large astronomical catalogues for a relatively small region of interest in the sky. Examples of AWS usage are shown.
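A minimal sketch of how such an astronomical web service is typically queried, using the IVOA Simple Cone Search convention (RA, DEC and search radius SR passed as URL parameters in decimal degrees). The endpoint URL below is a placeholder, not an actual UkrVO address:

```python
import requests

def cone_search(service_url, ra_deg, dec_deg, radius_deg):
    """Query a VO cone-search service for catalogue rows around a sky position.
    The response is a VOTable (XML) document listing matching sources."""
    params = {"RA": ra_deg, "DEC": dec_deg, "SR": radius_deg}
    response = requests.get(service_url, params=params, timeout=30)
    response.raise_for_status()
    return response.text          # VOTable XML, to be parsed by a VO client library

# Toy usage with a placeholder endpoint (replace with a real UkrVO service URL).
# votable_xml = cone_search("https://example.org/ukrvo/conesearch", 83.63, 22.01, 0.1)
```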
Wu, Rongli; Watanabe, Yoshiyuki; Satoh, Kazuhiko; Liao, Yen-Peng; Takahashi, Hiroto; Tanaka, Hisashi; Tomiyama, Noriyuki
2018-05-21
The aim of this study was to quantitatively compare the reduction in beam hardening artifact (BHA) and variance in computed tomography (CT) numbers of virtual monochromatic energy (VME) images obtained with 3 dual-energy computed tomography (DECT) systems at a given radiation dose. Five different iodine concentrations were scanned using dual-energy and single-energy (120 kVp) modes. The BHA and CT number variance were evaluated. For higher iodine concentrations, 40 and 80 mgI/mL, BHA on VME imaging was significantly decreased when the energy was higher than 50 keV (P = 0.003) and 60 keV (P < 0.001) for GE, higher than 80 keV (P < 0.001) and 70 keV (P = 0.002) for Siemens, and higher than 40 keV (P < 0.001) and 60 keV (P < 0.001) for Toshiba, compared with single-energy CT imaging. Virtual monochromatic energy imaging can decrease BHA and improve CT number accuracy in different dual-energy computed tomography systems, depending on energy levels and iodine concentrations.
Thali, M J; Dirnhofer, R; Becker, R; Oliver, W; Potter, K
2004-10-01
The study aimed to validate magnetic resonance microscopy (MRM) studies of forensic tissue specimens (skin samples with electric injury patterns) against the results from routine histology. Computed tomography and magnetic resonance imaging are fast becoming important tools in clinical and forensic pathology. This study is the first forensic application of MRM to the analysis of electric injury patterns in human skin. Three-dimensional high-resolution MRM images of fixed skin specimens provided a complete 3D view of the damaged tissues at the site of an electric injury as well as in neighboring tissues, consistent with histologic findings. The image intensity of the dermal layer in T2-weighted MRM images was reduced in the central zone due to carbonization or coagulation necrosis and increased in the intermediate zone because of dermal edema. A subjacent blood vessel with an intravascular occlusion supports the hypothesis that current traveled through the vascular system before arcing to ground. High-resolution imaging offers a noninvasive alternative to conventional histology in forensic wound analysis and can be used to perform 3D virtual histology.
Can we use virtual reality tools in the planning of an experiment?
NASA Astrophysics Data System (ADS)
Kucaba-Pietal, Anna; Szumski, Marek; Szczerba, Piotr
2015-03-01
Virtual reality (VR) has proved to be a particularly useful tool in engineering and design. A related area of aviation in which VR is particularly significant is flight training, as it requires many hours of practice and using real planes for all training is both expensive and dangerous. Research conducted at the Rzeszow University of Technology (RUT) showed that virtual reality can be successfully used for planning experiments during flight tests. The motivation for the study was the wing deformation measurements of a PW-6 glider in flight using the Image Pattern Correlation Technique (IPCT), planned within the framework of the AIM2 project. The VirlIPCT tool was constructed, which permits performing a virtual IPCT setup on an airplane. Using it, we can test camera position, camera resolution, and pattern application. Moreover, tests performed at RUT indicate that VirlIPCT can be used as a virtual IPCT image generator. This paper presents the results of the research on VirlIPCT.
Automated flight path planning for virtual endoscopy.
Paik, D S; Beaulieu, C F; Jeffrey, R B; Rubin, G D; Napel, S
1998-05-01
In this paper, a novel technique for rapid and automatic computation of flight paths for guiding virtual endoscopic exploration of three-dimensional medical images is described. While manually planning flight paths is a tedious and time consuming task, our algorithm is automated and fast. Our method for positioning the virtual camera is based on the medial axis transform but is much more computationally efficient. By iteratively correcting a path toward the medial axis, the necessity of evaluating simple point criteria during morphological thinning is eliminated. The virtual camera is also oriented in a stable viewing direction, avoiding sudden twists and turns. We tested our algorithm on volumetric data sets of eight colons, one aorta and one bronchial tree. The algorithm computed the flight paths in several minutes per volume on an inexpensive workstation with minimal computation time added for multiple paths through branching structures (10%-13% per extra path). The results of our algorithm are smooth, centralized paths that aid in the task of navigation in virtual endoscopic exploration of three-dimensional medical images.
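A rough sketch of the path-centering idea (pulling sampled path points toward the medial axis by following the gradient of a distance transform) is given below. It assumes a binary lumen segmentation and is only an illustration of the general approach, not the authors' exact algorithm, which also handles smoothing, branching structures, and camera orientation.

    import numpy as np
    from scipy import ndimage

    def center_path(path, lumen_mask, n_iter=50, step=0.5):
        """Nudge path points toward the lumen centreline.

        path: (N, 3) float voxel coordinates inside the lumen.
        lumen_mask: 3D boolean array, True inside the organ.
        """
        dist = ndimage.distance_transform_edt(lumen_mask)  # distance to the wall
        grads = np.gradient(dist)                          # gradient points toward the centre
        path = path.astype(float).copy()
        for _ in range(n_iter):
            idx = tuple(np.round(path).astype(int).T)      # nearest-voxel lookup
            g = np.stack([grads[a][idx] for a in range(3)], axis=1)
            norm = np.linalg.norm(g, axis=1, keepdims=True)
            path += step * g / np.maximum(norm, 1e-9)
        return path

    # Toy example: a straight tubular lumen along the last axis and an off-centre rough path.
    mask = np.zeros((20, 20, 60), dtype=bool)
    mask[8:13, 8:13, :] = True
    rough = np.stack([np.full(60, 9.0), np.full(60, 12.0), np.arange(60.0)], axis=1)
    centred = center_path(rough, mask)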
Virtual arthroscopy of the visible human female temporomandibular joint.
Ishimaru, T; Lew, D; Haller, J; Vannier, M W
1999-07-01
This study was designed to obtain views of the temporomandibular joint (TMJ) by means of computed arthroscopic simulation (virtual arthroscopy) using three-dimensional (3D) processing. Volume renderings of the TMJ from very thin cryosection slices of the Visible Human Female were taken off the Internet. Analyze(AVW) software (Biomedical Imaging Resource, Mayo Foundation, Rochester, MN) on a Silicon Graphics 02 workstation (Mountain View, CA) was then used to obtain 3D images and allow the navigation "fly-through" of the simulated joint. Good virtual arthroscopic views of the upper and lower joint spaces of both TMJs were obtained by fly-through simulation from the lateral and endaural sides. It was possible to observe the presence of a partial defect in the articular disc and an osteophyte on the condyle. Virtual arthroscopy provided visualization of regions not accessible to real arthroscopy. These results indicate that virtual arthroscopy will be a new technique to investigate the TMJ of the patient with TMJ disorders in the near future.
V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis
NASA Astrophysics Data System (ADS)
Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.
2011-09-01
In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. v-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
NASA Astrophysics Data System (ADS)
Abercrombie, S. P.; Menzies, A.; Goddard, C.
2017-12-01
Virtual and augmented reality enable scientists to visualize environments that are very difficult, or even impossible to visit, such as the surface of Mars. A useful immersive visualization begins with a high quality reconstruction of the environment under study. This presentation will discuss a photogrammetry pipeline developed at the Jet Propulsion Laboratory to reconstruct 3D models of the surface of Mars using stereo images sent back to Earth by the Curiosity Mars rover. The resulting models are used to support a virtual reality tool (OnSight) that allows scientists and engineers to visualize the surface of Mars as if they were standing on the red planet. Images of Mars present challenges to existing scene reconstruction solutions. Surface images of Mars are sparse with minimal overlap, and are often taken from extremely different viewpoints. In addition, the specialized cameras used by Mars rovers are significantly different than consumer cameras, and GPS localization data is not available on Mars. This presentation will discuss scene reconstruction with an emphasis on coping with limited input data, and on creating models suitable for rendering in virtual reality at high frame rate.
Dual-Energy Computed Tomography in Cardiothoracic Vascular Imaging.
De Santis, Domenico; Eid, Marwen; De Cecco, Carlo N; Jacobs, Brian E; Albrecht, Moritz H; Varga-Szemes, Akos; Tesche, Christian; Caruso, Damiano; Laghi, Andrea; Schoepf, Uwe Joseph
2018-07-01
Dual energy computed tomography is becoming increasingly widespread in clinical practice. It can expand on the traditional density-based data achievable with single energy computed tomography by adding novel applications to help reach a more accurate diagnosis. The implementation of this technology in cardiothoracic vascular imaging allows for improved image contrast, metal artifact reduction, generation of virtual unenhanced images, virtual calcium subtraction techniques, cardiac and pulmonary perfusion evaluation, and plaque characterization. The improved diagnostic performance afforded by dual energy computed tomography is not associated with an increased radiation dose. This review provides an overview of dual energy computed tomography cardiothoracic vascular applications. Copyright © 2018 Elsevier Inc. All rights reserved.
The Use of Virtual Reality Tools in the Reading-Language Arts Classroom
ERIC Educational Resources Information Center
Pilgrim, J. Michael; Pilgrim, Jodi
2016-01-01
This article presents virtual reality as a tool for classroom literacy instruction. Building on the traditional use of images as a way to scaffold prior knowledge, we extend this idea to share ways virtual reality enables experiential learning through field trip-like experiences. The use of technology tools such as Google Street View, Google…
Balogh, Attila; Czigléczki, Gábor; Papal, Zsolt; Preul, Mark C; Banczerowski, Péter
2014-11-30
There is an increased need for new digital education tools in neurosurgical training. Illustrated textbooks offer anatomic and technical reference but do not substitute for the hands-on experience provided by surgery or cadaver dissection. Due to the limited availability of cadaver dissections, the need to develop simulation tools has grown. We explored simulation technology for producing virtual reality-like reconstructions of simulated surgical approaches on cadavers. Practical application of the simulation tool is presented through the frontotemporal transsylvian exposure. The dissections were performed on two cadaveric heads. Arteries and veins were prepared and injected with colored silicone rubber. The heads were rigidly fixed in a Mayfield headholder. A robotic microscope with two digital cameras, using an inverted cone method of image acquisition, was used to capture images around a pivot point in several phases of the dissections. Multilayered, high-resolution images were built into an interactive 4D environment by custom-developed software. We have developed the simulation module of the frontotemporal transsylvian approach. The virtual specimens can be rotated or tilted to any selected angle and examined from different surgical perspectives at any stage of the dissections. Important surgical issues, such as appropriate head positioning or surgical maneuvers to expose deeply situated neuroanatomic structures, can be simulated and studied using the module. The simulation module of the frontotemporal transsylvian exposure helps to examine the effect of head positioning on the visibility of deeply situated neuroanatomic structures and to study the surgical maneuvers required to achieve optimal exposure of deeply situated anatomic structures. The simulation program is a powerful tool for studying issues of preoperative planning and is well suited for neurosurgical training.
78 FR 25752 - National Institute of Biomedical Imaging and Bioengineering; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-02
..., Two Democracy Plaza, 951, 6707 Democracy Boulevard, Bethesda, MD 20892, (Virtual Meeting). Contact..., Two Democracy Plaza, Suite 920, 6707 Democracy Boulevard, Bethesda, MD 20892, (Virtual Meeting...
78 FR 37557 - National Institute of Biomedical Imaging and Bioengineering; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-21
... (Virtual Meeting). Contact Person: Ruth Grossman, DDS, Scientific Review Officer, National Institute of... Plaza, Suite 920, 6707 Democracy Boulevard, Bethesda, MD 20892 (Virtual Meeting). Contact Person: Ruth...
Fly-through viewpoint video system for multi-view soccer movie using viewpoint interpolation
NASA Astrophysics Data System (ADS)
Inamoto, Naho; Saito, Hideo
2003-06-01
This paper presents a novel method for virtual view generation that allows viewers to fly through a real soccer scene. A soccer match is captured by multiple cameras at a stadium, and images of arbitrary viewpoints are synthesized by view interpolation of two real camera images near the given viewpoint. In the proposed method, the cameras do not need to be strongly calibrated; epipolar geometry between the cameras is sufficient for the view interpolation. Therefore, the method can easily be applied to a dynamic event even in a large space, because the effort required for camera calibration is reduced. A soccer scene is classified into several regions, and virtual view images are generated based on the epipolar geometry in each region. Superimposition of the images completes virtual views for the whole soccer scene. An application for fly-through observation of a soccer match is introduced, as well as the view-synthesis algorithm and experimental results.
Application of image processing to calculate the number of fish seeds using raspberry-pi
NASA Astrophysics Data System (ADS)
Rahmadiansah, A.; Kusumawardhani, A.; Duanto, F. N.; Qoonita, F.
2018-03-01
Many fish cultivators in Indonesia have suffered losses because the number of fish seeds bought and sold did not match the agreed amount. The losses occur because fish seeds are still counted manually. To overcome this problem, this study designed an automatic, real-time fish counting system using image processing based on a Raspberry Pi. Image processing was chosen because it can count moving objects and suppress noise. The image processing method used to count moving objects is the virtual loop detector (virtual detector) method, and the approach used is the "double difference image". The "double difference" approach uses information from the previous frame and the next frame to estimate the shape and position of the object. Using these methods and approaches, the results obtained were quite good, with an average error of 1.0% for 300 individuals in a test with a virtual detector width of 96 pixels and a test plane slope of 1 degree.
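A minimal OpenCV/NumPy sketch of the double-difference idea combined with a virtual detector line is shown below; the video filename, threshold, and detector position are assumptions, and a practical counter would additionally track objects across frames so that a fish crossing the line is not counted twice.

    import cv2
    import numpy as np

    def double_difference(prev_f, curr_f, next_f, thresh=25):
        """Moving-object mask from the previous, current and next grayscale frames."""
        d1 = cv2.absdiff(curr_f, prev_f)
        d2 = cv2.absdiff(next_f, curr_f)
        # A pixel is moving only if it differs from both the previous and the next frame.
        return cv2.bitwise_and(
            cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)[1],
            cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)[1],
        )

    cap = cv2.VideoCapture("fish_seeds.avi")   # hypothetical input video
    frames, count, detector_col = [], 0, 320   # virtual detector line at column 320
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
        if len(frames) == 3:
            mask = double_difference(*frames)
            line = (mask[:, detector_col] > 0).astype(np.uint8).reshape(-1, 1)
            n_labels, _ = cv2.connectedComponents(line)   # blobs on the detector line this frame
            count += n_labels - 1
            frames.pop(0)
    print("estimated count:", count)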
A Downloadable Three-Dimensional Virtual Model of the Visible Ear
Wang, Haobing; Merchant, Saumil N.; Sorensen, Mads S.
2008-01-01
Purpose To develop a three-dimensional (3-D) virtual model of a human temporal bone and surrounding structures. Methods A fresh-frozen human temporal bone was serially sectioned and digital images of the surface of the tissue block were recorded (the ‘Visible Ear’). The image stack was resampled at a final resolution of 50 × 50 × 50/100 µm/voxel, registered in custom software and segmented in PhotoShop® 7.0. The segmented image layers were imported into Amira® 3.1 to generate smooth polygonal surface models. Results The 3-D virtual model presents the structures of the middle, inner and outer ears in their surgically relevant surroundings. It is packaged within a cross-platform freeware, which allows for full rotation, visibility and transparency control, as well as the ability to slice the 3-D model open at any section. The appropriate raw image can be superimposed on the cleavage plane. The model can be downloaded at https://research.meei.harvard.edu/Otopathology/3dmodels/ PMID:17124433
Efficient system modeling for a small animal PET scanner with tapered DOI detectors.
Zhang, Mengxi; Zhou, Jian; Yang, Yongfeng; Rodríguez-Villafuerte, Mercedes; Qi, Jinyi
2016-01-21
A prototype small animal positron emission tomography (PET) scanner for mouse brain imaging has been developed at UC Davis. The new scanner uses tapered detector arrays with depth of interaction (DOI) measurement. In this paper, we present an efficient system model for the tapered PET scanner using matrix factorization and a virtual scanner geometry. The factored system matrix mainly consists of two components: a sinogram blurring matrix and a geometrical matrix. The geometric matrix is based on a virtual scanner geometry. The sinogram blurring matrix is estimated by matrix factorization. We investigate the performance of different virtual scanner geometries. Both a simulation study and real data experiments are performed in the fully 3D mode to study the image quality under different system models. The results indicate that the proposed matrix factorization can maintain image quality while substantially reducing the image reconstruction time and system matrix storage cost. The proposed method can also be applied to other PET scanners with DOI measurement.
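The benefit of the factorization can be illustrated with a toy forward projection in which the system matrix is the product of a sinogram blurring matrix and a geometric matrix. The sizes, densities, and random matrices below are placeholders, not the scanner's actual matrices.

    import numpy as np
    from scipy import sparse

    n_vox, n_lor = 5000, 8000                  # illustrative image and sinogram sizes
    x = np.random.rand(n_vox)                  # stand-in activity image

    # Geometric projection matrix for a simplified "virtual" scanner geometry, and a
    # sinogram blurring matrix (estimated by matrix factorization in the paper).
    G = sparse.random(n_lor, n_vox, density=1e-3, format="csr")
    B = sparse.random(n_lor, n_lor, density=1e-4, format="csr") + sparse.eye(n_lor)

    # Factored forward projection: geometric projection first, then sinogram blurring.
    y = B @ (G @ x)

    # Storing B and G separately is far cheaper than storing their combined product.
    print(G.nnz + B.nnz, "stored elements vs", (B @ G).nnz, "in the combined matrix")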
Virtual Tour Environment of Cuba's National School of Art
NASA Astrophysics Data System (ADS)
Napolitano, R. K.; Douglas, I. P.; Garlock, M. E.; Glisic, B.
2017-08-01
Innovative technologies have enabled new opportunities for collecting, analyzing, and sharing information about cultural heritage sites. Through a combination of two of these technologies, spherical imaging and a virtual tour environment, we preliminarily documented one of Cuba's National Schools of Art, the National Ballet School. The Ballet School is one of the five National Art Schools built in Havana, Cuba after the revolution. Due to changes in the political climate, construction was halted on the schools before completion. The Ballet School in particular was partially completed but never used for the intended purpose. Over the years, the surrounding vegetation and environment have started to overtake the buildings; damage such as missing bricks, corroded rebar, and broken tie bars can be seen. We created a virtual tour through the Ballet School which highlights key satellite classrooms and the main domed performance spaces. Scenes of the virtual tour were captured using the Ricoh Theta S spherical imaging camera and processed with Kolor Panotour virtual environment software. Different forms of data can be included in this environment in order to provide a user with pertinent information. Image galleries, hyperlinks to websites, videos, PDFs, and links to databases can be embedded within the scene and interacted with by a user. By including this information within the virtual tour, a user can better understand how the site was constructed as well as the existing types of damage. The results of this work are recommendations for how a site can be preliminarily documented and information can be initially organized and shared.
Zhang, Nan; Liu, Shuguang; Hu, Zhiai; Hu, Jing; Zhu, Songsong; Li, Yunfeng
2016-08-01
This study aims to evaluate the accuracy of virtual surgical planning in two-jaw orthognathic surgery via quantitative comparison of preoperatively planned and postoperative actual skull models. Thirty consecutive patients who required two-jaw orthognathic surgery were included. A composite skull model was reconstructed by using Digital Imaging and Communications in Medicine (DICOM) data from spiral computed tomography (CT) and STL (stereolithography) data from surface scanning of the dental arch. LeFort I osteotomy of the maxilla and bilateral sagittal split ramus osteotomy of the mandible were simulated by using Dolphin Imaging 11.7 Premium (Dolphin Imaging and Management Solutions, Chatsworth, CA). Genioplasty was performed, if indicated. The virtual plan was then transferred to the operating room by using three-dimensional (3-D)-printed surgical templates. Linear and angular differences between virtually simulated and postoperative skull models were evaluated. The virtual surgical planning was successfully transferred to actual surgery with the help of 3-D-printed surgical templates. All patients were satisfied with the postoperative facial profile and occlusion. The overall mean linear difference was 0.81 mm (0.71 mm for the maxilla and 0.91 mm for the mandible), and the overall mean angular difference was 0.95 degrees. Virtual surgical planning and 3-D-printed surgical templates facilitated the diagnosis, treatment planning, and accurate repositioning of bony segments in two-jaw orthognathic surgery. Copyright © 2016 Elsevier Inc. All rights reserved.
Experimental Internet Environment Software Development
NASA Technical Reports Server (NTRS)
Maddux, Gary A.
1998-01-01
Geographically distributed project teams need an Internet based collaborative work environment or "Intranet." The Virtual Research Center (VRC) is an experimental Intranet server that combines several services such as desktop conferencing, file archives, on-line publishing, and security. Using the World Wide Web (WWW) as a shared space paradigm, the Graphical User Interface (GUI) presents users with images of a lunar colony. Each project has a wing of the colony and each wing has a conference room, library, laboratory, and mail station. In FY95, the VRC development team proved the feasibility of this shared space concept by building a prototype using a Netscape commerce server and several public domain programs. Successful demonstrations of the prototype resulted in approval for a second phase. Phase 2, documented by this report, will produce a seamlessly integrated environment by introducing new technologies such as Java and Adobe Web Links to replace less efficient interface software.
Rey, Beatriz; Rodriguez-Pujadas, Aina; Breton-Lopez, Juani; Barros-Loscertales, Alfonso; Baños, Rosa M; Botella, Cristina; Alcañiz, Mariano; Avila, Cesar
2014-01-01
Background To date, still images or videos of real animals have been used in functional magnetic resonance imaging protocols to evaluate the brain activations associated with small animals’ phobia. Objective The objective of our study was to evaluate the brain activations associated with small animals’ phobia through the use of virtual environments. This context will have the added benefit of allowing the subject to move and interact with the environment, giving the subject the illusion of being there. Methods We have analyzed the brain activation in a group of phobic people while they navigated in a virtual environment that included the small animals that were the object of their phobia. Results We have found brain activation mainly in the left occipital inferior lobe (P<.05 corrected, cluster size=36), related to the enhanced visual attention to the phobic stimuli; and in the superior frontal gyrus (P<.005 uncorrected, cluster size=13), which is an area that has been previously related to the feeling of self-awareness. Conclusions In our opinion, these results demonstrate that virtual stimulus can enhance brain activations consistent with previous studies with still images, but in an environment closer to the real situation the subject would face in their daily lives. PMID:25654753
SU-E-I-40: Phantom Research On Monochromatic Images Taken by Dual CBCT with Multiple Energy Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, R; Shandong University, Jinan, Shandong; Wang, H
Purpose: To evaluate the quality of monochromatic images at the same virtual monochromatic energy using dual cone-beam computed tomography (CBCT) with either kV/kV or MV/kV or MV/MV energy sets. Methods: CT images of a Catphan 504 phantom were acquired using four different kV and MV settings: 80kV, 140kV, 4MV, 6MV. Three sets of monochromatic images were calculated: 80kV-140kV, 140kV-4MV and 4MV-6MV. Each set of CBCT images was reconstructed at the same selected virtual monochromatic energy of 1MeV. Contrast-to-Noise Ratios (CNRs) were calculated and compared between each pair of images with different energy sets. Results: Between kV/MV and MV/MV images, the CNRs are comparable for all inserts. However, differences in CNRs were observed between the kV/kV and kV/MV images. Delrin's CNR ratio between the kV/kV image and the kV/MV image is 1.634. LDPE's (Low-Density Polyethylene) CNR ratio between the kV/kV and kV/MV images is 0.509. Polystyrene's CNR ratio between the kV/kV image and the kV/MV image is 2.219. Conclusion: Preliminary results indicated that the CNRs calculated from CBCT images reconstructed from either kV/MV projections or MV/MV projections for the same selected virtual monochromatic energy may be comparable.
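The comparison above rests on contrast-to-noise ratio measurements; a minimal sketch of how a CNR is typically computed from region-of-interest statistics follows, with the ROI coordinates and synthetic slice purely illustrative.

    import numpy as np

    def cnr(image, insert_roi, background_roi):
        """CNR = |mean(insert) - mean(background)| / std(background); each ROI is a (row, col) slice pair."""
        ins, bkg = image[insert_roi], image[background_roi]
        return abs(ins.mean() - bkg.mean()) / bkg.std()

    slice_img = np.random.normal(0.0, 10.0, (512, 512))   # synthetic monochromatic slice
    slice_img[200:240, 200:240] += 80.0                   # simulated high-contrast insert
    print(cnr(slice_img,
              (slice(200, 240), slice(200, 240)),
              (slice(60, 100), slice(60, 100))))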
Tomographic techniques for the study of exceptionally preserved fossils
Sutton, Mark D
2008-01-01
Three-dimensional fossils, especially those preserving soft-part anatomy, are a rich source of palaeontological information; they can, however, be difficult to work with. Imaging of serial planes through an object (tomography) allows study of both the inside and outside of three-dimensional fossils. Tomography may be performed using physical grinding or sawing coupled with photography, through optical techniques of serial focusing, or using a variety of scanning technologies such as neutron tomography, magnetic resonance imaging and most usefully X-ray computed tomography. This latter technique is applicable at a variety of scales, and when combined with a synchrotron X-ray source can produce very high-quality data that may be augmented by phase-contrast information to enhance contrast. Tomographic data can be visualized in several ways, the most effective of which is the production of isosurface-based ‘virtual fossils’ that can be manipulated and dissected interactively. PMID:18426749
Involving People with Autism in Development of Virtual World for Provision of Skills Training
ERIC Educational Resources Information Center
Politis, Yurgos; Olivia, Louis; Olivia, Thomas; Sung, Connie
2017-01-01
This paper presents the development phase of the Virtual World that is going to be used by the Virtual Learning for People with Autistic Spectrum Disorder (VL4ASD) project, which aims to create training materials on conversation skills. This project is geared towards addressing the communication deficits of ASD populations, by exploring the…
Virtual reality in surgery and medicine.
Chinnock, C
1994-01-01
This report documents the state of development of enhanced and virtual reality-based systems in medicine. Virtual reality systems seek to simulate a surgical procedure in a computer-generated world in order to improve training. Enhanced reality systems seek to augment or enhance reality by providing improved imaging alternatives for specific patient data. Virtual reality represents a paradigm shift in the way we teach and evaluate the skills of medical personnel. Driving the development of virtual reality-based simulators is laparoscopic abdominal surgery, where there is a perceived need for better training techniques; within a year, systems will be fielded for second-year residency students. Further refinements over perhaps the next five years should allow surgeons to evaluate and practice new techniques in a simulator before using them on patients. Technical developments are rapidly improving the realism of these machines to an amazing degree, as well as bringing the price down to affordable levels. In the next five years, many new anatomical models, procedures, and skills are likely to become available on simulators. Enhanced reality systems are generally being developed to improve visualization of specific patient data. Three-dimensional (3-D) stereovision systems for endoscopic applications, head-mounted displays, and stereotactic image navigation systems are being fielded now, with neurosurgery and laparoscopic surgery being major driving influences. Over perhaps the next five years, enhanced and virtual reality systems are likely to merge. This will permit patient-specific images to be used on virtual reality simulators or computer-generated landscapes to be input into surgical visualization instruments. Percolating all around these activities are developments in robotics and telesurgery. An advanced information infrastructure eventually will permit remote physicians to share video, audio, medical records, and imaging data with local physicians in real time. Surgical robots are likely to be deployed for specific tasks in the operating room (OR) and to support telesurgery applications. Technical developments in robotics and motion control are key components of many virtual reality systems. Since almost all of the virtual reality and enhanced reality systems will be digitally based, they are also capable of being put "on-line" for tele-training, consulting, and even surgery. Advancements in virtual and enhanced reality systems will be driven in part by consumer applications of this technology. Many of the companies that will supply systems for medical applications are also working on commercial products. A big consumer hit can benefit the entire industry by increasing volumes and bringing down costs.(ABSTRACT TRUNCATED AT 400 WORDS)
Ueguchi, Takashi; Ogihara, Ryota; Yamada, Sachiko
2018-03-21
To investigate the accuracy of dual-energy virtual monochromatic computed tomography (CT) numbers obtained by two typical hardware and software implementations: the single-source projection-based method and the dual-source image-based method. A phantom with different tissue equivalent inserts was scanned with both single-source and dual-source scanners. A fast kVp-switching feature was used on the single-source scanner, whereas a tin filter was used on the dual-source scanner. Virtual monochromatic CT images of the phantom at energy levels of 60, 100, and 140 keV were obtained by both projection-based (on the single-source scanner) and image-based (on the dual-source scanner) methods. The accuracy of virtual monochromatic CT numbers for all inserts was assessed by comparing measured values to their corresponding true values. Linear regression analysis was performed to evaluate the dependency of measured CT numbers on tissue attenuation, method, and their interaction. Root mean square values of systematic error over all inserts at 60, 100, and 140 keV were approximately 53, 21, and 29 Hounsfield unit (HU) with the single-source projection-based method, and 46, 7, and 6 HU with the dual-source image-based method, respectively. Linear regression analysis revealed that the interaction between the attenuation and the method had a statistically significant effect on the measured CT numbers at 100 and 140 keV. There were attenuation-, method-, and energy level-dependent systematic errors in the measured virtual monochromatic CT numbers. CT number reproducibility was comparable between the two scanners, and CT numbers had better accuracy with the dual-source image-based method at 100 and 140 keV. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
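For the image-based approach, a virtual monochromatic image can be thought of as a two-material decomposition in image space followed by recombination at the target energy. The sketch below shows that linear-algebra step only; the basis-material attenuation coefficients are made-up placeholders, and real implementations include calibration, noise handling, and beam-hardening corrections.

    import numpy as np

    def image_based_vmi(img_low, img_high, mu_low, mu_high, mu_target):
        """Synthesize a virtual monochromatic image from low/high-kVp attenuation images.

        mu_low, mu_high, mu_target: length-2 coefficient sets for the two basis
        materials (e.g. water, iodine) at the low kVp, high kVp and target energy.
        """
        A = np.array([mu_low, mu_high])                     # 2x2 mixing matrix
        imgs = np.stack([img_low.ravel(), img_high.ravel()])
        densities = np.linalg.solve(A, imgs)                # basis-material images
        return (np.asarray(mu_target) @ densities).reshape(img_low.shape)

    # Illustrative call with made-up water/iodine coefficients.
    low = np.full((4, 4), 0.30)
    high = np.full((4, 4), 0.22)
    vmi_70kev = image_based_vmi(low, high, [0.26, 6.0], [0.20, 3.0], [0.21, 4.0])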
Rallis, Austin; Fercho, Kelene A; Bosch, Taylor J; Baugh, Lee A
2018-01-31
Tool use is associated with three visual streams: the dorso-dorsal, ventro-dorsal, and ventral visual streams. These streams are involved in processing online motor planning, action semantics, and tool semantics features, respectively. Little is known about the way in which the brain represents virtual tools. To directly assess this question, a virtual tool paradigm was created that provided the ability to manipulate tool components in isolation of one another. During functional magnetic resonance imaging (fMRI), adult participants performed a series of virtual tool manipulation tasks in which vision and movement kinematics of the tool were manipulated. Reaction time and hand movement direction were monitored while the tasks were performed. Functional imaging revealed that activity within all three visual streams was present, in a similar pattern to what would be expected with physical tool use. However, a previously unreported network of right-hemisphere activity was found, including the right inferior parietal lobule, middle and superior temporal gyri and supramarginal gyrus, regions well known to be associated with tool processing within the left hemisphere. These results provide evidence that both virtual and physical tools are processed within the same brain regions, though virtual tools recruit bilateral tool processing regions to a greater extent than physical tools. Copyright © 2017 Elsevier Ltd. All rights reserved.
Rowe, Daniel B; Bruce, Iain P; Nencka, Andrew S; Hyde, James S; Kociuba, Mary C
2016-04-01
Achieving a reduction in scan time with minimal inter-slice signal leakage is one of the significant obstacles in parallel MR imaging. In fMRI, multiband imaging techniques accelerate data acquisition by simultaneously magnetizing the spatial frequency spectrum of multiple slices. The SPECS model eliminates the consequential inter-slice signal leakage from the slice unaliasing, while maintaining an optimal reduction in scan time and activation statistics in fMRI studies. When the combined k-space array is inverse Fourier reconstructed, the resulting aliased image is separated into the un-aliased slices through a least squares estimator. Without the additional spatial information from a phased array of receiver coils, slice separation in SPECS is accomplished with aliased images acquired in a shifted-FOV aliasing pattern and a bootstrapping approach that incorporates reference calibration images in an orthogonal Hadamard pattern. The aliased slices are effectively separated with minimal expense to the spatial and temporal resolution. Functional activation is observed in the motor cortex, as the number of aliased slices is increased, in a bilateral finger-tapping fMRI experiment. The SPECS model incorporates calibration reference images together with coefficients of orthogonal polynomials into an un-aliasing estimator to achieve separated images with virtually no residual artifacts, while retaining functional activation detection in the separated images. Copyright © 2015 Elsevier Inc. All rights reserved.
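The slice separation itself reduces to a pixel-wise least-squares solve once the encoding of each measurement in terms of the underlying slices is known. The sketch below shows only that generic un-aliasing step with a Hadamard-style two-slice encoding; the full SPECS model additionally handles the shifted-FOV pattern and the orthogonal-polynomial calibration terms.

    import numpy as np

    def unalias_slices(aliased, encoding):
        """Least-squares slice separation.

        aliased:  (n_meas, n_pix) array, each row one aliased image (flattened).
        encoding: (n_meas, n_slices) matrix of slice contributions per measurement.
        Returns (n_slices, n_pix) estimated slice images.
        """
        slices, *_ = np.linalg.lstsq(encoding, aliased, rcond=None)
        return slices

    # Two-slice demo with a Hadamard reference pattern.
    rng = np.random.default_rng(0)
    s_true = rng.random((2, 64 * 64))
    E = np.array([[1.0, 1.0], [1.0, -1.0]])
    measured = E @ s_true + 0.01 * rng.standard_normal((2, 64 * 64))
    print(np.allclose(unalias_slices(measured, E), s_true, atol=0.1))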
Synthesized view comparison method for no-reference 3D image quality assessment
NASA Astrophysics Data System (ADS)
Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun
2018-04-01
We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric is named Synthesized View Comparison (SVC), which is designed for real-time quality monitoring at the receiver side in a 3D-TV system. The metric utilizes virtual views in the middle, which are warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for the synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
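The core comparison can be sketched very compactly once the two middle views have been synthesized by DIBR: compute the SSIM between them and treat the similarity as the quality indicator. The snippet below assumes the warping has already produced the two views (stand-in arrays are used here) and is a simplification of the full SVC metric.

    import numpy as np
    from skimage.metrics import structural_similarity

    def svc_score(view_from_left, view_from_right):
        """SSIM between the two independently synthesized middle views;
        higher similarity suggests fewer synthesis artifacts."""
        data_range = view_from_left.max() - view_from_left.min()
        return structural_similarity(view_from_left, view_from_right, data_range=data_range)

    # Stand-in grayscale views (in practice: DIBR warps of the left and right cameras).
    a = np.random.rand(240, 320)
    b = a + 0.05 * np.random.rand(240, 320)
    print(svc_score(a, b))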
Accessing Multi-Dimensional Images and Data Cubes in the Virtual Observatory
NASA Astrophysics Data System (ADS)
Tody, Douglas; Plante, R. L.; Berriman, G. B.; Cresitello-Dittmar, M.; Good, J.; Graham, M.; Greene, G.; Hanisch, R. J.; Jenness, T.; Lazio, J.; Norris, P.; Pevunova, O.; Rots, A. H.
2014-01-01
Telescopes across the spectrum are routinely producing multi-dimensional images and datasets, such as Doppler velocity cubes, polarization datasets, and time-resolved “movies.” Examples of current telescopes producing such multi-dimensional images include the JVLA, ALMA, and the IFU instruments on large optical and near-infrared wavelength telescopes. In the near future, both the LSST and JWST will also produce such multi-dimensional images routinely. High-energy instruments such as Chandra produce event datasets that are also a form of multi-dimensional data, in effect being a very sparse multi-dimensional image. Ensuring that the data sets produced by these telescopes can be both discovered and accessed by the community is essential and is part of the mission of the Virtual Observatory (VO). The Virtual Astronomical Observatory (VAO, http://www.usvao.org/), in conjunction with its international partners in the International Virtual Observatory Alliance (IVOA), has developed a protocol and an initial demonstration service designed for the publication, discovery, and access of arbitrarily large multi-dimensional images. The protocol describing multi-dimensional images is the Simple Image Access Protocol, version 2, which provides the minimal set of metadata required to characterize a multi-dimensional image for its discovery and access. A companion Image Data Model formally defines the semantics and structure of multi-dimensional images independently of how they are serialized, while providing capabilities such as support for sparse data that are essential to deal effectively with large cubes. A prototype data access service has been deployed and tested, using a suite of multi-dimensional images from a variety of telescopes. The prototype has demonstrated the capability to discover and remotely access multi-dimensional data via standard VO protocols. The prototype informs the specification of a protocol that will be submitted to the IVOA for approval, with an operational data cube service to be delivered in mid-2014. An associated user-installable VO data service framework will provide the capabilities required to publish VO-compatible multi-dimensional images or data cubes.
Intraoperative virtual brain counseling
NASA Astrophysics Data System (ADS)
Jiang, Zhaowei; Grosky, William I.; Zamorano, Lucia J.; Muzik, Otto; Diaz, Fernando
1997-06-01
Our objective is to offer online real-time intelligent guidance to the neurosurgeon. Different from traditional image-guidance technologies that offer intra-operative visualization of medical images or atlas images, virtual brain counseling goes one step further. It can distinguish related brain structures and provide information about them intra-operatively. Virtual brain counseling is the foundation for surgical planning optimization and on-line surgical reference. It can provide a warning system that alerts the neurosurgeon if the chosen trajectory will pass through eloquent brain areas. In order to fulfill this objective, tracking techniques are involved for intra-operativity. Most importantly, a 3D virtual brain environment, different from traditional 3D digitized atlases, is an object-oriented model of the brain that stores information about different brain structures together with their related information. An object-oriented hierarchical hyper-voxel space (HHVS) is introduced to integrate anatomical and functional structures. Spatial queries based on a position of interest, a line segment of interest, and a volume of interest are introduced in this paper. The virtual brain environment is integrated with existing surgical pre-planning and intra-operative tracking systems to provide information for planning optimization and on-line surgical guidance. The neurosurgeon is alerted automatically if the planned treatment affects any critical structures. Architectures, such as HHVS, and algorithms, such as spatial querying, normalizing, and warping, are presented in the paper. A prototype has shown that the virtual brain is intuitive in its hierarchical 3D appearance. It also showed that HHVS, as the key structure for virtual brain counseling, efficiently integrates multi-scale brain structures based on their spatial relationships. This is a promising development for the optimization of treatment plans and online intelligent surgical guidance.
Generating Contextual Descriptions of Virtual Reality (VR) Spaces
NASA Astrophysics Data System (ADS)
Olson, D. M.; Zaman, C. H.; Sutherland, A.
2017-12-01
Virtual reality holds great potential for science communication, education, and research. However, interfaces for manipulating data and environments in virtual worlds are limited and idiosyncratic. Furthermore, speech and vision are the primary modalities by which humans collect information about the world, but the linking of visual and natural language domains is a relatively new pursuit in computer vision. Machine learning techniques have been shown to be effective at image and speech classification, as well as at describing images with language (Karpathy 2016), but have not yet been used to describe potential actions. We propose a technique for creating a library of possible context-specific actions associated with 3D objects in immersive virtual worlds based on a novel dataset generated natively in virtual reality containing speech, image, gaze, and acceleration data. We will discuss the design and execution of a user study in virtual reality that enabled the collection and the development of this dataset. We will also discuss the development of a hybrid machine learning algorithm linking vision data with environmental affordances in natural language. Our findings demonstrate that it is possible to develop a model which can generate interpretable verbal descriptions of possible actions associated with recognized 3D objects within immersive VR environments. This suggests promising applications for more intuitive user interfaces through voice interaction within 3D environments. It also demonstrates the potential to apply vast bodies of embodied and semantic knowledge to enrich user interaction within VR environments. This technology would allow for applications such as expert knowledge annotation of 3D environments, complex verbal data querying and object manipulation in virtual spaces, and computer-generated, dynamic 3D object affordances and functionality during simulations.
de Boer, I R; Wesselink, P R; Vervoorn, J M
2013-11-01
To describe the development and opportunities for implementation of virtual teeth with and without pathology for use in a virtual learning environment in dental education. The creation of virtual teeth begins by scanning a tooth with a cone beam CT. The resulting scan consists of multiple two-dimensional grey-scale images. The specially designed software program ColorMapEditor connects these two-dimensional images to create a three-dimensional tooth. With this software, any aspect of the tooth can be modified, including its colour, volume, shape and density, resulting in the creation of virtual teeth of any type. This article provides examples of realistic virtual teeth with and without pathology that can be used for dental education. ColorMapEditor offers infinite possibilities to adjust and add options for the optimisation of virtual teeth. Virtual teeth have unlimited availability for dental students, allowing them to practise as often as required. Virtual teeth can be made and adjusted to any shape with any type of pathology. Further developments in software and hardware technology are necessary to refine the ability to colour and shape the interior of the pulp chamber and surface of the tooth to enable not only treatment but also diagnostics and thus create a greater degree of realism. The creation and use of virtual teeth in dental education appears to be feasible but is still in development; it offers many opportunities for the creation of teeth with various pathologies, although an evaluation of its use in dental education is still required. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
HPPD: ligand- and target-based virtual screening on a herbicide target.
López-Ramos, Miriam; Perruccio, Francesca
2010-05-24
Hydroxyphenylpyruvate dioxygenase (HPPD) has proven to be a very successful target for the development of herbicides with bleaching properties, and today HPPD inhibitors are well established in the agrochemical market. Syngenta has a long history of HPPD-inhibitor research, and HPPD was chosen as a case study for the validation of diverse ligand- and target-based virtual screening approaches to identify compounds with inhibitory properties. Two-dimensional extended connectivity fingerprints, three-dimensional shape-based tools (ROCS, EON, and Phase-shape) and a pharmacophore approach (Phase) were used as ligand-based methods; Glide and Gold were used as target-based. Both the virtual screening utility and the scaffold-hopping ability of the screening tools were assessed. Particular emphasis was put on the specific pitfalls to take into account for the design of a virtual screening campaign in an agrochemical context, as compared to a pharmaceutical environment.
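As one generic example of the ligand-based side, a 2D extended-connectivity-fingerprint similarity search can be written in a few lines with RDKit; the SMILES strings, fingerprint settings, and threshold below are illustrative placeholders and not part of the Syngenta pipeline.

    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    query = Chem.MolFromSmiles("O=C(c1ccccc1)C1CCCCC1")      # placeholder query structure
    library = ["CCOc1ccccc1C(=O)O", "O=C(c1ccncc1)C1CCCC1", "c1ccccc1"]

    radius, n_bits = 2, 2048                                  # ECFP4-like settings
    q_fp = AllChem.GetMorganFingerprintAsBitVect(query, radius, nBits=n_bits)

    for smi in library:
        fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smi), radius, nBits=n_bits)
        sim = DataStructs.TanimotoSimilarity(q_fp, fp)
        print(smi, round(sim, 3))                             # rank or filter, e.g. keep sim > 0.3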
LeRouge, Cynthia; Dickhut, Kathryn; Lisetti, Christine; Sangameswaran, Savitha; Malasanos, Toree
2016-01-01
This research focuses on the potential ability of animated avatars (a digital representation of the user) and virtual agents (a digital representation of a coach, buddy, or teacher) to deliver computer-based interventions for adolescents' chronic weight management. An exploration of the acceptance and desire of teens to interact with avatars and virtual agents for self-management and behavioral modification was undertaken. The approach was inspired by community-based participatory research. Data were collected in two phases: Phase 1) focus groups with teens, provider interviews, and parent interviews; and Phase 2) mid-range prototype assessment by teens and providers. All stakeholder groups expressed great interest in avatars and virtual agents assisting self-management efforts. Adolescents felt the avatars and virtual agents could: 1) reinforce guidance and support, 2) fit within their lifestyle, and 3) help set future goals, particularly after witnessing the effect of their current behavior(s) on the projected physical appearance (external and internal organs) of avatars. Teens wanted two virtual characters: a virtual agent to act as a coach or teacher and an avatar (extension of themselves) to serve as a "buddy" for empathic support and guidance and as a surrogate for rewards. Preferred modalities for use included both mobile devices, to accommodate access, and desktop, to accommodate preferences for maximum screen real estate supporting virtualization of functions that are more contemplative and complex (e.g., goal setting). Adolescents expressed a desire for limited co-user access, which they could regulate. The data revealed certain barriers and facilitators that could affect adoption and use. The current study extends the support of teens, parents, and providers for adding avatars or virtual agents to traditional computer-based interactions. The data support the desire for a personal relationship with a virtual character, in line with previous studies. The study provides a foundation for further work in the area of avatar-driven motivational interviewing. This study provides evidence supporting the use of avatars and virtual agents, designed using participatory approaches, in the continuum of care; it suggests an increased probability of engagement and long-term retention of overweight and obese adolescent users and supports expanding current chronic care models toward more comprehensive, socio-technical representations. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Practical design and evaluation methods of omnidirectional vision sensors
NASA Astrophysics Data System (ADS)
Ohte, Akira; Tsuzuki, Osamu
2012-01-01
A practical omnidirectional vision sensor, consisting of a curved mirror, a mirror-supporting structure, and a megapixel digital imaging system, can view a field of 360 deg horizontally and 135 deg vertically. The authors theoretically analyzed and evaluated several curved mirrors, namely, a spherical mirror, an equidistant mirror, and a single viewpoint mirror (hyperboloidal mirror). The focus of their study was mainly on the image-forming characteristics, position of the virtual images, and size of blur spot images. The authors propose here a practical design method that satisfies the required characteristics. They developed image-processing software for converting circular images to images of the desired characteristics in real time. They also developed several prototype vision sensors using spherical mirrors. Reports dealing with virtual images and blur-spot size of curved mirrors are few; therefore, this paper will be very useful for the development of omnidirectional vision sensors.
Dispersive Phase in the L-band InSAR Image Associated with Heavy Rain Episodes
NASA Astrophysics Data System (ADS)
Furuya, M.; Kinoshita, Y.
2017-12-01
Interferometric synthetic aperture radar (InSAR) is a powerful geodetic technique that allows us to detect ground displacements with unprecedented spatial resolution, and has been used to detect displacements due to earthquakes, volcanic eruptions, and glacier motion. In the meantime, due to microwave propagation through the ionosphere and troposphere, we often encounter non-negligible phase anomalies in InSAR data. Correcting for the ionosphere and troposphere is therefore a long-standing issue for high-precision geodetic measurements. However, if ground displacements are negligible, an InSAR image can tell us the details of the atmosphere. Kinoshita and Furuya (2017, SOLA) detected a phase anomaly in ALOS/PALSAR InSAR data associated with heavy rain over the Niigata area, Japan, and performed a numerical weather model simulation to reproduce the anomaly; ALOS/PALSAR is a satellite-based L-band SAR sensor launched by JAXA in 2006 and terminated in 2011. The phase anomaly could be largely reproduced using the output data from the weather model. However, we should note that numerical weather model outputs can only account for the non-dispersive effect in the phase anomaly. In the case of a severe weather event, we may expect a dispersive effect caused by the presence of free electrons. In Global Navigation Satellite System (GNSS) positioning, dual-frequency measurements allow us to separate the ionospheric dispersive component from the tropospheric non-dispersive component. In contrast, SAR imaging is based on a single carrier frequency, and thus no operational ionospheric corrections have been performed in InSAR data analyses. Recently, Gomba et al (2016) detailed the processing strategy of the split spectrum method (SSM) for InSAR, which splits the finite bandwidth of the range spectrum and virtually allows for dual-frequency measurements. We apply the L-band InSAR SSM to heavy rain episodes in which more than 50 mm/hour precipitation was reported. We report the presence of phase anomalies in both the dispersive and non-dispersive components. While the original phase anomaly turns out to be mostly due to the non-dispersive effect, we could recognize local anomalies in the dispersive component as well. We will discuss the geophysical implications and may show several case studies.
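The split-spectrum separation itself follows from the fact that the ionospheric contribution scales as 1/f while the non-dispersive (tropospheric plus deformation) contribution scales as f. Solving the two sub-band equations gives the standard combination sketched below; the sub-band frequencies used in the demo are assumed, approximately PALSAR-like values.

    import numpy as np

    def split_spectrum(phi_low, phi_high, f0, f_low, f_high):
        """Separate unwrapped sub-band InSAR phases into dispersive and non-dispersive parts."""
        denom = f_high**2 - f_low**2
        phi_disp = (f_low * f_high) / (f0 * denom) * (phi_low * f_high - phi_high * f_low)
        phi_nondisp = (f0 / denom) * (phi_high * f_high - phi_low * f_low)
        return phi_disp, phi_nondisp

    # Illustrative L-band numbers (centre frequency ~1.27 GHz, two narrow sub-bands).
    f0, f_low, f_high = 1.27e9, 1.2665e9, 1.2735e9
    phi_l = np.array([1.00, 2.00])
    phi_h = np.array([0.99, 1.98])
    print(split_spectrum(phi_l, phi_h, f0, f_low, f_high))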
Possibilities and Determinants of Using Low-Cost Devices in Virtual Education Applications
ERIC Educational Resources Information Center
Bun, Pawel Kazimierz; Wichniarek, Radoslaw; Górski, Filip; Grajewski, Damian; Zawadzki, Przemyslaw; Hamrol, Adam
2017-01-01
Virtual reality (VR) may be used as an innovative educational tool. However, in order to fully exploit its potential, it is essential to achieve the effect of immersion. To more completely submerge the user in a virtual environment, it is necessary to ensure that the user's actions are directly translated into the image generated by the…
NASA Technical Reports Server (NTRS)
Ross, M. D.; Montgomery, K.; Linton, S.; Cheng, R.; Smith, J.
1998-01-01
This report describes the three-dimensional imaging and virtual environment technologies developed in NASA's Biocomputation Center for scientific purposes that have now led to applications in the field of medicine. A major goal is to develop a virtual environment surgery workbench for planning complex craniofacial and breast reconstructive surgery, and for training surgeons.
Tremblay, Line; Roy-Vaillancourt, Mélina; Chebbi, Brahim; Bouchard, Stéphane; Daoust, Michael; Dénommée, Jessica; Thorpe, Moriah
2016-02-01
It is well documented that anti-fat attitudes influence the interactions individuals have with overweight people. However, testing attitudes through self-report measures is challenging. In the present study, we explore the use of a haptic virtual reality environment to physically interact with an overweight virtual human (VH). We verify the hypothesis that the duration and strength of virtual touch vary according to the characteristics of the VH in ways similar to those encountered in interactions with real people in anti-fat attitude studies. A group of 61 participants were randomly assigned to one of the experimental conditions involving giving a virtual hug to a female or a male VH of either normal weight or overweight. We found significant associations between body image satisfaction and anti-fat attitudes, and sex differences on these measures. We also found a significant interaction effect of the sex of the participants, the sex of the VH, and the body size of the VH. Female participants hugged the overweight female VH longer than the overweight male VH. Male participants hugged the normal-weight VH longer than the overweight VH. We conclude that virtual touch is a promising method of measuring attitudes, emotion and social interactions.
3D Seismic Imaging using Marchenko Methods
NASA Astrophysics Data System (ADS)
Lomas, A.; Curtis, A.
2017-12-01
Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in 2012, we have extended them to 3D media and wavefields. We show that while the wavefield effects may be more complex in 3D, Marchenko methods are still valid, and 3D images that are free of multiple-related artefacts, are a realistic possibility.
Virtual slides in peer reviewed, open access medical publication
2011-01-01
Background: Application of virtual slides (VS), the digitization of complete glass slides, is still in its infancy in routine diagnostic surgical pathology and in related tissue-based tasks such as education and scientific publication. Approach: Electronic publication in pathology offers new features of scientific communication that cannot be obtained by conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images, allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are a tool to increase the scientific value of microscopic images. Technology and Performance: The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images, the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu and Leica.com) for digitization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI (digital object identifier). The first articles that include VS were published in March 2011. Results and Perspectives: Several logistic constraints had to be overcome before the first articles including VS could be published. Step by step, an automated acquisition and distribution system had to be implemented and linked to the corresponding article. Acceptance of VS is high among both readers and authors. Of specific value are the increased confidence in and reputation of the authors, as well as the additional information presented to the reader. Additional associated functions such as access to author-owned related image collections, reader-controlled automated image measurements, and image transformations are in preparation. Virtual Slides: The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1232133347629819. PMID:22182763
Nagayama, Yasunori; Nakaura, Takeshi; Oda, Seitaro; Utsunomiya, Daisuke; Funama, Yoshinori; Iyama, Yuji; Taguchi, Narumi; Namimoto, Tomohiro; Yuki, Hideaki; Kidoh, Masafumi; Hirata, Kenichiro; Nakagawa, Masataka; Yamashita, Yasuyuki
2018-04-01
To evaluate the image quality and lesion conspicuity of virtual monochromatic imaging (VMI) with dual-layer DECT (DL-DECT) for reduced-iodine-load multiphasic hepatic CT. Forty-five adults with renal dysfunction who had undergone hepatic DL-DECT with 300 mgI/kg were included. VMI (40-70 keV, DL-DECT-VMI) was generated at each enhancement phase. As controls, 45 matched patients undergoing a standard 120-kVp protocol (120 kVp, 600 mgI/kg, and iterative reconstruction) were included. We compared the size-specific dose estimate (SSDE), image noise, CT attenuation, and contrast-to-noise ratio (CNR) between protocols. Two radiologists scored the image quality and lesion conspicuity. SSDE was significantly lower in the DL-DECT group (p < 0.01). Image noise of DL-DECT-VMI was almost constant at each keV (differences of ≤15%) and equivalent to or lower than that of 120 kVp. As the energy decreased, CT attenuation and CNR gradually increased; the values of 55-60 keV images were almost equivalent to those of standard 120 kVp. The highest scores for overall quality and lesion conspicuity were assigned at 40 keV, followed by 45 to 55 keV, all of which were similar to or better than those of 120 kVp. For multiphasic hepatic CT with 50% iodine load, DL-DECT-VMI at 40 to 55 keV provides equivalent or better image quality and lesion conspicuity without increasing radiation dose compared with the standard 120-kVp protocol. • 40-55 keV yields optimal image quality for half-iodine-load multiphasic hepatic CT with DL-DECT. • The DL-DECT protocol decreases radiation exposure compared with 120-kVp scans with iterative reconstruction. • 40-keV images maximise conspicuity of hepatocellular carcinoma, especially in the hepatic arterial phase.
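For reference, contrast-to-noise ratio comparisons of this kind typically reduce to simple ROI statistics. The minimal sketch below uses the usual definition (difference of mean attenuations divided by background noise); variable names and the example ROIs are placeholders, not the authors' analysis software.

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio from two regions of interest (HU arrays):
    difference of mean attenuations divided by the background standard
    deviation (taken as the image noise)."""
    return (np.mean(lesion_roi) - np.mean(background_roi)) / np.std(background_roi)

# Hypothetical usage: compare a 55-keV virtual monochromatic image with a
# conventional 120-kVp image using the same lesion and liver ROI masks.
# cnr_vmi_55 = cnr(vmi_55kev[lesion_mask], vmi_55kev[liver_mask])
# cnr_120kvp = cnr(conv_120kvp[lesion_mask], conv_120kvp[liver_mask])
```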
Intra-operative 3D imaging system for robot-assisted fracture manipulation.
Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S
2015-01-01
Reduction is a crucial step in the treatment of broken bones. Achieving precise anatomical alignment of the bone fragments is essential for good, fast healing. Percutaneous techniques are associated with faster recovery times and lower infection risk. However, deducing the desired reduction position intra-operatively is quite challenging with the currently available technology. The 2D nature of this technology (i.e., the image intensifier) does not provide enough information to the surgeon regarding fracture alignment and rotation, which is actually a three-dimensional problem. This paper describes the design and development of a 3D imaging system for the intra-operative virtual reduction of joint fractures. The proposed imaging system is able to receive and segment CT scan data of the fracture, generate the 3D models of the bone fragments, and display them on a GUI. A commercial optical tracker was included in the system to track the actual pose of the bone fragments in physical space and generate the corresponding pose relations in the virtual environment of the imaging system. The surgeon virtually reduces the fracture in the 3D virtual environment, and a robotic manipulator connected to the fracture through an orthopedic pin executes the physical reduction accordingly. The system is evaluated here through fracture reduction experiments, demonstrating a reduction accuracy of 1.04 ± 0.69 mm (translational RMSE) and 0.89 ± 0.71° (rotational RMSE).
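The reported accuracy figures are translational and rotational RMSE between planned (virtual) and achieved fragment poses. One plausible way to compute such errors, assuming poses are available as 4x4 homogeneous transforms, is sketched below; the function name and pose representation are assumptions, not the authors' code.

```python
import numpy as np

def pose_rmse(planned_poses, achieved_poses):
    """Translational (mm) and rotational (deg) RMSE between two lists of
    4x4 homogeneous transforms describing fragment poses."""
    t_err, r_err = [], []
    for P, A in zip(planned_poses, achieved_poses):
        d = np.linalg.inv(P) @ A                       # residual transform
        t_err.append(np.linalg.norm(d[:3, 3]))         # translation residual
        # Rotation angle of the residual rotation matrix.
        cos_angle = np.clip((np.trace(d[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        r_err.append(np.degrees(np.arccos(cos_angle)))
    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    return rmse(t_err), rmse(r_err)
```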
Putting humans in the loop: Using crowdsourced snow information to inform water management
NASA Astrophysics Data System (ADS)
Fedorov, Roman; Giuliani, Matteo; Castelletti, Andrea; Fraternali, Piero
2016-04-01
The unprecedented availability of user-generated data on the Web, due to the advent of online services, social networks, and crowdsourcing, is opening new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatio-temporally dense, possibly contributing to our ability to make better decisions. In this work, we contribute a novel crowdsourcing procedure for computing virtual snow indexes from public web images, either produced by users or generated by touristic webcams, based on a complex architecture designed for automatically crawling content from multiple web data sources. The procedure retains only geo-tagged images containing a mountain skyline, identifies the visible peaks in each image using a public online digital terrain model, and classifies the mountain image pixels as snow or no-snow. This operation yields a snow mask per image, from which it is possible to extract time series of virtual snow indexes representing a proxy of the snow-covered area. The value of the obtained virtual snow indexes is estimated in a real-world water management problem. We consider the snow-dominated catchment of Lake Como, a regulated lake in Northern Italy, where snowmelt represents the most important contribution to seasonal lake storage, and we use the virtual snow indexes to inform the daily operation of the lake's dam. Numerical results show that this information is effective in extending the anticipation capacity of the lake operations, ultimately improving system performance.
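As a rough sketch of the final per-image step (the published pipeline is considerably more involved), a virtual snow index can be taken as the fraction of mountain-area pixels classified as snow. The function and variable names below are placeholders assumed for illustration.

```python
import numpy as np

def virtual_snow_index(snow_mask, mountain_mask):
    """Fraction of mountain pixels classified as snow in one web image.

    snow_mask, mountain_mask : boolean 2-D arrays of the same shape,
    produced by the peak-identification and snow/no-snow classification
    steps described in the text.
    """
    mountain_pixels = np.count_nonzero(mountain_mask)
    if mountain_pixels == 0:
        return np.nan
    return np.count_nonzero(snow_mask & mountain_mask) / mountain_pixels

# A daily virtual snow index could then be aggregated over all images of
# that day, e.g. as a median:
# daily_index = np.nanmedian([virtual_snow_index(s, m) for s, m in day_images])
```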
NASA Astrophysics Data System (ADS)
Wong, Erwin
2000-03-01
Traditional linear imaging methods limit the viewer to a single fixed-point perspective. By means of a single-lens, multiple-perspective mirror system, a 360-degree representation of the area around the camera is reconstructed. This reconstruction is used to overcome the limitations of a traditional camera by providing the viewer with many different perspectives. By constructing the mirror as a hemispherical surface with multiple focal lengths at various diameters on the mirror, and by placing a parabolic mirror overhead, a stereoscopic image can be extracted from the image captured by a high-resolution camera placed beneath the mirror. Image extraction and correction are performed by computer processing of the image obtained by the camera; the image presents up to five distinguishable viewpoints from which a computer can extrapolate pseudo-perspective data. Geometric and depth-of-field information can be extrapolated via comparison and isolation of objects within a virtual scene post-processed by the computer. Combining these data with scene-rendering software provides the viewer with the ability to choose a desired viewing position, multiple dynamic perspectives, and virtually constructed perspectives based on minimal existing data. An examination of the workings of the mirror relay system is provided, including possible image extrapolation and correction methods. The generation of virtual interpolated and constructed data is also discussed.
Wagner, A; Ploder, O; Enislidis, G; Truppe, M; Ewers, R
1996-04-01
Interventional video tomography (IVT), a new imaging modality, achieves virtual visualization of anatomic structures in three dimensions for intraoperative stereotactic navigation. Partial immersion into a virtual data space, which is orthotopically coregistered to the surgical field, enhances, by means of a see-through head-mounted display (HMD), the surgeon's visual perception and technique by providing visual access to nonvisual data of anatomy, physiology, and function. The presented cases document the potential of augmented reality environments in maxillofacial surgery.
Massetti, Thais; Fávero, Francis Meire; Menezes, Lilian Del Ciello de; Alvarez, Mayra Priscila Boscolo; Crocetta, Tânia Brusque; Guarnieri, Regiani; Nunes, Fátima L S; Monteiro, Carlos Bandeira de Mello; Silva, Talita Dias da
2018-04-01
To evaluate whether people with Duchenne muscular dystrophy (DMD) practicing a task in a virtual environment could improve performance in a similar task in a real environment, and to determine whether there is transfer between practice in a virtual environment followed by a real environment and vice versa. Twenty-two people with DMD were evaluated and divided into two groups. The goal was to reach out and touch a red cube. Group A began with the real task and had to touch a real object, and Group B began with the virtual task and had to reach a virtual object using the Kinect system. ANOVA showed that all participants decreased their movement time from the first (M = 973 ms) to the last block of acquisition (M = 783 ms) in both the virtual and real tasks, and motor learning could be inferred from the short-term retention and transfer tasks (with increasing distance of the target). However, the evaluation of task performance demonstrated that the virtual task produced inferior performance compared with the real task in all phases of the study, and there was no effect of sequence. Both the virtual and real tasks promoted improvement of performance in the acquisition phase, short-term retention, and transfer. However, there was no transfer of learning between environments. In conclusion, the use of virtual environments for individuals with DMD needs to be considered carefully.
A method for fast automated microscope image stitching.
Yang, Fan; Deng, Zhen-Sheng; Fan, Qiu-Hong
2013-05-01
Image stitching is an important technology to produce a panorama or larger image by combining several images with overlapping areas. In many biomedical studies, image stitching is highly desirable to acquire a panoramic image which represents large areas of certain structures or whole sections, while retaining microscopic resolution. In this study, we develop a fast normal-light microscope image stitching algorithm based on feature extraction. First, an algorithm of scale-space reconstruction of speeded-up robust features (SURF) was proposed to extract features from the images to be stitched in a short time and with higher repeatability. Second, the histogram equalization (HE) method was employed to preprocess the images to enhance their contrast so that more features could be extracted. Third, the rough overlapping zones of the preprocessed images were calculated by phase correlation, and the improved SURF was used to extract the image features in the rough overlapping areas. Fourth, the features were corresponded by a matching algorithm and the transformation parameters were estimated; the images were then blended seamlessly. Finally, this procedure was applied to stitch normal-light microscope images to verify its validity. Our experimental results demonstrate that the improved SURF algorithm is very robust to viewpoint, illumination, blur, rotation, and zoom of the images, and that our method is able to stitch microscope images automatically with high precision and high speed. The method proposed in this paper is also applicable to registration and stitching of common images, as well as to stitching microscope images in the field of virtual microscopy for observing, exchanging, saving, and building databases of microscope images. Copyright © 2013 Elsevier Ltd. All rights reserved.
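A heavily simplified sketch of such a feature-based stitching pipeline (contrast enhancement, feature extraction, matching, transform estimation, warping) is shown below. It is not the authors' implementation: ORB is used as a freely available stand-in for the improved SURF described in the paper (SURF itself requires the opencv-contrib build), the phase-correlation pre-alignment step is omitted, and the final compositing is a plain paste rather than seamless blending.

```python
import cv2
import numpy as np

def stitch_pair(img1, img2):
    """Stitch two overlapping microscope images (simplified sketch)."""
    # Contrast enhancement via histogram equalization on grayscale copies.
    g1, g2 = [cv2.equalizeHist(cv2.cvtColor(i, cv2.COLOR_BGR2GRAY))
              for i in (img1, img2)]

    # Feature extraction (ORB stands in for the paper's improved SURF).
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)

    # Brute-force matching with cross-check; keep the best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

    # Estimate the transform (homography) mapping img2 into img1's frame.
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img2 and paste img1 on top (a real implementation blends seamlessly).
    h, w = img1.shape[:2]
    canvas = cv2.warpPerspective(img2, H, (w * 2, h))
    canvas[0:h, 0:w] = img1
    return canvas
```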
[Development of a software for 3D virtual phantom design].
Zou, Lian; Xie, Zhao; Wu, Qi
2014-02-01
In this paper, we present a 3D virtual phantom design software package, developed using object-oriented programming methodology and dedicated to medical physics research. The software, named Magical Phantom (MPhantom), is composed of a 3D visual builder module and a virtual CT scanner. Users can conveniently construct any complex 3D phantom and then export the phantom as DICOM 3.0 CT images. MPhantom is a user-friendly and powerful tool for 3D phantom configuration and has passed real-scene application tests. MPhantom will accelerate Monte Carlo simulation for dose calculation in radiation therapy and X-ray imaging reconstruction algorithm research.
Coral disease and health workshop: Coral Histopathology II, July 12-14, 2005
Galloway, S.B.; Woodley, Cheryl M.; McLaughlin, S.M.; Work, Thierry M.; Bochsler, V.S.; Meteyer, Carol U.; Sileo, Louis; Peters, E.C.; Kramarsky-Winters, E.; Morado, J. Frank; Parnell, P.G.; Rotstein, D.S.; Harely, R.A.; Reynolds, T.L.
2005-01-01
An exciting highlight of this meeting was provided by Professor Robert Ogilvie (MUSC Department of Cell Biology and Anatomy) when he introduced participants to a new digital technology that is revolutionizing histology and histopathology in the medical field. The Virtual Slide technology creates digital images of histological tissue sections by computer scanning actual slides in high definition and storing the images for retrieval and viewing. Virtual slides now allow any investigator with access to a computer and the web to view, search, annotate and comment on the same tissue sections in real time. Medical and veterinary slide libraries across the country are being converted into virtual slides to enhance biomedical education, research and diagnosis. The coral health and disease researchers at this workshop deem virtual slides as a significant way to increase capabilities in coral histology and a means for pathology consultations on coral disease cases on a global scale.
Using virtual reality for science mission planning: A Mars Pathfinder case
NASA Technical Reports Server (NTRS)
Kim, Jacqueline H.; Weidner, Richard J.; Sacks, Allan L.
1994-01-01
NASA's Mars Pathfinder Project requires a Ground Data System (GDS) that supports both engineering and scientific payloads with reduced mission operations staffing and short planning schedules. In addition, successful surface operation of the lander camera requires efficient mission planning and accurate pointing of the camera. To meet these challenges, a new software strategy was developed that integrates virtual reality technology with existing navigational ancillary information and image processing capabilities. The result is interactive, workstation-based application software that provides a high-resolution, 3-dimensional, stereo display of Mars as if it were viewed through the lander camera. The design, implementation strategy, and parametric specification phases for the development of this software were completed, and the prototype was tested. When completed, the software will allow scientists and mission planners to access simulated and actual scenes of Mars' surface. The perspective from the lander camera will enable scientists to plan activities more accurately and completely. The application will also support the sequence and command generation process and will allow testing and verification of camera pointing commands via simulation.
Super-Resolution Scanning Laser Microscopy Based on Virtually Structured Detection
Zhi, Yanan; Wang, Benquan; Yao, Xincheng
2016-01-01
Light microscopy plays a key role in biological studies and medical diagnosis. The spatial resolution of conventional optical microscopes is limited to approximately half the wavelength of the illumination light as a result of the diffraction limit. Several approaches—including confocal microscopy, stimulated emission depletion microscopy, stochastic optical reconstruction microscopy, photoactivated localization microscopy, and structured illumination microscopy—have been established to achieve super-resolution imaging. However, none of these methods is suitable for the super-resolution ophthalmoscopy of retinal structures because of laser safety issues and inevitable eye movements. We recently experimentally validated virtually structured detection (VSD) as an alternative strategy to extend the diffraction limit. Without the complexity of structured illumination, VSD provides an easy, low-cost, and phase artifact–free strategy to achieve super-resolution in scanning laser microscopy. In this article we summarize the basic principles of the VSD method, review our demonstrated single-point and line-scan super-resolution systems, and discuss both technical challenges and the potential of VSD-based instrumentation for super-resolution ophthalmoscopy of the retina. PMID:27480461
NASA Astrophysics Data System (ADS)
Mashburn, David; Wikswo, John
2007-11-01
Prevailing theories about the response of the heart to high-field shocks predict that local regions of high resistivity distributed throughout the heart create multiple small virtual electrodes that hyperpolarize or depolarize tissue and lead to widespread activation. This resetting of bulk tissue is responsible for the successful functioning of cardiac defibrillators. By activating cardiac tissue with regular linear arrays of spatially alternating bipolar currents, we can simulate these potentials locally. We have studied the activation time due to distributed currents both in a 1D Beeler-Reuter model and on the surface of the whole heart, varying the strength of each source and the separation between them. By comparison with activation-time data from actual field shocks of a whole heart in a bath, we hope to better understand these transient virtual electrodes. Our work was done on the rabbit right ventricle using fluorescent optical imaging and our Phased Array Stimulator for driving the 16 current sources. Our model shows that, for a given total absolute current delivered to a region of tissue, the entire region activates faster if above-threshold sources are more distributed.
Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro
2015-01-01
Abstract The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed a lower degree of pseudoenhancement of renal cysts than 120-kVp images, while maintaining kidney contrast enhancement comparable to that on 120-kVp images. PMID:25881852
Slobounov, Semyon; Sebastianelli, Wayne; Newell, Karl M
2011-01-01
There is a growing concern that traditional neuropsychological (NP) testing tools are not sensitive enough to detect residual brain dysfunction in subjects suffering from mild traumatic brain injury (MTBI). Moreover, most MTBI patients are asymptomatic based on anatomical brain imaging (CT, MRI), neurological examinations, and patients' subjective reports within 10 days post-injury. Our ongoing research has documented that residual balance and visual-kinesthetic dysfunctions, along with their underlying alterations of neural substrates, may be detected in "asymptomatic subjects" by means of Virtual Reality (VR) graphics incorporated with brain imaging (EEG) techniques.
Virtual Rover Takes its First Turn
2004-01-13
This image shows a screenshot from the software used by engineers to drive the Mars Exploration Rover Spirit. The software simulates the rover's movements across the martian terrain, helping to plot a safe course for the rover. The virtual 3-D world around the rover is built from images taken by Spirit's stereo navigation cameras. Regions for which the rover has not yet acquired 3-D data are represented in beige. This image depicts the state of the rover before it backed up and turned 45 degrees on Sol 11 (01-13-04). http://photojournal.jpl.nasa.gov/catalog/PIA05063
T2 shuffling: Sharp, multicontrast, volumetric fast spin-echo imaging.
Tamir, Jonathan I; Uecker, Martin; Chen, Weitian; Lai, Peng; Alley, Marcus T; Vasanawala, Shreyas S; Lustig, Michael
2017-01-01
A new acquisition and reconstruction method called T2 Shuffling is presented for volumetric fast spin-echo (three-dimensional [3D] FSE) imaging. T2 Shuffling reduces blurring and recovers many images at multiple T2 contrasts from a single acquisition at clinically feasible scan times (6-7 min). The parallel imaging forward model is modified to account for temporal signal relaxation during the echo train. Scan efficiency is improved by acquiring data during the transient signal decay and by increasing echo train lengths without loss in signal-to-noise ratio (SNR). By (1) randomly shuffling the phase encode view ordering, (2) constraining the temporal signal evolution to a low-dimensional subspace, and (3) promoting spatio-temporal correlations through locally low rank regularization, a time series of virtual echo time images is recovered from a single scan. A convex formulation is presented that is robust to partial voluming and radiofrequency field inhomogeneity. Retrospective undersampling and in vivo scans confirm the increase in sharpness afforded by T2 Shuffling. Multiple image contrasts are recovered and used to highlight pathology in pediatric patients. A proof-of-principle method is integrated into a clinical musculoskeletal imaging workflow. The proposed T2 Shuffling method improves the diagnostic utility of 3D FSE by reducing blurring and producing multiple image contrasts from a single scan. Magn Reson Med 77:180-195, 2017. © 2016 Wiley Periodicals, Inc.
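A minimal sketch of the temporal-subspace ingredient (item 2 above), under simplifying assumptions: simulate an ensemble of exponential T2 decay curves over the echo train, take their SVD, and project measured signal evolutions onto the first few principal components. This is only the subspace constraint in isolation, not the full T2 Shuffling reconstruction, and the echo-train parameters and T2 range are illustrative values, not those of the paper.

```python
import numpy as np

# Echo train: 80 echoes with 6 ms spacing (illustrative values).
n_echoes, esp = 80, 0.006
t = np.arange(1, n_echoes + 1) * esp

# Dictionary of signal evolutions for a plausible range of T2 values.
t2_values = np.linspace(0.02, 1.0, 256)
dictionary = np.exp(-t[:, None] / t2_values[None, :])    # (echoes, atoms)

# Low-dimensional temporal subspace from the SVD of the dictionary.
U, s, _ = np.linalg.svd(dictionary, full_matrices=False)
K = 4
Phi = U[:, :K]                                            # subspace basis

# Project a noisy measured echo train onto the subspace (least squares).
signal = np.exp(-t / 0.08) + 0.02 * np.random.randn(n_echoes)
coeff = Phi.T @ signal                                    # subspace coefficients
constrained = Phi @ coeff                                 # constrained signal evolution
```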
NASA Astrophysics Data System (ADS)
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Bai, Peirui; Torigian, Drew A.
2017-03-01
Much has been published on finding landmarks on object surfaces in the context of shape modeling. While this is still an open problem, many of the challenges of past approaches can be overcome by removing the restriction that landmarks must lie on the object surface. The virtual landmarks we propose may reside inside, on the boundary of, or outside the object and are tethered to the object. Our solution is straightforward, simple, and recursive in nature, proceeding from global features initially to local features in later levels to detect landmarks. Principal component analysis (PCA) is used as an engine to recursively subdivide the object region. The object itself may be represented in binary or fuzzy form or with gray values. The method is illustrated in 3D space (although it generalizes readily to spaces of any dimensionality) on four objects (liver, trachea and bronchi, and outer boundaries of left and right lungs along the pleura) derived from 5 patient computed tomography (CT) image data sets of the thorax and abdomen. The virtual landmark identification approach seems to work well on different structures in different subjects and seems to detect landmarks that are homologously located in different samples of the same object. The approach guarantees that virtual landmarks are invariant to translation, scaling, and rotation of the object/image. Landmarking techniques are fundamental for many computer vision and image processing applications, and we are currently exploring the use of virtual landmarks in automatic anatomy recognition and object analytics.
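A sketch of the recursive PCA subdivision idea is given below: each region of object voxels contributes its centroid as a landmark and is then split along its first principal axis, so landmarks proceed from global to progressively more local. This is an illustrative rendering of the concept with assumed names, not the authors' algorithm in detail.

```python
import numpy as np

def virtual_landmarks(points, levels=3):
    """Recursive PCA subdivision of an object's voxel coordinates.

    points : (N, 3) array of object voxel coordinates (binary object).
    Returns an array of landmark coordinates: the region centroid at every
    node of the subdivision tree (global first, then progressively local).
    """
    landmarks = []

    def recurse(pts, level):
        if level == 0 or len(pts) < 2:
            return
        centroid = pts.mean(axis=0)
        landmarks.append(centroid)
        # First principal axis of the region via SVD of centered coordinates.
        _, _, Vt = np.linalg.svd(pts - centroid, full_matrices=False)
        axis = Vt[0]
        proj = (pts - centroid) @ axis
        # Split the region into two halves along the principal axis.
        recurse(pts[proj <= 0], level - 1)
        recurse(pts[proj > 0], level - 1)

    recurse(np.asarray(points, dtype=float), levels)
    return np.array(landmarks)
```

Because each landmark is defined only by centroids and principal axes of the object's own coordinates, it moves rigidly with the object, which is consistent with the stated invariance to translation, rotation, and scaling.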
Benchmarking Distance Control and Virtual Drilling for Lateral Skull Base Surgery.
Voormolen, Eduard H J; Diederen, Sander; van Stralen, Marijn; Woerdeman, Peter A; Noordmans, Herke Jan; Viergever, Max A; Regli, Luca; Robe, Pierre A; Berkelbach van der Sprenkel, Jan Willem
2018-01-01
Novel audiovisual feedback methods were developed to improve image guidance during skull base surgery by providing audiovisual warnings when the drill tip enters a protective perimeter set at a distance around anatomic structures ("distance control") and by visualizing bone drilling ("virtual drilling"). The aims were to benchmark the drill damage risk reduction provided by distance control, to quantify the accuracy of virtual drilling, and to investigate whether the proposed feedback methods are clinically feasible. In a simulated surgical scenario using human cadavers, 12 inexperienced users (medical students) drilled 12 mastoidectomies. Users were divided into a control group using standard image guidance and 3 groups using distance control with protective perimeters of 1, 2, or 3 mm. Damage to critical structures (sigmoid sinus, semicircular canals, facial nerve) was assessed. Neurosurgeons performed another 6 mastoidectomy/trans-labyrinthine and retro-labyrinthine approaches. Virtual errors compared with the real postoperative drill cavities were calculated. In a clinical setting, 3 patients received lateral skull base surgery with the proposed feedback methods. Users drilling with distance control protective perimeters of 3 mm did not damage structures, whereas the groups using smaller protective perimeters and the control group injured structures. Virtual drilling maximum cavity underestimations and overestimations were 2.8 ± 0.1 and 3.3 ± 0.4 mm, respectively. The feedback methods functioned properly in the clinical setting. Distance control reduced the risk of drill damage in proportion to the protective perimeter distance. Errors in virtual drilling reflect spatial errors of the image guidance system. These feedback methods are clinically feasible. Copyright © 2017 Elsevier Inc. All rights reserved.
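Conceptually, distance control reduces to a proximity query between the tracked drill tip and the segmented critical structures. The sketch below illustrates that idea with a KD-tree over structure surface points and a configurable protective perimeter; class, function, and callback names are assumptions, not the authors' software.

```python
import numpy as np
from scipy.spatial import cKDTree

class DistanceControl:
    """Warn when the tracked drill tip enters a protective perimeter
    around a critical structure (conceptual sketch only)."""

    def __init__(self, structure_points, perimeter_mm=3.0):
        # structure_points: (N, 3) surface points of e.g. the sigmoid sinus
        # or facial nerve, in image-guidance coordinates (mm).
        self.tree = cKDTree(np.asarray(structure_points, dtype=float))
        self.perimeter = perimeter_mm

    def check(self, tip_position):
        """Return (distance_mm, warn) for the current drill-tip position."""
        distance, _ = self.tree.query(np.asarray(tip_position, dtype=float))
        return distance, distance <= self.perimeter

# Hypothetical usage inside the tracking loop:
# dc = DistanceControl(facial_nerve_points, perimeter_mm=3.0)
# d, warn = dc.check(tracked_tip_xyz)
# if warn:
#     play_audio_warning()   # placeholder for the audiovisual feedback hook
```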
Plot of virtual surgery based on CT medical images
NASA Astrophysics Data System (ADS)
Song, Limei; Zhang, Chunbo
2009-10-01
Although the CT device provides doctors with a series of 2D medical images, these give little vivid insight into the diseased part. To help doctors plan the surgery, a virtual surgery system based on three-dimensional visualization techniques is investigated. After the diseased part of the patient is scanned by the CT device, a whole 3D view is built by the 3D reconstruction module of the system. Cutting away a part is the most commonly used operation for doctors in real surgery. A curve is created in 3D space, and points can be added to the curve automatically or manually. The positions of the points change the shape of the cutting curve, so the curve can be adjusted by controlling the points. If the result of the cut is not satisfactory, all operations can be undone and restarted. This flexible virtual surgery brings more convenience to the real surgery. In contrast to existing medical image processing systems, a virtual surgery module is added, and the virtual surgery can be rehearsed many times until the doctors have enough confidence to start the real surgery. Because the virtual surgery system provides more 3D information about the diseased part, difficult surgeries can be discussed by expert doctors in different cities via the Internet. This helps in understanding the character of the diseased part and thus decreases the surgical risk.
NASA Astrophysics Data System (ADS)
Wapenaar, C. P. A.; Van der Neut, J.; Thorbecke, J.; Broggini, F.; Slob, E. C.; Snieder, R.
2015-12-01
Imagine one could place seismic sources and receivers at any desired position inside the earth. Since the receivers would record the full wave field (direct waves, up- and downward reflections, multiples, etc.), this would give a wealth of information about the local structures, material properties and processes in the earth's interior. Although in reality one cannot place sources and receivers anywhere inside the earth, it appears to be possible to create virtual sources and receivers at any desired position, which accurately mimic the desired situation. The underlying method involves some major steps beyond standard seismic interferometry. With seismic interferometry, virtual sources can be created at the positions of physical receivers, assuming these receivers are illuminated isotropically. Our proposed method does not need physical receivers at the positions of the virtual sources; moreover, it does not require isotropic illumination. To create virtual sources and receivers anywhere inside the earth, it suffices to record the reflection response with physical sources and receivers at the earth's surface. We do not need detailed information about the medium parameters; it suffices to have an estimate of the direct waves between the virtual-source positions and the acquisition surface. With these prerequisites, our method can create virtual sources and receivers anywhere inside the earth, and these record the full wave field. The up- and downward reflections, multiples, etc. in the virtual responses are extracted directly from the reflection response at the surface. The retrieved virtual responses form an ideal starting point for accurate seismic imaging, characterization and monitoring.
Role of post-mapping computed tomography in virtual-assisted lung mapping.
Sato, Masaaki; Nagayama, Kazuhiro; Kuwano, Hideki; Nitadori, Jun-Ichi; Anraku, Masaki; Nakajima, Jun
2017-02-01
Background Virtual-assisted lung mapping is a novel bronchoscopic preoperative lung marking technique in which virtual bronchoscopy is used to predict the locations of multiple dye markings. Post-mapping computed tomography is performed to confirm the locations of the actual markings. This study aimed to examine the accuracy of marking locations predicted by virtual bronchoscopy and elucidate the role of post-mapping computed tomography. Methods Automated and manual virtual bronchoscopy was used to predict marking locations. After bronchoscopic dye marking under local anesthesia, computed tomography was performed to confirm the actual marking locations before surgery. Discrepancies between marking locations predicted by the different methods and the actual markings were examined on computed tomography images. Forty-three markings in 11 patients were analyzed. Results The average difference between the predicted and actual marking locations was 30 mm. There was no significant difference between the latest version of the automated virtual bronchoscopy system (30.7 ± 17.2 mm) and manual virtual bronchoscopy (29.8 ± 19.1 mm). The difference was significantly greater in the upper vs. lower lobes (37.1 ± 20.1 vs. 23.0 ± 6.8 mm, for automated virtual bronchoscopy; p < 0.01). Despite this discrepancy, all targeted lesions were successfully resected using 3-dimensional image guidance based on post-mapping computed tomography reflecting the actual marking locations. Conclusions Markings predicted by virtual bronchoscopy were dislocated from the actual markings by an average of 3 cm. However, surgery was accurately performed using post-mapping computed tomography guidance, demonstrating the indispensable role of post-mapping computed tomography in virtual-assisted lung mapping.
Kumar, Joish Upendra; Kavitha, Y
2017-02-01
With the use of various surgical techniques and types of implants, the preoperative assessment of cochlear dimensions is becoming increasingly relevant prior to cochlear implantation. High resolution CISS protocol MRI gives a better assessment of the membranous cochlea, cochlear nerve, and membranous labyrinth. The curved multiplanar reconstruction (MPR) algorithm provides better images that can be used for measuring the dimensions of the membranous cochlea. The aim was to ascertain the value of the curved multiplanar reconstruction algorithm in high resolution 3-Dimensional T2 Weighted Gradient Echo Constructive Interference Steady State (3D T2W GRE CISS) imaging for accurate morphometry of the membranous cochlea. Fourteen children underwent MRI for inner ear assessment. A high resolution 3D T2W GRE CISS sequence was used to obtain images of the cochlea. The curved MPR reconstruction algorithm was used to virtually uncoil the membranous cochlea on the volume images, and cochlear measurements were made. Virtually uncoiled images of the membranous cochlea of appropriate resolution were obtained from the volume data of the high resolution 3D T2W GRE CISS images. After applying the curved MPR reconstruction algorithm, the mean membranous cochlear length in the children was 27.52 mm. The maximum apical turn diameter of the membranous cochlea was 1.13 mm, the mid turn diameter was 1.38 mm, and the basal turn diameter was 1.81 mm. The curved MPR reconstruction algorithm applied to CISS protocol images facilitates obtaining appropriate quality images of the membranous cochlea for accurate measurements.
Image-guided laser projection for port placement in minimally invasive surgery.
Marmurek, Jonathan; Wedlake, Chris; Pardasani, Utsav; Eagleson, Roy; Peters, Terry
2006-01-01
We present an application of an augmented reality laser projection system in which procedure-specific optimal incision sites, computed from pre-operative image acquisition, are superimposed on a patient to guide port placement in minimally invasive surgery. Tests were conducted to evaluate the fidelity of computed and measured port configurations, and to validate the accuracy with which a surgical tool-tip can be placed at an identified virtual target. A high resolution volumetric image of a thorax phantom was acquired using helical computed tomography imaging. Oriented within the thorax, a phantom organ with marked targets was visualized in a virtual environment. A graphical interface enabled marking the locations of target anatomy, and calculation of a grid of potential port locations along the intercostal rib lines. Optimal configurations of port positions and tool orientations were determined by an objective measure reflecting image-based indices of surgical dexterity, hand-eye alignment, and collision detection. Intra-operative registration of the computed virtual model and the phantom anatomy was performed using an optical tracking system. Initial trials demonstrated that computed and projected port placement provided direct access to target anatomy with an accuracy of 2 mm.
Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee
2012-05-01
Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gate array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks, and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrates the potential of the system for 3-D fluorescence visualization of the oral cavity in real time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.
ERIC Educational Resources Information Center
Woodward, John
As part of a 3-year study to identify emerging issues and trends in technology for special education, this paper addresses the possible contributions of virtual reality technology to educational services for students with disabilities. An example of the use of virtual reality in medical imaging introduces the paper and leads to a brief review of…
Qiu, L L; Li, S; Bai, Y X
2016-06-01
To develop surgical templates for orthodontic miniscrew implantation based on cone-beam CT (CBCT) three-dimensional (3D) images and to evaluate the safety and stability of implantation guided by these templates. DICOM data obtained from patients who had undergone CBCT scans were processed using Mimics software, and 3D images of the teeth and maxillary bone were acquired. Meanwhile, 3D images of the miniscrews were acquired using Solidworks software and processed with Mimics software. The virtual positions of the miniscrews were determined based on the 3D images of teeth, bone, and miniscrews. 3D virtual templates were designed according to the virtual implantation plans. STL files were output and the real templates were fabricated with a stereolithographic appliance (SLA). Postoperative CBCT scans were used to evaluate implantation safety, and the stability of the miniscrews was investigated. All the templates were positioned accurately and kept stable throughout the implantation process. No root damage was found. The deviations were (1.73 ± 0.65) mm at the coronal end and (1.28 ± 0.82) mm at the apex, respectively. The stability of the miniscrews was fairly good. Surgical templates for miniscrew implantation could be produced based on 3D CBCT images and fabricated with SLA. Implantation guided by these templates was safe and stable.
Cabrilo, Ivan; Bijlenga, Philippe; Schaller, Karl
2014-09-01
Augmented reality technology has been used for intraoperative image guidance through the overlay of virtual images, from preoperative imaging studies, onto the real-world surgical field. Although setups based on augmented reality have been used for various neurosurgical pathologies, very few cases have been reported for the surgery of arteriovenous malformations (AVM). We present our experience with AVM surgery using a system designed for image injection of virtual images into the operating microscope's eyepiece, and discuss why augmented reality may be less appealing in this form of surgery. N = 5 patients underwent AVM resection assisted by augmented reality. Virtual three-dimensional models of patients' heads, skulls, AVM nidi, and feeder and drainage vessels were selectively segmented and injected into the microscope's eyepiece for intraoperative image guidance, and their usefulness was assessed in each case. Although the setup helped in performing tailored craniotomies, in guiding dissection and in localizing drainage veins, it did not provide the surgeon with useful information concerning feeder arteries, due to the complexity of AVM angioarchitecture. The difficulty in intraoperatively conveying useful information on feeder vessels may make augmented reality a less engaging tool in this form of surgery, and might explain its underrepresentation in the literature. Integrating an AVM's hemodynamic characteristics into the augmented rendering could make it more suited to AVM surgery.
Borehole radar interferometry revisited
Liu, Lanbo; Ma, Chunguang; Lane, John W.; Joesten, Peter K.
2014-01-01
Single-hole, multi-offset borehole-radar reflection (SHMOR) is an effective technique for fracture detection. However, commercial radar system limitations hinder the acquisition of multi-offset reflection data in a single borehole. Transforming cross-hole transmission mode radar data to virtual single-hole, multi-offset reflection data using a wave interferometric virtual source (WIVS) approach has been proposed but not fully demonstrated. In this study, we compare WIVS-derived virtual single-hole, multi-offset reflection data to real SHMOR radar reflection profiles using cross-hole and single-hole radar data acquired in two boreholes located at the University of Connecticut (Storrs, CT USA). The field data results are similar to full-waveform numerical simulations developed for a two-borehole model. The reflection from the adjacent borehole is clearly imaged by both the real and WIVS-derived virtual reflection profiles. Reflector travel-time changes induced by deviation of the two boreholes from the vertical can also be observed on the real and virtual reflection profiles. The results of this study demonstrate the potential of the WIVS approach to improve bedrock fracture imaging for hydrogeological and petroleum reservoir development applications.
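The generic cross-correlation form of interferometry behind such virtual-source construction is sketched below: recordings at two receivers are cross-correlated and stacked over the available (transmitter) source positions to approximate the response at one receiver to a virtual source at the other. This is only the textbook operator; the WIVS implementation used in the study may differ in detail, and all names are placeholders.

```python
import numpy as np

def virtual_source_trace(data_a, data_b):
    """Interferometric virtual-source trace between two receivers.

    data_a, data_b : (n_sources, n_samples) transmission recordings at
    receivers A and B for the same set of transmitter positions.
    Cross-correlating the two recordings and stacking over sources yields
    an estimate of the response at B to a virtual source at A.
    """
    n_src, n_t = data_a.shape
    trace = np.zeros(2 * n_t - 1)
    for s in range(n_src):
        # Lag axis of the result runs from -(n_t - 1) to +(n_t - 1) samples.
        trace += np.correlate(data_b[s], data_a[s], mode="full")
    return trace / n_src
```

Repeating this for every receiver pair in one borehole would build the virtual single-hole, multi-offset reflection gather that is compared against the real SHMOR profile in the study.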
NASA Astrophysics Data System (ADS)
Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin
2006-02-01
A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, namely the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. The traditional methods of controlling navigation through virtual environments include gloves, HUDs, and 3D mouse devices. By integrating a wireless network that includes both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can therefore be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.
Real-time, rapidly updating severe weather products for virtual globes
NASA Astrophysics Data System (ADS)
Smith, Travis M.; Lakshmanan, Valliappa
2011-01-01
It is critical that weather forecasters are able to put severe weather information from a variety of observational and modeling platforms into a geographic context so that warning information can be effectively conveyed to the public, emergency managers, and disaster response teams. The availability of standards for the specification and transport of virtual globe data products has made it possible to generate spatially precise, geo-referenced images and to distribute these centrally created products via a web server to a wide audience. In this paper, we describe the data and methods for enabling severe weather threat analysis information inside a KML framework. The method of creating severe weather diagnosis products and translating them to KML and image files is described. We illustrate some of the practical applications of these data when they are integrated into a virtual globe display. The availability of standards for interoperable virtual globe clients has not completely alleviated the need for custom solutions. We conclude by pointing out several of the limitations of the general-purpose virtual globe clients currently available.
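As a minimal illustration of the KML translation step (not the authors' production system), a geo-referenced severe-weather image can be wrapped in a KML GroundOverlay that any virtual-globe client can display. File names and the bounding box below are placeholders.

```python
def write_ground_overlay(kml_path, image_href, north, south, east, west,
                         name="reflectivity"):
    """Write a minimal KML GroundOverlay referencing a geo-referenced image."""
    kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{image_href}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
"""
    with open(kml_path, "w") as f:
        f.write(kml)

# Hypothetical usage for one rapidly updating product tile:
# write_ground_overlay("reflectivity.kml", "reflectivity.png",
#                      north=37.5, south=34.0, east=-95.0, west=-100.0)
```

In a real-time setting the image and KML would be regenerated on each update cycle and served over HTTP, with a NetworkLink on the client side polling for the latest product.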
Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj
2008-03-01
The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires the development of appropriate hardware and software to connect the medical instrumentarium with the computer and to operate the computer through the connected instrumentarium and sophisticated multimedia interfaces.
Low cost heads-up virtual reality (HUVR) with optical tracking and haptic feedback
NASA Astrophysics Data System (ADS)
Margolis, Todd; DeFanti, Thomas A.; Dawe, Greg; Prudhomme, Andrew; Schulze, Jurgen P.; Cutchin, Steve
2011-03-01
Researchers at the University of California, San Diego, have created a new, relatively low-cost augmented reality system that enables users to touch the virtual environment they are immersed in. The Heads-Up Virtual Reality device (HUVR) couples a consumer 3D HD flat screen TV with a half-silvered mirror to project any graphic image onto the user's hands and into the space surrounding them. With his or her head position optically tracked to generate the correct perspective view, the user maneuvers a force-feedback (haptic) device to interact with the 3D image, literally 'touching' the object's angles and contours as if it were a tangible physical object. HUVR can be used for training and education in structural and mechanical engineering, archaeology and medicine as well as other tasks that require hand-eye coordination. One of the unique characteristics of HUVR is that a user can place their hands inside of the virtual environment without occluding the 3D image. Built using open-source software and consumer level hardware, HUVR offers users a tactile experience in an immersive environment that is functional, affordable and scalable.
Face recognition based on symmetrical virtual image and original training image
NASA Astrophysics Data System (ADS)
Ke, Jingcheng; Peng, Yali; Liu, Shigang; Li, Jun; Pei, Zhao
2018-02-01
In face representation-based classification methods, we are able to obtain a high recognition rate if a face has enough available training samples. However, in practical applications, we only have limited training samples to use. In order to obtain enough training samples, many methods simultaneously use the original training samples and corresponding virtual samples to strengthen the ability to represent the test sample. One approach directly uses the original training samples and corresponding mirror samples to recognize the test sample. However, when the test sample is nearly symmetrical while the original training samples are not, the integration of the original training and mirror samples might not represent the test sample well. To tackle the above-mentioned problem, in this paper we propose a novel method to obtain a kind of virtual sample generated by averaging the original training samples and the corresponding mirror samples. Then, the original training samples and the virtual samples are integrated to recognize the test sample. Experimental results on five face databases show that the proposed method is able to partly overcome the challenges of the various poses, facial expressions and illuminations of the original face images.
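A short sketch of the virtual-sample construction described above is given below: each training image is mirrored horizontally and averaged with its mirror, and the resulting virtual samples are appended to the training set. The toy nearest-neighbour classifier is only a stand-in; the paper integrates the augmented set into a representation-based classifier instead.

```python
import numpy as np

def augment_with_symmetric_virtual_samples(X, y):
    """X: (n_samples, h, w) face images; y: (n_samples,) labels.
    Returns the original images plus the virtual samples obtained by
    averaging each image with its horizontal mirror, with matching labels."""
    mirrors = X[:, :, ::-1]
    virtual = 0.5 * (X + mirrors)          # symmetrical virtual samples
    return np.concatenate([X, virtual]), np.concatenate([y, y])

def nearest_neighbour_label(train_imgs, train_labels, test_img):
    """Toy classifier on the augmented set (illustrative stand-in for the
    representation-based classification used in the paper)."""
    d = np.linalg.norm(train_imgs.reshape(len(train_imgs), -1)
                       - test_img.ravel(), axis=1)
    return train_labels[np.argmin(d)]
```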
Integrated Data Visualization and Virtual Reality Tool
NASA Technical Reports Server (NTRS)
Dryer, David A.
1998-01-01
The Integrated Data Visualization and Virtual Reality Tool (IDVVRT) Phase II effort was for the design and development of an innovative Data Visualization Environment Tool (DVET) for NASA engineers and scientists, enabling them to visualize complex multidimensional and multivariate data in a virtual environment. The objectives of the project were to: (1) demonstrate the transfer and manipulation of standard engineering data in a virtual world; (2) demonstrate the effects of design and changes using finite element analysis tools; and (3) determine the training and engineering design and analysis effectiveness of the visualization system.
VIRTUAL FRAME BUFFER INTERFACE
NASA Technical Reports Server (NTRS)
Wolfe, T. L.
1994-01-01
Large image processing systems use multiple frame buffers with differing architectures and vendor supplied user interfaces. This variety of architectures and interfaces creates software development, maintenance, and portability problems for application programs. The Virtual Frame Buffer Interface program makes all frame buffers appear as a generic frame buffer with a specified set of characteristics, allowing programmers to write code which will run unmodified on all supported hardware. The Virtual Frame Buffer Interface converts generic commands to actual device commands. The virtual frame buffer consists of a definition of capabilities and FORTRAN subroutines that are called by application programs. The virtual frame buffer routines may be treated as subroutines, logical functions, or integer functions by the application program. Routines are included that allocate and manage hardware resources such as frame buffers, monitors, video switches, trackballs, tablets and joysticks; access image memory planes; and perform alphanumeric font or text generation. The subroutines for the various "real" frame buffers are in separate VAX/VMS shared libraries allowing modification, correction or enhancement of the virtual interface without affecting application programs. The Virtual Frame Buffer Interface program was developed in FORTRAN 77 for a DEC VAX 11/780 or a DEC VAX 11/750 under VMS 4.X. It supports ADAGE IK3000, DEANZA IP8500, Low Resolution RAMTEK 9460, and High Resolution RAMTEK 9460 Frame Buffers. It has a central memory requirement of approximately 150K. This program was developed in 1985.
Overstreet, Nicole M.; Quinn, Diane M.; Marsh, Kerry L.
2015-01-01
The current study examined whether exposure to sexually objectifying images in a potential romantic partner's virtual apartment affects discrepancies between people's perception of their own appearance (i.e., self-perceptions) and their perception of the body ideal that is considered desirable to a romantic partner (i.e., partner-ideals). Participants were 114 heterosexual undergraduate students (57 women and 57 men) from a northeastern U.S. university. The study used a 2 (Participant Gender) x 2 (Virtual Environment: Sexualized vs. Non-Sexualized) between-subjects design. We predicted that women exposed to sexually objectifying images in a virtual environment would report greater discrepancies between their self-perceptions and partner-ideals than men, which in turn would contribute to women's body consciousness. Findings support this hypothesis and show that perceived discrepancies account for the relationship between exposure to sexually objectifying images and body consciousness for women but not men. We also found gender asymmetries in objectification responses when each component of perceived discrepancies, i.e., self-perceptions versus perceptions of a romantic partner's body ideal, were examined separately. For men, exposure to muscular sexualized images was significantly associated with their self-perceptions but not their perceptions of the body size that is considered desirable to a romantic partner. For women, exposure to thin sexualized images was significantly associated with their perceptions that a romantic partner preferred a woman with a smaller body size. However, exposure to these images did not affect women's self-perceptions. Implications for gender asymmetries in objectification responses and perceived discrepancies that include a romantic partner's perceptions are discussed. PMID:26594085
Virtual Guidance Ultrasound: A Tool to Obtain Diagnostic Ultrasound for Remote Environments
NASA Technical Reports Server (NTRS)
Caine,Timothy L.; Martin David S.; Matz, Timothy; Lee, Stuart M. C.; Stenger, Michael B.; Platts, Steven H.
2012-01-01
Astronauts currently acquire ultrasound images on the International Space Station with the assistance of real-time remote guidance from an ultrasound expert in Mission Control. Remote guidance will not be feasible when significant communication delays exist during exploration missions beyond low-Earth orbit. For example, there may be as much as a 20-minute delay in communications between the Earth and Mars. Virtual guidance, a pre-recorded audio-visual tutorial viewed in real time, is a viable modality for minimally trained scanners to obtain diagnostically adequate images of clinically relevant anatomical structures in an autonomous manner. METHODS: Inexperienced ultrasound operators were recruited to perform carotid artery (n = 10) and ophthalmic (n = 9) ultrasound examinations using virtual guidance as their only instructional tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed-wave, and color Doppler images of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. RESULTS: Eight of the 10 carotid studies were judged to be diagnostically adequate. With one exception, the quality of all the ophthalmic images was adequate to excellent. CONCLUSION: Diagnostically adequate carotid and ophthalmic ultrasound examinations can be obtained by untrained operators with instruction only from an audio/video tutorial viewed in real time while scanning. This form of quick-response guidance, which can be developed for other ultrasound examinations, represents an opportunity to acquire important medical and scientific information for NASA flight surgeons and researchers when trained medical personnel are not present. Further, virtual guidance will allow untrained personnel to autonomously obtain important medical information in remote locations on Earth where communication is difficult or absent.
Kim, Kwanguk; Kim, Chan-Hyung; Cha, Kyung Ryeol; Park, Junyoung; Han, Kiwan; Kim, Yun Ki; Kim, Jae-Jin; Kim, In Young; Kim, Sun I
2008-12-01
The current study is a preliminary test of a virtual reality (VR) anxiety-provoking tool using a sample of participants with obsessive-compulsive disorder (OCD). The tasks were administered to 33 participants with OCD and 30 healthy control participants. In the VR task, participants navigated through a virtual environment using a joystick and head-mounted display. The virtual environment consisted of three phases: training, distraction, and the main task. After the training and distraction phases, participants were allowed to check (a common OCD behavior) freely, as they would in the real world, and a visual analogue scale of anxiety was recorded during VR. Participants' anxiety in the virtual environment was measured with a validated measure of psychiatric symptoms and functions and analyzed together with a VR questionnaire. Results revealed that participants with OCD had significantly higher anxiety in the virtual environment than did healthy controls, and the ratio of decrease in anxiety was also higher in participants with OCD than in healthy controls. Moreover, the degree of anxiety of an individual with OCD was positively correlated with his or her symptom score and immersive tendency score. These results suggest that VR technology has value as an anxiety-provoking or treatment tool for OCD.
Application of MR virtual endoscopy in children with hydrocephalus.
Zhao, Cailei; Yang, Jian; Gan, Yungen; Liu, Jiangang; Tan, Zhen; Liang, Guohua; Meng, Xianlei; Sun, Longwei; Cao, Weiguo
2015-12-01
To evaluate the performance of MR virtual endoscopy (MRVE) in children with hydrocephalus. Clinical and imaging data were collected from 15 pediatric patients with hydrocephalus and 15 normal control children. All hydrocephalus cases were confirmed by ventriculoscopy or CT imaging. The cranial 3D T1-weighted imaging data from the fast spoiled gradient echo (FSPGR) scan were transferred to a workstation, and VE images of the cerebral ventricular cavity were constructed with Navigator software. Cerebral ventricular MRVE can achieve results similar to those of ventriculoscopy in demonstrating the morphology of the ventricular wall or intracavitary lesions. In addition, MRVE can observe a lesion from the distal end of the obstruction, as well as other areas that are inaccessible to ventriculoscopy. MRVE can also reveal pathological changes of the inner ventricular wall surface and help determine the patency of the cerebral aqueduct and the fourth ventricle outlet. MR virtual endoscopy provides a non-invasive diagnostic modality that can be used as a supplemental approach to ventriculoscopy. However, its sensitivity and specificity need to be determined in a larger study. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Zheng, Guoyan
2007-03-01
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In the so-called CT-based systems or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or through intraoperatively digitized anatomical landmarks, a photorealistic rendering of the surgical action has been shown to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
Phase-contrast tomography of neuronal tissues: from laboratory- to high resolution synchrotron CT
NASA Astrophysics Data System (ADS)
Töpperwien, Mareike; Krenkel, Martin; Müller, Kristin; Salditt, Tim
2016-10-01
Assessing the three-dimensional architecture of neuronal tissues with sub-cellular resolution presents a significant analytical challenge. Overcoming the limitations associated with serial slicing, phase-contrast x-ray tomography has the potential to contribute to this goal. Even compact laboratory CT, using an optimized liquid-metal jet micro-focus source combined with suitable phase-retrieval algorithms and preparation protocols, can yield renderings with single-cell sensitivity in millimeter-sized areas of the mouse brain. Here, we show the capabilities of the setup by imaging a Golgi-Cox impregnated mouse brain. Towards higher resolution, we extend these studies at our recently upgraded waveguide-based cone-beam holo-tomography instrument GINIX at DESY. This setup allows high-resolution recordings with adjustable field of view and resolution, down to voxel sizes in the range of a few tens of nanometers. The recent results make us confident that important issues of neuronal connectivity can be addressed by these methods, and that 3D (virtual) histology with nanoscale resolution will become an attractive modality for neuroscience research.
Electro-optical imaging systems integration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wight, R.
1987-01-01
Since the advent of high resolution, high data rate electronic sensors for military aircraft, the demands on their counterpart, the image generator hard copy output system, have increased dramatically. This has included support of direct overflight and standoff reconnaissance systems and often has required operation within a military shelter or van. The Tactical Laser Beam Recorder (TLBR) design has met the challenge each time. A third generation (TLBR) was designed and two units delivered to rapidly produce high quality wet process imagery on 5-inch film from a 5-sensor digital image signal input. A modular, in-line wet film processor is included in the total TLBR (W) system. The system features a rugged optical and transport package that requires virtually no alignment or maintenance. It has a 'Scan FIX' capability which corrects for scanner fault errors and a 'Scan LOC' system which provides for complete phase synchronism isolation between scanner and digital image data input via strobed, 2-line digital buffers. Electronic gamma adjustment automatically compensates for variable film processing time as the film speed changes to track the sensor. This paper describes the fourth meeting of that challenge, the High Resolution Laser Beam Recorder (HRLBR) for Reconnaissance/Tactical applications.
NASA Astrophysics Data System (ADS)
Lam, Walter Y. H.; Ngan, Henry Y. T.; Wat, Peter Y. P.; Luk, Henry W. K.; Goto, Tazuko K.; Pow, Edmond H. N.
2015-02-01
Medical radiography is the use of radiation to "see through" a human body without breaching its integrity (surface). With computed tomography (CT)/cone beam computed tomography (CBCT), three-dimensional (3D) imaging can be produced. These images not only facilitate disease diagnosis but also enable computer-aided surgical planning/navigation. In dentistry, the common method for transfer of the virtual surgical planning to the patient (reality) is the use of a surgical stent, either with a preloaded planning (static), like a channel, or with real-time surgical navigation (dynamic) after registration with fiducial markers (RF). This paper describes using the corner of a cube as a radiopaque fiducial marker on an acrylic (plastic) stent; this RF allows robust calibration and registration of Cartesian (x, y, z)-coordinates for linking up the patient (reality) and the imaging (virtuality), and hence the surgical planning can be transferred in either a static or a dynamic way. The accuracy of computer-aided implant surgery was measured with reference to these coordinates. In our preliminary model surgery, a dental implant was planned virtually and placed with a preloaded surgical guide. The deviation of the placed implant apex from the planning was x = +0.56 mm [more right], y = -0.05 mm [deeper], z = -0.26 mm [more lingual], which was within the clinically accepted 2 mm safety range. For comparison with the virtual planning, the physically placed implant was scanned again with CT/CBCT, which may itself introduce errors. The difference between the actual implant apex and the virtual apex was x = 0.00 mm, y = +0.21 mm [shallower], z = -1.35 mm [more lingual], and this should be borne in mind when interpreting the results.
Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.
Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques
2015-04-01
Augmented reality (AR) in surgery consists of the fusion of synthetic computer-generated images (a 3D virtual model) obtained from the preoperative medical imaging workup with real-time patient images, in order to visualize unapparent anatomical details. The 3D model can also be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained from a thoracoabdominal CT scan using custom software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images in real time using a video mixer (MX 70; Panasonic, Secaucus, NJ). Two fully robotic AR-assisted segment V segmentectomies and one segment VI segmentectomy were performed. AR allowed the precise and safe recognition of all major vascular structures during the procedure. The total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful, without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.
Visual stimulus presentation using fiber optics in the MRI scanner.
Huang, Ruey-Song; Sereno, Martin I
2008-03-30
Imaging the neural basis of visuomotor actions using fMRI is a topic of increasing interest in the field of cognitive neuroscience. One challenge is to present realistic three-dimensional (3-D) stimuli in the subject's peripersonal space inside the MRI scanner. The stimulus generating apparatus must be compatible with strong magnetic fields and must not interfere with image acquisition. Virtual 3-D stimuli can be generated with a stereo image pair projected onto screens or via binocular goggles. Here, we describe designs and implementations for automatically presenting physical 3-D stimuli (point-light targets) in peripersonal and near-face space using fiber optics in the MRI scanner. The feasibility of fiber-optic based displays was demonstrated in two experiments. The first presented a point-light array along a slanted surface near the body, and the second presented multiple point-light targets around the face. Stimuli were presented using phase-encoded paradigms in both experiments. The results suggest that fiber-optic based displays can be a complementary approach for visual stimulus presentation in the MRI scanner.
Evaluation of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope
NASA Astrophysics Data System (ADS)
Yoshimoto, Kayo; Watabe, Kenji; Fujinaga, Tetsuji; Iijima, Hideki; Tsujii, Masahiko; Takahashi, Hideya; Takehara, Tetsuo; Yamada, Kenji
2017-02-01
Because the view angle of the endoscope is narrow, it is difficult to obtain a whole image of the digestive tract at once. If there are more than two lesions in the digestive tract, it is hard to understand the 3D positional relationship among the lesions. Virtual endoscopy using CT is the current standard method for obtaining a whole view of the digestive tract. Because virtual endoscopy is designed to detect irregularity of the surface, it cannot detect lesions that lack irregularity, including early cancer. In this study, we propose a method of endoscopic entire 3D image acquisition of the digestive tract using a stereo endoscope. The method is as follows: 1) capture sequential images of the digestive tract by moving the endoscope, 2) reconstruct the 3D surface pattern for each frame from the stereo images, 3) estimate the position of the endoscope by image analysis, 4) reconstitute the entire image of the digestive tract by combining the 3D surface patterns. To confirm the validity of this method, we experimented with a straight tube inside which circles were placed at equal intervals of 20 mm. We captured sequential images, and the reconstituted image of the tube showed that the distance between each circle was 20.2 +/- 0.3 mm (n=7). The results suggest that this method of endoscopic entire 3D image acquisition may help us understand the 3D positional relationship among lesions, such as early esophageal cancer, that cannot be detected by virtual endoscopy using CT.
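As a hedged illustration of step 2 of the pipeline above (per-frame 3D surface reconstruction from a stereo image pair), the following Python sketch uses OpenCV block matching and reprojection; the file names, matcher parameters, and the disparity-to-depth matrix Q are assumptions for illustration, not values from the paper.

import cv2
import numpy as np

# Assumed inputs: rectified grayscale stereo frames and the 4x4 disparity-to-depth
# matrix Q from stereo calibration (cv2.stereoRectify); names are illustrative.
left = cv2.imread("frame_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("frame_right.png", cv2.IMREAD_GRAYSCALE)
Q = np.load("Q.npy")

# Block matching to estimate a disparity map for one frame pair.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # SGBM returns fixed-point

# Reproject disparities to a 3D surface patch (one point per valid pixel).
points_3d = cv2.reprojectImageTo3D(disparity, Q)
valid = disparity > 0
surface = points_3d[valid]
print("Reconstructed", surface.shape[0], "surface points for this frame")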
NASA Astrophysics Data System (ADS)
Wagner, Martin G.; Strother, Charles M.; Schafer, Sebastian; Mistretta, Charles A.
2016-03-01
Biplane fluoroscopic imaging is an important tool for minimally invasive procedures for the treatment of cerebrovascular diseases. However, finding a good working angle for the C-arms of the angiography system as well as navigating based on the 2D projection images can be a difficult task. The purpose of this work is to propose a novel 4D reconstruction algorithm for interventional devices from biplane fluoroscopy images and to propose new techniques for better visualization of the results. The proposed reconstruction method binarizes the fluoroscopic images using a dedicated noise reduction algorithm for curvilinear structures and a global thresholding approach. A topology-preserving thinning algorithm is then applied, and a path search algorithm minimizing the curvature of the device is used to extract the 2D device centerlines. Finally, the 3D device path is reconstructed using epipolar geometry. The point correspondences are determined by a monotonic mapping function that minimizes the reconstruction error. The three-dimensional reconstruction of the device path allows the rendering of virtual fluoroscopy images from arbitrary angles as well as 3D visualizations such as virtual endoscopic views or glass-pipe renderings, where the vessel wall is rendered with a semi-transparent material. This work also proposes a combination of different visualization techniques in order to increase the usability and spatial orientation for the user. A combination of synchronized endoscopic and glass-pipe views is proposed, where the virtual endoscopic camera position is determined based on the device tip location as well as the previous camera position using a Kalman filter in order to create a smooth path. Additionally, vessel centerlines are displayed and the path to the target is highlighted. Finally, the virtual endoscopic camera position is also visualized in the glass-pipe view to further improve spatial orientation. The proposed techniques could considerably improve the workflow of minimally invasive procedures for the treatment of cerebrovascular diseases.
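A minimal sketch of the final epipolar reconstruction step described above, assuming the 2D device centerlines have already been extracted from the two views and that the 3x4 projection matrices of the biplane system are known from calibration; the paper's monotonic correspondence search is simplified here to equal arc-length resampling, and all names are illustrative.

import numpy as np
import cv2

def resample_centerline(pts, n):
    # Resample a 2D polyline (k x 2) to n points at equal arc length.
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    s = np.linspace(0, d[-1], n)
    return np.column_stack([np.interp(s, d, pts[:, i]) for i in range(2)])

# P_a, P_b: assumed 3x4 projection matrices of the two C-arm views (from calibration).
# line_a, line_b: device centerlines (k x 2 pixel coordinates) extracted after noise
# reduction, thresholding and thinning, as in the algorithm described above.
def reconstruct_device_path(P_a, P_b, line_a, line_b, n=100):
    a = resample_centerline(line_a, n).T.astype(np.float64)  # 2 x n
    b = resample_centerline(line_b, n).T.astype(np.float64)
    X = cv2.triangulatePoints(P_a, P_b, a, b)                # homogeneous 4 x n
    return (X[:3] / X[3]).T                                  # n x 3 device path in 3D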
NASA Astrophysics Data System (ADS)
Ikeda, Sei; Sato, Tomokazu; Kanbara, Masayuki; Yokoya, Naokazu
2004-05-01
Technology that enables users to experience a remote site virtually is called telepresence. A telepresence system using real environment images is expected to be used in the fields of entertainment, medicine, education and so on. This paper describes a novel telepresence system which enables users to walk through a photorealistic virtualized environment by actual walking. To realize such a system, a wide-angle high-resolution movie is projected on an immersive multi-screen display to present the virtualized environment to users, and a treadmill is controlled according to the detected locomotion of the user. In this study, we use an omnidirectional multi-camera system to acquire images of a real outdoor scene. The proposed system provides users with a rich sense of walking in a remote site.
Gigapixel imaging as a resource for geoscience teaching, research, and outreach
NASA Astrophysics Data System (ADS)
Bentley, C.; Pitts, A.; Rohrback, R. C.; Dudek, M.
2015-12-01
The Mid-Atlantic Geo-Image Collection is a repository of gigapixel-resolution geologic imagery intended as a tool for geoscience professionals, educators, students, & researchers (http://gigapan.com/groups/100/galleries). GigaPan provides a unique combination of context & detail, with images that maintain a high level of resolution through every level of magnification. Using geological GigaPans, physically disabled students can participate in virtual field trips, instructors can bring inaccessible outcrops into the classroom, & students can zoom in on hand samples without expensive microscopes. Because GigaPan images permit detailed visual examination of geologic features, MAGIC is particularly suitable for use in online geology courses. The images are free to use and tag. Our 10 contributors (3 faculty, 2 graduate students, & 6 undergraduates) use 4 models of mobile robot cameras (outcrop/landscape), 2 laboratory-based GIGAmacro imaging systems (hand samples) & 2 experimental units: 1 for thin sections, 1 for GigaPans of scanning electron microscopy. Each of these has strengths & weaknesses. MAGIC has suites of images of Appalachian structure & stratigraphy, the Rocky Mountains, the Snowball Earth hypothesis, & doomed outcrops of Miocene strata on Chesapeake Bay. Virtual field trips with our imagery have been developed for: Billy Goat Trail, MD; Helen Lake, AB; Wind River Canyon, WY; the Canadian Rockies; El Paso, TX; glaciation around the world; and Corridor H, WV (a GSA field trip in Nov. 2015). Virtual sample sets have been developed for introductory minerals, igneous, sedimentary, & metamorphic rocks, the stratigraphy of VA's physiographic provinces, & the Snowball Earth hypothesis. The virtual field trips have been tested in both online & onsite courses. There are close to a thousand images in the collection, each averaging about 0.9 gigapixels in size, with close to 900,000 views total. A new viewer for GigaPans was released this year by GIGAmacro. This new viewer allows measurement and calibration, automatically resizing scale bars, side-by-side comparisons between 2 images, overlapping presentation of 2 images, & annotation by users. Comparative viewers are particularly useful for the presentation of before/after imagery, raw vs. annotated imagery, & polarized views of thin sections.
fVisiOn: glasses-free tabletop 3D display to provide virtual 3D media naturally alongside real media
NASA Astrophysics Data System (ADS)
Yoshida, Shunsuke
2012-06-01
A novel glasses-free tabletop 3D display, named fVisiOn, floats virtual 3D objects on an empty, flat tabletop surface and enables multiple viewers to observe raised 3D images from any angle around 360°. Our glasses-free 3D image reproduction method employs a combination of an optical device and an array of projectors and produces continuous horizontal parallax in the direction of a circular path located above the table. The optical device is shaped as a hollow cone and works as an anisotropic diffuser. The circularly arranged projectors cast numerous rays into the optical device. Each ray represents a particular ray that passes through a corresponding point on a virtual object's surface and is oriented toward a viewing area around the table. At any viewpoint on the ring-shaped viewing area, both eyes collect fractional images from different projectors, and all the viewers around the table can perceive the scene as 3D from their own perspectives because the images include binocular disparity. The entire mechanism is installed beneath the table, so the tabletop area remains clear and no ordinary tabletop activities are disturbed. Many people can naturally share the 3D images displayed together with real objects on the table. In our latest prototype, we employed a handmade optical device and an array of over 100 tiny projectors. This configuration reproduces static and animated 3D scenes for a 130° viewing area and allows 5-cm-tall virtual characters to play soccer and dance on the table.
The Adaptive Effects Of Virtual Interfaces: Vestibulo-Ocular Reflex and Simulator Sickness.
1998-08-07
rearrangement: a pattern of stimulation differing from that existing as a result of normal interactions with the real world. Stimulus rearrangements can...is immersive and interactive. virtual interface: a system of transducers, signal processors, computer hardware and software that creates an... interactive medium through which: 1) information is transmitted to the senses in the form of two- and three-dimensional virtual images and 2) psychomotor
Rocha, Rafael; Vassallo, José; Soares, Fernando; Miller, Keith; Gobbi, Helenice
2009-01-01
In the last few years, telepathology has benefited from progress in the technology of image digitization and transmission over the World Wide Web. The applications of telepathology and virtual imaging are currently more common in research and morphology teaching. In daily surgical pathology practice, this technology still has limits and is more often used for case consultation. In the present review, we intend to discuss its applications and challenges for pathologists and scientists. Many of the limitations of virtual imaging for the surgical pathologist reside in the capacity to store images, which so far has hindered more widespread use of this technology. Overcoming this major drawback may revolutionize the surgical pathologist's activity and slide storage.
Dual energy computed tomography for the head.
Naruto, Norihito; Itoh, Toshihide; Noguchi, Kyo
2018-02-01
Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects. DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium.
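As a hedged sketch of the material decomposition idea described above, the Python fragment below solves a two-material (e.g. water/iodine) decomposition per voxel from the low- and high-energy images and synthesizes a virtual monochromatic image; the attenuation coefficients are placeholder values for illustration only, not vendor calibration data, and the function names are assumptions.

import numpy as np

# Illustrative effective attenuation values of the two basis materials at the
# low- and high-kV spectra; these numbers are placeholders, not calibration data.
MU = np.array([[0.20, 4.50],    # low kV:  [water, iodine]
               [0.17, 2.10]])   # high kV: [water, iodine]

def decompose(img_low, img_high):
    # Solve mu_low = MU[0] @ c and mu_high = MU[1] @ c per voxel for the
    # basis-material images c (a 2x2 linear system at every voxel).
    rhs = np.stack([img_low.ravel(), img_high.ravel()])   # 2 x N
    c = np.linalg.solve(MU, rhs)                          # 2 x N material images
    return c.reshape((2,) + img_low.shape)

def virtual_monochromatic(c, mu_water_E, mu_iodine_E):
    # Synthesize a virtual monochromatic image at one energy from the basis images.
    return mu_water_E * c[0] + mu_iodine_E * c[1]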
Hybrid Cloud Computing Environment for EarthCube and Geoscience Community
NASA Astrophysics Data System (ADS)
Yang, C. P.; Qin, H.
2016-12-01
The NSF EarthCube Integration and Test Environment (ECITE) has built a hybrid cloud computing environment that provides cloud resources from private cloud environments using the cloud system software OpenStack and Eucalyptus, and also manages a public cloud, Amazon Web Services, allowing resource synchronization and bursting between the private and public clouds. On the ECITE hybrid cloud platform, the EarthCube and geoscience community can deploy and manage applications using base virtual machine images or customized virtual machines, analyze big datasets using virtual clusters, and monitor virtual resource usage on the cloud in real time. Currently, a number of EarthCube projects have deployed or started migrating their projects to this platform, such as CHORDS, BCube, CINERGI, OntoSoft, and some other EarthCube building blocks. To accomplish the deployment or migration, the administrator of the ECITE hybrid cloud platform prepares the specific needs (e.g. images, port numbers, usable cloud capacity, etc.) of each project in advance, based on communications between ECITE and the participating projects; the scientists or IT technicians in those projects then launch one or multiple virtual machines, access the virtual machine(s) to set up the computing environment if needed, and migrate their code, documents or data without having to deal with the heterogeneity in structure and operations among different cloud platforms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fitzpatrick, David; St Luke's Hospital, Dublin; Grabarz, Daniel
Purpose: The purpose of this study was to assess the accuracy of a virtual consultation (VC) process in determining treatment strategy for patients with malignant epidural spinal cord compression (MESCC). Methods and Materials: A prospective clinical database was maintained for patients with MESCC. A virtual consultation process (involving exchange of key predetermined clinical information and diagnostic imaging) facilitated rapid decision-making between oncologists and spinal surgeons. Diagnostic imaging was reviewed retrospectively (by R.R.) for surgical opinions in all patients. The primary outcome was the accuracy of the virtual consultation opinion in predicting the final treatment recommendation. Results: After excluding 20 patients who were referred directly to the spinal surgeon, 125 patients were eligible for virtual consultation. Of the 46 patients who had a VC, surgery was recommended in 28 patients and actually given to 23. A retrospective review revealed that 5/79 patients who did not have a VC would have been considered surgical candidates. The overall accuracy of the virtual consultation process was estimated at 92%. Conclusion: The VC process for MESCC patients provides a reliable means of arriving at a multidisciplinary opinion while minimizing patient transfer. This can potentially shorten treatment decision time and enhance clinical outcomes.
Virtual endoscopy in neurosurgery: a review.
Neubauer, André; Wolfsberger, Stefan
2013-01-01
Virtual endoscopy is the computerized creation of images depicting the inside of patient anatomy reconstructed in a virtual reality environment. It permits interactive, noninvasive, 3-dimensional visual inspection of anatomical cavities or vessels. This can aid in diagnostics, potentially replacing an actual endoscopic procedure, and help in the preparation of a surgical intervention by bridging the gap between plain 2-dimensional radiologic images and the 3-dimensional depiction of anatomy during actual endoscopy. If not only the endoscopic vision but also endoscopic handling, including realistic haptic feedback, is simulated, virtual endoscopy can be an effective training tool for novice surgeons. In neurosurgery, the main fields of application of virtual endoscopy are third ventriculostomy, endonasal surgery, and the evaluation of pathologies in cerebral blood vessels. Progress in this very active field of research is achieved through cooperation between the technical and medical communities. While the technology advances and new methods for modeling, reconstruction, and simulation are being developed, clinicians evaluate existing simulators, steer the development of new ones, and explore new fields of application. This review introduces some of the most interesting virtual reality systems for endoscopic neurosurgery developed in recent years and presents clinical studies conducted either on areas of application or on specific systems. In addition, the benefits and limitations of individual products and of simulated neuroendoscopy in general are pointed out.
Retinal image quality and visual stimuli processing by simulation of partial eye cataract
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Danilenko, Olga; Zavjalova, Varvara
2016-10-01
Visual stimuli were displayed on a 4.3'' mobile phone screen inside a "Virtual Reality" adapter that separated the left- and right-eye visual fields. The contrast of the retinal image could thus be controlled by the image on the phone screen and, in parallel, at appropriate geometry, by the AC voltage applied to a scattering PDLC cell inside the adapter. This separation of the optical pathways makes it possible to present spatially different images to the two eyes which, after binocular fusion, acquire their characteristic appearance. As visual stimuli we used grey and colored (the red-green opponent components of vision in L*a*b* color space) spatially periodic patterns for the left and right eyes, with spatial content that, by addition or subtraction, resulted in clockwise or counter-clockwise slanted Gabor gratings. We performed computer modeling with numerical addition or subtraction of signals, analogous to processing in the brain via decomposition of the stimulus input into luminance and color-opponency components. The modeling revealed that the psychophysical equilibrium point between clockwise and counter-clockwise perception of the summed stimulus depends on the contrast and color saturation of the image presented to one eye and on the strength of retinal aftereffects. Such an equilibrium point exists only after prior adaptation to a slanted periodic grating, and only for an appropriate slant orientation of the adaptation grating and/or an appropriate spatial phase of the grating pattern relative to the grating nodes. Observer experiments in which the image to one eye was deteriorated by a simulated cataract confirmed that this psychophysical equilibrium point shifts with the degree of artificial cataract. We also analyzed the emission spectra of the mobile-device stimuli, paying attention to the spectral regions of maximal absorption by the macular pigments and to the blue region, where intense irradiation can cause abnormalities in periodic melatonin regeneration and deviations from regular circadian rhythms. Participants in vision studies using "Virtual Reality" appliances with fixed visual fields that emit spike-like spectral bands (based on OLED and AMOLED diodes), differing from the spectra of ambient illuminants, should therefore be warned accordingly about potential health risks.
High-Performance Tiled WMS and KML Web Server
NASA Technical Reports Server (NTRS)
Plesea, Lucian
2007-01-01
This software is an Apache 2.0 module implementing a high-performance map server to support interactive map viewers and virtual planet client software. It can be used in applications that require access to very-high-resolution geolocated images, such as GIS, virtual planet applications, and flight simulators. It serves Web Map Service (WMS) requests that comply with a given request grid from an existing tile dataset. It also generates the KML super-overlay configuration files required to access the WMS image tiles.
Fast-response LCDs for virtual reality applications
NASA Astrophysics Data System (ADS)
Chen, Haiwei; Peng, Fenglin; Gou, Fangwang; Wand, Michael; Wu, Shin-Tson
2017-02-01
We demonstrate a fast-response liquid crystal display (LCD) with an ultra-low-viscosity nematic LC mixture. The measured average motion picture response time is only 6.88 ms, which is comparable to 6.66 ms for an OLED at a 120 Hz frame rate. If we slightly increase the TFT frame rate and/or reduce the backlight duty ratio, image blur can be further suppressed to an unnoticeable level. Potential applications of such an image-blur-free LCD for virtual reality, gaming monitors, and TVs are foreseeable.
Using Virtual Observatory Services in Sky View
NASA Technical Reports Server (NTRS)
McGlynn, Thomas A.
2007-01-01
For over a decade SkyView has provided astronomers and the public with easy access to survey and imaging data from all wavelength regimes. SkyView has pioneered many of the concepts that underlie the Virtual Observatory. Recently SkyView has been released as a distributable package which uses VO protocols to access image and catalog services. This chapter describes how to use SkyView as a local service and how to customize it to access additional VO services and local data.
Drape simulation and subjective assessment of virtual drape
NASA Astrophysics Data System (ADS)
Buyukaslan, E.; Kalaoglu, F.; Jevsnik, S.
2017-10-01
In this study, a commercial 3D virtual garment simulation software package (Optitex) is used to simulate the drape behaviour of five different fabrics. Mechanical properties of the selected fabrics are measured by the Fabric Assurance by Simple Testing (FAST) method. The measured bending, shear and extension properties of the fabrics are entered into the simulation software to achieve more realistic simulations. Simulation images of the fabrics were shown to 27 people, who were asked to match real drape images of the fabrics with the simulated drape images. The simulations of two fabrics were correctly matched by the majority of the test group; however, the simulations of the other three fabrics were mismatched by most of the participants.
Three-dimensional image signals: processing methods
NASA Astrophysics Data System (ADS)
Schiopu, Paul; Manea, Adrian; Craciun, Anca-Ileana; Craciun, Alexandru
2010-11-01
Over the years, extensive studies have been carried out to apply coherent optics methods to real-time processing, communications and image transmission. This is especially true when a large amount of information needs to be processed, e.g., in high-resolution imaging. The recent progress in data-processing networks and communication systems has considerably increased the capacity for information exchange. We describe the results of a literature survey of processing methods for three-dimensional image signals. All commercially available 3D technologies today are based on stereoscopic viewing. 3D technology was once the exclusive domain of skilled computer-graphics developers with high-end machines and software. Images captured with an advanced 3D digital camera can be displayed on the screen of a 3D digital viewer with or without special glasses. This requires considerable processing power and memory to create and render the complex mix of colors, textures, and virtual lighting and perspective necessary to make figures appear three-dimensional. Also, using a standard digital camera and a technique called phase-shift interferometry, we can capture "digital holograms." These are holograms that can be stored on a computer and transmitted over conventional networks. We present some research methods for processing "digital holograms" for Internet transmission, together with results.
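As a hedged illustration of the phase-shift interferometry mentioned above, the short Python sketch below recovers the wrapped phase of the object wave from four camera frames recorded at reference phase shifts of 0°, 90°, 180° and 270° (the standard four-step algorithm); the array names are assumptions for illustration only.

import numpy as np

def four_step_phase(i0, i90, i180, i270):
    # Standard four-step phase-shifting formula: the wrapped object phase is
    # atan2(I270 - I90, I0 - I180), evaluated pixel by pixel.
    return np.arctan2(i270 - i90, i0 - i180)

# The wrapped phase map (together with an amplitude estimate) is what would be
# stored and transmitted as the "digital hologram".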
Peng, Hanchuan; Tang, Jianyong; Xiao, Hang; Bria, Alessandro; Zhou, Jianlong; Butler, Victoria; Zhou, Zhi; Gonzalez-Bellido, Paloma T; Oh, Seung W; Chen, Jichao; Mitra, Ananya; Tsien, Richard W; Zeng, Hongkui; Ascoli, Giorgio A; Iannello, Giulio; Hawrylycz, Michael; Myers, Eugene; Long, Fuhui
2014-07-11
Three-dimensional (3D) bioimaging, visualization and data analysis are in strong need of powerful 3D exploration techniques. We develop virtual finger (VF) to generate 3D curves, points and regions-of-interest in the 3D space of a volumetric image with a single finger operation, such as a computer mouse stroke, or click or zoom from the 2D-projection plane of an image as visualized with a computer. VF provides efficient methods for acquisition, visualization and analysis of 3D images for roundworm, fruitfly, dragonfly, mouse, rat and human. Specifically, VF enables instant 3D optical zoom-in imaging, 3D free-form optical microsurgery, and 3D visualization and annotation of terabytes of whole-brain image volumes. VF also leads to orders of magnitude better efficiency of automated 3D reconstruction of neurons and similar biostructures over our previous systems. We use VF to generate from images of 1,107 Drosophila GAL4 lines a projectome of a Drosophila brain.
Efficient threshold for volumetric segmentation
NASA Astrophysics Data System (ADS)
Burdescu, Dumitru D.; Brezovan, Marius; Stanescu, Liana; Stoica Spahiu, Cosmin; Ebanca, Daniel
2015-07-01
Image segmentation plays a crucial role in the effective understanding of digital images. However, research on a general-purpose segmentation algorithm that suits a variety of applications is still very active. Among the many approaches to image segmentation, the graph-based approach is gaining popularity, primarily due to its ability to reflect global image properties. Volumetric image segmentation can simply result in an image partition composed of relevant regions, but the most fundamental challenge for a segmentation algorithm is to precisely define the volumetric extent of an object, which may be represented by the union of multiple regions. The aim of this paper is to present a new method to detect visual objects in color volumetric images together with an efficient threshold. We present a unified framework for volumetric image segmentation and contour extraction that uses a virtual tree-hexagonal structure defined on the set of image voxels. The advantage of using a virtual tree-hexagonal network superposed over the initial image voxels is that it reduces the execution time and the memory space used, without losing the initial resolution of the image.
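For illustration only, the sketch below shows a generic graph-based volumetric merge with an adaptive threshold of the form internal(C) + k/|C| on a plain 6-connected voxel grid; it is a stand-in for the general idea of thresholded graph merging, not the paper's virtual tree-hexagonal construction, and the parameter k and all names are assumptions.

import numpy as np

class UnionFind:
    # Disjoint-set forest tracking component size and largest internal edge weight.
    def __init__(self, n):
        self.parent = np.arange(n)
        self.size = np.ones(n, dtype=np.int64)
        self.internal = np.zeros(n)
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b, w):
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = max(self.internal[a], self.internal[b], w)

def segment_volume(vol, k=300.0):
    # Greedy merging of a 6-connected voxel graph in increasing order of edge
    # weight; two components merge when the edge weight does not exceed the
    # adaptive threshold internal(C) + k/|C| of both components (illustrative
    # criterion, not the paper's method).
    vol = vol.astype(np.float64)
    idx = np.arange(vol.size).reshape(vol.shape)
    edges = []
    for axis in range(3):
        a = np.take(idx, np.arange(vol.shape[axis] - 1), axis=axis).ravel()
        b = np.take(idx, np.arange(1, vol.shape[axis]), axis=axis).ravel()
        w = np.abs(vol.ravel()[a] - vol.ravel()[b])
        edges.append(np.column_stack([w, a, b]))
    edges = np.vstack(edges)
    edges = edges[np.argsort(edges[:, 0])]
    uf = UnionFind(vol.size)
    for w, a, b in edges:
        ra, rb = uf.find(int(a)), uf.find(int(b))
        if ra == rb:
            continue
        if w <= min(uf.internal[ra] + k / uf.size[ra], uf.internal[rb] + k / uf.size[rb]):
            uf.union(ra, rb, w)
    return np.array([uf.find(i) for i in range(vol.size)]).reshape(vol.shape)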
Developing a Virtual Rock Deformation Laboratory
NASA Astrophysics Data System (ADS)
Zhu, W.; Ougier-simonin, A.; Lisabeth, H. P.; Banker, J. S.
2012-12-01
Experimental rock physics plays an important role in advancing earthquake research. Despite its importance in geophysics, reservoir engineering, waste deposits and energy resources, most geology departments in U.S. universities don't have rock deformation facilities. A virtual deformation laboratory can serve as an efficient tool to help geology students nationally and internationally learn about rock deformation. Working with computer science engineers, we built a virtual deformation laboratory that aims at fostering user interaction to facilitate classroom and outreach teaching and learning. The virtual lab is built around a triaxial deformation apparatus in which laboratory measurements of mechanical and transport properties such as stress, axial and radial strains, acoustic emission activity, wave velocities, and permeability are demonstrated. A student user can create her avatar to enter the virtual lab. In the virtual lab, the avatar can browse and choose among various rock samples, determine the testing conditions (pressure, temperature, strain rate, loading paths), then operate the virtual deformation machine to observe how deformation changes the physical properties of rocks. Actual experimental results on the mechanical, frictional, sonic, acoustic and transport properties of different rocks at different conditions are compiled, and the data acquisition system in the virtual lab is linked to the compiled experimental data. Structural and microstructural images of deformed rocks are uploaded and linked to the different deformation tests. The integration of the microstructural images and the deformation data allows the student to visualize how forces reshape the structure of the rock and change its physical properties. The virtual lab is built using a game engine. The geological background, outstanding questions related to the geological environment, and the physical and mechanical concepts associated with the problem are illustrated on the web portal. In addition, web-based data collection tools are available to collect student feedback and opinions on their learning experience. The virtual laboratory is designed to be an online education tool that facilitates interactive learning.
de Leng, Bas A; Dolmans, Diana H J M; Muijtjens, Arno M M; van der Vleuten, Cees P M
2006-06-01
To investigate the effects of a virtual learning environment (VLE) on group interaction and consultation of information resources during the preliminary phase, self-study phase and reporting phase of the problem-based learning process in an undergraduate medical curriculum. A questionnaire was administered to 355 medical students in Years 1 and 2 to ask them about the perceived usefulness of a virtual learning environment that was created with Blackboard for group interaction and the use of learning resources. The students indicated that the VLE supported face-to-face interaction in the preliminary discussion and in the reporting phase but did not stimulate computer-mediated distance interaction during the self-study phase. They perceived that the use of multimedia in case presentations led to a better quality of group discussion than if case presentations were exclusively text-based. They also indicated that the information resources that were hyperlinked in the VLE stimulated the consultation of these resources during self-study, but not during the reporting phase. Students indicated that the use of a VLE in the tutorial room and the inclusion of multimedia in case presentations supported processes of active learning in the tutorial groups. However, if we want to exploit the full potential of asynchronous computer-mediated communication to initiate in-depth discussion during the self-study phase, its application will have to be selective and deliberate. Students indicated that the links in the VLE to selected information in library repositories supported their learning.
Kobayashi, Hajime; Ohkubo, Masaki; Narita, Akihiro; Marasinghe, Janaka C; Murao, Kohei; Matsumoto, Toru; Sone, Shusuke
2017-01-01
Objective: We propose the application of virtual nodules to evaluate the performance of computer-aided detection (CAD) of lung nodules in cancer screening using low-dose CT. Methods: The virtual nodules were generated based on the spatial resolution measured for a CT system used in an institution providing cancer screening and were fused into clinical lung images obtained at that institution, allowing site specificity. First, we validated virtual nodules as an alternative to artificial nodules inserted into a phantom. In addition, we compared the results of CAD analysis between the real nodules (n = 6) and the corresponding virtual nodules. Subsequently, virtual nodules of various sizes and contrasts between nodule density and background density (ΔCT) were inserted into clinical images (n = 10) and submitted for CAD analysis. Results: In the validation study, 46 of 48 virtual nodules had the same CAD results as artificial nodules (kappa coefficient = 0.913). Real nodules and the corresponding virtual nodules showed the same CAD results. The detection limits of the tested CAD system were determined in terms of size and density of peripheral lung nodules; we demonstrated that a nodule with a 5-mm diameter was detected when the nodule had a ΔCT > 220 HU. Conclusion: Virtual nodules are effective in evaluating CAD performance using site-specific scan/reconstruction conditions. Advances in knowledge: Virtual nodules can be an effective means of evaluating site-specific CAD performance. The methodology for guiding the detection limit for nodule size/density might be a useful evaluation strategy. PMID:27897029
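As a simplified, hedged sketch of how a virtual nodule of a given diameter and contrast (ΔCT) might be generated and fused into a clinical CT volume, the Python fragment below uses an isotropic Gaussian blur as a stand-in for the measured spatial resolution of the specific CT system; the function names, the Gaussian PSF model, and all parameters are assumptions for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter

def make_virtual_nodule(patch_mm, diameter_mm, delta_ct, voxel_mm, psf_sigma_mm):
    # Spherical nodule of given diameter and contrast (delta CT in HU above the
    # background), blurred with a Gaussian standing in for the measured PSF.
    n = int(round(patch_mm / voxel_mm))
    c = (n - 1) / 2.0
    z, y, x = np.indices((n, n, n))
    r = np.sqrt((x - c) ** 2 + (y - c) ** 2 + (z - c) ** 2) * voxel_mm
    nodule = np.where(r <= diameter_mm / 2.0, float(delta_ct), 0.0)
    return gaussian_filter(nodule, sigma=psf_sigma_mm / voxel_mm)

def fuse_nodule(ct_volume, nodule, corner):
    # Add the nodule patch into a clinical CT volume at a chosen corner index.
    z, y, x = corner
    d = nodule.shape[0]
    out = ct_volume.astype(np.float32).copy()
    out[z:z+d, y:y+d, x:x+d] += nodule
    return out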
Virtual Sensors: Using Data Mining Techniques to Efficiently Estimate Remote Sensing Spectra
NASA Technical Reports Server (NTRS)
Srivastava, Ashok N.; Oza, Nikunj; Stroeve, Julienne
2004-01-01
Various instruments are used to create images of the Earth and other objects in the universe in a diverse set of wavelength bands with the aim of understanding natural phenomena. These instruments are sometimes built in a phased approach, with some measurement capabilities being added in later phases. In other cases, there may not be a planned increase in measurement capability, but technology may mature to the point that it offers new measurement capabilities that were not available before. In still other cases, detailed spectral measurements may be too costly to perform on a large sample. Thus, lower resolution instruments with lower associated cost may be used to take the majority of measurements. Higher resolution instruments, with a higher associated cost may be used to take only a small fraction of the measurements in a given area. Many applied science questions that are relevant to the remote sensing community need to be addressed by analyzing enormous amounts of data that were generated from instruments with disparate measurement capability. This paper addresses this problem by demonstrating methods to produce high accuracy estimates of spectra with an associated measure of uncertainty from data that is perhaps nonlinearly correlated with the spectra. In particular, we demonstrate multi-layer perceptrons (MLPs), Support Vector Machines (SVMs) with Radial Basis Function (RBF) kernels, and SVMs with Mixture Density Mercer Kernels (MDMK). We call this type of an estimator a Virtual Sensor because it predicts, with a measure of uncertainty, unmeasured spectral phenomena.
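The following Python sketch illustrates, under stated assumptions, the kind of regressors named above acting as a "Virtual Sensor": an MLP and an RBF-kernel SVM trained to map lower-resolution measurements to unmeasured spectral bands. The synthetic data, hyperparameters, and the use of scikit-learn (the Mixture Density Mercer Kernel variant is not shown) are assumptions, not details from the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

# Synthetic stand-in data: X holds the bands measured by the lower-resolution
# instrument, Y the co-located higher-resolution spectra to be estimated.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
Y = np.column_stack([np.sin(X[:, 0]) + 0.1 * rng.normal(size=500),
                     X[:, 1] ** 2 + 0.1 * rng.normal(size=500)])

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, Y)
svr = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, gamma="scale")).fit(X, Y)

# The trained regressor acts as the "virtual sensor": it predicts the unmeasured
# spectral bands for new inputs; ensembles or residual spread could supply the
# uncertainty estimate mentioned in the abstract.
print(mlp.predict(X[:3]))
print(svr.predict(X[:3]))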
New approaches to virtual environment surgery
NASA Technical Reports Server (NTRS)
Ross, M. D.; Twombly, A.; Lee, A. W.; Cheng, R.; Senger, S.
1999-01-01
This research focused on two main problems: 1) low-cost, high-fidelity stereoscopic imaging of complex tissues and organs; and 2) virtual cutting of tissue. A further objective was to develop these images and virtual tissue cutting methods for use in a telemedicine project that would connect remote sites using the Next Generation Internet. For goal one, we used a CT scan of a human heart, a desktop PC with an OpenGL graphics accelerator card, and LCD stereoscopic glasses. The use of multiresolution meshes ranging from approximately 1,000,000 to 20,000 polygons sped up interactive rendering rates enormously while retaining the general topography of the dataset. For goal two, we used a CT scan of an infant skull with premature closure of the right coronal suture, a Silicon Graphics Onyx workstation, a Fakespace Immersive WorkBench and CrystalEyes LCD glasses. The high-fidelity mesh of the skull was reduced from one million to 50,000 polygons. The cut path was automatically calculated as the shortest distance along the mesh between a small number of hand-selected vertices. The region outlined by the cut path was then separated from the skull and translated/rotated to assume a new position. The results indicate that widespread high-fidelity imaging in a virtual environment is possible using ordinary PC capabilities if appropriate mesh reduction methods are employed. The software cutting tool is applicable to the heart and other organs for surgery planning, for training surgeons in a virtual environment, and for telemedicine purposes.
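A minimal sketch, under stated assumptions, of computing a cut path as the shortest distance along a triangle mesh between hand-selected vertices, as described above; it uses Dijkstra's algorithm on the mesh edge graph via SciPy, and the function and variable names are illustrative rather than taken from the original software.

import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def cut_path(vertices, faces, picked):
    # vertices: (V, 3) float array; faces: (F, 3) int array; picked: vertex ids
    # chosen by hand. Chains the picked vertices with shortest paths along mesh
    # edges weighted by Euclidean edge length (illustrative reimplementation).
    e = np.vstack([faces[:, [0, 1]], faces[:, [1, 2]], faces[:, [2, 0]]])
    e = np.unique(np.sort(e, axis=1), axis=0)           # deduplicate shared edges
    w = np.linalg.norm(vertices[e[:, 0]] - vertices[e[:, 1]], axis=1)
    n = len(vertices)
    g = coo_matrix((np.r_[w, w], (np.r_[e[:, 0], e[:, 1]], np.r_[e[:, 1], e[:, 0]])),
                   shape=(n, n)).tocsr()
    path = [picked[0]]
    for a, b in zip(picked[:-1], picked[1:]):
        _, pred = dijkstra(g, indices=a, return_predecessors=True)
        seg, v = [], b
        while v != a and v >= 0:                         # walk predecessors back to a
            seg.append(v)
            v = pred[v]
        path.extend(reversed(seg))
    return path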
Wellenberg, R H H; Boomsma, M F; van Osch, J A C; Vlassenbroek, A; Milles, J; Edens, M A; Streekstra, G J; Slump, C H; Maas, M
2017-03-01
To quantify the impact of prosthesis material and design on the reduction of metal artefacts in total hip arthroplasties using virtual monochromatic dual-layer detector spectral CT imaging. The water-filled total hip arthroplasty phantom was scanned on a novel 128-slice Philips IQon dual-layer detector spectral CT scanner at 120 kVp and 140 kVp at a standard computed tomography dose index of 20.0 mGy. Several unilateral and bilateral hip prostheses consisting of different metal alloys were inserted and combined; they were surrounded by 18 hydroxyapatite calcium carbonate pellets representing bone. Images were reconstructed with iterative reconstruction and analysed at monochromatic energies ranging from 40 to 200 keV. CT numbers in Hounsfield units (HU), noise measured as the standard deviation in HU, signal-to-noise ratios (SNRs) and contrast-to-noise ratios (CNRs) were analysed within fixed regions of interest placed in and around the pellets. In 70 and 74 keV virtual monochromatic images, the CT numbers of the pellets were similar to the 120-kVp and 140-kVp polychromatic results, therefore serving as reference. A separation into three categories of metal artefacts was made (none, mild/moderate and severe), where pellets were categorized based on HU deviations. At high keV values overall image contrast was reduced. For mild/moderate artefacts, the highest average CNRs were attained with virtual monochromatic 130 keV images acquired at 140 kVp. Severe metal artefacts were not reduced. In 130 keV images, only mild/moderate metal artefacts were significantly reduced compared with 70 and 74 keV images. Deviations in CT numbers, noise, SNRs and CNRs due to metal artefacts were decreased by 64%, 57%, 62% and 63%, respectively (p<0.001), compared with unaffected pellets. Optimal keVs, based on CNRs, for the different unilateral and bilateral metal hip prostheses consisting of different metal alloys varied from 74 to 150 keV. The titanium alloy resulted in less severe artefacts, which were also reduced more effectively, compared with the cobalt alloy. Virtual monochromatic dual-layer spectral CT imaging results in a significant reduction of streak artefacts produced by beam hardening in mild and moderate artefacts, by improving CT number accuracy, SNRs and CNRs while decreasing noise values in a total hip arthroplasty phantom. An optimal monochromatic energy of 130 keV was found, ranging from 74 keV to 150 keV for the different unilateral and bilateral hip prostheses consisting of different metal alloys. Copyright © 2017 Elsevier Ireland Ltd. All rights reserved.
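As a hedged illustration of the ROI-based image-quality metrics reported above, the short Python function below computes SNR and CNR for a pellet ROI against a background ROI, with noise taken as the standard deviation within the background region; the ROI masks, the function name, and this particular SNR definition are assumptions for illustration.

import numpy as np

def roi_stats(image, mask_roi, mask_bg):
    # image: 2D/3D array of CT numbers (HU); mask_roi / mask_bg: boolean masks
    # for a pellet region of interest and a nearby background region (assumed).
    mean_roi = image[mask_roi].mean()
    mean_bg = image[mask_bg].mean()
    noise = image[mask_bg].std()                  # noise as background std dev
    snr = mean_roi / noise
    cnr = abs(mean_roi - mean_bg) / noise
    return snr, cnr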
Lahiani, Amal; Klaiman, Eldad; Grimm, Oliver
2018-01-01
Context: Medical diagnosis and clinical decisions rely heavily on the histopathological evaluation of tissue samples, especially in oncology. Historically, classical histopathology has been the gold standard for tissue evaluation and assessment by pathologists. The most widely and commonly used dyes in histopathology are hematoxylin and eosin (H&E) as most malignancies diagnosis is largely based on this protocol. H&E staining has been used for more than a century to identify tissue characteristics and structures morphologies that are needed for tumor diagnosis. In many cases, as tissue is scarce in clinical studies, fluorescence imaging is necessary to allow staining of the same specimen with multiple biomarkers simultaneously. Since fluorescence imaging is a relatively new technology in the pathology landscape, histopathologists are not used to or trained in annotating or interpreting these images. Aims, Settings and Design: To allow pathologists to annotate these images without the need for additional training, we designed an algorithm for the conversion of fluorescence images to brightfield H&E images. Subjects and Methods: In this algorithm, we use fluorescent nuclei staining to reproduce the hematoxylin information and natural tissue autofluorescence to reproduce the eosin information avoiding the necessity to specifically stain the proteins or intracellular structures with an additional fluorescence stain. Statistical Analysis Used: Our method is based on optimizing a transform function from fluorescence to H&E images using least mean square optimization. Results: It results in high quality virtual H&E digital images that can easily and efficiently be analyzed by pathologists. We validated our results with pathologists by making them annotate tumor in real and virtual H&E whole slide images and we obtained promising results. Conclusions: Hence, we provide a solution that enables pathologists to assess tissue and annotate specific structures based on multiplexed fluorescence images. PMID:29531846
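A minimal sketch of the least-squares fitting idea described above, assuming a simple affine transform from the fluorescence channels (nuclear stain plus autofluorescence) to H&E RGB; the paper's actual optimized transform may be more elaborate, and all function and variable names here are illustrative.

import numpy as np

def fit_fluorescence_to_he(fluo, he_rgb):
    # fluo: (n_pixels, n_channels) fluorescence intensities; he_rgb: (n_pixels, 3)
    # target H&E colors for the same pixels. Fits an affine map by least squares
    # (simplified stand-in for the transform optimization described above).
    n = fluo.shape[0]
    A = np.hstack([fluo, np.ones((n, 1))])           # add offset term
    T, *_ = np.linalg.lstsq(A, he_rgb, rcond=None)   # (n_channels + 1) x 3
    return T

def virtual_he(fluo_image, T):
    # Apply the fitted transform to a whole fluorescence image (H x W x C).
    h, w, c = fluo_image.shape
    A = np.hstack([fluo_image.reshape(-1, c), np.ones((h * w, 1))])
    rgb = np.clip(A @ T, 0.0, 255.0)
    return rgb.reshape(h, w, 3).astype(np.uint8)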
Braids and phase gates through high-frequency virtual tunneling of Majorana zero modes
NASA Astrophysics Data System (ADS)
Gorantla, Pranay; Sensarma, Rajdeep
2018-05-01
Braiding of non-Abelian Majorana anyons is a first step towards using them in quantum computing. We propose a protocol for braiding Majorana zero modes formed at the edges of nanowires with strong spin-orbit coupling and proximity-induced superconductivity. Our protocol uses high-frequency virtual tunneling between the ends of the nanowires in a trijunction, which leads to an effective low-frequency coarse-grained dynamics for the system, to perform the braid. The braiding operation is immune to amplitude noise in the drives and depends only on relative phase between the drives, which can be controlled by the usual phase-locking techniques. We also show how a phase gate, which is necessary for universal quantum computation, can be implemented with our protocol.
Yunus, Mahira
2012-11-01
To study the use of helical computed tomography 2-D and 3-D images and virtual endoscopy in the evaluation of airway disease in neonates, infants and children, and its value in lesion detection, characterisation and assessment of extension. Conducted at Al-Noor Hospital, Makkah, Saudi Arabia, from January 1 to June 30, 2006, the study comprised 40 patients with stridor having various causes of airway obstruction. They were examined by helical CT scan with 2-D and 3-D reconstructions and virtual endoscopy. The level and characterisation of lesions were determined and the results were compared with actual endoscopic findings. Conventional endoscopy was chosen as the gold standard, and the evaluation was done in terms of sensitivity and specificity of the procedure. For statistical purposes, SPSS version 10 was used. All CT methods detected airway stenosis or obstruction. Accuracy was 98% (n=40) for virtual endoscopy, 96% (n=48) for 3-D external rendering, 90% (n=45) for multiplanar reconstructions and 86% (n=43) for axial images. The results of 3-D internal and external volume rendering images were closer to conventional endoscopy for detection and grading of stenosis than were 2-D minimum intensity multiplanar reconstructions and axial CT slices. Even high-grade stenosis could be evaluated with the virtual endoscope where a conventional endoscope cannot be passed. One case, a 4-year-old patient with tracheomalacia, could not be diagnosed by helical CT scan and virtual bronchoscopy; it was diagnosed on conventional endoscopy and would have needed CT scans in both inspiration and expiration. Virtual endoscopy (VE) enabled better assessment of stenosis compared with 3-D external rendering, 2-D multiplanar reconstruction (MPR) or axial slices. It can replace conventional endoscopy in the assessment of airway disease without any additional risk.
C-arm positioning using virtual fluoroscopy for image-guided surgery
NASA Astrophysics Data System (ADS)
de Silva, T.; Punnoose, J.; Uneri, A.; Goerres, J.; Jacobson, M.; Ketcha, M. D.; Manbachi, A.; Vogt, S.; Kleinszig, G.; Khanna, A. J.; Wolinsky, J.-P.; Osgood, G.; Siewerdsen, J. H.
2017-03-01
Introduction: Fluoroscopically guided procedures often involve repeated acquisitions for C-arm positioning at the cost of radiation exposure and time in the operating room. A virtual fluoroscopy system is reported with the potential of reducing dose and time spent in C-arm positioning, utilizing three key advances: robust 3D-2D registration to a preoperative CT; real-time forward projection on GPU; and a motorized mobile C-arm with encoder feedback on C-arm orientation. Method: Geometric calibration of the C-arm was performed offline in two rotational directions (orbit α, orbit β). Patient registration was performed using image-based 3D-2D registration with an initially acquired radiograph of the patient. This approach to patient registration eliminated the requirement for external tracking devices inside the operating room, allowing virtual fluoroscopy using commonly available systems in fluoroscopically guided procedures within standard surgical workflow. Geometric accuracy was evaluated in terms of projection distance error (PDE) in anatomical fiducials. A pilot study was conducted to evaluate the utility of virtual fluoroscopy to aid C-arm positioning in image-guided surgery, assessing potential improvements in time, dose, and agreement between the virtual and desired view. Results: The overall geometric accuracy of DRRs in comparison to the actual radiographs at various C-arm positions was PDE (mean ± std) = 1.6 ± 1.1 mm. The conventional approach required on average 8.0 ± 4.5 radiographs spent "fluoro hunting" to obtain the desired view. Positioning accuracy improved from 2.6° ± 2.3° (in α) and 4.1° ± 5.1° (in β) with the conventional approach to 1.5° ± 1.3° and 1.8° ± 1.7°, respectively, with the virtual fluoroscopy approach. Conclusion: Virtual fluoroscopy could improve the accuracy of C-arm positioning and save time and radiation dose in the operating room. Such a system could be valuable for training fluoroscopy technicians as well as for intraoperative use in fluoroscopically guided procedures.
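For orientation, a minimal sketch of the projection distance error (PDE) metric used above: given a 3x4 C-arm projection matrix and a set of anatomical fiducials, project the 3D points and measure their 2D distance to the locations identified in the radiograph. The matrix convention and all names are assumptions, not the authors' code.

```python
import numpy as np

def projection_distance_error(P, fiducials_3d, fiducials_2d):
    """Mean 2D distance (e.g., in mm on the detector) between projected
    3D fiducials and their measured radiograph locations.

    P            : (3, 4) projection matrix for the current C-arm pose
    fiducials_3d : (N, 3) fiducial coordinates in CT/patient space
    fiducials_2d : (N, 2) corresponding points identified in the radiograph
    """
    homog = np.hstack([fiducials_3d, np.ones((fiducials_3d.shape[0], 1))])
    proj = homog @ P.T                       # homogeneous image coordinates
    proj_2d = proj[:, :2] / proj[:, 2:3]     # perspective divide
    return np.linalg.norm(proj_2d - fiducials_2d, axis=1).mean()
```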
Zinser, Max J; Sailer, Hermann F; Ritter, Lutz; Braumann, Bert; Maegele, Marc; Zöller, Joachim E
2013-12-01
Advances in computers and imaging have permitted the adoption of 3-dimensional (3D) virtual planning protocols in orthognathic surgery, which may allow a paradigm shift when the virtual planning can be transferred properly. The purpose of this investigation was to compare the versatility and precision of innovative computer-aided designed and computer-aided manufactured (CAD/CAM) surgical splints, intraoperative navigation, and "classic" intermaxillary occlusal splints for surgical transfer of virtual orthognathic planning. The protocols consisted of maxillofacial imaging, diagnosis, virtual orthognathic planning, and surgical planning transfer using newly designed CAD/CAM splints (approach A), navigation (approach B), and intermaxillary occlusal splints (approach C). In this prospective observational study, all patients underwent bimaxillary osteotomy. Eight patients were treated using approach A, 10 using approach B, and 12 using approach C. These techniques were evaluated by applying 13 hard and 7 soft tissue parameters to compare the virtual orthognathic planning (T0) with the postoperative result (T1) using 3D cephalometry and image fusion (ΔT1 vs T0). The highest precision (ΔT1 vs T0) for the maxillary planning transfer was observed with CAD/CAM splints (<0.23 mm; P > .05) followed by surgical "waferless" navigation (<0.61 mm, P < .05) and classic intermaxillary occlusal splints (<1.1 mm; P < .05). Only the innovative CAD/CAM splints kept the condyles in their central position in the temporomandibular joint. However, no technique enables a precise prediction of the mandible and soft tissue. CAD/CAM splints and surgical navigation provide a reliable, innovative, and precise approach for the transfer of virtual orthognathic planning. These computer-assisted techniques may offer an alternate approach to the use of classic intermaxillary occlusal splints. Copyright © 2013 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Central pathology review for phase III clinical trials: the enabling effect of virtual microscopy.
Mroz, Pawel; Parwani, Anil V; Kulesza, Piotr
2013-04-01
Central pathology review (CPR) was initially designed as a quality control measure. The potential of CPR in clinical trials was recognized as early as the 1960s, and it has since become embedded as an integral part of many clinical trials. To review the current experience with CPR in clinical trials, to summarize current developments in virtual microscopy, and to discuss the potential advantages and disadvantages of this technology in the context of CPR. A PubMed (US National Library of Medicine) search for published studies was conducted, and the relevant articles were reviewed, accompanied by the authors' experience at their practicing institution. The review of the available literature strongly suggests the growing importance of CPR both in the clinical trial setting and in second opinion cases. However, the currently applied approach significantly impedes efficient transfer of slides and patient data. Recent advances in imaging, digital microscopy, and Internet technologies suggest that the CPR process may be dramatically streamlined in the foreseeable future to allow for better diagnosis and quality assurance than ever before. In particular, whole slide imaging may play an important role in this process and result in a substantial reduction of the overall turnaround time required for slide review at the central location. Above all, this new approach may benefit the large clinical trials organized by oncology cooperative groups, since most of those trials involve complicated logistics owing to the enrollment of large numbers of patients at several remotely located participating institutions.
Virtual phantom magnetic resonance imaging (ViP MRI) on a clinical MRI platform.
Saint-Jalmes, Hervé; Bordelois, Alejandro; Gambarota, Giulio
2018-01-01
The purpose of this study was to implement Virtual Phantom Magnetic Resonance Imaging (ViP MRI), a technique that allows for generating reference signals in MR images using radiofrequency (RF) signals, on a clinical MR system and to test newly designed virtual phantoms. MRI experiments were conducted on a 1.5 T MRI scanner. Electromagnetic modelling of the ViP system was done using the principle of reciprocity. The ViP RF signals were generated using a compact waveform generator (dimensions of 26 cm × 18 cm × 16 cm), connected to a homebuilt 25 mm-diameter RF coil. The ViP RF signals were transmitted to the MRI scanner bore, simultaneously with the acquisition of the signal from the object of interest. Different types of MRI data acquisition (2D and 3D gradient-echo) as well as different phantoms, including the Shepp-Logan phantom, were tested. Furthermore, a uniquely designed virtual phantom - in the shape of a grid - was generated; this newly proposed phantom allows for the investigations of the vendor distortion correction field. High quality MR images of virtual phantoms were obtained. An excellent agreement was found between the experimental data and the inverse cube law, which was the expected functional dependence obtained from the electromagnetic modelling of the ViP system. Short-term time stability measurements yielded a coefficient of variation in the signal intensity over time equal to 0.23% and 0.13% for virtual and physical phantom, respectively. MR images of the virtual grid-shaped phantom were reconstructed with the vendor distortion correction; this allowed for a direct visualization of the vendor distortion correction field. Furthermore, as expected from the electromagnetic modelling of the ViP system, a very compact coil (diameter ~ cm) and very small currents (intensity ~ mA) were sufficient to generate a signal comparable to that of physical phantoms in MRI experiments. The ViP MRI technique was successfully implemented on a clinical MR system. One of the major advantages of ViP MRI over previous approaches is that the generation and transmission of RF signals can be achieved with a self-contained apparatus. As such, the ViP MRI technique is transposable to different platforms (preclinical and clinical) of different vendors. It is also shown here that ViP MRI could be used to generate signals whose characteristics cannot be reproduced by physical objects. This could be exploited to assess MRI system properties, such as the vendor distortion correction field. © 2017 American Association of Physicists in Medicine.
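The inverse cube law mentioned above can be checked with a simple fit of signal versus coil-to-bore distance. The sketch below uses synthetic numbers purely to illustrate the procedure; it is not the authors' analysis and the values are not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def inverse_cube(d, a):
    # Expected dependence of the ViP signal on distance from the small RF coil.
    return a / d**3

# Synthetic distance/signal pairs for illustration only.
rng = np.random.default_rng(0)
d = np.linspace(5.0, 20.0, 8)                      # cm
s = 1000.0 / d**3 * (1 + 0.02 * rng.standard_normal(d.size))

(a_fit,), _ = curve_fit(inverse_cube, d, s, p0=[500.0])
residuals = s - inverse_cube(d, a_fit)             # inspect goodness of fit
```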
Miksys, N; Xu, C; Beaulieu, L; Thomson, R M
2015-08-07
This work investigates and compares CT image metallic artifact reduction (MAR) methods and tissue assignment schemes (TAS) for the development of virtual patient models for permanent implant brachytherapy Monte Carlo (MC) dose calculations. Four MAR techniques are investigated to mitigate seed artifacts from post-implant CT images of a homogeneous phantom and eight prostate patients: a raw sinogram approach using the original CT scanner data and three methods (simple threshold replacement (STR), 3D median filter, and virtual sinogram) requiring only the reconstructed CT image. Virtual patient models are developed using six TAS ranging from the AAPM-ESTRO-ABG TG-186 basic approach of assigning uniform density tissues (resulting in a model not dependent on MAR) to more complex models assigning prostate, calcification, and mixtures of prostate and calcification using CT-derived densities. The EGSnrc user-code BrachyDose is employed to calculate dose distributions. All four MAR methods eliminate bright seed spot artifacts, and the image-based methods provide comparable mitigation of artifacts compared with the raw sinogram approach. However, each MAR technique has limitations: STR is unable to mitigate low CT number artifacts, the median filter blurs the image which challenges the preservation of tissue heterogeneities, and both sinogram approaches introduce new streaks. Large local dose differences are generally due to differences in voxel tissue-type rather than mass density. The largest differences in target dose metrics (D90, V100, V150), over 50% lower compared to the other models, are when uncorrected CT images are used with TAS that consider calcifications. Metrics found using models which include calcifications are generally a few percent lower than prostate-only models. Generally, metrics from any MAR method and any TAS which considers calcifications agree within 6%. Overall, the studied MAR methods and TAS show promise for further retrospective MC dose calculation studies for various permanent implant brachytherapy treatments.
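As a concrete illustration of the simplest of the four MAR techniques, simple threshold replacement (STR) can be sketched as replacing voxels above a bright-artifact threshold with a soft-tissue value. The thresholds and the dilation step below are assumptions, and, consistent with the text, such a scheme leaves low-CT-number artifacts untouched.

```python
import numpy as np
from scipy import ndimage

def simple_threshold_replacement(ct_hu, bright_threshold=3000.0, tissue_hu=40.0):
    """Crude seed-artifact mitigation: voxels above a HU threshold are
    replaced by a fixed soft-tissue value. ct_hu is a 3D array of CT numbers."""
    mask = ct_hu > bright_threshold
    # Grow the mask by one voxel so the bright halo around each seed is included.
    mask = ndimage.binary_dilation(mask, iterations=1)
    corrected = ct_hu.copy()
    corrected[mask] = tissue_hu
    return corrected
```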
Dedouit, Fabrice; Saint-Martin, Pauline; Mokrane, Fatima-Zohra; Savall, Frédéric; Rousseau, Hervé; Crubézy, Eric; Rougé, Daniel; Telmon, Norbert
2015-09-01
Virtual anthropology consists of the introduction of modern slice imaging to biological and forensic anthropology. Thanks to this non-invasive scientific revolution, some classifications and staging systems, first based on dry bone analysis, can be applied to cadavers with no need for specific preparation, as well as to living persons. Estimation of bone and dental age is one of the possibilities offered by radiology. Biological age can be estimated in clinical forensic medicine as well as in living persons. Virtual anthropology may also help the forensic pathologist to estimate a deceased person's age at death, which together with sex, geographical origin and stature, is one of the important features determining a biological profile used in reconstructive identification. For this forensic purpose, the radiological tools used are multislice computed tomography and, more recently, X-ray free imaging techniques such as magnetic resonance imaging and ultrasound investigations. We present and discuss the value of these investigations for age estimation in anthropology.
NASA Astrophysics Data System (ADS)
Tiede, Dirk; Lang, Stefan
2010-11-01
In this paper we focus on the application of transferable, object-based image analysis algorithms for dwelling extraction in a camp for internally displaced people (IDP) in Darfur, Sudan, along with innovative means for scientific visualisation of the results. Three very high spatial resolution satellite images (QuickBird: 2002, 2004, 2008) were used for: (1) extracting different types of dwellings and (2) calculating and visualizing added-value products such as dwelling density and camp structure. The results were visualized on virtual globes (Google Earth and ArcGIS Explorer) as analytical 3D views, with the analysis results transformed into the third dimension (z-value). Data formats depend on the virtual globe software and include KML/KMZ (keyhole mark-up language) and ESRI 3D shapefiles streamed as an ArcGIS Server-based globe service. In addition, means for improving the overall performance of automated extraction of dwelling structures using grid computing techniques are discussed, using examples from a similar study.
A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.
Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V
2014-01-01
The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility of dual-energy CT in disease management.
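The classical starting point that MMD generalizes is three-material decomposition: two spectral measurements plus a volume-conservation constraint give three equations for three volume fractions. A minimal sketch under that textbook assumption (not the paper's full model-based method) follows.

```python
import numpy as np

def three_material_decomposition(mu_low, mu_high, basis):
    """Volume fractions of three basis materials from one dual-energy measurement.

    mu_low, mu_high : measured attenuation (or CT number) at the two spectra
    basis           : (2, 3) attenuation of each basis material,
                      row 0 at the low-kV spectrum, row 1 at the high-kV spectrum
    """
    A = np.vstack([basis, np.ones(3)])     # low-kV, high-kV, and sum-to-one rows
    b = np.array([mu_low, mu_high, 1.0])
    return np.linalg.solve(A, b)           # (3,) volume fractions
```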
NASA Astrophysics Data System (ADS)
Hueso, R.; Juaristi, J.; Legarreta, J.; Sánchez-Lavega, A.; Rojas, J. F.; Erard, S.; Cecconi, B.; Le Sidaner, Pierre
2018-01-01
Since 2003 the Planetary Virtual Observatory and Laboratory (PVOL) has been storing and serving publicly through its web site a large database of amateur observations of the Giant Planets (Hueso et al., 2010a). These images are used for scientific research of the atmospheric dynamics and cloud structure on these planets and constitute a powerful resource to address time variable phenomena in their atmospheres. Advances over the last decade in observation techniques, and a wider recognition by professional astronomers of the quality of amateur observations, have resulted in the need to upgrade this database. We here present major advances in the PVOL database, which has evolved into a full virtual planetary observatory encompassing also observations of Mercury, Venus, Mars, the Moon and the Galilean satellites. Besides the new objects, the images can be tagged and the database allows simple and complex searches over the data. The new web service: PVOL2 is available online in http://pvol2.ehu.eus/.
VirSSPA- a virtual reality tool for surgical planning workflow.
Suárez, C; Acha, B; Serrano, C; Parra, C; Gómez, T
2009-03-01
A virtual reality tool, called VirSSPA, was developed to optimize the planning of surgical processes. Segmentation algorithms were implemented for Computed Tomography (CT) images: a region growing procedure for soft tissues and a thresholding algorithm for bones. The algorithms operate semiautomatically, since they only require the user to select a seed with the mouse on each tissue to be segmented. The novelty of the paper is the adaptation of an enhancement method based on histogram thresholding applied to CT images for surgical planning, which simplifies subsequent segmentation. A substantial improvement of the virtual reality tool VirSSPA was obtained with these algorithms. VirSSPA was used to optimize surgical planning, to decrease the time spent on surgical planning and to improve operative results. The success rate increases because surgeons are able to see the exact extent of the patient's ailment. This tool can decrease operating room time, thus resulting in reduced costs. Virtual simulation was effective for optimizing surgical planning, which could, consequently, result in improved outcomes with reduced costs.
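A minimal sketch of the seeded region-growing step described above, for soft-tissue segmentation from a mouse-selected seed; the 6-connectivity and intensity tolerance are generic assumptions rather than VirSSPA's actual parameters.

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, tolerance=100.0):
    """Grow a 6-connected region from `seed`, accepting voxels whose intensity
    lies within `tolerance` of the seed value. Returns a boolean mask."""
    seed_value = float(volume[seed])
    mask = np.zeros(volume.shape, dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    neighbours = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbours:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) and not mask[n]:
                if abs(float(volume[n]) - seed_value) <= tolerance:
                    mask[n] = True
                    queue.append(n)
    return mask
```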
Virtual plane-wave imaging via Marchenko redatuming
NASA Astrophysics Data System (ADS)
Meles, Giovanni Angelo; Wapenaar, Kees; Thorbecke, Jan
2018-04-01
Marchenko redatuming is a novel scheme used to retrieve up- and down-going Green's functions in an unknown medium. Marchenko equations are based on reciprocity theorems and are derived on the assumption of the existence of functions exhibiting space-time focusing properties once injected in the subsurface. In contrast to interferometry but similarly to standard migration methods, Marchenko redatuming only requires an estimate of the direct wave from the virtual source (or to the virtual receiver), illumination from only one side of the medium, and no physical sources (or receivers) inside the medium. In this contribution we consider a different time-focusing condition within the frame of Marchenko redatuming that leads to the retrieval of virtual plane-wave responses. As a result, it allows multiple-free imaging using only a one-dimensional sampling of the targeted model at a fraction of the computational cost of standard Marchenko schemes. The potential of the new method is demonstrated on 2D synthetic models.
Virtual slides in peer reviewed, open access medical publication.
Kayser, Klaus; Borkenfeld, Stephan; Goldmann, Torsten; Kayser, Gian
2011-12-19
Application of virtual slides (VS), the digitalization of complete glass slides, is still in its infancy in routine diagnostic surgical pathology and in related tissue-based activities such as education and scientific publication. Electronic publication in pathology offers features of scientific communication that cannot be obtained by conventional paper-based journals. Most of these features are based upon completely open or partly directed interaction between the reader and the system that distributes the article. One of these interactions can be applied to microscopic images, allowing the reader to navigate and magnify the presented images. VS and interactive Virtual Microscopy (VM) are tools to increase the scientific value of microscopic images. The open access journal Diagnostic Pathology http://www.diagnosticpathology.org has existed for about five years. It is a peer reviewed journal that publishes all types of scientific contributions, including original scientific work, case reports and review articles. In addition to digitized still images, the authors of appropriate articles are requested to submit the underlying glass slides to an institution (DiagnomX.eu, and Leica.com) for digitalization and documentation. The images are stored in a separate image data bank which is adequately linked to the article. The normal review process is not involved. Both processes (peer review and VS acquisition) are performed contemporaneously in order to minimize a potential publication delay. VS are not provided with a DOI index (digital object identifier). The first articles that include VS were published in March 2011. Several logistic constraints had to be overcome before the first articles including VS could be published. Step by step, an automated acquisition and distribution system had to be implemented and linked to the corresponding article. Acceptance of VS is high among both readers and authors. Of specific value are the increased confidence in and reputation of the authors, as well as the additional information presented to the reader. Additional associated functions such as access to author-owned related image collections, reader-controlled automated image measurements and image transformations are in preparation. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1232133347629819.
Integration of stereotactic ultrasonic data into an interactive image-guided neurosurgical system
NASA Astrophysics Data System (ADS)
Shima, Daniel W.; Galloway, Robert L., Jr.
1998-06-01
Stereotactic ultrasound can be incorporated into an interactive, image-guided neurosurgical system by using an optical position sensor to define the location of an intraoperative scanner in physical space. A C program has been developed that communicates with the Optotrak™ system developed by Northern Digital Inc. to optically track the three-dimensional position and orientation of a fan-shaped area defined with respect to a hand-held probe (i.e., a virtual B-mode ultrasound fan beam). Volumes of CT and MR head scans from the same patient are registered to a location in physical space using a point-based technique. The coordinates of the virtual fan beam in physical space are continuously calculated and updated on the fly. During each program loop, the CT and MR data volumes are reformatted along the same plane and displayed as two fan-shaped images that correspond to the current physical-space location of the virtual fan beam. When the reformatted preoperative tomographic images are eventually paired with a real-time intraoperative ultrasound image, a neurosurgeon will be able to use the unique information of each imaging modality (e.g., the high resolution and tissue contrast of CT and MR and the real-time functionality of ultrasound) in a complementary manner to identify structures in the brain more easily and to guide surgical procedures more effectively.
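The point-based registration of CT/MR volumes to physical space referred to above is typically solved as an orthogonal Procrustes problem; a minimal SVD-based sketch is given below. This is a generic method, not necessarily the exact algorithm used in that system.

```python
import numpy as np

def rigid_point_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping fiducials `src` (image space)
    onto `dst` (physical space), both (N, 3) arrays, via the SVD (Kabsch) solution."""
    src_centered = src - src.mean(axis=0)
    dst_centered = dst - dst.mean(axis=0)
    H = src_centered.T @ dst_centered
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```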
Stecco, A; Volpe, D; Volpe, N; Fornara, P; Castagna, A; Carriero, A
2008-12-01
The purpose of this study was to compare virtual MR arthroscopic reconstructions with arthroscopic images in patients affected by shoulder joint instability. MR arthrography (MR-AR) of the shoulder is now a well-established technique, based on the injection of a contrast medium solution, which fills the articular space and finds its way between the rotator cuff (RC) and the glenohumeral ligaments. In patients with glenolabral pathology, we used an additional sequence that provided virtual arthroscopy (VA) post-processed views, which completed the MR evaluation of shoulder pathology. We enrolled 36 patients, from whom MR arthrographic sequence data (SE T1w and GRE T1 FAT SAT) were obtained using a GE 0.5 T Signa, before any planned surgical or arthroscopic treatment; the protocol included a supplemental 3D spoiled gradient-echo T1w sequence positioned in the coronal plane. Dedicated software loaded on a workstation was used to elaborate the VAs. Two radiologists evaluated, on a semiquantitative scale, the visibility of the principal anatomic structures and then, in consensus, the pathology emerging from the VA images. These images were reconstructed in all patients except one. The visualization of all anatomical structures was acceptable. VA and MR arthrographic images were fairly concordant with intraoperative findings. Although in our pilot study the VA findings did not change the surgical planning, the results showed concordance with the surgical or arthroscopic images.
[3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].
Kneist, W; Huber, T; Paschold, M; Lang, H
2016-06-01
The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Previous results on three-dimensional imaging on box trainers were mixed, with some studies showing an advantage of 3D imaging for laparoscopic novices. This study did not confirm the superiority of 3D imaging over conventional 2D imaging on a VRL simulator. In the present study of 3D imaging on a VRL simulator, there was no significant advantage for 3D imaging compared to conventional 2D imaging. Georg Thieme Verlag KG Stuttgart · New York.
Adaptive noise correction of dual-energy computed tomography images.
Maia, Rafael Simon; Jacob, Christian; Hara, Amy K; Silva, Alvin C; Pavlicek, William; Mitchell, J Ross
2016-04-01
Noise reduction in material density images is a necessary preprocessing step for the correct interpretation of dual-energy computed tomography (DECT) images. In this paper we describe a new method based on local adaptive processing to reduce noise in DECT images. An adaptive neighborhood Wiener (ANW) filter was implemented and customized to use local characteristics of material density images. The ANW filter employs a three-level wavelet approach, combined with the application of an anisotropic diffusion filter. Material density images and virtual monochromatic images are noise corrected with the two resulting noise maps. The algorithm was applied and quantitatively evaluated in a set of 36 images. From that set of images, three are shown here, and nine more are shown in the online supplementary material. Processed images had higher signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) than the raw material density images. The average improvements in SNR and CNR for the material density images were 56.5% and 54.75%, respectively. We developed a new DECT noise reduction algorithm. We demonstrate, through a series of quantitative analyses, that the algorithm improves the quality of material density images and virtual monochromatic images.
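As a rough illustration of locally adaptive Wiener denoising of a material density image, the sketch below estimates the noise power from a median-filter residual and applies SciPy's Wiener filter. This stands in for, and is much simpler than, the wavelet-plus-anisotropic-diffusion pipeline described in the abstract; the window size and noise estimate are assumptions.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import median_filter

def adaptive_wiener_denoise(image, window=5):
    """Apply a Wiener filter whose noise-power estimate is taken from the
    residual between the image and its median-filtered version."""
    residual = image - median_filter(image, size=window)
    noise_power = float(np.var(residual))
    return wiener(image, mysize=window, noise=noise_power)
```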
Demonstration of a real-time implementation of the ICVision holographic stereogram display
NASA Astrophysics Data System (ADS)
Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel
1995-07-01
There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interocular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with exactly the diffraction grating required to direct light into an individual virtual viewing slit. The sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper discusses the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings, and the AMLCD modulates the light diffracted by the gratings.
Stereoscopic vascular models of the head and neck: A computed tomography angiography visualization.
Cui, Dongmei; Lynch, James C; Smith, Andrew D; Wilson, Timothy D; Lehman, Michael N
2016-01-01
Computer-assisted 3D models are used in some medical and allied health science schools; however, they are often limited to online use and 2D flat screen-based imaging. Few schools take advantage of 3D stereoscopic learning tools in anatomy education and clinically relevant anatomical variations when teaching anatomy. A new approach to teaching anatomy includes use of computed tomography angiography (CTA) images of the head and neck to create clinically relevant 3D stereoscopic virtual models. These high resolution images of the arteries can be used in unique and innovative ways to create 3D virtual models of the vasculature as a tool for teaching anatomy. Blood vessel 3D models are presented stereoscopically in a virtual reality environment, can be rotated 360° in all axes, and magnified according to need. In addition, flexible views of internal structures are possible. Images are displayed in a stereoscopic mode, and students view images in a small theater-like classroom while wearing polarized 3D glasses. Reconstructed 3D models enable students to visualize vascular structures with clinically relevant anatomical variations in the head and neck and appreciate spatial relationships among the blood vessels, the skull and the skin. © 2015 American Association of Anatomists.
A 3-D mixed-reality system for stereoscopic visualization of medical dataset.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
2009-11-01
We developed a simple, light, and cheap 3-D visualization device based on mixed reality that can be used by physicians to see preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies obtained by processing preoperative volumetric radiological images (computed tomography or MRI) with real patient live images, grabbed by means of cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. Cameras are mounted in correspondence of the user's eyes and allow one to grab live images of the patient with the same point of view of the user. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of virtual patient with the real one are done using machine vision methods applied on pairs of live images. Experimental results, concerning frame rate and alignment precision between virtual and real patient, demonstrate that machine vision methods used for localization are appropriate for the specific application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
On validating remote sensing simulations using coincident real data
NASA Astrophysics Data System (ADS)
Wang, Mingming; Yao, Wei; Brown, Scott; Goodenough, Adam; van Aardt, Jan
2016-05-01
The remote sensing community often requires data simulation, either via spectral/spatial downsampling or through virtual, physics-based models, to assess systems and algorithms. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is one such first-principles, physics-based model for simulating imagery for a range of modalities. Complex simulation of vegetation environments has subsequently become possible as scene rendering technology and software have advanced. This in turn has created questions related to the validity of such complex models, with multiple scattering, bidirectional reflectance distribution function (BRDF), and similar phenomena that could impact results in the case of complex vegetation scenes. We selected three sites located in the Pacific Southwest domain (Fresno, CA) of the National Ecological Observatory Network (NEON). These sites represent oak savanna, hardwood forests, and conifer-manzanita-mixed forests. We constructed corresponding virtual scenes using airborne LiDAR and imaging spectroscopy data from NEON, ground-based LiDAR data, and field-collected spectra to characterize the scenes. Imaging spectroscopy data for these virtual sites then were generated using the DIRSIG simulation environment. This simulated imagery was compared to real AVIRIS imagery (15 m spatial resolution; 12 pixels/scene) and NEON Airborne Observation Platform (AOP) data (1 m spatial resolution; 180 pixels/scene). These tests were performed using a distribution-comparison approach for select spectral statistics, e.g., statistics that establish the spectra's shape, for each simulated versus real distribution pair. The initial comparison results of the spectral distributions indicated that the shapes of spectra between the virtual and real sites were closely matched.
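The distribution-comparison approach mentioned above can be illustrated with a two-sample Kolmogorov-Smirnov test applied to a per-pixel spectral statistic from the simulated and real scenes. This is a generic sketch; the specific statistics used in the study are not reproduced here, and the example values are synthetic.

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_spectral_statistic(simulated_values, real_values):
    """Two-sample KS test between per-pixel spectral statistics (e.g., a band
    ratio or spectral slope) of the simulated and the real image pixels."""
    statistic, p_value = ks_2samp(simulated_values, real_values)
    return statistic, p_value

# Illustrative use: compare a normalized-difference statistic for one site.
sim = np.random.default_rng(1).normal(0.62, 0.05, size=180)   # synthetic values
real = np.random.default_rng(2).normal(0.60, 0.06, size=180)  # synthetic values
stat, p = compare_spectral_statistic(sim, real)
```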
Augmented reality-guided artery-first pancreatico-duodenectomy.
Marzano, Ettore; Piardi, Tullio; Soler, Luc; Diana, Michele; Mutter, Didier; Marescaux, Jacques; Pessaux, Patrick
2013-11-01
Augmented Reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative work-up and real-time patient images with the aim to visualize unapparent anatomical details. The potential of AR navigation as a tool to improve safety of the surgical dissection is presented in a case of pancreatico-duodenectomy (PD). A 77-year-old male patient underwent an AR-assisted PD. The 3D virtual anatomical model was obtained from thoraco-abdominal CT scan using customary software (VR-RENDER®, IRCAD). The virtual model was superimposed to the operative field using an Exoscope (VITOM®, Karl Storz, Tüttlingen, Germany) as well as different visible landmarks (inferior vena cava, left renal vein, aorta, superior mesenteric vein, inferior margin of the pancreas). A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Dissection of the superior mesenteric artery and the hanging maneuver were performed under AR guidance along the hanging plane. AR allowed for precise and safe recognition of all the important vascular structures. Operative time was 360 min. AR display and fine registration was performed within 6 min. The postoperative course was uneventful. The pathology was positive for ampullary adenocarcinoma; the final stage was pT1N0 (0/43 retrieved lymph nodes) with clear surgical margins. AR is a valuable navigation tool that can enhance the ability to achieve a safe surgical resection during PD.
Sonic intelligence as a virtual therapeutic environment.
Tarnanas, Ioannis; Adam, Dimitrios
2003-06-01
This paper reports on the results of a research project comparing two virtual collaborative environments as stress-coping environments in real-life situations: one with first-person visual immersion (first-perspective interaction) and a second in which the user interacts through a sound-kinetic virtual representation of himself (avatar). Recent developments in coping research propose a shift from a trait-oriented approach to coping to a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across two groups. There was the same number of participants across groups and gender balance within groups. Both groups went through two phases. In Phase I (Solo), each participant was assessed using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement within the time course of various hypothetical stressful encounters performed in two different conditions and a control condition. In Condition A, the participant was given a virtual stress assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario with a behaviorally realistic motion-controlled avatar with sonic feedback (VRSA). In Condition C, the No Treatment Condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks, but with participants working in pairs. The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment seems to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to the stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it might be better to conduct research using a sound-kinetic avatar-based collaborative environment than a virtual first-person perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera and two stereoscopic CRT displays. The system was programmed in C++ and VRScape Immersive Cluster from VRCO, which created an artificial environment that encodes the user's motion from a video camera targeted at the face of the user and from physiological sensors attached to the body.
Shono, Naoyuki; Kin, Taichi; Nomura, Seiji; Miyawaki, Satoru; Saito, Toki; Imai, Hideaki; Nakatomi, Hirofumi; Oyama, Hiroshi; Saito, Nobuhito
2018-05-01
A virtual reality simulator for aneurysmal clipping surgery is an attractive research target for neurosurgeons. Brain deformation is one of the most important functionalities necessary for an accurate clipping simulator and is vastly affected by the status of the supporting tissue, such as the arachnoid membrane. However, no virtual reality simulator implementing the supporting tissue of the brain has yet been developed. To develop a virtual reality clipping simulator possessing interactive brain deforming capability closely dependent on arachnoid dissection and apply it to clinical cases. Three-dimensional computer graphics models of cerebral tissue and surrounding structures were extracted from medical images. We developed a new method for modifiable cerebral tissue complex deformation by incorporating a nonmedical image-derived virtual arachnoid/trabecula in a process called multitissue integrated interactive deformation (MTIID). MTIID made it possible for cerebral tissue complexes to selectively deform at the site of dissection. Simulations for 8 cases of actual clipping surgery were performed before surgery and evaluated for their usefulness in surgical approach planning. Preoperatively, each operative field was precisely reproduced and visualized with the virtual brain retraction defined by users. The clear visualization of the optimal approach to treating the aneurysm via an appropriate arachnoid incision was possible with MTIID. A virtual clipping simulator mainly focusing on supporting tissues and less on physical properties seemed to be useful in the surgical simulation of cerebral aneurysm clipping. To our knowledge, this article is the first to report brain deformation based on supporting tissues.
NASA Technical Reports Server (NTRS)
Begault, Durand R.; Null, Cynthia H. (Technical Monitor)
1997-01-01
This talk will overview the basic technologies related to the creation of virtual acoustic images and the potential of including spatial auditory displays in human-machine interfaces. Research into the perceptual error inherent in both natural and virtual spatial hearing is reviewed, since the development of improved technologies is tied to psychoacoustic research. This includes a discussion of Head Related Transfer Function (HRTF) measurement techniques (the HRTF provides important perceptual cues within a virtual acoustic display). Many commercial applications of virtual acoustics have so far focused on games and entertainment; in this review, other types of applications are examined, including aeronautic safety, voice communications, virtual reality, and room acoustic simulation. In particular, the notion that realistic simulation is optimized within a virtual acoustic display when head motion and reverberation cues are included within a perceptual model is discussed.
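The core rendering operation behind an HRTF-based virtual acoustic display is convolution of a source signal with a measured head-related impulse response (HRIR) pair; a minimal sketch follows. Head motion and reverberation cues, discussed above, are omitted, and the function names are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def render_virtual_source(mono_signal, hrir_left, hrir_right):
    """Binaural rendering of a mono signal at the direction encoded by the
    supplied left/right head-related impulse responses."""
    left = fftconvolve(mono_signal, hrir_left, mode="full")
    right = fftconvolve(mono_signal, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)   # (samples, 2) binaural output
```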
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woelfelschneider, J; Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, DE; Seregni, M
2015-06-15
Purpose: Tumor tracking is an advanced technique to treat intra-fractionally moving tumors. The aim of this study is to validate a surrogate-driven model based on four-dimensional computed tomography (4DCT) that is able to predict CT volumes corresponding to arbitrary respiratory states. Further, three different driving surrogates are compared. Methods: This study is based on multiple 4DCTs of two patients treated for bronchial carcinoma and metastasis. Analyses for 18 additional patients are currently ongoing. The motion model was estimated from the planning 4DCT through deformable image registration. To predict a certain phase of a follow-up 4DCT, the model accounts for inter-fractional variations (baseline correction) and intra-fractional respiratory parameters (amplitude and phase) derived from surrogates. In this evaluation, three different approaches were used to extract the motion surrogate: for each 4DCT phase, the 3D thoraco-abdominal surface motion, the body volume and the anterior-posterior motion of a virtual single external marker defined on the sternum were investigated. The estimated volumes resulting from the model were compared to the ground-truth clinical 4DCTs using absolute HU differences in the lung volume and landmarks localized using the Scale Invariant Feature Transform (SIFT). Results: The results show absolute HU differences between estimated and ground-truth images with median values limited to 55 HU and inter-quartile ranges (IQR) lower than 100 HU. Median 3D distances between about 1500 matching landmarks are below 2 mm for the 3D surface motion and body volume methods. The single-marker surrogate results in median distances increased by up to 0.6 mm. Analyses for the extended database including 20 patients are currently in progress. Conclusion: The results depend mainly on the image quality of the initial 4DCTs and the deformable image registration. All investigated surrogates can be used to estimate follow-up 4DCT phases; however, uncertainties decrease for three-dimensional approaches. This work was funded in part by the German Research Council (DFG) - KFO 214/2.
Weniger, Godehard; Siemerkus, Jakob; Barke, Antonia; Lange, Claudia; Ruhleder, Mirjana; Sachsse, Ulrich; Schmidt-Samoa, Carsten; Dechent, Peter; Irle, Eva
2013-05-30
Present neuroimaging findings suggest two subtypes of trauma response, one characterized predominantly by hyperarousal and intrusions, and the other primarily by dissociative symptoms. The neural underpinnings of these two subtypes need to be better defined. Fourteen women with childhood abuse and the current diagnosis of dissociative amnesia or dissociative identity disorder but without posttraumatic stress disorder (PTSD) and 14 matched healthy comparison subjects underwent functional magnetic resonance imaging (fMRI) while finding their way in a virtual maze. The virtual maze presented a first-person view (egocentric), lacked any topographical landmarks and could be learned only by using egocentric navigation strategies. Participants with dissociative disorders (DD) were not impaired in learning the virtual maze when compared with controls, and showed a similar, although weaker, pattern of activity changes during egocentric learning when compared with controls. Stronger dissociative disorder severity of participants with DD was related to better virtual maze performance, and to stronger activity increase within the cingulate gyrus and the precuneus. Our results add to the present knowledge of preserved attentional and visuospatial mnemonic functioning in individuals with DD. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Design and Development of a Virtual Facility Tour Using iPIX(TM) Technology
NASA Technical Reports Server (NTRS)
Farley, Douglas L.
2002-01-01
The capabilities of the iPIX virtual tour software, in conjunction with a web-based interface, are demonstrated to create a unique and valuable system that gives users an efficient way to tour facilities virtually while acquiring the necessary technical content. A user's guide to the Mechanics and Durability Branch's virtual tour is presented. The guide provides the user with instruction on operating both scripted and unscripted tours, as well as a discussion of the tours of Buildings 1148, 1205 and 1256 at NASA Langley Research Center. Furthermore, an in-depth discussion is presented on how to develop a virtual tour using the iPIX software interface with conventional HTML and JavaScript. The main aspects discussed are the network and computing issues associated with using this capability. A discussion of how to take the iPIX pictures, manipulate them and bond them together to form hemispherical images is also presented. Linking of images with additional multimedia content is discussed. Finally, a method to integrate the iPIX software with conventional HTML and JavaScript to facilitate linking with multimedia is presented.
Implementation of a Virtual Microphone Array to Obtain High Resolution Acoustic Images
Izquierdo, Alberto; Suárez, Luis; Suárez, David
2017-01-01
Using arrays with digital MEMS (Micro-Electro-Mechanical System) microphones and FPGA-based (Field Programmable Gate Array) acquisition/processing systems allows building systems with hundreds of sensors at a reduced cost. The problem arises when systems with thousands of sensors are needed. This work analyzes the implementation and performance of a virtual array with 6400 (80 × 80) MEMS microphones. This virtual array is implemented by changing the position of a physical array of 64 (8 × 8) microphones in a grid with 10 × 10 positions, using a 2D positioning system. This virtual array obtains an array spatial aperture of 1 × 1 m2. Based on the SODAR (SOund Detection And Ranging) principle, the measured beampattern and the focusing capacity of the virtual array have been analyzed, since beamforming algorithms assume to be working with spherical waves, due to the large dimensions of the array in comparison with the distance between the target (a mannequin) and the array. Finally, the acoustic images of the mannequin, obtained for different frequency and range values, have been obtained, showing high angular resolutions and the possibility to identify different parts of the body of the mannequin. PMID:29295485
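Because the 1 × 1 m² virtual aperture is large compared with the target range, the beamformer must assume spherical rather than plane waves. A minimal near-field delay-and-sum sketch for a single focus point is given below; the names and the integer-sample alignment are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def near_field_delay_and_sum(signals, fs, mic_positions, focus_point):
    """Focus the (virtual) array on a point in space by aligning and averaging
    the channels according to spherical-wave propagation delays.

    signals       : (M, T) array, one time series per microphone position
    fs            : sampling frequency in Hz
    mic_positions : (M, 3) microphone coordinates in metres
    focus_point   : (3,) target coordinates in metres
    """
    distances = np.linalg.norm(mic_positions - focus_point, axis=1)
    delays = (distances - distances.min()) / SPEED_OF_SOUND
    shifts = np.round(delays * fs).astype(int)
    n_mics, n_samples = signals.shape
    output = np.zeros(n_samples)
    for m in range(n_mics):
        # Advance later-arriving channels so the focus-point wavefront aligns.
        output[:n_samples - shifts[m]] += signals[m, shifts[m]:]
    return output / n_mics
```

Scanning the focus point over a grid of directions and ranges and mapping the output power yields the acoustic image of the target.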
Virtual surgery in a (tele-)radiology framework.
Glombitza, G; Evers, H; Hassfeld, S; Engelmann, U; Meinzer, H P
1999-09-01
This paper presents telemedicine as an extension of a teleradiology framework through tools for virtual surgery. To classify the described methods and applications, the research field of virtual reality (VR) is broadly reviewed. Differences with respect to technical equipment, methodological requirements and areas of application are pointed out. Desktop VR, augmented reality, and virtual reality are differentiated and discussed in some typical contexts of diagnostic support, surgical planning, therapeutic procedures, simulation and training. Visualization techniques are compared as a prerequisite for virtual reality and assigned to distinct levels of immersion. The advantage of a hybrid visualization kernel is emphasized with respect to the desktop VR applications that are subsequently shown. Moreover, software design aspects are considered by outlining functional openness in the architecture of the host system. Here, a teleradiology workstation was extended by dedicated tools for surgical planning through a plug-in mechanism. Examples of recent areas of application are introduced such as liver tumor resection planning, diagnostic support in heart surgery, and craniofacial surgery planning. In the future, surgical planning systems will become more important. They will benefit from improvements in image acquisition and communication, new image processing approaches, and techniques for data presentation. This will facilitate preoperative planning and intraoperative applications.
Seraglia, Bruno; Gamberini, Luciano; Priftis, Konstantinos; Scatturin, Pietro; Martinelli, Massimiliano; Cutini, Simone
2011-01-01
For over two decades Virtual Reality (VR) has been used as a useful tool in several fields, from medical and psychological treatments to industrial and military applications. Only in recent years have researchers begun to study the neural correlates that subtend VR experiences. Even though functional Magnetic Resonance Imaging (fMRI) is the most commonly used technique, it suffers from several limitations and problems. Here we present a methodology that involves the use of a new and growing brain imaging technique, functional Near-infrared Spectroscopy (fNIRS), while participants experience immersive VR. In order to allow proper fNIRS probe application, a custom-made VR helmet was created. To test the adapted helmet, a virtual version of the line bisection task was used. Participants could bisect lines in a virtual peripersonal or extrapersonal space by manipulating a Nintendo Wiimote® controller to move a virtual laser pointer. Although no neural correlates of the dissociation between peripersonal and extrapersonal space were found, significant hemodynamic activity with respect to the baseline was present in the right parietal and occipital areas. Both advantages and disadvantages of the presented methodology are discussed.
Fast Virtual Stenting with Active Contour Models in Intracranial Aneurysm
Zhong, Jingru; Long, Yunling; Yan, Huagang; Meng, Qianqian; Zhao, Jing; Zhang, Ying; Yang, Xinjian; Li, Haiyun
2016-01-01
Intracranial stents are becoming an increasingly useful option in the treatment of intracranial aneurysms (IAs). Image simulation of the released stent configuration, together with computational fluid dynamics (CFD) simulation prior to intervention, will help surgeons optimize the intervention scheme. This paper proposes a fast virtual stenting method for IAs based on an active contour model (ACM) that is able to virtually release stents within any patient-specific shaped vessel and aneurysm model built on real medical image data. In this method, an initial stent mesh is generated along the centerline of the parent artery without the need for registration between the stent contour and the vessel. Additionally, the diameter of the initial stent volumetric mesh is set to the maximum inscribed sphere diameter of the parent artery to improve the stenting accuracy and save computational cost. Finally, a novel criterion for terminating virtual stent expansion, based on collision detection of axis-aligned bounding boxes, is applied, making the stent expansion free of edge effects. The experimental results of the virtual stenting and the corresponding CFD simulations demonstrate the efficacy and accuracy of the ACM-based method, which is valuable for intervention scheme selection and therapy plan confirmation. PMID:26876026
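The termination criterion described above rests on axis-aligned bounding-box (AABB) collision detection; a minimal overlap test is sketched below, with the box representation assumed as (min corner, max corner) tuples.

```python
def aabb_overlap(box_a, box_b):
    """True if two axis-aligned bounding boxes intersect.
    Each box is ((xmin, ymin, zmin), (xmax, ymax, zmax))."""
    (a_min, a_max), (b_min, b_max) = box_a, box_b
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

# Example: stop expanding a stent strut once its box touches the vessel-wall box.
# In practice, stent_box and wall_box would be computed from the mesh vertices.
```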
Virtual photon polarization and dilepton anisotropy in relativistic nucleus-nucleus collisions
NASA Astrophysics Data System (ADS)
Speranza, Enrico; Jaiswal, Amaresh; Friman, Bengt
2018-07-01
The polarization of virtual photons produced in relativistic nucleus-nucleus collisions provides information on the conditions in the emitting medium. In a hydrodynamic framework, the resulting angular anisotropy of the dilepton final state depends on the flow as well as on the transverse momentum and invariant mass of the photon. We illustrate these effects in dilepton production from quark-antiquark annihilation in the QGP phase and π+π- annihilation in the hadronic phase for a static medium in global equilibrium and for a longitudinally expanding system.
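For reference, the dilepton angular anisotropy discussed here is conventionally parametrized, in a chosen rest frame of the virtual photon, by a set of anisotropy coefficients. The standard form is reproduced below purely for orientation; it is the usual convention in the literature, not an equation quoted from this paper.

```latex
\frac{dN}{d\Omega} \;\propto\; 1 + \lambda_{\theta}\cos^{2}\theta
  + \lambda_{\phi}\sin^{2}\theta\cos 2\phi
  + \lambda_{\theta\phi}\sin 2\theta\cos\phi ,
```

where θ and φ are the polar and azimuthal angles of the positive lepton in that frame, and vanishing coefficients correspond to isotropic (unpolarized) emission.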
Innovative application of virtual display technique in virtual museum
NASA Astrophysics Data System (ADS)
Zhang, Jiankang
2017-09-01
A virtual museum displays and simulates the functions of a real museum on the Internet in the form of three-dimensional virtual reality through interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and achieving effective interaction with the offline museum rely on making full use of 3D panorama, virtual reality and augmented reality techniques, and on innovatively applying dynamic environment modeling, real-time 3D graphics generation, system integration and other key virtual reality techniques to realize the overall design of the virtual museum. The 3D panorama technique, also known as panoramic photography or virtual reality, is a technique based on static images of reality. Virtual reality is a computer simulation technique that creates an interactive three-dimensional dynamic visual world that can be experienced. Augmented reality, also known as mixed reality, is a technique that simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience directly in reality. These technologies make the virtual museum possible. It will not only bring a better experience and greater convenience to the public, but will also help to improve the influence and cultural functions of the real museum.
Dual energy CT kidney stone differentiation in photon counting computed tomography
NASA Astrophysics Data System (ADS)
Gutjahr, R.; Polster, C.; Henning, A.; Kappler, S.; Leng, S.; McCollough, C. H.; Sedlmair, M. U.; Schmidt, B.; Krauss, B.; Flohr, T. G.
2017-03-01
This study evaluates the capabilities of a whole-body photon counting CT system to differentiate between four common kidney stone materials, namely uric acid (UA), calcium oxalate monohydrate (COM), cystine (CYS), and apatite (APA) ex vivo. Two different x-ray spectra (120 kV and 140 kV) were applied and two acquisition modes were investigated. The macro-mode generates two energy-threshold-based image-volumes and two energy-bin-based image-volumes. In the chesspattern-mode four energy thresholds are applied. A virtual low energy image, as well as a virtual high energy image, are derived from the initial threshold-based images, while considering their statistically correlated nature. The energy-bin-based images of the macro-mode, as well as the virtual low and high energy images of the chesspattern-mode, serve as input for our dual energy evaluation. The dual energy ratios of the individually segmented kidney stones were utilized to quantify the discriminability of the different materials. The dual energy ratios of the two acquisition modes showed high correlation for both applied spectra. Wilcoxon rank-sum tests and the evaluation of the area under the receiver operating characteristic curves suggest that the UA kidney stones are best differentiable from all other materials (AUC = 1.0), followed by CYS (AUC ≈ 0.9 compared against COM and APA). COM and APA, however, are hardly distinguishable (AUC between 0.63 and 0.76). The results hold true for the measurements of both spectra and both acquisition modes.
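As a sketch of how per-stone dual-energy ratios translate into the reported AUC values, the snippet below computes a ratio inside a segmented stone and the ROC AUC for separating two stone types; the names and data layout are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def dual_energy_ratio(ct_low, ct_high, stone_mask):
    """Mean low-energy / high-energy CT-number ratio within a segmented stone."""
    return ct_low[stone_mask].mean() / ct_high[stone_mask].mean()

def separability_auc(ratios_type_a, ratios_type_b):
    """Area under the ROC curve for discriminating two stone materials
    from their per-stone dual-energy ratios."""
    scores = np.concatenate([ratios_type_a, ratios_type_b])
    labels = np.concatenate([np.ones(len(ratios_type_a)),
                             np.zeros(len(ratios_type_b))])
    return roc_auc_score(labels, scores)
```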
Open Virtual Worlds as Pedagogical Research Tools: Learning from the Schome Park Programme
NASA Astrophysics Data System (ADS)
Twining, Peter; Peachey, Anna
This paper introduces the term Open Virtual Worlds and argues that they are ‘unclaimed educational spaces’, which provide a valuable tool for researching pedagogy. Having explored these claims, the way in which the Teen Second Life® virtual world was used for pedagogical experimentation in the initial phases of the Schome Park Programme is described. Four sets of pedagogical dimensions that emerged are presented and illustrated with examples from the Schome Park Programme.
Multiaccommodative stimuli in VR systems: problems & solutions.
Marran, L; Schor, C
1997-09-01
Virtual reality environments can introduce multiple and sometimes conflicting accommodative stimuli. For instance, with the high-powered lenses commonly used in head-mounted displays, small discrepancies in screen-lens placement, caused by manufacturer error or user adjustment focus error, can change the focal depths of the image by a couple of diopters. This can introduce a binocular accommodative stimulus or, if the displacement between the two screens is unequal, an unequal (anisometropic) accommodative stimulus for the two eyes. Systems that allow simultaneous viewing of virtual and real images can also introduce a conflict in accommodative stimuli: when real and virtual images are at different focal planes, both cannot be in focus at the same time, though they may appear to be in similar locations in space. In this paper, four unique designs are described that minimize the range of accommodative stimuli and maximize the visual system's ability to cope efficiently with the focus conflicts that remain: pinhole optics, monocular lens addition combined with aniso-accommodation, chromatic bifocal, and bifocal lens system. The advantages and disadvantages of each design are described and a recommendation for design choice is given after consideration of the end use of the virtual reality system (e.g., low or high end, entertainment, technical, or medical use). The appropriate design modifications should allow greater user comfort and better performance.
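The "couple of diopters" claim can be illustrated with a simple thin-lens vergence calculation; the 20 D eyepiece power and the placement errors below are assumed values for illustration, not figures from the paper:

```python
# Thin-lens sketch of the accommodative stimulus in an HMD eyepiece.
def image_vergence(lens_power_d, screen_distance_m):
    """Vergence (in diopters) of light leaving a thin lens of power
    lens_power_d for a screen placed screen_distance_m behind it, with the
    eye close to the lens. 0 D puts the image at optical infinity; a more
    negative value demands that much more accommodation from the viewer."""
    return lens_power_d - 1.0 / screen_distance_m

P = 20.0                 # assumed 20 D eyepiece, typical of compact HMD optics
nominal = 1.0 / P        # screen at the focal plane -> image at infinity (0 D)
for error_mm in (0.0, 2.0, 5.0):
    d = nominal - error_mm / 1000.0        # screen pushed closer to the lens
    print(f"{error_mm:.0f} mm placement error -> "
          f"image vergence {image_vergence(P, d):+.2f} D")
# A 5 mm error with a 20 D lens already shifts the stimulus by ~2 D.
```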
Phases and Patterns of Group Development in Virtual Learning Teams
ERIC Educational Resources Information Center
Yoon, Seung Won; Johnson, Scott D.
2008-01-01
With the advancement of Internet communication technologies, distributed work groups have great potential for remote collaboration and use of collective knowledge. Adopting the Complex Adaptive System (CAS) perspective (McGrath, Arrow, & Berdhal, "Personal Soc Psychol Rev" 4 (2000) 95), which views virtual learning teams as an adaptive and…
Evaluation of the Virtual Physiology of Exercise Laboratory Program
ERIC Educational Resources Information Center
Dobson, John L.
2009-01-01
The Virtual Physiology of Exercise Laboratory (VPEL) program was created to simulate the test design, data collection, and analysis phases of selected exercise physiology laboratories. The VPEL program consists of four modules: (1) cardiovascular, (2) maximal O2 consumption (VO2max), (3) lactate and ventilatory thresholds,…
Exploring Moral Action Using Immersive Virtual Reality
2016-10-01
…the Obedience scenario. The Bar experimental scenario is in the context of sexual harassment and has two phases, all in immersive virtual reality. In… a paper for submission to a high impact journal (depending of course on the final results). 4. Conclusions: The original proposal set out the…
In vivo differentiation of complementary contrast media at dual-energy CT.
Mongan, John; Rathnayake, Samira; Fu, Yanjun; Wang, Runtang; Jones, Ella F; Gao, Dong-Wei; Yeh, Benjamin M
2012-10-01
To evaluate the feasibility of using a commercially available clinical dual-energy computed tomographic (CT) scanner to differentiate the in vivo enhancement due to two simultaneously administered contrast media with complementary x-ray attenuation ratios. Approval from the institutional animal care and use committee was obtained, and National Institutes of Health guidelines for the care and use of laboratory animals were observed. Dual-energy CT was performed in a set of iodine and tungsten solution phantoms and in a rabbit in which iodinated intravenous and bismuth subsalicylate oral contrast media were administered. In addition, a second rabbit was studied after intravenous administration of iodinated and tungsten cluster contrast media. Images were processed to produce virtual monochromatic images that simulated the appearance of conventional single-energy scans, as well as material decomposition images that separate the attenuation due to each contrast medium. Clear separation of each of the contrast media pairs was seen in the phantom and in both in vivo animal models. Separation of bowel lumen from vascular contrast medium allowed visualization of bowel wall enhancement that was obscured by intraluminal bowel contrast medium on conventional CT scans. Separation of two vascular contrast media in different vascular phases enabled acquisition of a perfectly coregistered CT angiogram and venous phase-enhanced CT scan simultaneously in a single examination. Commercially available clinical dual-energy CT scanners can help differentiate the enhancement of selected pairs of complementary contrast media in vivo. © RSNA, 2012.
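The material-decomposition step that separates two such agents can be pictured as solving, per voxel, a small linear system relating the low- and high-energy CT numbers to the two material concentrations. The sketch below uses made-up attenuation signatures and is not the scanner vendor's algorithm:

```python
# Minimal two-material decomposition sketch (illustrative only).
import numpy as np

# Assumed per-unit-concentration signals (low-kV HU, high-kV HU); placeholders.
A = np.array([[30.0, 15.0],    # agent 1 (e.g. iodine): strong low-kV response
              [18.0, 22.0]])   # agent 2 (e.g. tungsten): complementary ratio

def decompose(hu_low, hu_high):
    """Solve the 2x2 system for the two contrast-material concentrations
    from the low- and high-energy CT numbers of one voxel."""
    measured = np.array([hu_low, hu_high], dtype=float)
    return np.linalg.solve(A.T, measured)   # columns of A.T map concentrations to HU

c1, c2 = decompose(hu_low=310.0, hu_high=205.0)
print(f"agent 1 ~ {c1:.2f}, agent 2 ~ {c2:.2f} (arbitrary units)")
```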
Martin, Simon S; Wichmann, Julian L; Weyer, Hendrik; Albrecht, Moritz H; D'Angelo, Tommaso; Leithner, Doris; Lenga, Lukas; Booz, Christian; Scholtz, Jan-Erik; Bodelle, Boris; Vogl, Thomas J; Hammerstingl, Renate
2017-10-01
The aim of this study was to investigate the impact of noise-optimized virtual monoenergetic imaging (VMI+) reconstructions on quantitative and qualitative image parameters in patients with cutaneous malignant melanoma at thoracoabdominal dual-energy computed tomography (DECT). Seventy-six patients (48 men; 66.6 ± 13.8 years) with metastatic cutaneous malignant melanoma underwent DECT of the thorax and abdomen. Images were post-processed with standard linear blending (M_0.6), traditional virtual monoenergetic (VMI), and VMI+ technique. VMI and VMI+ images were reconstructed in 10-keV intervals from 40 to 100 keV. Attenuation measurements were performed in cutaneous melanoma lesions, as well as in regional lymph node, subcutaneous and in-transit metastases, to calculate objective signal-to-noise (SNR) and contrast-to-noise (CNR) ratios. Five-point scales were used to evaluate overall image quality and lesion delineation by three radiologists with different levels of experience. Objective indices SNR and CNR were highest in the 40-keV VMI+ series (5.6 ± 2.6 and 12.4 ± 3.4), significantly superior to all other reconstructions (all P<0.001). Qualitative image parameters showed highest values for 50-keV and 60-keV VMI+ reconstructions (median 5, respectively; P≤0.019) regarding overall image quality. Moreover, qualitative assessment of lesion delineation peaked in 40-keV VMI+ (median 5) and 50-keV VMI+ (median 4; P=0.055), significantly superior to all other reconstructions (all P<0.001). Low-keV noise-optimized VMI+ reconstructions substantially increase quantitative and qualitative image parameters, as well as subjective lesion delineation, compared to standard image reconstruction and traditional VMI in patients with cutaneous malignant melanoma at thoracoabdominal DECT. Copyright © 2017 Elsevier B.V. All rights reserved.
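For reference, the objective indices named here are typically computed from ROI statistics roughly as follows (standard definitions assumed; the study's exact ROI placement and formulas may differ):

```python
import numpy as np

def snr(lesion_hu):
    """Signal-to-noise ratio: mean lesion attenuation over its standard deviation."""
    return np.mean(lesion_hu) / np.std(lesion_hu)

def cnr(lesion_hu, background_hu):
    """Contrast-to-noise ratio: lesion-background contrast over background noise."""
    return (np.mean(lesion_hu) - np.mean(background_hu)) / np.std(background_hu)
```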
Wei, Gaofeng; Tang, Gang; Fu, Zengliang; Sun, Qiuming; Tian, Feng
2010-10-01
The China Mechanical Virtual Human (CMVH) is a human musculoskeletal biomechanical simulation platform based on China Visible Human slice images; it has great significance for realistic applications. This paper introduces the construction method of the CMVH 3D models. A simulation system solution based on Creator/Vega is then put forward to handle the complex and gigantic data of the 3D models. Finally, combined with MFC technology, the CMVH simulation system is developed and a running simulation scene is presented. This paper provides a new way for the virtual reality application of CMVH.
Virtual reality for spherical images
NASA Astrophysics Data System (ADS)
Pilarczyk, Rafal; Skarbek, Władysław
2017-08-01
This paper presents a virtual reality application framework and an application concept for mobile devices. The framework uses the Google Cardboard library for the Android operating system and allows creation of a virtual reality 360-degree video player using standard OpenGL ES rendering methods. It provides network methods to connect to a web server acting as the application resource provider; resources are delivered as JSON responses to HTTP requests. The web server also uses the Socket.IO library for synchronous communication between the application and the server. The framework implements methods to create an event-driven process for rendering additional content based on the video timestamp and the virtual reality head point of view.
[Clinical pathology on the verge of virtual microscopy].
Tolonen, Teemu; Näpänkangas, Juha; Isola, Jorma
2015-01-01
For more than 100 years, examination of pathology specimens has relied on the use of the light microscope. The technological progress of the last few years is enabling the digitization of histologic specimen slides and the application of the virtual microscope in diagnostics. Virtual microscopy will facilitate consultation, and digital image analysis serves to enhance the level of diagnostics. Organizing and monitoring clinicopathological meetings will become easier. A digital archive of histologic specimens and the virtual microscopy network are expected to benefit training and research as well, particularly with regard to the Finnish biobank network, which is currently being established.
Transforming Clinical Imaging Data for Virtual Reality Learning Objects
ERIC Educational Resources Information Center
Trelease, Robert B.; Rosset, Antoine
2008-01-01
Advances in anatomical informatics, three-dimensional (3D) modeling, and virtual reality (VR) methods have made computer-based structural visualization a practical tool for education. In this article, the authors describe streamlined methods for producing VR "learning objects," standardized interactive software modules for anatomical sciences…
DOT National Transportation Integrated Search
2005-11-01
In order to extend commercial vehicle enforcement coverage to routes that are not monitored by fixed weigh stations, Kentucky has developed and implemented a Remote Monitoring System (RMS) and a Virtual Weight Station (VWS). The RMS captures images o...
Reconstituted Three-Dimensional Interactive Imaging
NASA Technical Reports Server (NTRS)
Hamilton, Joseph; Foley, Theodore; Duncavage, Thomas; Mayes, Terrence
2010-01-01
A method combines two-dimensional images, enhancing the images as well as rendering a 3D, enhanced, interactive computer image or visual model. Any advanced compiler can be used in conjunction with any graphics library package for this method, which is intended to take digitized images and virtually stack them so that they can be interactively viewed as a set of slices. This innovation can take multiple image sources (film or digital) and create a "transparent" image in which higher densities are less transparent. The images are then stacked such that an apparent 3D object is created in virtual space for interactive review of the set of images. This innovation can be used with any application where 3D images are taken as slices of a larger object. These could include machines, materials for inspection, geological objects, or human scanning. Luminance values were stacked into planes with different transparency levels for different tissues. These transparency levels can encode multiple energy measures, such as the density of CT scans or radioactive density. A desktop computer with enough video memory to produce the image is capable of this work. The memory required changes with the size and resolution of the desired images to be stacked and viewed.
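A minimal sketch of the stacking idea, assuming a simple intensity-to-opacity mapping and front-to-back alpha compositing (the original NASA implementation details are not reproduced here):

```python
# Sketch: each 2D slice becomes a semi-transparent plane whose opacity grows
# with pixel density, and the planes are alpha-composited front to back.
import numpy as np

def composite_stack(slices, max_opacity=0.08):
    """slices: (n, h, w) array of intensities in [0, 1], ordered front to back.
    Returns a single (h, w) image from front-to-back alpha compositing."""
    out = np.zeros(slices.shape[1:])
    transmittance = np.ones_like(out)        # how much light still passes through
    for s in slices:
        alpha = max_opacity * s              # denser pixels are less transparent
        out += transmittance * alpha * s
        transmittance *= (1.0 - alpha)
    return out

volume = np.random.rand(50, 128, 128)        # stand-in for a stack of digitized slices
image = composite_stack(volume)
```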
NASA Astrophysics Data System (ADS)
Lin, Chien-Liang; Su, Yu-Zheng; Hung, Min-Wei; Huang, Kuo-Cheng
2010-08-01
In recent years, Augmented Reality (AR)[1][2][3] has become very popular in universities and research organizations. AR technology has been widely used in Virtual Reality (VR) fields such as sophisticated weapons, flight vehicle development, data model visualization, virtual training, entertainment and the arts. AR enhances the display output of a real environment with specific user-interactive functions or specific object recognition. It can be used in medical treatment, anatomy training, precision instrument casting, warplane guidance, engineering and remote robot control. AR has many advantages over VR. The system developed here combines sensors, software and imaging algorithms to make users feel that the scene is real, actual and existing. The imaging algorithms include a gray-level method, image binarization and white balancing, in order to achieve accurate image recognition and overcome the effects of lighting.
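A sketch of the kind of preprocessing chain named here (gray-level conversion, white balancing, binarization), assuming a gray-world white balance and Otsu thresholding; this is illustrative, not the authors' implementation:

```python
# Illustrative preprocessing for AR marker detection (not the authors' code).
import cv2
import numpy as np

def gray_world_white_balance(bgr):
    """Scale each channel so its mean matches the overall mean (gray-world assumption)."""
    means = bgr.reshape(-1, 3).mean(axis=0)
    gain = means.mean() / means
    return np.clip(bgr * gain, 0, 255).astype(np.uint8)

def binarize_for_markers(bgr):
    balanced = gray_world_white_balance(bgr)
    gray = cv2.cvtColor(balanced, cv2.COLOR_BGR2GRAY)      # gray-level conversion
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # binarization
    return binary

frame = cv2.imread("camera_frame.png")    # hypothetical input frame
mask = binarize_for_markers(frame)
```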
Single-pulse interference caused by temporal reflection at moving refractive-index boundaries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plansinis, Brent W.; Donaldson, William R.; Agrawal, Govind P.
2017-09-29
Here, we show numerically and analytically that temporal reflections from a moving refractive-index boundary act as an analog of Lloyd’s mirror, allowing a single pulse to produce interference fringes in time as it propagates inside a dispersive medium. This interference can be viewed as the pulse interfering with a virtual pulse that is identical to the first, except for a π-phase shift. Furthermore, if a second moving refractive-index boundary is added to create the analog of an optical waveguide, a single pulse can be self-imaged or made to produce two or more pulses by adjusting the propagation length in a process similar to the Talbot effect.
Undecompressed microbial populations from the deep sea.
Jannasch, H J; Wirsen, C O; Taylor, C D
1976-01-01
Metabolic transformations of glutamate and Casamino Acids by natural microbial populations collected from deep waters (1,600 to 3,100 m) were studied in decompressed and undecompressed samples. Pressure-retaining sampling/incubation vessels and appropriate subsampling techniques permitted time course experiments. In all cases the metabolic activity in undecompressed samples was lower than when incubated at 1 atm. Surface water controls showed a reduced activity upon compression. The processes involving substrate incorporation into cell material were more pressure sensitive than was respiration. The low utilization of substrates, previously found by in situ incubations for up to 12 months, was confirmed and demonstrated to consist of an initial phase of activity, in the range of 5 to 60 times lower than the controls, followed by a stationary phase of virtually no substrate utilization. No barophilic growth response (higher rates at elevated pressure than at 1 atm) was recorded; all populations observed exhibited various degrees of barotolerance. PMID:791117
Virtual 3d City Modeling: Techniques and Applications
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is essentially a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Three main geomatics approaches are generally used for virtual 3D city model generation: in the first, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second is based on high-resolution satellite images with laser scanning; and in the third, researchers use terrestrial images via close-range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the level of automation (automatic, semi-automatic and manual methods), and another based on the data-input technique (photogrammetry or laser techniques). After a detailed study, conclusions are drawn, together with a short justification and analysis and a view of present trends in 3D city modeling. The paper gives an overview of the techniques for generating virtual 3D city models using geomatics and of the applications of such models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques plays a major role in creating a virtual 3D city model. Every technique and method has advantages and drawbacks. Point-cloud models are a modern trend in virtual 3D city modeling. Photo-realistic, scalable, geo-referenced virtual 3D city models are very useful for many kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. The construction of virtual 3D city models has therefore been a most interesting research topic in recent years.
A virtual simulator designed for collision prevention in proton therapy.
Jung, Hyunuk; Kum, Oyeon; Han, Youngyih; Park, Hee Chul; Kim, Jin Sung; Choi, Doo Ho
2015-10-01
In proton therapy, collisions between the patient and nozzle potentially occur because of the large nozzle structure and efforts to minimize the air gap. Thus, software was developed to predict such collisions between the nozzle and patient using treatment virtual simulation. Three-dimensional (3D) modeling of a gantry inner-floor, nozzle, and robotic-couch was performed using SolidWorks based on the manufacturer's machine data. To obtain patient body information, a 3D-scanner was utilized right before CT scanning. Using the acquired images, a 3D-image of the patient's body contour was reconstructed. The accuracy of the image was confirmed against the CT image of a humanoid phantom. The machine components and the virtual patient were combined on the treatment-room coordinate system, resulting in a virtual simulator. The simulator simulated the motion of its components such as rotation and translation of the gantry, nozzle, and couch in real scale. A collision, if any, was examined both in static and dynamic modes. The static mode assessed collisions only at fixed positions of the machine's components, while the dynamic mode operated any time a component was in motion. A collision was identified if any voxels of two components, e.g., the nozzle and the patient or couch, overlapped when calculating volume locations. The event and collision point were visualized, and collision volumes were reported. All components were successfully assembled, and the motions were accurately controlled. The 3D-shape of the phantom agreed with CT images within a deviation of 2 mm. Collision situations were simulated within minutes, and the results were displayed and reported. The developed software will be useful in improving patient safety and clinical efficiency of proton therapy.
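The voxel-overlap test at the core of such a collision check can be sketched as follows, assuming each component has already been rasterized into a boolean occupancy grid in room coordinates (grid size and point sets below are placeholders, not the developed software):

```python
# Minimal sketch of the voxel-overlap collision test; the real simulator's
# meshes, coordinate handling and motion control are far more involved.
import numpy as np

def occupancy(points_mm, grid_shape, voxel_mm=5.0):
    """Rasterize a component's point set into a boolean room grid."""
    grid = np.zeros(grid_shape, dtype=bool)
    idx = np.floor(points_mm / voxel_mm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    idx = idx[inside]
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return grid

def collides(component_a, component_b):
    """Return a collision flag and the indices of overlapping voxels."""
    overlap = component_a & component_b
    return overlap.any(), np.argwhere(overlap)

shape = (200, 200, 200)                         # 1 m cube of the room at 5 mm voxels
nozzle  = occupancy(np.random.rand(20000, 3) * 900, shape)        # stand-in point sets
patient = occupancy(np.random.rand(20000, 3) * 900 + 50, shape)
hit, where = collides(nozzle, patient)
```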
Deep ensemble learning of virtual endoluminal views for polyp detection in CT colonography
NASA Astrophysics Data System (ADS)
Umehara, Kensuke; Näppi, Janne J.; Hironaka, Toru; Regge, Daniele; Ishida, Takayuki; Yoshida, Hiroyuki
2017-03-01
Robust training of a deep convolutional neural network (DCNN) requires a very large number of annotated datasets that are currently not available in CT colonography (CTC). We previously demonstrated that deep transfer learning provides an effective approach for robust application of a DCNN in CTC. However, at high detection accuracy, the differentiation of small polyps from non-polyps was still challenging. In this study, we developed and evaluated a deep ensemble learning (DEL) scheme for the review of virtual endoluminal images to improve the performance of computer-aided detection (CADe) of polyps in CTC. Nine different types of image renderings were generated from virtual endoluminal images of polyp candidates detected by a conventional CADe system. Eleven DCNNs that represented three types of publicly available pre-trained DCNN models were re-trained by transfer learning to identify polyps from the virtual endoluminal images. A DEL scheme that determines the final detected polyps by a review of the nine types of VE images was developed by combining the DCNNs using a random forest classifier as a meta-classifier. For evaluation, we sampled 154 CTC cases from a large CTC screening trial and divided the cases randomly into a training dataset and a test dataset. At 3.9 false-positive (FP) detections per patient on average, the detection sensitivities of the conventional CADe system, the highest-performing single DCNN, and the DEL scheme were 81.3%, 90.7%, and 93.5%, respectively, for polyps ≥6 mm in size. For small polyps, the DEL scheme reduced the number of false positives by up to 83% over that of using a single DCNN alone. These preliminary results indicate that the DEL scheme provides an effective approach for improving the polyp detection performance of CADe in CTC, especially for small polyps.
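The ensemble step itself, DCNN outputs combined by a random-forest meta-classifier, can be sketched with scikit-learn; the feature layout and data below are assumptions, and the DCNN feature extractors and renderings are not reproduced:

```python
# Sketch of the meta-classification step only (the eleven DCNNs and the nine
# endoluminal renderings are outside this snippet).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_candidates = 600
n_features = 11 * 9            # assumed layout: 11 DCNNs x 9 rendering types

# Stand-in for per-candidate polyp probabilities from each DCNN/rendering pair.
X = rng.random((n_candidates, n_features))
y = rng.integers(0, 2, n_candidates)          # 1 = true polyp, 0 = false positive

meta = RandomForestClassifier(n_estimators=200, random_state=0)
meta.fit(X[:450], y[:450])                    # train split
ensemble_scores = meta.predict_proba(X[450:])[:, 1]   # final detection scores
```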
Adapting line integral convolution for fabricating artistic virtual environment
NASA Astrophysics Data System (ADS)
Lee, Jiunn-Shyan; Wang, Chung-Ming
2003-04-01
Vector fields occur not only in scientific applications but also in treasured art such as sculptures and paintings, where artists depict our natural environment by stressing valued directional features besides color and shape information. Line integral convolution (LIC), developed for imaging vector fields in scientific visualization, has the potential to produce directional images. In this paper we present several techniques that exploit LIC to generate impressionistic images forming an artistic virtual environment. We take advantage of the directional information given by a photograph and incorporate several refinements, including a non-photorealistic shading technique and statistical detail control. In particular, the non-photorealistic shading technique blends cool and warm colors into the photograph to imitate artists' painting conventions. We also adopt a statistical technique that controls the integral length according to image variance in order to preserve details. Furthermore, we propose a method for generating a series of mip-maps, which reveal constant strokes under multi-resolution viewing and achieve frame coherence in an interactive walkthrough system. The experimental results show satisfying emulation and efficient computation; consequently, the proposed technique successfully fabricates a wide category of non-photorealistic rendering (NPR) applications, such as interactive virtual environments with artistic perception.
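A minimal LIC sketch, assuming a dense 2D vector field and a white-noise input texture (fixed unit steps and no interpolation, so far cruder than production implementations):

```python
# Minimal line integral convolution (LIC): average a noise texture along short
# streamlines of the vector field to reveal its directional structure.
import numpy as np

def lic(vx, vy, length=15, noise=None):
    h, w = vx.shape
    if noise is None:
        noise = np.random.rand(h, w)
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for direction in (+1.0, -1.0):            # trace forward and backward
                px, py = float(x), float(y)
                for _ in range(length):
                    i, j = int(round(py)), int(round(px))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    total += noise[i, j]
                    count += 1
                    norm = np.hypot(vx[i, j], vy[i, j]) + 1e-8
                    px += direction * vx[i, j] / norm  # unit step along the field
                    py += direction * vy[i, j] / norm
            out[y, x] = total / max(count, 1)
    return out

yy, xx = np.mgrid[0:128, 0:128]
image = lic(-(yy - 64.0), (xx - 64.0))                 # circular test field
```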
An efficient hole-filling method based on depth map in 3D view generation
NASA Astrophysics Data System (ADS)
Liang, Haitao; Su, Xiu; Liu, Yilin; Xu, Huaiyuan; Wang, Yi; Chen, Xiaodong
2018-01-01
A new virtual view is synthesized through depth-image-based rendering (DIBR) using a single color image and its associated depth map in 3D view generation. Holes are unavoidably generated in the 2D-to-3D conversion process. We propose a hole-filling method based on the depth map to address this problem. First, we improve the DIBR process by proposing a one-to-four (OTF) algorithm, and the "z-buffer" algorithm is used to solve the overlap problem. Then, based on the classical patch-based algorithm of Criminisi et al., we propose a hole-filling algorithm that uses depth-map information to handle the image after DIBR. To improve the accuracy of the virtual image, inpainting starts from the background side. In the priority calculation, in addition to the confidence term and the data term, we add a depth term. In the search for the most similar patch in the source region, we define a depth similarity to improve the search accuracy. Experimental results show that the proposed method can effectively improve the quality of the 3D virtual view both subjectively and objectively.
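The DIBR warping step with a z-buffer can be sketched as below; the one-to-four mapping, the depth-guided priority term and the patch search from the paper are not reproduced, and the camera parameters are placeholders:

```python
# Sketch of depth-image-based rendering (DIBR) with a z-buffer: pixels are
# shifted horizontally by a depth-derived disparity; unfilled positions become holes.
import numpy as np

def dibr_warp(color, depth, baseline=0.05, focal=500.0):
    """color: (h, w, 3); depth: (h, w) in metres. Returns warped image and hole mask."""
    h, w, _ = color.shape
    warped = np.zeros_like(color)
    zbuf = np.full((h, w), np.inf)
    disparity = focal * baseline / np.maximum(depth, 1e-6)
    for y in range(h):
        for x in range(w):
            xv = int(round(x - disparity[y, x]))     # shift toward the virtual view
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]            # nearer surfaces win overlaps
                warped[y, xv] = color[y, x]
    holes = np.isinf(zbuf)                           # positions left for inpainting
    return warped, holes
```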
Investigating the Use of Cloudbursts for High-Throughput Medical Image Registration
Kim, Hyunjoo; Parashar, Manish; Foran, David J.; Yang, Lin
2010-01-01
This paper investigates the use of clouds and autonomic cloudbursting to support medical image registration. The goal is to enable a virtual computational cloud that integrates local computational environments and public cloud services on-the-fly, and supports image registration requests from different distributed researcher groups with varied computational requirements and QoS constraints. The virtual cloud essentially implements shared and coordinated task-spaces, which coordinate the scheduling of jobs submitted by a dynamic set of research groups to their local job queues. A policy-driven scheduling agent uses the QoS constraints along with performance history and the state of the resources to determine the appropriate size and mix of the public and private cloud resources that should be allocated to a specific request. The virtual computational cloud and the medical image registration service have been developed using the CometCloud engine and have been deployed on a combination of private clouds at Rutgers University and the Cancer Institute of New Jersey and Amazon EC2. An experimental evaluation is presented and demonstrates the effectiveness of autonomic cloudbursts and policy-based autonomic scheduling for this application. PMID:20640235
Coil Compression for Accelerated Imaging with Cartesian Sampling
Zhang, Tao; Pauly, John M.; Vasanawala, Shreyas S.; Lustig, Michael
2012-01-01
MRI using receiver arrays with many coil elements can provide high signal-to-noise ratio and increase parallel imaging acceleration. At the same time, the growing number of elements results in larger datasets and more computation in the reconstruction. This is of particular concern in 3D acquisitions and in iterative reconstructions. Coil compression algorithms are effective in mitigating this problem by compressing data from many channels into fewer virtual coils. In Cartesian sampling there often are fully sampled k-space dimensions. In this work, a new coil compression technique for Cartesian sampling is presented that exploits the spatially varying coil sensitivities in these non-subsampled dimensions for better compression and computation reduction. Instead of directly compressing in k-space, coil compression is performed separately for each spatial location along the fully-sampled directions, followed by an additional alignment process that guarantees the smoothness of the virtual coil sensitivities. This important step provides compatibility with autocalibrating parallel imaging techniques. Its performance is not susceptible to artifacts caused by a tight imaging field-of-view. High quality compression of in-vivo 3D data from a 32 channel pediatric coil into 6 virtual coils is demonstrated. PMID:22488589
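A sketch of the per-location SVD compression; the alignment step that enforces smooth virtual coil sensitivities, which the abstract highlights as essential, is omitted here, and the array layout is an assumption:

```python
# Sketch of coil compression performed separately at each position along the
# fully sampled readout axis: an SVD maps many physical channels to a few
# virtual coils. The inter-position alignment described in the paper is omitted.
import numpy as np

def compress_per_position(data_xkykzc, n_virtual=6):
    """data_xkykzc: complex array (x, ky, kz, coils), already FFT'd along x."""
    nx, nky, nkz, nc = data_xkykzc.shape
    out = np.zeros((nx, nky, nkz, n_virtual), dtype=complex)
    for ix in range(nx):
        samples = data_xkykzc[ix].reshape(-1, nc)      # all ky-kz samples at this x
        _, _, vh = np.linalg.svd(samples, full_matrices=False)
        compression = vh[:n_virtual].conj().T          # (coils, virtual coils)
        out[ix] = (samples @ compression).reshape(nky, nkz, n_virtual)
    return out
```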
NASA Astrophysics Data System (ADS)
Yang, Yujie; Dong, Di; Shi, Liangliang; Wang, Jun; Yang, Xin; Tian, Jie
2015-03-01
Optical projection tomography (OPT) is a mesoscopic-scale optical imaging technique for specimens between 1 mm and 10 mm. OPT has proven to be immensely useful in a wide variety of biological applications, such as developmental biology and pathology, but its shortcomings in imaging specimens containing widely differing contrast elements are obvious. A longer exposure for high-intensity tissues may lead to oversaturation of other areas, whereas a relatively short exposure may cause weak signals to become indistinguishable from the surrounding background. In this paper, we propose an approach that makes a trade-off between capturing weak signals and revealing more details in OPT imaging. The approach consists of three steps. First, the specimens are scanned once over 360 degrees at an exposure above normal but below overexposure to acquire the projection data; this reduces photobleaching and the pre-registration computation compared with the multiple different exposures used in the conventional high dynamic range (HDR) imaging method. Second, three virtual channels are produced for each projection image based on the histogram distribution, to simulate the low, normal and high exposure images used in traditional HDR photography. Finally, each virtual channel is normalized to the full gray-scale range and the three channels are recombined into one image using weighting coefficients optimized by a standard eigen-decomposition method. After applying our approach to the projection data, the filtered back projection (FBP) algorithm is carried out for three-dimensional reconstruction. A neonatal wild-type mouse paw was scanned to verify this approach, and the results demonstrated its effectiveness.
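One plausible reading of the virtual-channel fusion for a single projection is sketched below; the percentile split points, the normalization, and the use of the leading eigenvector as weights are assumptions rather than the paper's exact recipe:

```python
# Illustrative virtual-channel fusion for one projection image (assumed recipe).
import numpy as np

def fuse_virtual_channels(projection):
    lo, hi = np.percentile(projection, [33, 66])        # histogram-based split points
    channels = [np.clip(projection, None, lo),          # emphasizes weak signal
                np.clip(projection, lo, hi),            # "normal exposure" band
                np.clip(projection, hi, None)]          # emphasizes bright tissue
    norm = [(c - c.min()) / (np.ptp(c) + 1e-12) for c in channels]

    X = np.stack([c.ravel() for c in norm])             # 3 x n_pixels
    eigvals, eigvecs = np.linalg.eigh(np.cov(X))
    w = np.abs(eigvecs[:, -1])                          # leading eigenvector as weights
    w /= w.sum()
    fused = sum(wi * ci for wi, ci in zip(w, norm))
    return fused                                        # input to FBP reconstruction
```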
Highly Sophisticated Virtual Laboratory Instruments in Education
NASA Astrophysics Data System (ADS)
Gaskins, T.
2006-12-01
Many areas of science have advanced or stalled according to the ability to see what cannot normally be seen. Visual understanding has been key to many of the world's greatest breakthroughs, such as the discovery of DNA's double helix. Scientists use sophisticated instruments to see what the human eye cannot. Light microscopes, scanning electron microscopes (SEM), spectrometers and atomic force microscopes are employed to examine and learn the details of the extremely minute. It is rare that students prior to university have access to such instruments, or are granted full ability to probe and magnify as desired. Virtual Lab, by providing highly authentic software instruments and comprehensive imagery of real specimens, provides them this opportunity. Virtual Lab's instruments let explorers operate virtual devices on a personal computer to examine real specimens. Exhaustive sets of images, systematically and robotically photographed at thousands of positions and multiple magnifications and focal points, allow students to zoom in and focus on the most minute detail of each specimen. Controls on each Virtual Lab device interactively and smoothly move the viewer through these images to display the specimen as the instrument saw it. Users control position, magnification, focal length, filters and other parameters. Energy dispersion spectrometry is combined with SEM imagery to enable exploration of chemical composition at minute scale and arbitrary location. Annotation capabilities allow scientists, teachers and students to indicate important features or areas. Virtual Lab is a joint project of NASA and the Beckman Institute at the University of Illinois at Urbana-Champaign. Four instruments currently compose the Virtual Lab suite: a scanning electron microscope and companion energy dispersion spectrometer, a high-power light microscope, and a scanning probe microscope that captures surface properties to the level of atoms. Descriptions of instrument operating principles and uses are also part of Virtual Lab. The Virtual Lab software and its increasingly rich collection of specimens are free to anyone. This presentation describes Virtual Lab and its uses in formal and informal education.
Papafaklis, Michail I; Muramatsu, Takashi; Ishibashi, Yuki; Bourantas, Christos V; Fotiadis, Dimitrios I; Brilakis, Emmanouil S; Garcia-Garcia, Héctor M; Escaned, Javier; Serruys, Patrick W; Michalis, Lampros K
2018-03-01
Fractional flow reserve (FFR) has been established as a useful diagnostic tool. The distal coronary pressure to aortic pressure (Pd/Pa) ratio at rest is a simpler physiologic index but also requires the use of the pressure wire, whereas recently proposed virtual functional indices derived from coronary imaging require complex blood flow modelling and/or are time-consuming. Our aim was to test the diagnostic performance of virtual resting Pd/Pa using routine angiographic images and a simple flow model. Three-dimensional quantitative coronary angiography (3D-QCA) was performed in 139 vessels (120 patients) with intermediate lesions assessed by FFR. The resting Pd/Pa for each lesion was assessed by computational fluid dynamics. The discriminatory power of virtual resting Pd/Pa against FFR (reference: ≤0.80) was high (area under the receiver operator characteristic curve [AUC]: 90.5% [95% CI: 85.4-95.6%]). Diagnostic accuracy, sensitivity and specificity for the optimal virtual resting Pd/Pa cut-off (≤0.94) were 84.9%, 90.4% and 81.6%, respectively. Virtual resting Pd/Pa demonstrated superior performance (p<0.001) versus 3D-QCA %area stenosis (AUC: 77.5% [95% CI: 69.8-85.3%]). There was a good correlation between virtual resting Pd/Pa and FFR (r=0.69, p<0.001). Virtual resting Pd/Pa using routine angiographic data and a simple flow model provides fast functional assessment of coronary lesions without requiring the pressure-wire and hyperaemia induction. The high diagnostic performance of virtual resting Pd/Pa for predicting FFR shows promise for using this simple/fast virtual index in clinical practice. Copyright © 2017 Australian and New Zealand Society of Cardiac and Thoracic Surgeons (ANZSCTS) and the Cardiac Society of Australia and New Zealand (CSANZ). Published by Elsevier B.V. All rights reserved.
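The diagnostic comparison against FFR can be sketched as a simple threshold evaluation; the code assumes per-vessel arrays of virtual resting Pd/Pa and invasive FFR and uses the cut-offs quoted in the abstract, with the flow modelling itself left out:

```python
# Sketch of the diagnostic evaluation: virtual resting Pd/Pa (cut-off <= 0.94)
# judged against invasive FFR (reference: <= 0.80). Inputs are assumed arrays.
import numpy as np
from sklearn.metrics import roc_auc_score

def diagnostic_metrics(virtual_pdpa, ffr, cutoff=0.94, reference=0.80):
    truth = ffr <= reference                    # functionally significant lesions
    test = virtual_pdpa <= cutoff
    tp = np.sum(test & truth);   fn = np.sum(~test & truth)
    tn = np.sum(~test & ~truth); fp = np.sum(test & ~truth)
    return {"accuracy":    (tp + tn) / len(ffr),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "auc": roc_auc_score(truth, -virtual_pdpa)}  # lower Pd/Pa = more positive
```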
Virtual Reality in the Assessment and Treatment of Weight-Related Disorders.
Wiederhold, Brenda K; Riva, Giuseppe; Gutiérrez-Maldonado, José
2016-02-01
Virtual Reality (VR) has, for the past two decades, proven to be a useful adjunctive tool for both assessment and treatment of patients with eating disorders and obesity. VR allows an individual to enter scenarios that simulate real-life situations and to encounter food cues known to trigger his/her disordered eating behavior. As well, VR enables three-dimensional figures of the patient's body to be presented, helping him/her to reach an awareness of body image distortion and then providing the opportunity to confront and correct distortions, resulting in a more realistic body image and a decrease in body image dissatisfaction. In this paper, we describe seminal studies in this research area.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garfield, B.R.; Rendell, J.T.
1991-01-01
The present conference discusses the application of schlieren photography in industry, laser fiber-optic high speed photography, holographic visualization of hypervelocity explosions, sub-100-picosec X-ray grating cameras, flash soft X-radiography, a novel approach to synchroballistic photography, a programmable image converter framing camera, high speed readout CCDs, an ultrafast optomechanical camera, a femtosec streak tube, a modular streak camera for laser ranging, and human-movement analysis with real-time imaging. Also discussed are high-speed photography of high-resolution moire patterns, a 2D electron-bombarded CCD readout for picosec electrooptical data, laser-generated plasma X-ray diagnostics, 3D shape restoration with virtual grating phase detection, Cu vapor lasers for high speed photography, a two-frequency picosec laser with electrooptical feedback, the conversion of schlieren systems to high speed interferometers, laser-induced cavitation bubbles, stereo holographic cinematography, a gatable photonic detector, and laser generation of Stoneley waves at liquid-solid boundaries.
The functional micro-organization of grid cells revealed by cellular-resolution imaging.
Heys, James G; Rangarajan, Krsna V; Dombeck, Daniel A
2014-12-03
Establishing how grid cells are anatomically arranged, on a microscopic scale, in relation to their firing patterns in the environment would facilitate a greater microcircuit-level understanding of the brain's representation of space. However, all previous grid cell recordings used electrode techniques that provide limited descriptions of fine-scale organization. We therefore developed a technique for cellular-resolution functional imaging of medial entorhinal cortex (MEC) neurons in mice navigating a virtual linear track, enabling a new experimental approach to study MEC. Using these methods, we show that grid cells are physically clustered in MEC compared to nongrid cells. Additionally, we demonstrate that grid cells are functionally micro-organized: the similarity between the environment firing locations of grid cell pairs varies as a function of the distance between them according to a "Mexican hat"-shaped profile. This suggests that, on average, nearby grid cells have more similar spatial firing phases than those further apart. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Broggini, Filippo; Wapenaar, Kees; van der Neut, Joost; Snieder, Roel
2014-01-01
An iterative method is presented that allows one to retrieve the Green's function originating from a virtual source located inside a medium using reflection data measured only at the acquisition surface. In addition to the reflection response, an estimate of the travel times corresponding to the direct arrivals is required. However, no detailed information about the heterogeneities in the medium is needed. The iterative scheme generalizes the Marchenko equation for inverse scattering to the seismic reflection problem. To give insight in the mechanism of the iterative method, its steps for a simple layered medium are analyzed using physical arguments based on the stationary phase method. The retrieved Green's wavefield is shown to correctly contain the multiples due to the inhomogeneities present in the medium. Additionally, a variant of the iterative scheme enables decomposition of the retrieved wavefield into its downgoing and upgoing components. These wavefields then enable creation of a ghost-free image of the medium with either cross correlation or multidimensional deconvolution, presenting an advantage over standard prestack migration.
A new approach towards image based virtual 3D city modeling by using close range photogrammetry
NASA Astrophysics Data System (ADS)
Singh, S. P.; Jain, K.; Mandla, V. R.
2014-05-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Three main image-based approaches are generally used for virtual 3D city model generation: in the first, researchers use sketch-based modeling; the second is procedural grammar-based modeling; and the third is close-range photogrammetry-based modeling. A literature study shows that, to date, there is no complete solution for creating a complete 3D city model from images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling using close-range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing, and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area; image frames were created from the video data, and the minimum required set of suitable frames was selected for 3D processing. In the second section, a 3D model of the area was created based on close-range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging other pieces of the larger area, and scaling and alignment of the model were performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created and converted into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious, and the accuracy of the model is good. For this research, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many countries and high-resolution satellite images are costly; the proposed method is based only on simple video recording of the area and is thus suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for various applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. This study therefore provides a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close-range photogrammetry.
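The core two-view step of such a close-range photogrammetry pipeline can be sketched with OpenCV as below; frame selection, multi-view merging, bundle adjustment and texturing are outside this snippet, and the camera intrinsics shown are placeholders:

```python
# Two-view reconstruction sketch for one pair of video frames (illustrative only).
import cv2
import numpy as np

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])   # assumed intrinsics

def reconstruct_pair(img1, img2):
    orb = cv2.ORB_create(4000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, p1, p2, K, mask=mask)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                           # relative pose of second view
    pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (pts4d[:3] / pts4d[3]).T                      # sparse 3D point cloud
```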
Rezakova, M V; Mazhirina, K G; Pokrovskiy, M A; Savelov, A A; Savelova, O A; Shtark, M B
2013-04-01
Using functional magnetic resonance imaging, we performed online brain mapping of gamers trained to voluntarily (cognitively) control their heart rate, the parameter that drove a competitive virtual gameplay in an adaptive feedback loop. With the default start picture, the regions of interest during the formation of an optimal cognitive strategy were Brodmann areas 19, 37, 39 and 40, as well as cerebellar structures (vermis, amygdala, pyramid, clivus). The "localization" concept of the cerebellum's contribution to cognitive processes is discussed.
PP and PS interferometric images of near-seafloor sediments
Haines, S.S.
2011-01-01
I present interferometric processing examples from an ocean-bottom cable (OBC) dataset collected at a water depth of 800 m in the Gulf of Mexico. Virtual source and receiver gathers created through cross-correlation of full wavefields show clear PP reflections and PS conversions from near-seafloor layers of interest. Virtual gathers from wavefield-separated data show improved PP and PS arrivals. PP and PS brute stacks from the wavefield-separated data compare favorably with images from a non-interferometric processing flow. © 2011 Society of Exploration Geophysicists.
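The cross-correlation step that turns one receiver into a virtual source can be sketched as follows, assuming recordings organized as (sources × time samples); wavefield separation and gather sorting are omitted:

```python
# Interferometry sketch: cross-correlating the wavefields recorded at two
# receivers and stacking over sources yields a virtual-source trace, as if one
# receiver had acted as a source at the other's position.
import numpy as np
from scipy.signal import fftconvolve

def virtual_trace(rec_a, rec_b):
    """rec_a, rec_b: (n_sources, n_samples) recordings at receivers A and B.
    Returns the stacked cross-correlation (virtual source at A recorded at B)."""
    n = rec_a.shape[1]
    stack = np.zeros(2 * n - 1)
    for a, b in zip(rec_a, rec_b):
        stack += fftconvolve(b, a[::-1], mode="full")   # correlation via convolution
    return stack / len(rec_a)
```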
Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B
2016-08-01
Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm²) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm³) on conventional 3D ultrasound. Levator ani hiatal volume (in cm³) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm³ (95% CI, 1.00-2.04 cm³) at rest and 1.16 cm³ (95% CI, 0.56-1.76 cm³) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q Stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or POP-Q Stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
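The volume estimate (levator area × 1.5 cm slab) and the paired mean difference with its 95% CI follow standard formulas; a sketch with synthetic inputs might be:

```python
# Sketch of the volume estimate and the paired comparison reported in the study
# (standard formulas; the measurements fed in would be the per-patient volumes).
import numpy as np
from scipy import stats

def hiatal_volume_cm3(levator_area_cm2, slab_thickness_cm=1.5):
    """Hiatal volume from the rendered-slab area measurement."""
    return levator_area_cm2 * slab_thickness_cm

def mean_difference_ci(conventional, virtual_reality, level=0.95):
    """Paired mean difference between the two methods with a t-based CI."""
    d = np.asarray(conventional) - np.asarray(virtual_reality)
    mean, sem = d.mean(), stats.sem(d)
    lo, hi = stats.t.interval(level, len(d) - 1, loc=mean, scale=sem)
    return mean, (lo, hi)
```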
Development of a teledermatopathology consultation system using virtual slides
2012-01-01
Background An online consultation system using virtual slides (whole slide images; WSI) has been developed for pathological diagnosis, and could help compensate for the shortage of pathologists, especially in the field of dermatopathology and in other fields dealing with difficult cases. This study focused on the performance and future potential of the system. Method In our system, histological specimens on slide glasses are digitized by a virtual slide instrument, converted into web data, and uploaded to an open server. Using our own purpose-built online system, we then input patient details such as age, gender, affected region, clinical data, past history and other related items. We next select up to ten consultants. Finally, we send an e-mail to all consultants simultaneously through a single command. The consultant receives an e-mail containing an ID and password which is used to access the open server and inspect the images and other data associated with the case. The consultant makes a diagnosis, which is sent to us along with comments. Because this was a pilot study, we also conducted several questionnaires with consultants concerning the quality of images, operability, usability, and other issues. Results We solicited consultations for 36 cases, including cases of tumor, involving one to eight consultants in the field of dermatopathology. No problems were noted concerning the images or the functioning of the system on the sender or receiver sides. The quickest diagnosis was received only 18 minutes after sending our data, which is much faster than in conventional consultation using glass slides. There were no major problems relating to the diagnosis, although there were some minor differences of opinion between consultants. The results of questionnaires answered by the consultants confirmed the usability of this system for pathological consultation (16 out of 23 consultants). Conclusion We have developed a novel teledermatopathological consultation system using virtual slides, and investigated the usefulness of the system. The results demonstrate that our system can be a useful tool for international medical work, and we anticipate its wider application in the future. Virtual slides The virtual slides for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1902376044831574 PMID:23237667
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kiarashi, Nooshin; Nolte, Adam C.; Sturgeon, Gregory M.
Purpose: Physical phantoms are essential for the development, optimization, and evaluation of x-ray breast imaging systems. Recognizing the major effect of anatomy on image quality and clinical performance, such phantoms should ideally reflect the three-dimensional structure of the human breast. Currently, there is no commercially available three-dimensional physical breast phantom that is anthropomorphic. The authors present the development of a new suite of physical breast phantoms based on human data. Methods: The phantoms were designed to match the extended cardiac-torso virtual breast phantoms that were based on dedicated breast computed tomography images of human subjects. The phantoms were fabricated by high-resolution multimaterial additive manufacturing (3D printing) technology. The glandular equivalency of the photopolymer materials was measured relative to breast tissue-equivalent plastic materials. Based on the current state-of-the-art in the technology and available materials, two variations were fabricated. The first was a dual-material phantom, the Doublet. Fibroglandular tissue and skin were represented by the most radiographically dense material available; adipose tissue was represented by the least radiographically dense material. The second variation, the Singlet, was fabricated with a single material to represent fibroglandular tissue and skin. It was subsequently filled with adipose-equivalent materials including oil, beeswax, and permanent urethane-based polymer. Simulated microcalcification clusters were further included in the phantoms via crushed eggshells. The phantoms were imaged and characterized visually and quantitatively. Results: The mammographic projections and tomosynthesis reconstructed images of the fabricated phantoms yielded realistic breast background. The mammograms of the phantoms demonstrated close correlation with simulated mammographic projection images of the corresponding virtual phantoms. Furthermore, power-law descriptions of the phantom images were in general agreement with real human images. The Singlet approach offered more realistic contrast as compared to the Doublet approach, but at the expense of air bubbles and air pockets that formed during the filling process. Conclusions: The presented physical breast phantoms and their matching virtual breast phantoms offer realistic breast anatomy, patient variability, and ease of use, making them a potential candidate for performing both system quality control testing and virtual clinical trials.
Spectral Analysis within the Virtual Observatory: The GAVO Service TheoSSA
NASA Astrophysics Data System (ADS)
Ringat, E.
2012-03-01
In the last decade, numerous Virtual Observatory organizations were established. One of these is the German Astrophysical Virtual Observatory (GAVO) that e.g. provides access to spectral energy distributions via the service TheoSSA. In a pilot phase, these are based on the Tübingen NLTE Model-Atmosphere Package (TMAP) and suitable for hot, compact stars. We demonstrate the power of TheoSSA in an application to the sdOB primary of AA Doradus by comparison with a “classical” spectral analysis.
WE-FG-207B-11: Objective Image Characterization of Spectral CT with a Dual-Layer Detector
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozguner, O; Halliburton, S; Dhanantwari, A
2016-06-15
Purpose: To obtain objective reference data for the spectral performance of a dual-layer detector CT platform (IQon, Philips) and compare virtual monoenergetic to conventional CT images. Methods: Scanning was performed using the hospital’s clinical adult body protocol: helical acquisition at 120 kVp, with CTDIvol = 15 mGy. Multiple modules (591, 515, 528) of a CATPHAN 600 phantom and a 20 cm diameter cylindrical water phantom were scanned. No modifications to the standard protocol were necessary to enable spectral imaging. Both conventional and virtual monoenergetic images were generated from the acquired data. Noise characteristics were assessed through Noise Power Spectra (NPS) and pixel standard deviation from water phantom images. Spatial resolution was evaluated using Modulation Transfer Functions (MTF) of a tungsten wire as well as resolution bars. Low-contrast detectability was studied using the contrast-to-noise ratio (CNR) of a low contrast object. Results: MTF curves of monoenergetic and conventional images were almost identical. MTF 50%, 10%, and 5% levels for monoenergetic images agreed with conventional images within 0.05 lp/cm. These observations were verified by the resolution bars, which were clearly resolved at 7 lp/cm but started blurring at 8 lp/cm for this protocol in both conventional and 70 keV images. NPS curves indicated that, compared to conventional images, the noise power distribution of 70 keV monoenergetic images is similar (i.e. noise texture is similar) but exhibits a low frequency peak at keVs higher and lower than 70 keV. Standard deviation measurements show monoenergetic images have lower noise except at 40 keV, where it is slightly higher. CNR of monoenergetic images is mostly flat across keV values and is superior to that of conventional images. Conclusion: Values for standard image quality metrics are the same or better for monoenergetic images compared to conventional images. Results indicate virtual monoenergetic images can be used without any loss in image quality or noise penalties relative to conventional images. This study was performed as part of a research agreement among Philips Healthcare, University Hospitals of Cleveland, and Case Western Reserve University.
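A rough sketch of how an NPS estimate is commonly formed from uniform water-phantom ROIs (normalization conventions vary between labs, and this is not the study's exact procedure):

```python
# Sketch of a 2D noise power spectrum (NPS) estimate from noise-only ROIs.
import numpy as np

def nps_2d(rois, pixel_mm):
    """rois: (n, ny, nx) noise-only patches cut from the water-phantom images."""
    n, ny, nx = rois.shape
    spectra = []
    for roi in rois:
        detrended = roi - roi.mean()                     # remove the DC/mean term
        spectra.append(np.abs(np.fft.fft2(detrended)) ** 2)
    nps = np.mean(spectra, axis=0) * (pixel_mm ** 2) / (nx * ny)
    return np.fft.fftshift(nps)                          # zero frequency at the centre
```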
2013-01-01
Background Virtual microscopy includes digitisation of histology slides and the use of computer technologies for complex investigation of diseases such as cancer. However, automated image analysis, or website publishing of such digital images, is hampered by their large file sizes. Results We have developed two Java based open source tools: Snapshot Creator and NDPI-Splitter. Snapshot Creator converts a portion of a large digital slide into a desired quality JPEG image. The image is linked to the patient’s clinical and treatment information in a customised open source cancer data management software (Caisis) in use at the Australian Breast Cancer Tissue Bank (ABCTB) and then published on the ABCTB website (http://www.abctb.org.au) using Deep Zoom open source technology. Using the ABCTB online search engine, digital images can be searched by defining various criteria such as cancer type, or biomarkers expressed. NDPI-Splitter splits a large image file into smaller sections of TIFF images so that they can be easily analysed by image analysis software such as Metamorph or Matlab. NDPI-Splitter also has the capacity to filter out empty images. Conclusions Snapshot Creator and NDPI-Splitter are novel open source Java tools. They convert digital slides into files of smaller size for further processing. In conjunction with other open source tools such as Deep Zoom and Caisis, this suite of tools is used for the management and archiving of digital microscopy images, enabling digitised images to be explored and zoomed online. Our online image repository also has the capacity to be used as a teaching resource. These tools also enable large files to be sectioned for image analysis. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/5330903258483934 PMID:23402499
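The splitting-and-filtering idea behind NDPI-Splitter can be illustrated with a short script; the original tools are Java, so the Python below is purely illustrative and assumes the slide has already been exported to a standard TIFF (the file name and thresholds are hypothetical):

```python
# Illustrative sketch of tile splitting with empty-tile filtering (not the
# actual NDPI-Splitter code, which is Java and reads NDPI files directly).
import numpy as np
from PIL import Image

def split_slide(path, tile=2048, min_std=5.0):
    Image.MAX_IMAGE_PIXELS = None                 # allow very large images
    slide = Image.open(path)
    w, h = slide.size
    kept = 0
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            patch = slide.crop(box)
            if np.asarray(patch.convert("L")).std() < min_std:
                continue                           # filter out empty/background tiles
            patch.save(f"tile_{top}_{left}.tiff")
            kept += 1
    return kept
```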