Sample records for developing real-time imaging

  1. The design of real time infrared image generation software based on Creator and Vega

    NASA Astrophysics Data System (ADS)

    Wang, Rui-feng; Wu, Wei-dong; Huo, Jun-xiu

    2013-09-01

    To meet the requirement for highly realistic, real-time dynamic infrared imagery in infrared image simulation, a method for designing a real-time infrared image simulation application on the VC++ platform is proposed, based on the visual simulation software Creator and Vega. The functions of Creator are introduced briefly, and the main features of the Vega development environment are analyzed. Methods for infrared modeling of targets and backgrounds are presented; the design flow chart of the real-time IR image generation software is given; and the functions of the TMM Tool, the MAT Tool, and the sensor module are explained. The real-time performance of the software is also addressed.

  2. Handheld real-time volumetric imaging of the spine: technology development.

    PubMed

    Tiouririne, Mohamed; Nguyen, Sarah; Hossack, John A; Owen, Kevin; William Mauldin, F

    2014-03-01

    Technical difficulty, poor image quality, and reliance on pattern identification are among the drawbacks of two-dimensional ultrasound imaging of spinal bone anatomy. To overcome these limitations, this study sought to develop real-time volumetric imaging of the spine using a portable handheld device. The device measured 19.2 cm × 9.2 cm × 9.0 cm and imaged at a 5 MHz centre frequency. Conventional 2D ultrasound imaging and real-time volumetric (3D) imaging were achieved and verified by inspection using a custom spine phantom. Further assessment of device performance revealed a 75-min battery life and an average frame rate of 17.7 Hz in volumetric imaging mode. The results suggest that real-time volumetric imaging of the spine is a feasible technique for more intuitive visualization of the spine. These results may have important ramifications for a large array of neuraxial procedures.

  3. Real-time computational photon-counting LiDAR

    NASA Astrophysics Data System (ADS)

    Edgar, Matthew; Johnson, Steven; Phillips, David; Padgett, Miles

    2018-03-01

    The availability of compact, low-cost, and high-speed MEMS-based spatial light modulators has generated widespread interest in alternative sampling strategies for imaging systems utilizing single-pixel detectors. The development of compressed-sensing schemes for real-time computational imaging may have promising commercial applications for high-performance detectors, where focal plane arrays are expensive or otherwise limited. We discuss the research and development of a prototype light detection and ranging (LiDAR) system based on direct time of flight, which utilizes a single high-sensitivity photon-counting detector and fast-timing electronics to recover millimeter-accuracy three-dimensional images in real time. The development of low-cost real-time computational LiDAR systems could be important for applications in security, defense, and autonomous vehicles.
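
    As a back-of-envelope illustration of the direct time-of-flight principle this system relies on, the sketch below converts a photon-arrival histogram into a one-way range via d = c·t/2. The function name and bin settings are illustrative, not the authors' pipeline; note that millimeter accuracy requires picosecond-scale timing bins.

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_to_range(bin_edges_s, counts):
    """Estimate target range from a photon-arrival-time histogram.

    The round-trip time at the histogram's peak bin is converted to a
    one-way distance via d = c * t / 2.
    """
    counts = np.asarray(counts)
    peak = int(np.argmax(counts))                 # most-populated time bin
    t_round = 0.5 * (bin_edges_s[peak] + bin_edges_s[peak + 1])
    return C * t_round / 2.0

# A ~10 ns round trip corresponds to ~1.5 m of one-way range.
edges = np.linspace(0.0, 20e-9, 201)              # 100 ps timing bins
counts = np.zeros(200)
counts[100] = 50                                  # photon pile-up at ~10 ns
print(round(tof_to_range(edges, counts), 3))      # → 1.506
```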

  4. Real-time microstructural and functional imaging and image processing in optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Westphal, Volker

    Optical Coherence Tomography (OCT) is a noninvasive optical imaging technique that allows high-resolution cross-sectional imaging of tissue microstructure, achieving a spatial resolution of about 10 μm. OCT is similar to B-mode ultrasound (US) except that it uses infrared light instead of ultrasound. In contrast to US, no coupling gel is needed, simplifying image acquisition. Furthermore, the fiber-optic implementation of OCT is compatible with endoscopes. In recent years, the transition from slow, bench-top imaging systems to real-time clinical systems has been under way. This has led to a variety of applications, notably in ophthalmology, gastroenterology, dermatology, and cardiology. First, this dissertation demonstrates that OCT is capable of imaging and differentiating clinically relevant tissue structures in the gastrointestinal tract. A careful in vitro correlation study between endoscopic OCT images and corresponding histological slides was performed. Beyond structural imaging, OCT systems were further developed for functional imaging, for example to visualize blood flow. Previously, imaging flow in small vessels in real time was not possible. For this research, a new processing scheme similar to real-time Doppler in US was introduced. It was implemented in dedicated hardware to allow real-time acquisition and overlaid display of blood flow in vivo. A sensitivity of 0.5 mm/s was achieved. Optical coherence microscopy (OCM) is a variation of OCT that improves the resolution even further, to a few micrometers. Advances made in the OCT scan engine for the Doppler setup enabled real-time imaging in vivo with OCM. In order to generate geometrically correct images for all the preceding applications in real time, extensive image-processing algorithms were developed. Algorithms for the correction of distortions due to non-telecentric scanning, nonlinear scan-mirror movements, and refraction were developed and demonstrated. This has led to interesting new applications, for example in imaging of the anterior segment of the eye.

  5. Real-time simulation of thermal shadows with EMIT

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Oberhofer, Stefan; Schätz, Peter; Nischwitz, Alfred; Obermeier, Paul

    2016-05-01

    Modern missile systems use infrared imaging for tracking or target detection algorithms. The development and validation processes for these missile systems need high-fidelity simulations capable of stimulating the sensors in real time with infrared image sequences from a synthetic 3D environment. The Extensible Multispectral Image Generation Toolset (EMIT) is a modular software library developed at MBDA Germany for the generation of physics-based infrared images in real time. EMIT is able to render radiance images in full 32-bit floating-point precision using state-of-the-art graphics cards and advanced shader programs. An important function of an infrared image generation toolset is the simulation of thermal shadows, as these may cause matching errors in tracking algorithms. However, in real-time simulations, such as hardware-in-the-loop (HWIL) simulations of infrared seekers, thermal shadows are often neglected or precomputed, as they require a four-dimensional thermal-balance calculation (over the 3D geometry and through time, extending up to several hours into the past). In this paper we present the novel real-time thermal simulation of EMIT. Our thermal simulation is capable of simulating thermal effects in real-time environments, such as thermal shadows resulting from the occlusion of direct and indirect irradiance. We conclude the paper with the practical use of EMIT in a missile HWIL simulation.

  6. Subcellular real-time in vivo imaging of intralymphatic and intravascular cancer-cell trafficking

    NASA Astrophysics Data System (ADS)

    McElroy, M.; Hayashi, K.; Kaushal, S.; Bouvet, M.; Hoffman, Robert M.

    2008-02-01

    With the use of fluorescent cells labeled with green fluorescent protein (GFP) in the nucleus and red fluorescent protein (RFP) in the cytoplasm and a highly sensitive small animal imaging system with both macro-optics and micro-optics, we have developed subcellular real-time imaging of cancer cell trafficking in live mice. Dual-color cancer cells were injected by a vascular route in an abdominal skin flap in nude mice. The mice were imaged with an Olympus OV100 small animal imaging system with a sensitive CCD camera and four objective lenses, parcentered and parfocal, enabling imaging from macrocellular to subcellular. We observed the nuclear and cytoplasmic behavior of cancer cells in real time in blood vessels as they moved by various means or adhered to the vessel surface in the abdominal skin flap. During extravasation, real-time dual-color imaging showed that cytoplasmic processes of the cancer cells exited the vessels first, with nuclei following along the cytoplasmic projections. Both cytoplasm and nuclei underwent deformation during extravasation. Different cancer cell lines seemed to strongly vary in their ability to extravasate. We have also developed real-time imaging of cancer cell trafficking in lymphatic vessels. Cancer cells labeled with GFP and/or RFP were injected into the inguinal lymph node of nude mice. The labeled cancer cells trafficked through lymphatic vessels where they were imaged via a skin flap in real-time at the cellular level until they entered the axillary lymph node. The bright dual-color fluorescence of the cancer cells and the real-time microscopic imaging capability of the Olympus OV100 enabled imaging the trafficking cancer cells in both blood vessels and lymphatics. With the dual-color cancer cells and the highly sensitive imaging system described here, the subcellular dynamics of cancer metastasis can now be observed in live mice in real time.

  7. Unprocessed real-time imaging of vitreoretinal surgical maneuvers using a microscope-integrated spectral-domain optical coherence tomography system.

    PubMed

    Hahn, Paul; Migacz, Justin; O'Connell, Rachelle; Izatt, Joseph A; Toth, Cynthia A

    2013-01-01

    We have recently developed a microscope-integrated spectral-domain optical coherence tomography (MIOCT) device for intrasurgical cross-sectional imaging of surgical maneuvers. In this report, we explore the capability of MIOCT to acquire real-time video imaging of vitreoretinal surgical maneuvers without post-processing modifications. Standard 3-port vitrectomy was performed in humans during scheduled surgery as well as in cadaveric porcine eyes. MIOCT imaging of human subjects was performed in healthy normal volunteers and intraoperatively at a normal pause immediately following surgical manipulations, under an Institutional Review Board-approved protocol, with informed consent from all subjects. Video MIOCT imaging of live surgical manipulations was performed in cadaveric porcine eyes by carefully aligning B-scans with instrument orientation and movement. Inverted imaging was performed by lengthening the reference arm to a position beyond the choroid. Unprocessed MIOCT imaging was successfully obtained in healthy human volunteers and in human patients undergoing surgery, with visualization of post-surgical changes in unprocessed single B-scans. Real-time, unprocessed MIOCT video imaging was successfully obtained in cadaveric porcine eyes during brushing of the retina with the Tano scraper, peeling of superficial retinal tissue with intraocular forceps, and separation of the posterior hyaloid face. Real-time inverted imaging enabled imaging without complex-conjugate artifacts. MIOCT is capable of unprocessed imaging of the macula in human patients undergoing surgery and of unprocessed, real-time video imaging of surgical maneuvers in model eyes. These capabilities represent an important step towards development of MIOCT for efficient, real-time imaging of manipulations during human surgery.

  8. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fahimian, B.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  9. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  10. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berbeco, R.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  11. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Keall, P.

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  12. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery workflow.

  13. Quasi real-time analysis of mixed-phase clouds using interferometric out-of-focus imaging: development of an algorithm to assess liquid and ice water content

    NASA Astrophysics Data System (ADS)

    Lemaitre, P.; Brunel, M.; Rondeau, A.; Porcheron, E.; Gréhan, G.

    2015-12-01

    According to changes in aircraft certification rules, instrumentation has to be developed to alert flight crews to potential icing conditions. Any such technique must measure, in real time, the amount of ice and liquid water encountered by the aircraft. Interferometric imaging offers an interesting solution: it is currently used to measure the size of regular droplets, and it can further measure the size of irregular particles from the analysis of their speckle-like out-of-focus images. However, conventional image processing must be sped up to be compatible with real-time detection of icing conditions. This article presents the development of an optimised algorithm to accelerate image processing. The proposed algorithm is based on detecting each interferogram using the gradient-pair-vector method, which is shown to be 13 times faster than the conventional Hough transform. The algorithm is validated on synthetic images of mixed-phase clouds, and finally tested and validated under laboratory conditions. This algorithm should have important applications in the size measurement of droplets and ice particles for aircraft safety, in cloud microphysics investigation, and more generally in the real-time analysis of triphasic flows using interferometric particle imaging.
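
    The gradient-pair idea can be sketched in a few lines: on a circle, edge points with anti-parallel gradient vectors sit diametrically opposite, so each pair's midpoint is a vote for the centre. This toy version (brute-force pairing, no accumulator) is for illustration only and omits the optimisations behind the reported 13× speed-up:

```python
import numpy as np
from itertools import combinations

def ring_center(img, grad_thresh=0.4):
    """Estimate the centre of a single bright disk/ring.

    Pairs of edge pixels with anti-parallel gradient vectors lie
    diametrically opposite on a circle, so each pair's midpoint votes
    for the centre (the core idea of the gradient-pair-vector method).
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > grad_thresh)          # strong-edge pixels
    d = np.stack([gx[ys, xs], gy[ys, xs]], axis=1)
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit gradient dirs
    votes = [((xs[i] + xs[j]) / 2.0, (ys[i] + ys[j]) / 2.0)
             for i, j in combinations(range(len(xs)), 2)
             if np.dot(d[i], d[j]) < -0.95]         # anti-parallel pair
    return np.mean(votes, axis=0)                   # (x, y) estimate

# Synthetic disk centred at (x, y) = (25, 30)
yy, xx = np.mgrid[0:64, 0:64]
disk = ((xx - 25) ** 2 + (yy - 30) ** 2 < 100).astype(float)
print(np.round(ring_center(disk), 1))
```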

  14. Interactive brain shift compensation using GPU based programming

    NASA Astrophysics Data System (ADS)

    van der Steen, Sander; Noordmans, Herke Jan; Verdaasdonk, Rudolf

    2009-02-01

    Processing large image files or real-time video streams requires intense computational power. Driven by the gaming industry, the processing power of graphics processing units (GPUs) has increased significantly. With pixel shader model 4.0, the GPU can perform image processing roughly 10× faster than the CPU. Dedicated software was developed to deform 3D MR and CT image sets for real-time brain-shift correction during navigated neurosurgery, using landmarks or cortical surface traces defined by the navigation pointer. Feedback was given using orthogonal slices and an interactively ray-traced 3D brain image. GPU-based programming enables real-time processing of high-definition image datasets, and various applications can be developed in medicine, optics, and the imaging sciences.
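
    A minimal stand-in for the landmark-driven deformation: a single Gaussian-weighted displacement field with nearest-neighbour inverse-mapped resampling, written in NumPy. On the GPU, the same per-pixel inverse mapping would run in a pixel shader. The geometry and parameters are illustrative assumptions, not the paper's method:

```python
import numpy as np

def shift_landmark(img, src, dst, sigma=15.0):
    """Warp a 2-D image so the feature at `src` (y, x) moves to `dst`.

    A single Gaussian-weighted displacement field is a toy stand-in for
    landmark-driven brain-shift deformation.
    """
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    dy, dx = dst[0] - src[0], dst[1] - src[1]
    wgt = np.exp(-((yy - dst[0]) ** 2 + (xx - dst[1]) ** 2)
                 / (2.0 * sigma ** 2))
    # Inverse mapping: each output pixel samples the input at
    # (output position - local displacement); nearest-neighbour here.
    sy = np.clip(np.rint(yy - dy * wgt), 0, h - 1).astype(int)
    sx = np.clip(np.rint(xx - dx * wgt), 0, w - 1).astype(int)
    return img[sy, sx]

img = np.zeros((64, 64))
img[20, 20] = 1.0                       # landmark feature at (y, x)=(20, 20)
out = shift_landmark(img, src=(20, 20), dst=(30, 34))
pos = np.unravel_index(np.argmax(out), out.shape)
print(tuple(int(v) for v in pos))       # → (30, 34)
```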

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shoaf, S.; APS Engineering Support Division

    A real-time image analysis system was developed for beam imaging diagnostics. An Apple Power Mac G5 with an Active Silicon LFG frame grabber was used to capture video images that were processed and analyzed. Software routines were created to utilize vector-processing hardware to reduce the time to process images as compared to conventional methods. These improvements allow for more advanced image processing diagnostics to be performed in real time.
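
    The kind of per-frame beam diagnostic described can be illustrated with a vectorised, background-subtracted intensity centroid, where whole-array NumPy operations play the role of the vector-processing hardware mentioned in the abstract. The beam model and dark frame are assumptions, not the APS code:

```python
import numpy as np

def beam_centroid(img, dark):
    """Background-subtracted intensity centroid of a beam image.

    Whole-frame array operations avoid per-pixel Python loops, the same
    data-parallel pattern that vector hardware accelerates.
    """
    sig = np.clip(img.astype(float) - dark, 0.0, None)  # remove dark frame
    ys, xs = np.indices(sig.shape)
    total = sig.sum()
    return (xs * sig).sum() / total, (ys * sig).sum() / total

# Synthetic Gaussian beam centred at (x, y) = (40.0, 24.0)
yy, xx = np.mgrid[0:48, 0:64]
beam = np.exp(-((xx - 40.0) ** 2 + (yy - 24.0) ** 2) / (2 * 5.0 ** 2))
cx, cy = beam_centroid(beam, dark=np.zeros_like(beam))
print(round(cx, 1), round(cy, 1))   # → 40.0 24.0
```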

  16. Application of linear array imaging techniques to the real-time inspection of airframe structures and substructures

    NASA Technical Reports Server (NTRS)

    Miller, James G.

    1995-01-01

    The development and application of linear-array imaging technologies to address specific aging-aircraft inspection issues are described. Real-time videotaped images were obtained from an unmodified commercial linear-array medical scanner of specimens constructed to simulate typical flaws encountered in the inspection of aircraft structures. Results suggest that information regarding the characteristics, location, and interface properties of specific types of flaws in materials and structures may be obtained from the images acquired with a linear array. Furthermore, linear-array imaging may offer the advantage of being able to compare 'good' regions with 'flawed' regions simultaneously, and in real time. Real-time imaging permits the inspector to obtain image information from various views and provides the opportunity to observe the effects of introducing specific interventions. Observation of an image in real time offers the operator the ability to 'interact' with the inspection process, providing new capabilities and, perhaps, new approaches to nondestructive inspection.

  17. Real Time Target Tracking in a Phantom Using Ultrasonic Imaging

    NASA Astrophysics Data System (ADS)

    Xiao, X.; Corner, G.; Huang, Z.

    In this paper we present a real-time ultrasound image-guidance method suitable for tracking the motion of tumors. A 2D ultrasound-based motion tracking system was evaluated. A robot was used to control the focused ultrasound and position it at a target segmented from real-time ultrasound video. Tracking accuracy and precision were investigated using a lesion-mimicking phantom. Experiments were conducted, and the results demonstrate the efficiency of the image-guidance algorithm. This work could serve as a foundation for combining real-time ultrasound image tracking with MRI thermometry monitoring in non-invasive surgery.
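
    The paper segments the target from live video; a minimal stand-in for such frame-to-frame localisation is an exhaustive template search by sum of squared differences. Illustrative only — practical trackers restrict the search to a window around the last position for real-time rates:

```python
import numpy as np

def track_target(frame, template):
    """Locate `template` in `frame` by exhaustive sum-of-squared-
    differences search; returns the (row, col) of the best match."""
    fh, fw = frame.shape
    th, tw = template.shape
    best_ssd, best_pos = np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_pos = ssd, (y, x)
    return best_pos

# A lesion-like patch embedded at a known offset is recovered exactly.
rng = np.random.default_rng(0)
template = rng.random((8, 8))
frame = rng.random((48, 48)) * 0.1
frame[17:25, 30:38] = template
print(track_target(frame, template))   # → (17, 30)
```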

  18. Real-time three-dimensional optical coherence tomography image-guided core-needle biopsy system.

    PubMed

    Kuo, Wei-Cheng; Kim, Jongsik; Shemonski, Nathan D; Chaney, Eric J; Spillman, Darold R; Boppart, Stephen A

    2012-06-01

    Advances in optical imaging modalities, such as optical coherence tomography (OCT), enable us to observe tissue microstructure at high resolution and in real time. Currently, core-needle biopsies are guided by external imaging modalities such as ultrasound imaging and x-ray computed tomography (CT) for breast and lung masses, respectively. These image-guided procedures are frequently limited by spatial resolution when using ultrasound imaging, or by temporal resolution (rapid real-time feedback capabilities) when using x-ray CT. One feasible approach is to perform OCT within small gauge needles to optically image tissue microstructure. However, to date, no system or core-needle device has been developed that incorporates both three-dimensional OCT imaging and tissue biopsy within the same needle for true OCT-guided core-needle biopsy. We have developed and demonstrate an integrated core-needle biopsy system that utilizes catheter-based 3-D OCT for real-time image-guidance for target tissue localization, imaging of tissue immediately prior to physical biopsy, and subsequent OCT imaging of the biopsied specimen for immediate assessment at the point-of-care. OCT images of biopsied ex vivo tumor specimens acquired during core-needle placement are correlated with corresponding histology, and computational visualization of arbitrary planes within the 3-D OCT volumes enables feedback on specimen tissue type and biopsy quality. These results demonstrate the potential for using real-time 3-D OCT for needle biopsy guidance by imaging within the needle and tissue during biopsy procedures.

  19. GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array

    PubMed Central

    Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.

    2014-01-01

    Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
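
    The Hadamard step can be shown compactly: ±1 coding lets every element fire on every transmit event (raising SNR), while a simple linear decode recovers the per-element responses that synthetic beamforming needs — the data-parallel workload the GPU then accelerates. A sketch with assumed array sizes, not the authors' code:

```python
import numpy as np

def hadamard(n):
    """Hadamard matrix by Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_elem, n_samp = 8, 16
rng = np.random.default_rng(0)
element_responses = rng.standard_normal((n_elem, n_samp))  # unknowns

H = hadamard(n_elem)
coded_recordings = H @ element_responses        # ±1-coded transmissions
decoded = H.T @ coded_recordings / n_elem       # since H @ H.T = n * I
print(np.allclose(decoded, element_responses))  # → True
```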

  20. Real-time quantitative fluorescence imaging using a single snapshot optical properties technique for neurosurgical guidance

    NASA Astrophysics Data System (ADS)

    Valdes, Pablo A.; Angelo, Joseph; Gioux, Sylvain

    2015-03-01

    Fluorescence imaging has shown promise as an adjunct to improve the extent of resection in neurosurgery and oncologic surgery. Nevertheless, current fluorescence imaging techniques do not account for the heterogeneous attenuation effects of tissue optical properties. In this work, we present a novel imaging system that performs real-time quantitative fluorescence imaging using Single Snapshot Optical Properties (SSOP) imaging. We developed the technique and performed initial phantom studies to validate the quantitative capabilities of the system for intraoperative feasibility. Overall, this work introduces a novel real-time quantitative fluorescence imaging method capable of being used intraoperatively for neurosurgical guidance.

  21. Real-time intraoperative fluorescence imaging system using light-absorption correction.

    PubMed

    Themelis, George; Yoo, Jung Sun; Soh, Kwang-Sup; Schulz, Ralf; Ntziachristos, Vasilis

    2009-01-01

    We present a novel fluorescence imaging system developed for real-time interventional imaging applications. The system implements a correction scheme that improves the accuracy of epi-illumination fluorescence images in the presence of light-intensity variation in tissues. The implementation is based on three cameras operating in parallel through a common lens, which allows the concurrent collection of color, fluorescence, and light-attenuation images at the excitation wavelength from the same field of view. The correction is based on a ratio of the fluorescence image over the light-attenuation image. Color images and video are used for surgical guidance and for registration with the corrected fluorescence images. We showcase the performance of this system on phantoms and animals, and discuss its advantages over conventional epi-illumination systems developed for real-time applications, as well as the limits of validity of corrected epi-illumination fluorescence imaging.
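
    The ratio correction itself is essentially a per-pixel division of the fluorescence image by the co-registered excitation-attenuation image, so regions under stronger optical attenuation are not under-read. A hedged sketch — the normalisation and epsilon guard are assumptions; the paper's exact scheme may differ:

```python
import numpy as np

def corrected_fluorescence(fluo, excitation, eps=1e-6):
    """Ratio-corrected fluorescence: divide the raw fluorescence image
    by the excitation-light attenuation image (both peak-normalised)."""
    f = fluo / fluo.max()
    a = excitation / excitation.max()
    return f / np.maximum(a, eps)   # eps guards against division by zero

# Two regions with identical fluorophore content but 3x different
# attenuation read identically after correction.
atten = np.ones((32, 32))
atten[:, 16:] = 1.0 / 3.0
intrinsic = np.full((32, 32), 0.8)
raw_fluo = intrinsic * atten          # what the camera actually records
corr = corrected_fluorescence(raw_fluo, atten)
print(np.allclose(corr[:, :16], corr[:, 16:]))   # → True
```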

  22. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    PubMed

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms, and that for the image display on the enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for an image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
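
    For fully sampled Cartesian data, the reconstruction core timed at 48 ms per 128 × 128 image is a 2-D inverse FFT of k-space. The sketch below shows the round trip; the shift convention is one common choice, not necessarily the authors':

```python
import numpy as np

def reconstruct(kspace):
    """Magnitude image from fully sampled Cartesian k-space via a 2-D
    inverse FFT (DC centred before and after the transform)."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

# Round trip: simulate k-space from a phantom, then reconstruct it.
rng = np.random.default_rng(0)
phantom = rng.random((128, 128))
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
print(np.allclose(reconstruct(kspace), phantom))   # → True
```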

  23. Real-time Graphics Processing Unit Based Fourier Domain Optical Coherence Tomography and Surgical Applications

    NASA Astrophysics Data System (ADS)

    Zhang, Kang

    2011-12-01

    In this dissertation, real-time Fourier domain optical coherence tomography (FD-OCT) capable of multi-dimensional micrometer-resolution imaging targeted specifically for microsurgical intervention applications was developed and studied. As a part of this work several ultra-high speed real-time FD-OCT imaging and sensing systems were proposed and developed. A real-time 4D (3D+time) OCT system platform using the graphics processing unit (GPU) to accelerate OCT signal processing, the imaging reconstruction, visualization, and volume rendering was developed. Several GPU based algorithms such as non-uniform fast Fourier transform (NUFFT), numerical dispersion compensation, and multi-GPU implementation were developed to improve the impulse response, SNR roll-off and stability of the system. Full-range complex-conjugate-free FD-OCT was also implemented on the GPU architecture to achieve doubled image range and improved SNR. These technologies overcome the imaging reconstruction and visualization bottlenecks widely exist in current ultra-high speed FD-OCT systems and open the way to interventional OCT imaging for applications in guided microsurgery. A hand-held common-path optical coherence tomography (CP-OCT) distance-sensor based microsurgical tool was developed and validated. Through real-time signal processing, edge detection and feed-back control, the tool was shown to be capable of track target surface and compensate motion. The micro-incision test using a phantom was performed using a CP-OCT-sensor integrated hand-held tool, which showed an incision error less than +/-5 microns, comparing to >100 microns error by free-hand incision. The CP-OCT distance sensor has also been utilized to enhance the accuracy and safety of optical nerve stimulation. Finally, several experiments were conducted to validate the system for surgical applications. One of them involved 4D OCT guided micro-manipulation using a phantom. 
Multiple volume renderings of one 3D data set were performed with different view angles, allowing accurate monitoring of the micro-manipulation and letting the user clearly see the tool-to-target spatial relation in real time. The system was also validated by imaging multiple biological samples, such as a human fingerprint, a human cadaver head, and small animals. Compared to conventional surgical microscopes, GPU-based real-time FD-OCT can provide surgeons with a comprehensive real-time spatial view of the microsurgical region and accurate depth perception.
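
    The reconstruction step at the heart of such an FD-OCT pipeline is a Fourier transform of each spectral interferogram into an A-scan. Below is a minimal single-A-line sketch in plain Python, with a naive DFT standing in for the batched GPU FFT the dissertation describes; all names and sizes are illustrative, not from the actual system.

```python
import cmath, math

def dft_magnitude(signal):
    """Naive DFT magnitude; a real FD-OCT pipeline runs a batched GPU FFT."""
    n = len(signal)
    return [abs(sum(signal[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for k in range(n))) for j in range(n)]

def reconstruct_a_scan(spectrum):
    """One OCT A-line: Fourier-transform the spectral interferogram.
    Only the first half is kept; without full-range (complex-conjugate-
    free) processing the second half is a mirror image."""
    mag = dft_magnitude(spectrum)
    return mag[:len(mag) // 2]

# Simulated spectral fringe from a single reflector at depth bin 5.
n = 64
depth = 5
spectrum = [math.cos(2 * math.pi * depth * k / n) for k in range(n)]
a_scan = reconstruct_a_scan(spectrum)
peak_bin = max(range(len(a_scan)), key=lambda i: a_scan[i])
```

    The reflector shows up as a peak at the expected depth bin; doubling the imaging range, as in the full-range method above, amounts to suppressing the mirrored half of the transform.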

  4. Interactive real time flow simulations

    NASA Technical Reports Server (NTRS)

    Sadrehaghighi, I.; Tiwari, S. N.

    1990-01-01

    An interactive real-time flow simulation technique is developed for an unsteady channel flow. A finite-volume algorithm, in conjunction with a Runge-Kutta time-stepping scheme, was developed for the two-dimensional Euler equations. A global time step was used to accelerate the convergence of steady-state calculations. A raster image generation routine was developed for high-speed image transmission, which allows the user to interact directly with the developing solution. In addition to theory and results, the hardware and software requirements are discussed.
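
    The finite-volume/Runge-Kutta combination described above can be sketched on a 1D model problem. The sketch below uses first-order upwind fluxes for linear advection on a periodic domain with a classical four-stage Runge-Kutta step; the paper's actual scheme is 2D Euler with its own stage coefficients, so this is only a reduced illustration.

```python
def rhs(u, dx, a=1.0):
    """Finite-volume right-hand side for 1D linear advection
    u_t + a u_x = 0: first-order upwind fluxes, periodic boundary."""
    n = len(u)
    return [-a * (u[i] - u[i - 1]) / dx for i in range(n)]

def rk4_step(u, dt, dx):
    """Classical four-stage Runge-Kutta step (standard RK4 coefficients,
    standing in for the paper's multi-stage scheme)."""
    k1 = rhs(u, dx)
    k2 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k1)], dx)
    k3 = rhs([ui + 0.5 * dt * ki for ui, ki in zip(u, k2)], dx)
    k4 = rhs([ui + dt * ki for ui, ki in zip(u, k3)], dx)
    return [ui + dt / 6.0 * (a1 + 2 * a2 + 2 * a3 + a4)
            for ui, a1, a2, a3, a4 in zip(u, k1, k2, k3, k4)]

n = 50
dx, dt = 1.0 / n, 0.5 / n          # CFL number 0.5
u = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
total0 = sum(u)
for _ in range(100):
    u = rk4_step(u, dt, dx)
total1 = sum(u)
```

    Because the scheme is written in flux (finite-volume) form on a periodic domain, the total of `u` is conserved to round-off while the pulse advects, which is the property that makes the formulation attractive for steady-state convergence acceleration.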

  5. Expanding the spectrum: 20 years of advances in MMW imagery

    NASA Astrophysics Data System (ADS)

    Martin, Christopher A.; Lovberg, John A.; Kolinko, Valdimir G.

    2017-05-01

    Millimeter-wave imaging has expanded from the single-pixel swept imagers developed in the 1960s to the large field-of-view real-time systems in use today. Trex Enterprises has been developing millimeter-wave imagers since 1991 for aviation and security applications, as well as millimeter-wave communications devices. As MMIC device development stretched into the MMW band in the 1990s, Trex developed novel imaging architectures to create 2-D staring systems with large pixel counts and no moving parts while using a minimal number of devices. Trex also contributed to device development in amplifiers, switches, and detectors to enable the next generation of passive MMW imaging systems. The architectures and devices developed continue to be employed in security imagers, radar, and radios produced by Trex. This paper reviews the development of the initial real-time MMW imagers and associated devices by Trex Enterprises from the 1990s through the 2000s. The devices include W-band MMIC amplifiers, switches, and detector diodes, as well as MMW circuit boards and optical processors. The imaging systems discussed include two different real-time passive MMW imagers flown on helicopters and a MMW radar system, as well as implementations of the devices and architectures in simpler stand-off and gateway security imagers.

  6. Early development in synthetic aperture lidar sensing and processing for on-demand high resolution imaging

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Turbide, Simon; Terroux, Marc; Marchese, Linda; Harnisch, Bernd

    2017-11-01

    The quest for real-time high resolution is of prime importance for surveillance applications, especially in disaster management and rescue missions. Synthetic aperture radar provides meter-range-resolution images in all weather conditions, but when installed on satellites its revisit time can be too long to support real-time operations on the ground. Synthetic aperture lidar can be lightweight and offers centimeter-range resolution; onboard an airplane or unmanned air vehicle, this technology would allow for timelier reconnaissance. INO has developed a synthetic aperture lidar table prototype and further used a real-time optronic processor to fulfill image generation on demand. The early positive results using both technologies are presented in this paper.

  7. Development and Evaluation of Real-Time Volumetric Compton Gamma-Ray Imaging

    NASA Astrophysics Data System (ADS)

    Barnowski, Ross Wegner

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation-mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene-reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. The real-time tracking allows the imager to be moved throughout the environment or around a particular object of interest, obtaining the multiple perspectives necessary for standoff 3D imaging. A 3D model of the scene, provided in real time by a simultaneous localization and mapping (SLAM) algorithm, can be incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and two different mobile gamma-ray imaging platforms. The first is a cart-based imaging platform known as the Volumetric Compton Imager (VCI), comprising two 3D position-sensitive high-purity germanium (HPGe) detectors; it exhibits excellent gamma-ray imaging characteristics but has limited mobility due to the size and weight of the cart. The second system is the High Efficiency Multimodal Imager (HEMI), a hand-portable gamma-ray imager comprising 96 individual 1-cm3 CdZnTe crystals arranged in a two-plane, active-mask configuration. The HEMI instrument has poorer energy and angular resolution than the VCI, but is truly hand-portable, allowing the SDF concept to be tested in multiple environments and in more challenging imaging scenarios. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. 
Each of the two mobile imaging systems is used to demonstrate SDF for a variety of scenarios, including general search-and-mapping scenarios with several point gamma-ray sources over the range of energies relevant for Compton imaging. More specific imaging scenarios are also addressed, including directed search and object interrogation. Finally, the volumetric image quality is quantitatively investigated with respect to the number of Compton events acquired during a measurement, the list-mode uncertainty of the Compton cone data, and the uncertainty in the pose estimate from the real-time tracking algorithm. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real time.
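
    Iterative list-mode reconstruction of the kind described above is commonly a maximum-likelihood expectation-maximization (MLEM) update over cone-of-response weights. The toy sketch below runs MLEM on a five-voxel array with made-up cone weights and a uniform-sensitivity assumption; it is not the authors' actual system model, only the shape of the iteration.

```python
def mlem(weights, n_voxels, n_iter=20):
    """List-mode MLEM sketch for Compton imaging. weights[e][v]
    approximates the probability that event e's Compton cone passes
    through voxel v; sensitivity is taken as uniform for simplicity."""
    lam = [1.0] * n_voxels
    for _ in range(n_iter):
        ratio = [0.0] * n_voxels
        for w in weights:
            denom = sum(wv * lv for wv, lv in zip(w, lam))
            if denom > 0:
                for v in range(n_voxels):
                    ratio[v] += w[v] / denom
        lam = [lv * rv / len(weights) for lv, rv in zip(lam, ratio)]
    return lam

# Four simulated events; every cone passes through voxel 2 (the source).
events = [[0, 0, 1, 1, 0],
          [1, 0, 1, 0, 0],
          [0, 1, 1, 0, 0],
          [0, 0, 1, 0, 1]]
image = mlem(events, 5)
```

    Because voxel 2 is consistent with every cone, the iteration concentrates the estimated intensity there, which is the behavior that makes the 3D scene model from SLAM valuable: it restricts the voxel support and so shrinks the system matrix the update must touch.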

  8. Real-Time Noise Removal for Line-Scanning Hyperspectral Devices Using a Minimum Noise Fraction-Based Approach

    PubMed Central

    Bjorgan, Asgeir; Randeberg, Lise Lyngsnes

    2015-01-01

    Processing line-by-line and in real time can be convenient for some applications of line-scanning hyperspectral imaging technology. Some types of processing, like inverse modeling and spectral analysis, are sensitive to noise. The MNF (minimum noise fraction) transform provides suitable denoising performance, but requires full image availability for the estimation of image and noise statistics. In this work, a modified algorithm is proposed: incrementally updated statistics enable the algorithm to denoise the image line-by-line. The denoising performance has been compared to that of conventional MNF and found to be equal. With satisfying denoising performance and a real-time implementation, the developed algorithm can denoise line-scanned hyperspectral images in real time. The elimination of waiting time before denoised data are available is an important step towards real-time visualization of processed hyperspectral data. The source code, including an implementation of conventional MNF denoising, can be found at http://www.github.com/ntnu-bioopt/mnf. PMID:25654717
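
    The incrementally updated statistics that make line-by-line MNF possible can be sketched with a multivariate Welford-style recurrence: the band mean and scatter matrix are updated as each scan line arrives, so a covariance estimate is available at any point in the scan. The class and variable names below are illustrative, not taken from the published source code.

```python
class RunningStats:
    """Incrementally updated mean/covariance over image lines, so MNF
    statistics are usable before the whole scan has been acquired."""

    def __init__(self, n_bands):
        self.n = 0
        self.mean = [0.0] * n_bands
        self.m2 = [[0.0] * n_bands for _ in range(n_bands)]  # scatter matrix

    def add_pixel(self, x):
        """Welford update: m2 accumulates (x - old_mean)(x - new_mean)^T."""
        self.n += 1
        delta = [xi - mi for xi, mi in zip(x, self.mean)]
        self.mean = [mi + di / self.n for mi, di in zip(self.mean, delta)]
        delta2 = [xi - mi for xi, mi in zip(x, self.mean)]
        for i in range(len(x)):
            for j in range(len(x)):
                self.m2[i][j] += delta[i] * delta2[j]

    def add_line(self, line):
        for px in line:
            self.add_pixel(px)

    def covariance(self):
        return [[v / (self.n - 1) for v in row] for row in self.m2]

# Two "lines" of 2-band pixels; covariance matches the batch estimate.
rs = RunningStats(2)
rs.add_line([[1.0, 2.0], [3.0, 4.0]])
rs.add_line([[5.0, 10.0]])
cov = rs.covariance()
```

    The update is numerically stable and single-pass, which is what removes the need to hold the full image before estimating the MNF transform.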

  9. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    NASA Astrophysics Data System (ADS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin, Vibin

    2008-09-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.
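
    CT reconstruction from DR projections acquired with object rotation can be illustrated with a toy parallel-beam projector and an unfiltered backprojector. A real pipeline ramp-filters the sinogram and uses many more angles; the geometry and sizes below are made up for illustration.

```python
import math

def project(img, theta):
    """Parallel-beam projection: bin each pixel of a square image into
    the detector element r = x*cos(theta) + y*sin(theta) -- the forward
    model of one DR exposure with the object rotated by theta."""
    n = len(img)
    c = (n - 1) / 2.0
    sino = [0.0] * n
    for y in range(n):
        for x in range(n):
            r = (x - c) * math.cos(theta) + (y - c) * math.sin(theta)
            ri = int(round(r + c))
            if 0 <= ri < n:
                sino[ri] += img[y][x]
    return sino

def backproject(sinos, thetas, n):
    """Unfiltered backprojection: smear each projection back along its
    rays. A real CT pipeline ramp-filters the sinogram first."""
    c = (n - 1) / 2.0
    recon = [[0.0] * n for _ in range(n)]
    for sino, theta in zip(sinos, thetas):
        for y in range(n):
            for x in range(n):
                r = (x - c) * math.cos(theta) + (y - c) * math.sin(theta)
                ri = int(round(r + c))
                if 0 <= ri < n:
                    recon[y][x] += sino[ri]
    return recon

# Point defect at (x=6, y=2) on a 9x9 grid, four rotation angles.
n = 9
img = [[0.0] * n for _ in range(n)]
img[2][6] = 1.0
thetas = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]
recon = backproject([project(img, t) for t in thetas], thetas, n)
```

    The backprojected rays intersect only at the defect location, so its spatial position is recovered; stacking such slices is what yields the 3D-CT volume described in the abstract.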

  10. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-09-26

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  11. Design of teleoperation system with a force-reflecting real-time simulator

    NASA Technical Reports Server (NTRS)

    Hirata, Mitsunori; Sato, Yuichi; Nagashima, Fumio; Maruyama, Tsugito

    1994-01-01

    We developed a force-reflecting teleoperation system that uses a real-time graphic simulator. This system eliminates the effects of communication time delays in remote robot manipulation. The simulator provides the operator with a predictive display and feedback of computed contact forces through a six-degree-of-freedom (6-DOF) master arm in real time. With this system, peg-in-hole tasks involving round-trip communication time delays of up to a few seconds were performed at three support levels: a real image alone, a predictive display with a real image, and a real-time graphic simulator with computed-contact-force reflection and a predictive display. The experimental results indicate that the best teleoperation efficiency was achieved by using the force-reflecting simulator with the two images: the shortest work time, the lowest maximum sensed force, and a 100 percent success rate were obtained. These results demonstrate that simulated force reflection improves teleoperation efficiency.

  12. Visualisation and quantitative analysis of the rodent malaria liver stage by real time imaging.

    PubMed

    Ploemen, Ivo H J; Prudêncio, Miguel; Douradinha, Bruno G; Ramesar, Jai; Fonager, Jannik; van Gemert, Geert-Jan; Luty, Adrian J F; Hermsen, Cornelus C; Sauerwein, Robert W; Baptista, Fernanda G; Mota, Maria M; Waters, Andrew P; Que, Ivo; Lowik, Clemens W G M; Khan, Shahid M; Janse, Chris J; Franke-Fayard, Blandine M D

    2009-11-18

    The quantitative analysis of Plasmodium development in the liver, in laboratory animals and in cultured cells, is hampered by low parasite infection rates and the complicated methods required to monitor intracellular development. As a consequence, this important phase of the parasite's life cycle has been poorly studied compared to blood stages, for example in screening anti-malarial drugs. Here we report the use of a transgenic P. berghei parasite, PbGFP-Luc(con), expressing the bioluminescent reporter protein luciferase to visualize and quantify parasite development in liver cells, both in culture and in live mice, using real-time luminescence imaging. The reporter-parasite-based quantification in cultured hepatocytes by real-time imaging or using a microplate reader correlates very well with established quantitative RT-PCR methods. For the first time, the liver stage of Plasmodium is visualized in whole bodies of live mice: we were able to discriminate as few as 1-5 infected hepatocytes per liver using 2D imaging and to identify individual infected hepatocytes by 3D imaging. The analysis of liver infections by whole-body imaging shows a good correlation with quantitative RT-PCR analysis of extracted livers. The luminescence-based analysis of the effects of various drugs on in vitro hepatocyte infection shows that this method can effectively be used for in vitro screening of compounds targeting Plasmodium liver stages. Furthermore, by analysing the effect of primaquine and tafenoquine in vivo, we demonstrate the applicability of real-time imaging to assess parasite drug sensitivity in the liver. 
The simplicity and speed of quantitative analysis of liver-stage development by real-time imaging compared to the PCR methodologies, as well as the possibility to analyse liver development in live mice without surgery, open up new possibilities for research on Plasmodium liver infections and for validating the effect of drugs and vaccines on the liver stage of Plasmodium.

  13. Visualisation and Quantitative Analysis of the Rodent Malaria Liver Stage by Real Time Imaging

    PubMed Central

    Douradinha, Bruno G.; Ramesar, Jai; Fonager, Jannik; van Gemert, Geert-Jan; Luty, Adrian J. F.; Hermsen, Cornelus C.; Sauerwein, Robert W.; Baptista, Fernanda G.; Mota, Maria M.; Waters, Andrew P.; Que, Ivo; Lowik, Clemens W. G. M.; Khan, Shahid M.; Janse, Chris J.; Franke-Fayard, Blandine M. D.

    2009-01-01

    The quantitative analysis of Plasmodium development in the liver, in laboratory animals and in cultured cells, is hampered by low parasite infection rates and the complicated methods required to monitor intracellular development. As a consequence, this important phase of the parasite's life cycle has been poorly studied compared to blood stages, for example in screening anti-malarial drugs. Here we report the use of a transgenic P. berghei parasite, PbGFP-Luccon, expressing the bioluminescent reporter protein luciferase to visualize and quantify parasite development in liver cells, both in culture and in live mice, using real-time luminescence imaging. The reporter-parasite-based quantification in cultured hepatocytes by real-time imaging or using a microplate reader correlates very well with established quantitative RT-PCR methods. For the first time, the liver stage of Plasmodium is visualized in whole bodies of live mice: we were able to discriminate as few as 1–5 infected hepatocytes per liver using 2D imaging and to identify individual infected hepatocytes by 3D imaging. The analysis of liver infections by whole-body imaging shows a good correlation with quantitative RT-PCR analysis of extracted livers. The luminescence-based analysis of the effects of various drugs on in vitro hepatocyte infection shows that this method can effectively be used for in vitro screening of compounds targeting Plasmodium liver stages. Furthermore, by analysing the effect of primaquine and tafenoquine in vivo, we demonstrate the applicability of real-time imaging to assess parasite drug sensitivity in the liver. 
The simplicity and speed of quantitative analysis of liver-stage development by real-time imaging compared to the PCR methodologies, as well as the possibility to analyse liver development in live mice without surgery, open up new possibilities for research on Plasmodium liver infections and for validating the effect of drugs and vaccines on the liver stage of Plasmodium. PMID:19924309

  14. Fast interactive real-time volume rendering of real-time three-dimensional echocardiography: an implementation for low-end computers

    NASA Technical Reports Server (NTRS)

    Saracino, G.; Greenberg, N. L.; Shiota, T.; Corsi, C.; Lamberti, C.; Thomas, J. D.

    2002-01-01

    Real-time three-dimensional echocardiography (RT3DE) is an innovative cardiac imaging modality. However, partly due to a lack of user-friendly software, RT3DE has not been widely accepted as a clinical tool. The objective of this study was to develop and implement a fast, interactive volume renderer for RT3DE datasets, designed for a clinical environment where speed and simplicity are not secondary to accuracy. Thirty-six patients (20 regurgitation, 8 normal, 8 cardiomyopathy) were imaged using RT3DE. Using our newly developed software, all 3D data sets were rendered in real time throughout the cardiac cycle, and assessment of cardiac function and pathology was performed for each case. The real-time interactive volume visualization system is user-friendly and instantly provides consistent and reliable 3D images without expensive workstations or dedicated hardware. We believe that this novel tool can be used clinically for dynamic visualization of cardiac anatomy.
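
    The cheapest volume-rendering operator such a low-end renderer could build on is a maximum-intensity projection (MIP). The sketch below projects a toy volume along the viewing axis; a clinical renderer would instead composite samples front-to-back with opacity and shading, but MIP shows the basic data flow per rendered frame.

```python
def mip_render(volume):
    """Maximum-intensity projection along the viewing (z) axis:
    each output pixel is the brightest voxel along its ray."""
    nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(nz)) for x in range(nx)]
            for y in range(ny)]

# 3x3x3 volume with one bright voxel, a stand-in for one RT3DE frame.
vol = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
vol[1][2][0] = 7.0
image = mip_render(vol)
```

    Re-running this per time frame of the cardiac cycle gives the kind of interactive 4D playback the abstract describes, without dedicated rendering hardware.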

  15. A Real-Time Imaging System for Stereo Atomic Microscopy at SPring-8's BL25SU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsushita, Tomohiro; Guo, Fang Zhun; Muro, Takayuki

    2007-01-19

    We have developed a real-time photoelectron angular distribution (PEAD) and Auger-electron angular distribution (AEAD) imaging system at SPring-8 BL25SU, Japan. In addition, a real-time imaging system for circular dichroism (CD) studies of the PEAD/AEAD has been newly developed. Two PEAD images recorded with left- and right-circularly polarized light can be regarded as a stereo image of the atomic arrangement. A two-dimensional display-type mirror analyzer (DIANA) has been installed at the beamline, making it possible to record PEAD/AEAD patterns with an acceptance angle of ±60° in real time. The twin-helical undulators at BL25SU enable helicity switching of the circularly polarized light at 10 Hz, 1 Hz, or 0.1 Hz. In order to realize real-time measurements of the CD of the PEAD/AEAD, the CCD camera must be synchronized to the switching frequency. The VME computer that controls the ID is connected to the measurement computer with two BNC cables, and the helicity information is sent using TTL signals. For maximum flexibility, rather than using a hardware shutter synchronized with the TTL signal, we have developed software to synchronize the CCD shutter with the TTL signal. We have succeeded in synchronizing the CCD camera in both the 1 Hz and 0.1 Hz modes.
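
    The software-synchronization scheme can be imitated with a small simulation: tag each exposure with the TTL helicity level read at acquisition time, bin the frames by helicity, and form the CD asymmetry. Everything below is illustrative; none of the names or conventions come from the actual beamline software.

```python
def circular_dichroism(intensities, ttl):
    """Bin simulated exposures by the TTL helicity flag recorded with
    each frame, then form the CD asymmetry (I_L - I_R) / (I_L + I_R).
    The high/low-to-helicity mapping here is an arbitrary assumption."""
    lsum = lcnt = rsum = rcnt = 0
    for inten, level in zip(intensities, ttl):
        if level:                  # TTL high -> "left" helicity bin
            lsum += inten
            lcnt += 1
        else:                      # TTL low  -> "right" helicity bin
            rsum += inten
            rcnt += 1
    i_l, i_r = lsum / lcnt, rsum / rcnt
    return (i_l - i_r) / (i_l + i_r)

# Four simulated exposures taken while the helicity switches each frame.
cd = circular_dichroism([1.2, 0.8, 1.2, 0.8], [1, 0, 1, 0])
```

    Because the binning is done in software per exposure, no hardware shutter is needed, which is the flexibility argument made in the abstract.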

  16. Demonstrating the Value of Near Real-time Satellite-based Earth Observations in a Research and Education Framework

    NASA Astrophysics Data System (ADS)

    Chiu, L.; Hao, X.; Kinter, J. L.; Stearn, G.; Aliani, M.

    2017-12-01

    The launch of the GOES-16 series provides an opportunity to advance near real-time applications in natural-hazard detection, monitoring, and warning. This study demonstrates the capability and value of receiving real-time satellite-based Earth observations over fast terrestrial networks and processing high-resolution remote sensing data in a university environment. The demonstration system includes four components: 1) near real-time data receiving and processing; 2) data analysis and visualization; 3) event detection and monitoring; and 4) information dissemination. Various tools are developed and integrated to receive and process GRB data in near real time, produce images and value-added data products, and detect and monitor extreme weather events such as hurricanes, fires, flooding, fog, and lightning. A web-based application system is developed to disseminate near-real-time satellite images and data products. The images are generated in a GIS-compatible format (GeoTIFF) to enable convenient use and integration in various GIS platforms. This study enhances capacities for undergraduate and graduate education in Earth system and climate sciences and related applications, helping students understand the basic principles and technology of real-time applications with remote sensing measurements. It also provides an integrated platform for near real-time monitoring of extreme weather events, which is helpful for various user communities.

  17. Image Understanding Architecture

    DTIC Science & Technology

    1991-09-01

    …architecture to support real-time, knowledge-based image understanding, and develop the software support environment that will be needed to utilize… Keywords: Image Understanding Architecture, Knowledge-Based Vision, AI, Real-Time Computer Vision, Software Simulator, Parallel Processor. …information. In addition to sensory and knowledge-based processing it is useful to introduce a level of symbolic processing. Thus, vision researchers…

  18. Development of a real-time digital radiography system using a scintillator-type flat-panel detector

    NASA Astrophysics Data System (ADS)

    Ikeda, Shigeyuki; Suzuki, Katsumi; Ishikawa, Ken; Okajima, Kenichi

    2001-06-01

    In order to study the advantages and remaining problems of flat-panel detectors (FPDs) for clinical use in a real-time DR (digital radiography) system, we developed a prototype system using a scintillator-type FPD and compared it with a previous image-intensifier (I.I.)-CCD type real-time DR system. We replaced the X-ray detector of the DR-2000X from the I.I.-4M (4 million pixel) CCD camera with a scintillator-type dynamic FPD (7 in × 9 in, 127 micrometers), which can take both radiographic and fluoroscopic images. We obtained images of head and stomach phantoms and discussed the image quality with medical doctors.

  19. The Real-Time Monitoring Service Platform for Land Supervision Based on Cloud Integration

    NASA Astrophysics Data System (ADS)

    Sun, J.; Mao, M.; Xiang, H.; Wang, G.; Liang, Y.

    2018-04-01

    Remote sensing monitoring has become an important means for land and resources departments to strengthen supervision. Aiming at the problems of low monitoring frequency and poor data currency in current remote sensing monitoring, this paper researched and developed a cloud-integrated real-time monitoring service platform for land supervision, which increases the monitoring frequency by comprehensively acquiring domestic satellite image data and accelerates remote sensing image data processing by exploiting intelligent dynamic processing technology for multi-source images. A pilot application in the Jinan Bureau of State Land Supervision has shown that the real-time monitoring method for land supervision is feasible. In addition, real-time monitoring and early-warning functions are provided for illegal land use, permanent basic farmland protection, and boundary breakthrough in urban development. The application has achieved remarkable results.

  20. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.

    PubMed

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-09-01

    To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
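
    Markerless soft-tissue localization of this kind is often built on template matching against the BEV frames. Below is a reduced sketch using a sum-of-squared-differences search in a small window around the last known position; the authors' actual soft-tissue algorithm is more sophisticated, and all names and sizes here are illustrative.

```python
def track(frame, template, prev_pos, search=2):
    """Locate the template in a new BEV frame by minimising the sum of
    squared differences (SSD) over a small search window around the
    previous position. Returns the (row, col) of the best match."""
    th, tw = len(template), len(template[0])
    best, best_pos = float("inf"), prev_pos
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = prev_pos[0] + dy, prev_pos[1] + dx
            if y < 0 or x < 0 or y + th > len(frame) or x + tw > len(frame[0]):
                continue  # window would fall outside the frame
            ssd = sum((frame[y + i][x + j] - template[i][j]) ** 2
                      for i in range(th) for j in range(tw))
            if ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos

# Toy frame: a 2x2 bright "tumor" patch has moved to (row 3, col 4).
frame = [[0.0] * 8 for _ in range(8)]
for (y, x) in [(3, 4), (3, 5), (4, 4), (4, 5)]:
    frame[y][x] = 1.0
template = [[1.0, 1.0], [1.0, 1.0]]
pos = track(frame, template, prev_pos=(2, 3))
```

    Restricting the search to a window around the previous estimate is what keeps the per-frame cost low enough for the 12.8 Hz frame rate quoted above.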

  1. Real-Time Confocal Imaging Of The Living Eye

    NASA Astrophysics Data System (ADS)

    Jester, James V.; Cavanagh, H. Dwight; Essepian, John; Shields, William J.; Lemp, Michael A.

    1989-12-01

    In 1986, we adapted the Tandem Scanning Reflected Light Microscope of Petran and Hadravsky to permit non-invasive, confocal imaging of the living eye in real time. We were the first to obtain stable confocal optical sections in vivo from human and animal eyes. Using confocal imaging systems, we have now studied living normal volunteers, rabbits, cats, and primates sequentially, non-invasively, and in real time. The continued development of real-time confocal imaging systems will unlock the door to a new field of cell biology involving, for the first time, the study of dynamic cellular processes in living organ systems. Toward this end we have concentrated our initial studies on three areas: (1) evaluation of confocal microscope systems for real-time image acquisition; (2) studies of the living normal cornea (epithelium, stroma, endothelium) in humans and other species; and (3) sequential wound-healing responses of the cornea in single animals to lamellar keratectomy injury (cellular migration, inflammation, scarring). We believe that this instrument represents an important new paradigm for research in cell biology and pathology and that it will fundamentally alter experimental and clinical approaches in future years.

  2. Real-Time Aggressive Image Data Compression

    DTIC Science & Technology

    1990-03-31

    …implemented with higher degrees of modularity, concurrency, and higher levels of machine intelligence, thereby providing higher data-throughput rates… Project Title: Real-Time Aggressive Image Data Compression. Principal Investigators: Dr. Yih-Fang Huang and Dr. Ruey-wen Liu. …The objective of the proposed research is to develop reliable algorithms that can achieve aggressive image data compression (with a compression…

  3. A computational approach to real-time image processing for serial time-encoded amplified microscopy

    NASA Astrophysics Data System (ADS)

    Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi

    2016-03-01

    High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable capturing images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. Applying this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system that includes a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera; the direct data output from the STEAM camera therefore generates 7.0 Gbyte/s continuously. We employed a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and for finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in the real-time identification of small particles (beads), as stand-ins for biological cells, flowing through a microfluidic channel.
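
    The FPGA pre-processing stage essentially slices the continuous digitizer stream into image lines and frames and flags frames that contain an object. A much-simplified software sketch of that data flow (all sizes and thresholds are illustrative, not the system's actual parameters):

```python
def reconstruct_frames(waveform, samples_per_line, lines_per_frame):
    """Slice the continuous digitizer stream from a STEAM camera into
    2D frames: consecutive spectral sweeps become image lines."""
    spf = samples_per_line * lines_per_frame   # samples per frame
    frames = []
    for f in range(len(waveform) // spf):
        chunk = waveform[f * spf:(f + 1) * spf]
        frames.append([chunk[l * samples_per_line:(l + 1) * samples_per_line]
                       for l in range(lines_per_frame)])
    return frames

def contains_object(frame, threshold):
    """Crude detection step: does any pixel exceed the threshold?"""
    return any(p > threshold for row in frame for p in row)

# 24 samples -> two 3-line x 4-pixel frames; a bead appears in frame 1.
waveform = [0.0] * 24
waveform[15] = 9.0
frames = reconstruct_frames(waveform, samples_per_line=4, lines_per_frame=3)
```

    In the real system this slicing runs on the FPGA at line rate, and only candidate frames are handed to the GPU for the heavier identification step.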

  4. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    NASA Astrophysics Data System (ADS)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  5. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short-exposure frames and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle the real-time extraction, processing, and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm that achieve real-time image processing. The first was created with a Virtex-7 field-programmable gate array (FPGA). The other was developed using the graphics processing unit (GPU) of an NVIDIA GeForce GTX 690 video card. The novelty of the FPGA approach is the creation of a "black box" LRF video processing system with a generic Camera Link input, a user controller interface, and a Camera Link video output. We also describe a custom hardware simulation environment we built to test the FPGA LRF implementation. The advantages of the GPU approach are significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
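
    The LRF selection principle can be sketched per pixel: at each location, keep the value from the short-exposure frame whose local gradient is strongest. The real algorithm selects and blends whole sharp regions with weighting; this reduced per-pixel version, with made-up frames, only illustrates the selection idea.

```python
def sharpness(frame, y, x):
    """Squared gradient magnitude: a simple local sharpness metric."""
    gx = frame[y][x + 1] - frame[y][x - 1]
    gy = frame[y + 1][x] - frame[y - 1][x]
    return gx * gx + gy * gy

def fuse_lucky_regions(frames):
    """Per-pixel fusion sketch: each interior pixel is taken from the
    frame that is locally sharpest; borders keep the first frame."""
    h, w = len(frames[0]), len(frames[0][0])
    fused = [row[:] for row in frames[0]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(frames, key=lambda f: sharpness(f, y, x))
            fused[y][x] = best[y][x]
    return fused

blurred = [[1.0] * 4 for _ in range(4)]              # turbulence-smeared frame
sharp = [[0.0, 0.0, 10.0, 10.0] for _ in range(4)]   # frame with a crisp edge
fused = fuse_lucky_regions([blurred, sharp])
```

    The per-pixel sharpness comparison is embarrassingly parallel, which is why both FPGA and GPU implementations map onto it naturally.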

  6. Photoacoustic image-guided navigation system for surgery (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Park, Sara; Jang, Jongseong; Kim, Jeesu; Kim, Young Soo; Kim, Chulhong

    2017-03-01

    Identifying and delineating invisible anatomical and pathological details during surgery guides surgical procedures in real time. Various intraoperative imaging modalities have been increasingly employed to minimize such surgical risks as anatomical changes, damage to normal tissues, and human error. However, current methods provide only structural information, which cannot identify critical structures such as blood vessels. The logical next step is an intraoperative imaging modality that can provide functional information. Here, we have successfully developed a photoacoustic (PA) image-guided navigation system for surgery by integrating a position tracking system and a real-time clinical photoacoustic/ultrasound (PA/US) imaging system. PA/US images were acquired in real time and overlaid on pre-acquired cross-sectional magnetic resonance (MR) images. In the overlaid images, PA images represent the optical absorption characteristics of the surgical field, while US and MR images represent the morphological structure of surrounding tissues. To test the feasibility of the system, we prepared a tissue mimicking phantom which contained two samples, methylene blue as a contrast agent and water as a control. We acquired real-time overlaid PA/US/MR images of the phantom, which were well-matched with the optical and morphological properties of the samples. The developed system is the first approach to a novel intraoperative imaging technology based on PA imaging, and we believe that the system can be utilized in various surgical environments in the near future, improving the efficacy of surgical guidance.

  7. Non-Cartesian Balanced SSFP Pulse Sequences for Real-Time Cardiac MRI

    PubMed Central

    Feng, Xue; Salerno, Michael; Kramer, Christopher M.; Meyer, Craig H.

    2015-01-01

    Purpose: To develop a new spiral-in/out balanced steady-state free precession (bSSFP) pulse sequence for real-time cardiac MRI and compare it with radial and spiral-out techniques. Methods: Non-Cartesian sampling strategies are efficient and robust to motion and thus have important advantages for real-time bSSFP cine imaging. This study describes a new symmetric spiral-in/out sequence with intrinsic gradient moment compensation and SSFP refocusing at TE = TR/2. In-vivo real-time cardiac imaging studies were performed to compare radial, spiral-out, and spiral-in/out bSSFP pulse sequences. Furthermore, phase-based fat-water separation taking advantage of the refocusing mechanism of the spiral-in/out bSSFP sequence was also studied. Results: The image quality of the spiral-out and spiral-in/out bSSFP sequences was improved with off-resonance and k-space trajectory correction. The spiral-in/out bSSFP sequence had the highest SNR, CNR, and image quality ratings, with the spiral-out bSSFP sequence second in each category and the radial bSSFP sequence third. The spiral-in/out bSSFP sequence provides separated fat and water images with no additional scan time. Conclusions: In this work a new spiral-in/out bSSFP sequence was developed and tested. The superiority of spiral bSSFP sequences over the radial bSSFP sequence in terms of SNR and reduced artifacts was demonstrated in real-time MRI of cardiac function without image acceleration. PMID:25960254

  8. Real-time inspection by submarine images

    NASA Astrophysics Data System (ADS)

    Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe

    1996-10-01

    A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real-time up to a speed of about 2.5 kn, well above what actual ROVs and security constraints allow.

  9. Magneto-optical system for high speed real time imaging.

    PubMed

    Baziljevich, M; Barness, D; Sinvani, M; Perel, E; Shaulov, A; Yeshurun, Y

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30 000 frames per second have been demonstrated.

  10. Magneto-optical system for high speed real time imaging

    NASA Astrophysics Data System (ADS)

    Baziljevich, M.; Barness, D.; Sinvani, M.; Perel, E.; Shaulov, A.; Yeshurun, Y.

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high speed real time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom designed optical assembly, and a high speed digital camera, real time imaging rates up to 30 000 frames per second have been demonstrated.

  11. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    PubMed

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a convolutional autoencoder (rCAE) and a convolutional neural network (rCNN). We varied the convolutional kernel size and the number of convolutional layers for both networks, and the number of pooling and upsampling layers for the rCAE. Ground-truth images were generated by applying contrast-limited adaptive histogram equalization (CLAHE) to the input images. Network models were trained to produce, from the unprocessed input image, an output image close in quality to the ground-truth image. For the image denoising evaluation, noisy input images were used for training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
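    The trained networks themselves are not given in this abstract; the sketch below only illustrates the residual formulation that the rCNN/rCAE share: the network path estimates what should be removed from the input, and the output is the input minus that estimate. A single fixed smoothing kernel stands in for the learned layers (an assumption for illustration only).

```python
import numpy as np

def conv2d(img, kernel):
    """'Same'-size 2D convolution with edge padding (the rCNN building block)."""
    kh, kw = kernel.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * kernel)
    return out

def residual_denoise(noisy, kernel):
    """Residual structure: estimate the noise component and subtract it from
    the input, instead of predicting the clean image directly."""
    noise_estimate = noisy - conv2d(noisy, kernel)  # high-frequency residual
    return noisy - noise_estimate
```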

  12. MR-guided endovascular interventions: a comprehensive review on techniques and applications.

    PubMed

    Kos, Sebastian; Huegli, Rolf; Bongartz, Georg M; Jacob, Augustinus L; Bilecen, Deniz

    2008-04-01

    The magnetic resonance (MR) guidance of endovascular interventions is probably one of the greatest challenges of clinical MR research. MR angiography is not only an imaging tool for the vasculature but can also simultaneously depict high tissue contrast, including the differentiation of the vascular wall and perivascular tissues, as well as vascular function. Several hurdles had to be overcome to allow MR guidance for endovascular interventions. MR hardware and sequence design had to be developed to achieve acceptable patient access and to allow real-time or near real-time imaging. The development of interventional devices, both applicable and safe for MR imaging (MRI), was also mandatory. The subject of this review is to summarize the latest developments in real-time MRI hardware, MRI, visualization tools, interventional devices, endovascular tracking techniques, actual applications and safety issues.

  13. Real-time chirp-coded imaging with a programmable ultrasound biomicroscope.

    PubMed

    Bosisio, Mattéo R; Hasquenoph, Jean-Michel; Sandrin, Laurent; Laugier, Pascal; Bridal, S Lori; Yon, Sylvain

    2010-03-01

    Ultrasound biomicroscopy (UBM) of mice can provide a testing ground for new imaging strategies. The UBM system presented in this paper facilitates the development of imaging and measurement methods with programmable design, arbitrary waveform coding, broad bandwidth (2-80 MHz), digital filtering, programmable processing, RF data acquisition, multithread/multicore real-time display, and rapid mechanical scanning (

  14. Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery

    PubMed Central

    Rottmann, Joerg; Keall, Paul; Berbeco, Ross

    2013-01-01

    Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
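    The in-house localization algorithm is not disclosed in the abstract; a hedged stand-in is normalized cross-correlation template matching, a standard approach for markerless soft-tissue tracking on 2D radiographs.

```python
import numpy as np

def ncc_locate(frame, template):
    """Top-left (row, col) of the best normalized cross-correlation match
    of `template` in `frame`."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -2.0, (0, 0)
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            score = (wz * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (i, j)
    return best_pos
```

Tracking the returned position frame-to-frame yields the displacement estimate; a real-time system would restrict the search to a small window around the previous match.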

  15. Real-Time Imaging System for the OpenPET

    NASA Astrophysics Data System (ADS)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to reconstruct consistently in real-time because the amount of list-mode data acquired in PET scans may be large, depending on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the amount of data used in the reconstruction step while retaining quantitative performance. In the proposed system, a data transfer control system limits the event counts used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype and evaluated its real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
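    The count-control idea can be sketched directly from the abstract: use only as many list-mode events as the reconstruction speed allows, then rescale the image by the total-to-used count ratio so the displayed intensity stays proportional to the true count rate. Simple event binning stands in here for the actual DRAMA reconstruction.

```python
import numpy as np

def limited_recon(events, max_counts, shape):
    """Bin only the first max_counts list-mode events (binning stands in for
    DRAMA), then compensate intensity by the total/used count ratio."""
    total = len(events)
    used = events[:max_counts]
    img = np.zeros(shape)
    for i, j in used:
        img[i, j] += 1.0
    if used:
        img *= total / len(used)  # intensity compensation
    return img
```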

  16. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bednarz, B; Culberson, W; Bassetti, M

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real-time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation, providing real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton beam or heavy ion-beam therapy. This work is partially funded by NIH grant R01CA190298.

  17. Real-time high dynamic range laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.

    2016-04-01

    In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging.
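    The paper's acquisition scheme is not detailed in this abstract; the sketch below assumes the usual HDR principle of fusing acquisitions made at different gains, down-weighting saturated and underexposed pixels so each part of the scene is reconstructed from the acquisition that exposed it best.

```python
import numpy as np

def hdr_fuse(images, gains):
    """Fuse acquisitions taken at different gains into one radiance map:
    divide each by its gain and weight pixels with a hat function that
    down-weights saturated and underexposed values (0..1 scale assumed)."""
    num = np.zeros_like(np.asarray(images[0], dtype=float))
    den = np.zeros_like(num)
    for im, g in zip(images, gains):
        im = np.asarray(im, dtype=float)
        w = 1.0 - np.abs(2.0 * im - 1.0)   # peak weight at mid-range values
        num += w * (im / g)
        den += w
    return num / np.maximum(den, 1e-12)
```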

  18. A design of real time image capturing and processing system using Texas Instrument's processor

    NASA Astrophysics Data System (ADS)

    Wee, Toon-Joo; Chaisorn, Lekha; Rahardja, Susanto; Gan, Woon-Seng

    2007-09-01

    In this work, we developed and implemented an image capturing and processing system equipped with the capability of capturing images from an input video in real time. The input video can come from a PC, a video camcorder or a DVD player. We developed two modes of operation in the system. In the first mode, an input image from the PC is processed on the processing board (a development platform with a digital signal processor) and is displayed on the PC. In the second mode, the current captured image from the video camcorder (or DVD player) is processed on the board but is displayed on the LCD monitor. The major difference between our system and existing conventional systems is that image-processing functions are performed on the board instead of the PC (so that the functions can be used for further development on the board). The user can control the operations of the board through the Graphical User Interface (GUI) provided on the PC. To ensure smooth image data transfer between the PC and the board, we employed Real Time Data Transfer (RTDX TM) technology to create a link between them. For image processing, we developed three main groups of functions: (1) Point Processing; (2) Filtering; and (3) 'Others'. Point Processing includes rotation, negation and mirroring. The Filtering category provides median, adaptive, smoothing and sharpening filters in the time domain. The 'Others' category provides auto-contrast adjustment, edge detection, segmentation and sepia color; these functions either add an effect to the image or enhance it. We developed and implemented our system using the C/C# programming languages on the TMS320DM642 (DM642) board from Texas Instruments (TI). The system was showcased at the College of Engineering (CoE) exhibition 2006 at Nanyang Technological University (NTU), where more than 40 users tried it. It is demonstrated that our system is adequate for real-time image capturing. Our system can be used for applications such as medical imaging, video surveillance, etc.
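    A few of the listed functions are simple enough to sketch in numpy; the actual DM642 implementation is in C, so this is only a behavioral illustration of the point-processing and filtering groups.

```python
import numpy as np

def negate(img, max_val=255):
    """Point processing: photographic negative."""
    return max_val - img

def mirror(img):
    """Point processing: horizontal mirroring."""
    return img[:, ::-1]

def median_filter(img, win=3):
    """Time-domain median filter (removes impulse noise)."""
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(p[i:i + win, j:j + win])
    return out
```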

  19. A proposed intracortical visual prosthesis image processing system.

    PubMed

    Srivastava, N R; Troyk, P

    2005-01-01

    It has been a goal of neuroprosthesis researchers to develop a system that could provide artificial vision to a large population of individuals with blindness. Earlier research has demonstrated that stimulating the visual cortex electrically can evoke spatial visual percepts, i.e. phosphenes. The goal of a visual cortex prosthesis is to stimulate the visual cortex and generate visual perception in real time to restore vision. Even though the normal working of the visual system is not completely understood, existing knowledge has inspired research groups to develop visual cortex prostheses that can help blind patients in their daily activities. A major limitation in this work is the development of an image processing system for converting an electronic image, as captured by a camera, into a real-time data stream for stimulation of the implanted electrodes. This paper proposes a system that captures the image using a camera and uses dedicated real-time image processing hardware to deliver electrical pulses to intracortical electrodes. This system has to be flexible enough to adapt to individual patients and to various strategies of image reconstruction. Here we consider a preliminary architecture for this system.
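    The proposed architecture is only preliminary; a minimal sketch of the front half of such a pipeline is to block-average a camera frame down to the electrode grid and map brightness to a per-electrode pulse amplitude. The grid size, amplitude range and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def image_to_stimulation(img, grid=(10, 10), amp_max=20.0, threshold=0.1):
    """Block-average the camera frame down to the electrode grid, then map
    normalized brightness to a pulse amplitude; electrodes whose level falls
    below threshold are left unstimulated. All parameter values are
    hypothetical, for illustration only."""
    gh, gw = grid
    bh, bw = img.shape[0] // gh, img.shape[1] // gw
    levels = img[:gh * bh, :gw * bw].reshape(gh, bh, gw, bw).mean(axis=(1, 3))
    levels = levels / max(levels.max(), 1e-12)   # normalize to 0..1
    return np.where(levels >= threshold, levels * amp_max, 0.0)
```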

  20. Imaging the small animal cardiovascular system in real-time with multispectral optoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Taruttis, Adrian; Herzog, Eva; Razansky, Daniel; Ntziachristos, Vasilis

    2011-03-01

    Multispectral Optoacoustic Tomography (MSOT) is an emerging technique for high resolution macroscopic imaging with optical and molecular contrast. We present cardiovascular imaging results from a multi-element real-time MSOT system recently developed for studies on small animals. Anatomical features relevant to cardiovascular disease, such as the carotid arteries, the aorta and the heart, are imaged in mice. The system's fast acquisition time, in tens of microseconds, allows images free of motion artifacts from heartbeat and respiration. Additionally, we present in-vivo detection of optical imaging agents, gold nanorods, at high spatial and temporal resolution, paving the way for molecular imaging applications.

  1. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    NASA Astrophysics Data System (ADS)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method for monitoring cerebral hemodynamics through optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert Law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application to more complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, substantially improving the spatial resolution. Instead of presenting one spatially distributed image of the changes in the absorption coefficients at each time point during the recording process, a single image is updated in real time using the Kalman estimator; each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method in simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is conducted to help decide whether a voxel in the field of view is activated or not.
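    For a single voxel, the Kalman estimator of the HRF amplitude can be sketched as a scalar filter with a random-walk state and measurement y_t = h_t·x_t + noise, where h_t is the HRF regressor at time t. The model details below are our assumption for illustration; the paper's full DOT formulation is multivariate.

```python
import numpy as np

def kalman_hrf_amplitude(measurements, regressor, q=1e-4, r=0.25):
    """Scalar Kalman estimator of a voxel's HRF amplitude x_t with
    random-walk dynamics and measurement y_t = h_t * x_t + noise.
    q and r are the process and measurement noise variances."""
    x, p = 0.0, 1.0
    estimates = []
    for y, h in zip(measurements, regressor):
        p = p + q                        # predict: random-walk state
        k = p * h / (h * h * p + r)      # Kalman gain
        x = x + k * (y - h * x)          # correct with the innovation
        p = (1.0 - k * h) * p
        estimates.append(x)
    return np.array(estimates)
```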

  2. Real time diffuse reflectance polarisation spectroscopy imaging to evaluate skin microcirculation

    NASA Astrophysics Data System (ADS)

    O'Doherty, Jim; Henricson, Joakim; Nilsson, Gert E.; Anderson, Chris; Leahy, Martin J.

    2007-07-01

    This article describes the theoretical development and design of a real-time microcirculation imaging system, an extension of technology previously developed by our group. The technology utilises polarisation spectroscopy, a technique used to selectively gate photons returning from different compartments of human skin tissue, namely the superficial layers of the epidermis and the deeper backscattered light from the dermal matrix. A consumer-end digital camcorder captures colour data with three individual CCDs, and a custom designed light source consisting of a 24-LED ring light provides broadband illumination over the 400 nm - 700 nm wavelength region. The theory developed leads to an image processing algorithm whose output scales linearly with increasing red blood cell (RBC) concentration. Processed images are displayed online in real-time at a rate of 25 frames per second at a frame size of 256 x 256 pixels, limited only by computer RAM and processing speed. General demonstrations of the technique in vivo display several advantages over similar technology.
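    The article's own algorithm is not reproduced in this abstract. Polarisation-spectroscopy systems of this type commonly exploit the fact that haemoglobin absorbs green light much more strongly than red, so a normalized red-green difference rises with RBC concentration; the exact expression below is our assumption for illustration.

```python
import numpy as np

def rbc_index(rgb, k=1.0):
    """Per-pixel index that grows with red-blood-cell concentration:
    haemoglobin absorbs green strongly but red only weakly, so the
    normalized red-green difference rises with blood content (the exact
    form is an assumption, not the paper's algorithm)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    return k * (r - g) / np.maximum(r, 1e-12)
```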

  3. Real-time image processing for non-contact monitoring of dynamic displacements using smartphone technologies

    NASA Astrophysics Data System (ADS)

    Min, Jae-Hong; Gelo, Nikolas J.; Jo, Hongki

    2016-04-01

    The smartphone application newly developed in this study, named RINO, allows measuring absolute dynamic displacements and processing them in real time using state-of-the-art smartphone technologies, such as a high-performance graphics processing unit (GPU) in addition to an already powerful CPU and memory, an embedded high-speed/high-resolution camera, and open-source computer vision libraries. A carefully designed color-patterned target and a user-adjustable crop filter enable accurate and fast image processing, allowing up to 240 fps for complete displacement calculation and real-time display. The performance of the developed smartphone application is experimentally validated, showing accuracy comparable with that of a conventional laser displacement sensor.
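    The app's internal pipeline is not published in this abstract; a minimal sketch of the displacement computation, assuming the color-patterned target has already been segmented into a binary mask and the pixel-to-millimetre scale is known:

```python
import numpy as np

def target_centroid(mask):
    """Centroid (row, col) of the segmented target pixels."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def displacement_mm(mask_ref, mask_now, mm_per_px):
    """Displacement of the target centroid between the reference frame and
    the current frame, converted to millimetres with a known scale."""
    r0, c0 = target_centroid(mask_ref)
    r1, c1 = target_centroid(mask_now)
    return (r1 - r0) * mm_per_px, (c1 - c0) * mm_per_px
```

Running this per frame against a fixed reference yields the absolute dynamic displacement time history.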

  4. Real-time Three-dimensional Echocardiography: From Diagnosis to Intervention.

    PubMed

    Orvalho, João S

    2017-09-01

    Echocardiography is one of the most important diagnostic tools in veterinary cardiology, and one of the greatest recent developments is real-time three-dimensional imaging. Real-time three-dimensional echocardiography is a new ultrasonography modality that provides comprehensive views of the cardiac valves and congenital heart defects. The main advantages of this technique, particularly real-time three-dimensional transesophageal echocardiography, are the ability to visualize the catheters, and balloons or other devices, and the ability to image the structure that is undergoing intervention with unprecedented quality. This technique may become one of the main choices for the guidance of interventional cardiology procedures. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Moving Raman spectroscopy into real-time, online diagnosis and detection of precancer and cancer in vivo in the upper GI during clinical endoscopic examination

    NASA Astrophysics Data System (ADS)

    Huang, Zhiwei; Bergholt, Mads Sylvest; Zheng, Wei; Ho, Khek Yu; Yeoh, Khay Guan; Teh, Ming; So, Jimmy Bok Yan; Shabbir, Asim

    2013-03-01

    A rapid image-guided Raman endoscopy system integrated with an online diagnostic scheme is developed for in vivo Raman tissue diagnosis (optical biopsy) in the upper GI during clinical gastrointestinal endoscopy under multimodal wide-field imaging guidance. The real-time Raman endoscopy technique was tested prospectively on new gastric patients (n=4) and could identify dysplasia in vivo with a sensitivity of 81.5% (22/27) and a specificity of 87.9% (29/33). This study realizes for the first time image-guided Raman endoscopy as a screening tool for real-time, online diagnosis of gastric cancer and precancer in vivo at endoscopy.

  6. Back-to-back optical coherence tomography-ultrasound probe for co-registered three-dimensional intravascular imaging with real-time display

    NASA Astrophysics Data System (ADS)

    Li, Jiawen; Ma, Teng; Jing, Joseph; Zhang, Jun; Patel, Pranav M.; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping

    2014-03-01

    We have developed a novel integrated optical coherence tomography (OCT)-intravascular ultrasound (IVUS) probe, with a 1.5-mm-long rigid part and a 0.9-mm outer diameter, for real-time intracoronary imaging of atherosclerotic plaques and guiding interventional procedures. By placing the OCT ball lens and the 45-MHz single-element IVUS transducer back-to-back at the same axial position, this probe provides automatically co-registered, co-axial OCT-IVUS imaging. To demonstrate its capability, 3D OCT-IVUS imaging of a pig's coronary artery displayed in real time in polar coordinates, as well as images of two major types of advanced plaques in human cadaver coronary segments, was obtained using this probe and our upgraded system. Histology validation is also presented.

  7. Imaging multicellular specimens with real-time optimized tiling light-sheet selective plane illumination microscopy

    PubMed Central

    Fu, Qinyi; Martin, Benjamin L.; Matus, David Q.; Gao, Liang

    2016-01-01

    Despite the progress made in selective plane illumination microscopy, high-resolution 3D live imaging of multicellular specimens remains challenging. Tiling light-sheet selective plane illumination microscopy (TLS-SPIM) with real-time light-sheet optimization was developed to respond to the challenge. It improves the 3D imaging ability of SPIM in resolving complex structures and optimizes SPIM live imaging performance by using a real-time adjustable tiling light sheet and creating a flexible compromise between spatial and temporal resolution. We demonstrate the 3D live imaging ability of TLS-SPIM by imaging cellular and subcellular behaviours in live C. elegans and zebrafish embryos, and show how TLS-SPIM can facilitate cell biology research in multicellular specimens by studying left-right symmetry breaking behaviour of C. elegans embryos. PMID:27004937

  8. Real-time high dynamic range laser scanning microscopy

    PubMed Central

    Vinegoni, C.; Leon Swisher, C.; Fumene Feruglio, P.; Giedt, R. J.; Rousso, D. L.; Stapleton, S.; Weissleder, R.

    2016-01-01

    In conventional confocal/multiphoton fluorescence microscopy, images are typically acquired under ideal settings and after extensive optimization of parameters for a given structure or feature, often resulting in information loss from other image attributes. To overcome the problem of selective data display, we developed a new method that extends the imaging dynamic range in optical microscopy and improves the signal-to-noise ratio. Here we demonstrate how real-time and sequential high dynamic range microscopy facilitates automated three-dimensional neural segmentation. We address reconstruction and segmentation performance on samples with different size, anatomy and complexity. Finally, in vivo real-time high dynamic range imaging is also demonstrated, making the technique particularly relevant for longitudinal imaging in the presence of physiological motion and/or for quantification of in vivo fast tracer kinetics during functional imaging. PMID:27032979

  9. Real-Time Imaging of Plant Cell Wall Structure at Nanometer Scale, with Respect to Cellulase Accessibility and Degradation Kinetics (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, S. Y.

    Presentation on real-time imaging of plant cell wall structure at nanometer scale. Objectives are to develop tools to measure biomass at the nanometer scale; elucidate the molecular bases of biomass deconstruction; and identify factors that affect the conversion efficiency of biomass-to-biofuels.

  10. High Resolution Near Real Time Image Processing and Support for MSSS Modernization

    NASA Astrophysics Data System (ADS)

    Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.

    2012-09-01

    This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also covers R&D and transition activity performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities, including Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award -- and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, and a high resolution image enhancement and near real time processing status overview. We then describe recent image enhancement applications development and support for MSSS Modernization and results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image enhancement processing have been realized over the past several years, including a key application that has realized more than a 10,000-times speedup compared to the original R&D code -- and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has been performed to develop low latency, near real time processing of data collected by the ground-based sensor during overhead passes of space objects.

  11. Computer-aided diagnosis of colorectal polyp histology by using a real-time image recognition system and narrow-band imaging magnifying colonoscopy.

    PubMed

    Kominami, Yoko; Yoshida, Shigeto; Tanaka, Shinji; Sanomura, Yoji; Hirakawa, Tsubasa; Raytchev, Bisser; Tamaki, Toru; Koide, Tetsusi; Kaneda, Kazufumi; Chayama, Kazuaki

    2016-03-01

    It is necessary to establish cost-effective examinations and treatments for diminutive colorectal tumors that consider the treatment risk and the surveillance interval after treatment. The Preservation and Incorporation of Valuable Endoscopic Innovations (PIVI) committee of the American Society for Gastrointestinal Endoscopy published a statement recommending the establishment of endoscopic techniques that support the resect-and-discard strategy. The aims of this study were to evaluate whether our newly developed real-time image recognition system can predict histologic diagnoses of colorectal lesions depicted on narrow-band imaging and can address some of the problems with the PIVI recommendations. We enrolled 41 patients who had undergone endoscopic resection of 118 colorectal lesions (45 nonneoplastic lesions and 73 neoplastic lesions). We compared the results of real-time image recognition system analysis with those of narrow-band imaging diagnosis and evaluated the correlation between image analysis and the pathological results. Concordance between the endoscopic diagnosis and diagnosis by the real-time image recognition system with a support vector machine output value was 97.5% (115/118). Accuracy between the histologic findings of diminutive colorectal lesions (polyps) and diagnosis by the real-time image recognition system with a support vector machine output value was 93.2% (sensitivity, 93.0%; specificity, 93.3%; positive predictive value (PPV), 93.0%; and negative predictive value, 93.3%). Although further investigation is necessary to establish our computer-aided diagnosis system, this real-time image recognition system may satisfy the PIVI recommendations and be useful for predicting the histology of colorectal tumors. Copyright © 2016 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
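    For readers unfamiliar with the reported statistics, the four figures quoted above (sensitivity, specificity, PPV, NPV) follow from a standard 2×2 confusion matrix. A minimal sketch, illustrative only; the study's support vector machine and image features are not reproduced here:

```python
def diagnostic_metrics(pred, truth):
    """Standard 2x2 diagnostic metrics (sensitivity, specificity,
    PPV, NPV). pred/truth are parallel lists of bools, with True
    meaning 'neoplastic'. Purely illustrative of the definitions."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    tn = sum((not p) and (not t) for p, t in zip(pred, truth))
    fp = sum(p and (not t) for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return {
        "sensitivity": tp / (tp + fn),   # fraction of neoplastic lesions caught
        "specificity": tn / (tn + fp),   # fraction of nonneoplastic lesions cleared
        "ppv": tp / (tp + fp),           # confidence in a positive call
        "npv": tn / (tn + fn),           # confidence in a negative call
    }
```

The near-equality of the four reported values (all around 93%) reflects the roughly balanced class sizes in the study cohort.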

  12. Image processing applications: From particle physics to society

    NASA Astrophysics Data System (ADS)

    Sotiropoulou, C.-L.; Luciano, P.; Gkaitatzis, S.; Citraro, S.; Giannetti, P.; Dell'Orso, M.

    2017-01-01

    We present an embedded system for extremely efficient real-time pattern recognition execution, enabling technological advancements with both scientific and social impact. It is a compact, fast, low-consumption processing unit (PU) based on a combination of Field Programmable Gate Arrays (FPGAs) and a full-custom associative memory chip. The PU has been developed for real-time tracking in particle physics experiments, but delivers flexible features for potential application in a wide range of fields. It has been proposed for accelerated pattern-matching execution in Magnetic Resonance Fingerprinting (biomedical applications), in real-time detection of space debris trails in astronomical images (space applications), and in brain emulation for image processing (cognitive image processing). We illustrate the potential of the PU in these new applications.

  13. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    PubMed

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images captured by integral photography to 8K images. One method increases the number of pixels of each elemental image; the other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of the IP images and the generation of holograms from them using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs, with the FFT performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system performing all processing from capture to reconstruction of 3D images and successfully used it to reconstruct a 3D live scene at 12 frames per second.
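    The two enlargement strategies described above can be sketched in a few lines of NumPy. This is a hedged illustration with hypothetical parameters (`ey`, `ex` stand for the elemental-image height and width in pixels); the paper's actual GPU interpolation scheme is not specified here:

```python
import numpy as np

def enlarge_by_pixels(ip):
    """Method 1: double the pixel count of each elemental image.
    With nearest-neighbour repetition inside each elemental block,
    this is equivalent to repeating every pixel 2x2."""
    return np.kron(ip, np.ones((2, 2), dtype=ip.dtype))

def enlarge_by_elemental_images(ip, ey, ex):
    """Method 2: increase the number of elemental images by linearly
    interpolating new elemental images between neighbouring ones."""
    h, w = ip.shape
    ny, nx = h // ey, w // ex
    # reorganize into a grid of elemental images: (ny, nx, ey, ex)
    grid = ip.reshape(ny, ey, nx, ex).transpose(0, 2, 1, 3)

    def interp(a):
        # insert the midpoint between consecutive entries along axis 0
        mid = 0.5 * (a[:-1] + a[1:])
        out = np.empty((2 * a.shape[0] - 1,) + a.shape[1:])
        out[0::2], out[1::2] = a, mid
        return out

    grid = interp(grid)                                     # elemental rows
    grid = interp(grid.transpose(1, 0, 2, 3)).transpose(1, 0, 2, 3)  # columns
    n2y, n2x = grid.shape[:2]
    return grid.transpose(0, 2, 1, 3).reshape(n2y * ey, n2x * ex)
```

Method 1 preserves the ray directions sampled by the lens array but wastes the extra display pixels on redundancy; Method 2 synthesizes new viewpoints, which is why the choice matters for hologram quality.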

  14. Real-Time On-Board Processing Validation of MSPI Ground Camera Images

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.

    2010-01-01

    The Earth Sciences Decadal Survey identifies a multiangle, multispectral, high-accuracy polarization imager as one requirement for the Aerosol-Cloud-Ecosystem (ACE) mission. JPL has been developing a Multiangle SpectroPolarimetric Imager (MSPI) as a candidate to fill this need. A key technology development needed for MSPI is on-board signal processing to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's Advanced Information Systems Technology (AIST) Program, JPL is solving the real-time data processing requirements to demonstrate, for the first time, how signal data at 95 Mbytes/sec over 16-channels for each of the 9 multiangle cameras in the spaceborne instrument can be reduced on-board to 0.45 Mbytes/sec. This will produce the intensity and polarization data needed to characterize aerosol and cloud microphysical properties. Using the Xilinx Virtex-5 FPGA including PowerPC440 processors we have implemented a least squares fitting algorithm that extracts intensity and polarimetric parameters in real-time, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information.
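    The on-board least-squares fit can be illustrated with a deliberately simplified modulation model. The sketch below assumes a plain two-harmonic signal, a stand-in for MSPI's actual photoelastic-modulator retardance model, so treat the design-matrix columns as assumptions:

```python
import numpy as np

def fit_polarimetry(samples, phase):
    """Least-squares extraction of intensity and polarimetric terms
    from modulated detector samples (simplified model).
    samples: (n,) detector reads; phase: (n,) modulation phase [rad]."""
    # design matrix: constant (intensity) plus one harmonic pair
    A = np.column_stack([np.ones_like(phase), np.cos(phase), np.sin(phase)])
    coef, *_ = np.linalg.lstsq(A, samples, rcond=None)
    return coef  # [intensity term, cos term, sin term]
```

A fit of this general form, implemented on the Virtex-5 with its embedded PowerPC440 processors, is what compresses the 95 Mbytes/sec raw sample stream down to 0.45 Mbytes/sec of fitted intensity and polarization parameters.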

  15. Real time 3D structural and Doppler OCT imaging on graphics processing units

    NASA Astrophysics Data System (ADS)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming to real-time 3D Fourier-domain Optical Coherence Tomography (FdOCT) imaging, with Doppler algorithms implemented for visualization of flow in capillary vessels, is presented. In general, the time needed to process FdOCT data on the computer's main processor (CPU) is the principal limitation for real-time imaging, and additional algorithms such as Doppler OCT analysis make this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: exploiting them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the complete processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, for volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. The software architecture, the organization of the threads, and the optimizations applied are described. For illustration, screen shots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
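    The core FdOCT processing chain described above (an FFT per spectrum, magnitude for the structural image, phase differences between adjacent A-scans for Doppler) can be sketched on the CPU with NumPy; the paper's contribution is mapping exactly this kind of pipeline onto the GPU:

```python
import numpy as np

def fdoct_process(spectra):
    """Minimal FdOCT processing sketch. Assumes each spectrum has
    already been resampled to be linear in wavenumber.
    spectra: real array of shape (n_ascans, n_spectral_pixels)."""
    # FFT along the spectral axis yields complex A-scans;
    # keep the positive-depth half of the transform
    ascans = np.fft.fft(spectra, axis=1)[:, : spectra.shape[1] // 2]
    # structural image: magnitude of the complex A-scans
    structure = np.abs(ascans)
    # Doppler image: phase difference between adjacent A-scans,
    # proportional to the axial flow velocity
    doppler = np.angle(ascans[1:] * np.conj(ascans[:-1]))
    return structure, doppler
```

Each A-scan is independent, so at 2000 A-scans of 2048-pixel spectra per frame the per-spectrum FFTs parallelize naturally across GPU threads, which is what makes the quoted 120 fps plausible.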

  16. Real-time dynamic display of registered 4D cardiac MR and ultrasound images using a GPU

    NASA Astrophysics Data System (ADS)

    Zhang, Q.; Huang, X.; Eagleson, R.; Guiraudon, G.; Peters, T. M.

    2007-03-01

    In minimally invasive image-guided surgical interventions, different imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), and real-time three-dimensional (3D) ultrasound (US), can provide complementary, multi-spectral image information. Multimodality dynamic image registration is a well-established approach that permits real-time diagnostic information to be enhanced by placing lower-quality real-time images within a high quality anatomical context. For the guidance of cardiac procedures, it would be valuable to register dynamic MRI or CT with intraoperative US. However, in practice, either the high computational cost prohibits such real-time visualization of volumetric multimodal images in a real-world medical environment, or else the resulting image quality is not satisfactory for accurate guidance during the intervention. Modern graphics processing units (GPUs) provide the programmability, parallelism and increased computational precision to begin to address this problem. In this work, we first outline our research on dynamic 3D cardiac MR and US image acquisition, real-time dual-modality registration and US tracking. Then we describe image processing and optimization techniques for 4D (3D + time) cardiac image real-time rendering. We also present our multimodality 4D medical image visualization engine, which directly runs on a GPU in real-time by exploiting the advantages of the graphics hardware. In addition, techniques such as multiple transfer functions for different imaging modalities, dynamic texture binding, advanced texture sampling and multimodality image compositing are employed to facilitate the real-time display and manipulation of the registered dual-modality dynamic 3D MR and US cardiac datasets.

  17. Real-Time Visualization Tool Integrating STEREO, ACE, SOHO and the SDO

    NASA Astrophysics Data System (ADS)

    Schroeder, P. C.; Luhmann, J. G.; Marchant, W.

    2011-12-01

    The STEREO/IMPACT team has developed a new web-based visualization tool for near real-time data from the STEREO instruments, ACE and SOHO as well as relevant models of solar activity. This site integrates images, solar energetic particle, solar wind plasma and magnetic field measurements in an intuitive way using near real-time products from NOAA and other sources to give an overview of recent space weather events. This site enhances the browse tools already available at UC Berkeley, UCLA and Caltech which allow users to visualize similar data from the start of the STEREO mission. Our new near real-time tool utilizes publicly available real-time data products from a number of missions and instruments, including SOHO LASCO C2 images from the SOHO team's NASA site, SDO AIA images from the SDO team's NASA site, STEREO IMPACT SEP data plots and ACE EPAM data plots from the NOAA Space Weather Prediction Center and STEREO spacecraft positions from the STEREO Science Center.

  18. A generic FPGA-based detector readout and real-time image processing board

    NASA Astrophysics Data System (ADS)

    Sarpotdar, Mayuresh; Mathew, Joice; Safonova, Margarita; Murthy, Jayant

    2016-07-01

    For space-based astronomical observations, it is important to have a mechanism to capture the digital output from a standard detector for further on-board analysis and storage. We have developed a generic (application-wise) field-programmable gate array (FPGA) board to interface with an image sensor, a method to generate the clocks required to read the image data from the sensor, and a real-time on-chip image processor system that can be used for various image processing tasks. The FPGA board serves as the image processor board in the Lunar Ultraviolet Cosmic Imager (LUCI) and in a star sensor (StarSense) - instruments developed by our group. In this paper, we discuss the various design considerations for this board and its applications in future balloon flights and possible space flights.

  19. The implementation of CMOS sensors within a real time digital mammography intelligent imaging system: The I-ImaS System

    NASA Astrophysics Data System (ADS)

    Esbrand, C.; Royle, G.; Griffiths, J.; Speller, R.

    2009-07-01

    The integration of technology with healthcare has undoubtedly propelled the medical imaging sector well into the twenty-first century. The concept of digital imaging introduced during the 1970s has since paved the way for established imaging techniques, of which digital mammography, phase contrast imaging and CT imaging are just a few examples. This paper presents a prototype intelligent digital mammography system designed and developed by a European consortium. The final system, the I-ImaS system, utilises CMOS monolithic active pixel sensor (MAPS) technology promoting on-chip data processing, enabling data processing and image acquisition to be carried out simultaneously; consequently, statistical analysis of tissue is achievable in real time for the purpose of x-ray beam modulation via a feedback mechanism during the image acquisition procedure. The imager implements a dual array of twenty 520 pixel × 40 pixel CMOS MAPS sensing devices with a 32 μm pixel size, each individually coupled to a 100 μm thick thallium-doped structured CsI scintillator. This paper presents the first intelligent images of real excised breast tissue obtained from the prototype system, where the x-ray exposure was modulated via the statistical information extracted from the breast tissue itself. Conventional images were also acquired experimentally, with the statistical analysis of the data performed off-line, resulting in the production of simulated real-time intelligently optimised images. The results obtained indicate that real-time image optimisation, using the statistical information extracted from the breast as a feedback mechanism, is beneficial and foreseeable in the near future.
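    The beam-modulation feedback described above can be caricatured as a simple rule mapping per-region image statistics to tube current. Everything in this sketch is an assumption for illustration: the statistic, the scaling, and the clamp range are hypothetical, not the I-ImaS algorithm:

```python
import numpy as np

def modulate_exposure(region_stats, base_ma, lo=0.2, hi=1.0):
    """Hypothetical feedback rule: scale the x-ray tube current for
    the next scan region from a statistic of the region just read
    (larger statistic = more structure of interest = fuller exposure),
    clamped to [lo, hi] times the nominal current base_ma."""
    score = np.clip(region_stats / (region_stats.max() + 1e-12), lo, hi)
    return base_ma * score
```

The point of any such rule is dose economy: uninteresting tissue is imaged at a fraction of the nominal exposure while statistically salient regions receive full exposure in the same pass.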

  20. Research on intelligent scenic security early warning platform based on high resolution image: real scene linkage and real-time LBS

    NASA Astrophysics Data System (ADS)

    Li, Baishou; Huang, Yu; Lan, Guangquan; Li, Tingting; Lu, Ting; Yao, Mingxing; Luo, Yuandan; Li, Boxiang; Qian, Yongyou; Gao, Yujiu

    2015-12-01

    This paper designs and implements a security monitoring system for tourists within a scenic spot. Scenic-spot staff can automatically perceive and monitor visitors in real time, while visitors can determine their own location within the scenic area and obtain real-time 3D imagery of it. The early-warning function supports a "parent-child" linkage mode, preventing the elderly and children from becoming lost or wandering off. The results provide a theoretical basis and practical reference for an effective security early-warning platform and for the further development of virtual reality.

  1. Development of Simultaneous Beta-and-Coincidence-Gamma Imager for Plant Imaging Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tai, Yuan-Chuan

    2016-09-30

    The goal of this project is to develop a novel imaging system that can simultaneously acquire beta and coincidence-gamma images of positron sources in thin objects such as plant leaves. This hybrid imager can be used to measure carbon assimilation in plants quantitatively and in real time after C-11 labeled carbon dioxide is administered. A better understanding of carbon assimilation, particularly under the increasingly elevated atmospheric CO2 level, is extremely critical for plant scientists who study food crop and biofuel production. Phase 1 of this project is focused on technology development, with 3 specific aims: (1) develop a hybrid detector that can detect beta and gamma rays simultaneously; (2) develop an imaging system that can differentiate these two types of radiation and acquire beta and coincidence-gamma images in real time; (3) develop techniques to quantify radiotracer distribution using beta and gamma images. Phase 2 of this project applies the technologies developed in Phase 1 to study carbon assimilation in biofuel plants using positron-emitting radionuclides such as 11C.

  2. The near real time image navigation of pictures returned by Voyager 2 at Neptune

    NASA Technical Reports Server (NTRS)

    Underwood, Ian M.; Bachman, Nathaniel J.; Taber, William L.; Wang, Tseng-Chan; Acton, Charles H.

    1990-01-01

    The development of a process for performing image navigation in near real time is described. The process was used to accurately determine the camera pointing for pictures returned by the Voyager 2 spacecraft at Neptune Encounter. Image navigation improves knowledge of the pointing of an imaging instrument at a particular epoch by correlating the spacecraft-relative locations of target bodies in inertial space with the locations of their images in a picture taken at that epoch. More than 8,500 pictures returned by Voyager 2 at Neptune were processed in near real time. The results were used in several applications, including improving pointing knowledge for nonimaging instruments ('C-smithing'), making 'Neptune, the Movie', and providing immediate access to geometrical quantities similar to those traditionally supplied in the Supplementary Experiment Data Record.

  3. Development of an Improved Magneto-Optic/Eddy-Current Imager

    DOT National Transportation Integrated Search

    1997-04-01

    Magneto-optic/eddy-current imaging technology has been developed and approved for inspection of cracks in aging aircraft. This relatively new nondestructive test method gives the inspector the ability to quickly generate real-time eddy-current images...

  4. Comparison of turbulence mitigation algorithms

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  5. Sparsity-based image monitoring of crystal size distribution during crystallization

    NASA Astrophysics Data System (ADS)

    Liu, Tao; Huo, Yan; Ma, Cai Y.; Wang, Xue Z.

    2017-07-01

    To facilitate monitoring of crystal size distribution (CSD) during a crystallization process with an in-situ imaging system, a sparsity-based image analysis method is proposed for real-time implementation. To cope with image degradation arising from in-situ measurement, namely particle motion, solution turbulence, and uneven illumination background in the crystallizer, a sparse representation of each real-time captured crystal image is developed from an in-situ image dictionary established in advance, such that the noise components in the captured image can be efficiently removed. Subsequently, the edges of a crystal shape in a captured image are determined from the salience information defined on the denoised crystal images. These edges are used to derive a blur kernel for reconstruction of a denoised image, and a non-blind deconvolution algorithm is given for the real-time reconstruction. Consequently, image segmentation can be easily performed for evaluation of the CSD. The crystal image dictionary and blur kernels are updated in a timely manner according to the imaging conditions to improve the restoration efficiency. An experimental study on the cooling crystallization of α-type L-glutamic acid (LGA) demonstrates the effectiveness and merit of the proposed method.
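    Of the stages described, the final non-blind deconvolution is the most self-contained. A minimal frequency-domain Wiener-filter sketch, assuming the blur kernel is already known (the paper derives it from crystal-edge salience) and a scalar noise-to-signal ratio:

```python
import numpy as np

def wiener_deconvolve(image, kernel, nsr=1e-2):
    """Non-blind deconvolution by frequency-domain Wiener filtering.
    kernel: the (assumed known) blur kernel; nsr: assumed scalar
    noise-to-signal power ratio regularizing near-zero frequencies."""
    H = np.fft.fft2(kernel, s=image.shape)      # kernel transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)     # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))
```

As nsr approaches 0 this tends to the plain inverse filter; the regularization term is what keeps the in-situ measurement noise from being amplified at frequencies the blur suppresses.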

  6. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    NASA Astrophysics Data System (ADS)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax, without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interoccular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye-pupil diameter, thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with precisely the diffraction grating required to illuminate an individual virtual viewing slit; the sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time, with processors embedded in the VLSI chips computing the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. This paper will discuss the design details of a portable 3D display based on the integration of a diffractive optical element with a commercial off-the-shelf AMLCD. The diffractive optic contains several hundred thousand partial-pixel gratings and the AMLCD modulates the light diffracted by the gratings.

  7. Real-Time Nanoscopy by Using Blinking Enhanced Quantum Dots

    PubMed Central

    Watanabe, Tomonobu M.; Fukui, Shingo; Jin, Takashi; Fujii, Fumihiko; Yanagida, Toshio

    2010-01-01

    Superresolution optical microscopy (nanoscopy) is of current interest in many biological fields. Superresolution optical fluctuation imaging, which utilizes higher-order cumulants of temporal fluorescence fluctuations, is an excellent method for nanoscopy, as it requires neither complicated optics nor illumination schemes. However, it does need an impractical number of images for real-time observation. Here, we achieved real-time nanoscopy by modifying superresolution optical fluctuation imaging and enhancing the fluctuation of quantum dots. The quantum dots we developed blink more strongly than commercially available ones. The fluctuation of the blinking improved the resolution when a variance calculation was used for each pixel instead of a cumulant calculation. This enabled us to obtain microscopic images with 90 nm spatial and 80 ms temporal resolution using a conventional fluorescence microscope without any additional optics or devices. PMID:20923631
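    The key computational idea, a per-pixel temporal variance instead of a higher-order cumulant, is compact enough to sketch. For a single independently blinking emitter, the variance image is proportional to the squared point-spread function, i.e. an effectively sqrt(2)-narrower Gaussian spot:

```python
import numpy as np

def variance_image(stack):
    """Per-pixel temporal variance of an image stack (frames first).
    With independently blinking emitters, each emitter contributes
    var(blinking) * PSF^2 per pixel, so overlapping spots separate
    better than in the mean image (2nd-order SOFI at zero time lag)."""
    return stack.var(axis=0)
```

Unlike higher-order cumulants, the variance converges with relatively few frames, which is what makes the reported 80 ms temporal resolution feasible.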

  8. Image quality in real-time teleultrasound of infant hip exam over low-bandwidth internet links: a transatlantic feasibility study.

    PubMed

    Martinov, Dobrivoje; Popov, Veljko; Ignjatov, Zoran; Harris, Robert D

    2013-04-01

    The evolution of communication systems, especially internet-based technologies, has probably affected radiology more than any other medical specialty. The tremendous increase in internet bandwidth has enabled a true revolution in image transmission and easy remote viewing of static images and real-time video streams. Previous reports of real-time telesonography, such as systems developed for emergency situations and humanitarian work, rely on high compression of the images used by a remote sonologist to guide and supervise an inexperienced examiner. We believe that remote sonology could also be utilized in teleultrasound examination of the infant hip. We tested the feasibility of a low-cost teleultrasound system for the infant hip and performed data analysis on the transmitted and original images. Transmission of data was accomplished with Remote Ultrasound (RU), a software package specifically designed for teleultrasound transmission over limited internet bandwidth. While image analysis of image pairs revealed a statistically significant loss of information, panel evaluation failed to recognize any clinical difference between the original saved and transmitted still images.

  9. Real-time cardiovascular magnetic resonance at 1.5 T using balanced SSFP and 40 ms resolution

    PubMed Central

    2013-01-01

    Background While cardiovascular magnetic resonance (CMR) commonly employs ECG-synchronized cine acquisitions with balanced steady-state free precession (SSFP) contrast at 1.5 T, recent developments at 3 T demonstrate significant potential for T1-weighted real-time imaging at high spatiotemporal resolution using undersampled radial FLASH. The purpose of this work was to combine both ideas and to evaluate a corresponding real-time CMR method at 1.5 T with SSFP contrast. Methods Radial gradient-echo sequences with fully balanced gradients and at least 15-fold undersampling were implemented on two CMR systems with different gradient performance. Image reconstruction by regularized nonlinear inversion (NLINV) was performed offline and resulted in real-time SSFP CMR images at a nominal resolution of 1.8 mm and with acquisition times of 40 ms. Results Studies of healthy subjects demonstrated technical feasibility in terms of robustness and general image quality. Clinical applicability with access to quantitative evaluations (e.g., ejection fraction) was confirmed by preliminary applications to 27 patients with typical indications for CMR including arrhythmias and abnormal wall motion. Real-time image quality was slightly lower than for cine SSFP recordings, but considered diagnostic in all cases. Conclusions Extending conventional cine approaches, real-time radial SSFP CMR with NLINV reconstruction provides access to individual cardiac cycles and allows for studies of patients with irregular heartbeat. PMID:24028285

  10. High resolution, wide field of view, real time 340GHz 3D imaging radar for security screening

    NASA Astrophysics Data System (ADS)

    Robertson, Duncan A.; Macfarlane, David G.; Hunter, Robert I.; Cassidy, Scott L.; Llombart, Nuria; Gandini, Erio; Bryllert, Tomas; Ferndahl, Mattias; Lindström, Hannu; Tenhunen, Jussi; Vasama, Hannu; Huopana, Jouni; Selkälä, Timo; Vuotikka, Antti-Jussi

    2017-05-01

    The EU FP7 project CONSORTIS (Concealed Object Stand-Off Real-Time Imaging for Security) is developing a demonstrator system for next generation airport security screening which will combine passive and active submillimeter wave imaging sensors. We report on the development of the 340 GHz 3D imaging radar which achieves high volumetric resolution over a wide field of view with high dynamic range and a high frame rate. A sparse array of 16 radar transceivers is coupled with high speed mechanical beam scanning to achieve a field of view of 1 x 1 x 1 m3 and a 10 Hz frame rate.

  11. WE-EF-303-05: Development and Commissioning of Real-Time Imaging Function for Respiratory-Gated Spot-Scanning Proton Beam Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miyamoto, N; Takao, S; Matsuura, T

    2015-06-15

    Purpose: To realize real-time-image-gated proton beam therapy (RGPT) for treating mobile tumors. Methods: The rotating gantry of the spot-scanning proton beam therapy system has been designed to carry two x-ray fluoroscopy devices that enable real-time imaging of internal fiducial markers during respiration. The three-dimensional position of a fiducial marker located near the tumor can be calculated from the fluoroscopic images obtained from orthogonal directions, and the therapeutic beam is gated on only when the fiducial marker is within the predefined gating window. The image acquisition rate can be selected from discrete values ranging from 0.1 Hz to 30 Hz. In order to confirm the effectiveness of RGPT and apply it clinically, clinical commissioning was conducted. Commissioning tests were categorized into three main parts: geometric accuracy, temporal accuracy and dosimetric evaluation. Results: The developed real-time imaging function has been installed and its basic performance confirmed. In the evaluation of geometric accuracy, the coincidence of the three-dimensional treatment room coordinate system and the imaging coordinate system was confirmed to be within 1 mm. Fiducial markers (gold sphere and coil) could be tracked in a simulated clinical condition using an anthropomorphic chest phantom. In the evaluation of temporal accuracy, the latency from image acquisition to the gate on/off signal was about 60 msec in a typical case. In the dosimetric evaluation, treatment beam characteristics, including beam irradiation position and dose output, were stable in gated irradiation. Homogeneity indices for the mobile target were 0.99 (static), 0.89 (without gating, motion parallel to the direction of scan), 0.75 (without gating, perpendicular), 0.98 (with gating, parallel) and 0.93 (with gating, perpendicular). Dose homogeneity to the mobile target can be maintained in RGPT.
Conclusion: The real-time imaging function utilizing x-ray fluoroscopy has been developed and commissioned successfully in order to realize RGPT. Funding Support: This research was partially supported by the Japan Society for the Promotion of Science (JSPS) through the FIRST Program. Conflict of Interest: Prof. Shirato has research funding from Hitachi Ltd, Mitsubishi Heavy Industries Ltd and Shimadzu Corporation.
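    The gating logic itself reduces to a point-in-window test on the triangulated marker position. A hypothetical sketch (the clinical system's actual window geometry and latency compensation are not specified here):

```python
import numpy as np

def gate_signal(marker_xyz, window_center, window_radius):
    """Hypothetical gating rule: enable the beam only while the
    fiducial marker's 3D position (triangulated from orthogonal
    fluoroscopy views) lies inside a spherical gating window.
    All geometry here is an illustrative assumption."""
    d = np.linalg.norm(np.asarray(marker_xyz, dtype=float)
                       - np.asarray(window_center, dtype=float))
    return bool(d <= window_radius)
```

In practice, the roughly 60 msec imaging-to-gate latency reported above would have to be absorbed by the window margin or by motion prediction, which is part of what the temporal-accuracy commissioning quantifies.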

  12. Real-time Flare Detection in Ground-Based Hα Imaging at Kanzelhöhe Observatory

    NASA Astrophysics Data System (ADS)

    Pötzi, W.; Veronig, A. M.; Riegler, G.; Amerstorfer, U.; Pock, T.; Temmer, M.; Polanec, W.; Baumgartner, D. J.

    2015-03-01

    Kanzelhöhe Observatory (KSO) regularly performs high-cadence full-disk imaging of the solar chromosphere in the Hα and Ca ii K spectral lines as well as of the solar photosphere in white light. In the frame of ESA's (European Space Agency) Space Situational Awareness (SSA) program, a new system for real-time Hα data provision and automatic flare detection was developed at KSO. The data and the events detected are published in near real-time at ESA's SSA Space Weather portal (http://swe.ssa.esa.int/web/guest/kso-federated). In this article, we describe the Hα instrument, the image-recognition algorithms we developed, and their implementation into the KSO Hα observing system. We also present the evaluation results of the real-time data provision and flare detection for a period of five months. The Hα data provision worked for 99.96 % of the images, with a mean time lag of four seconds between image recording and online provision. Within the given criteria for the automatic image-recognition system (at least three Hα images are needed for a positive detection), all flares with an area ≥ 50 micro-hemispheres that were located within 60° of the solar center and occurred during the KSO observing times were detected, 87 events in total. The automatically determined flare importance and brightness classes were correct in ~85 % of cases. The mean flare positions in heliographic longitude and latitude were correct to within ~1°. The median of the absolute differences between the flare start and peak times from the automatic detections and the official NOAA (and KSO) visual flare reports was 3 min (1 min).
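    The evaluation criteria quoted above (area ≥ 50 micro-hemispheres, within 60° of disk center) amount to a simple event filter. A sketch over a hypothetical event-record format, purely to make the selection rule explicit:

```python
def select_flares(events, min_area=50.0, max_angle=60.0):
    """Apply the KSO evaluation criteria: keep flares with area
    >= 50 micro-hemispheres located within 60 degrees of disk
    center. `events` is a hypothetical list of dicts with 'area'
    (micro-hemispheres) and 'angle' (heliocentric angle, degrees)."""
    return [e for e in events
            if e["area"] >= min_area and e["angle"] <= max_angle]
```

The angular cut matters because foreshortening near the limb makes flare areas and positions unreliable, so events there are excluded from the detection statistics.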

  13. Infrared imagery acquisition process supporting simulation and real image training

    NASA Astrophysics Data System (ADS)

    O'Connor, John

    2012-05-01

    The increasing use of infrared sensors requires the development of advanced infrared training and simulation tools to meet current Warfighter needs. To prepare the force effectively and avoid negative training, training and simulation images must be both realistic and consistent with each other. The US Army Night Vision and Electronic Sensors Directorate has addressed this challenge by developing and implementing infrared image collection methods that meet the needs of both real-image trainers and real-time simulations. The author presents innovative methods for the collection of high-fidelity digital infrared images and the associated equipment and environmental standards. The collected images are the foundation for the US Army and USMC Recognition of Combat Vehicles (ROC-V) real-image combat ID training, and also support simulations including the Night Vision Image Generator and Synthetic Environment Core. The characteristics, consistency, and quality of these images have contributed to the success of these and other programs. To date, this method has been employed to generate signature sets for over 350 vehicles. The needs of future physics-based simulations will also be met by this data. NVESD's ROC-V image database will support the development of training and simulation capabilities as Warfighter needs evolve.

  14. Red fluorescent probes for real-time imaging of the cell cycle by dynamic monitoring of the nucleolus and chromosome.

    PubMed

    Wang, Kang-Nan; Chao, Xi-Juan; Liu, Bing; Zhou, Dan-Jie; He, Liang; Zheng, Xiao-Hui; Cao, Qian; Tan, Cai-Ping; Zhang, Chen; Mao, Zong-Wan

    2018-03-08

    Two cationic molecular rotors, 1 and 2, capable of real-time cell-cycle imaging through specific dynamic monitoring of nucleolus and chromosome changes, were developed. A further study shows that the fluorescence enhancements in the nucleolus and chromosome are attributed to the combined effects of nucleic acid binding and the high condensation of the nucleolus and chromosome.

  15. Real Time Computer Graphics From Body Motion

    NASA Astrophysics Data System (ADS)

    Fisher, Scott; Marion, Ann

    1983-10-01

    This paper focuses on the recent emergence and development of real-time, computer-aided body-tracking technologies and their use in combination with various computer graphics imaging techniques. The convergence of these technologies in our research results in an interactive display environment in which multiple representations of a given body motion can be displayed in real time. Entertainment applications are illustrated by the development of a real-time, interactive stage set in which dancers can 'draw' with their bodies as they move through the space of the stage, or manipulate virtual elements of the set with their gestures.

  16. All-weather ice information system for Alaskan arctic coastal shipping

    NASA Technical Reports Server (NTRS)

    Gedney, R. T.; Jirberg, R. J.; Schertler, R. J.; Mueller, R. A.; Chase, T. L.; Kramarchuk, I.; Nagy, L. A.; Hanlon, R. A.; Mark, H.

    1977-01-01

    A near real-time ice information system designed to aid arctic coastal shipping along the Alaskan North Slope is described. The system utilizes an X-band Side-Looking Airborne Radar (SLAR) mounted aboard a U.S. Coast Guard HC-130B aircraft. Radar mapping procedures showing the type, areal distribution and concentration of ice cover were developed. In order to guide vessel operational movements, near real-time SLAR image data were transmitted directly from the SLAR aircraft to Barrow, Alaska and to the U.S. Coast Guard icebreaker Glacier. In addition, SLAR image data were transmitted in real time to Cleveland, Ohio via the NOAA-GOES satellite. Radar images developed in Cleveland were subsequently transmitted by facsimile to the U.S. Navy's Fleet Weather Facility in Suitland, Maryland for use in ice forecasting, and also back to Barrow via the Communications Technology Satellite as a demonstration.

  17. Real-time "x-ray vision" for healthcare simulation: an interactive projective overlay system to enhance intubation training and other procedural training.

    PubMed

    Samosky, Joseph T; Baillargeon, Emma; Bregman, Russell; Brown, Andrew; Chaya, Amy; Enders, Leah; Nelson, Douglas A; Robinson, Evan; Sukits, Alison L; Weaver, Robert A

    2011-01-01

    We have developed a prototype of a real-time, interactive projective overlay (IPO) system that creates an augmented-reality display of a medical procedure directly on the surface of a full-body mannequin human simulator. These images approximate the appearance of both anatomic structures and instrument activity occurring within the body. The key innovation of the current work is sensing the position and motion of an actual device (such as an endotracheal tube) inserted into the mannequin, and using the sensed position to control projected video images portraying the internal appearance of the same devices and relevant anatomic structures. The images are projected in correct registration onto the surface of the simulated body. As an initial practical prototype to test this technique, we have developed a system permitting real-time visualization of the intra-airway position of an endotracheal tube during simulated intubation training.

  18. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

    Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate, respectively. Field location errors (i.e., tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors or individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.
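
Two of the three comparison metrics, field size and field location, can be illustrated with a toy verification routine over binary aperture masks. The masks, tolerances, and function names below are hypothetical stand-ins for the in-house software described above (field shape is omitted for brevity):

```python
def field_metrics(mask, pixel_area_cm2=0.01):
    """Aperture area (cm^2) and centroid (row, col) of a binary aperture mask."""
    pixels = [(r, c) for r, row in enumerate(mask) for c, v in enumerate(row) if v]
    n = len(pixels)
    area = n * pixel_area_cm2
    centroid = (sum(r for r, _ in pixels) / n, sum(c for _, c in pixels) / n)
    return area, centroid

def verify_frame(measured, planned, area_tol_cm2=1.25, centroid_tol_px=3.0):
    """Flag a delivery error when the measured aperture drifts from the plan."""
    area_m, cen_m = field_metrics(measured)
    area_p, cen_p = field_metrics(planned)
    drift = ((cen_m[0] - cen_p[0]) ** 2 + (cen_m[1] - cen_p[1]) ** 2) ** 0.5
    return abs(area_m - area_p) <= area_tol_cm2 and drift <= centroid_tol_px
```

A mask shifted by one pixel column keeps its area but moves its centroid, so it passes the size check while failing a tight location tolerance.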

  19. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    PubMed

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

    Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed retrograde cardioplegia, primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV. This deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.

  20. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redler, G; Cifter, G; Templeton, A

    2016-06-15

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units (MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data is converted to an MCNP input geometry accounting for different tissue compositions and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue-equivalent objects (water, lung, bone) matches within the uncertainty (~3%). Lung tumor phantom images agree as well; specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating the quantum noise in experimental images to simulated patient images shows that scatter imaging of lung tumors can provide images in as little as 0.5 seconds with CNR ~2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated patient images demonstrate the clinical utility of scatter imaging for real-time tumor tracking during lung SBRT.
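
The contrast-to-noise ratio used to judge these images can be computed from two regions of interest. This is the standard definition (mean contrast over background noise), shown here with made-up sample values rather than the paper's data:

```python
from statistics import mean, pstdev

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean contrast| over background noise."""
    return abs(mean(signal_roi) - mean(background_roi)) / pstdev(background_roi)
```

For example, a signal region averaging 11 against a background averaging 2 with a standard deviation of ~0.82 gives a CNR of about 11.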

  1. Solid-State Multi-Sensor Array System for Real Time Imaging of Magnetic Fields and Ferrous Objects

    NASA Astrophysics Data System (ADS)

    Benitez, D.; Gaydecki, P.; Quek, S.; Torres, V.

    2008-02-01

    In this paper, the development of a solid-state-sensor-based system for real-time imaging of magnetic fields and ferrous objects is described. The system comprises 1089 magneto-inductive solid-state sensors arranged in a 2D matrix of 33×33 rows and columns, equally spaced to cover an area of approximately 300 mm × 300 mm. The sensor array is located within a large current-carrying coil. Data are sampled from the sensors by several DSP control units and streamed to a host computer via a USB 2.0 interface, where the image is generated and displayed at a rate of 20 frames per minute. The development of the instrumentation has been complemented by extensive numerical modeling of field distribution patterns using boundary element methods. The system was originally intended for deployment in the non-destructive evaluation (NDE) of reinforced concrete. Nevertheless, the system is not only capable of producing real-time, live video images of a metal target embedded within any opaque medium; it also allows the real-time visualization and determination of the magnetic field distribution emitted by either permanent magnets or current-carrying geometries. Although this system was initially developed for the NDE arena, it could also have potential applications in many other fields, including medicine, security, manufacturing, quality assurance and design involving magnetic fields.

  2. Adaptive optics with pupil tracking for high resolution retinal imaging

    PubMed Central

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-01-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics. PMID:22312577

  3. Adaptive optics with pupil tracking for high resolution retinal imaging.

    PubMed

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.
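
The control idea in the two records above, steering the deformable mirror from pupil-tracker measurements rather than a wavefront sensor, can be sketched as a simple integrator loop that nudges the mirror against the measured pupil drift. The class, gain, and units below are illustrative assumptions, not the authors' controller:

```python
class PupilTrackingLoop:
    """Minimal integrator loop: drive mirror tip/tilt against pupil drift.

    Hypothetical sketch: real AO controllers map pupil displacement through a
    calibrated influence model; here a scalar gain stands in for that mapping.
    """

    def __init__(self, gain=0.5):
        self.gain = gain
        self.tip = 0.0   # arbitrary mirror command units
        self.tilt = 0.0

    def update(self, pupil_dx_mm, pupil_dy_mm):
        """One control step per pupil-tracker measurement; returns new commands."""
        self.tip -= self.gain * pupil_dx_mm
        self.tilt -= self.gain * pupil_dy_mm
        return self.tip, self.tilt
```

Feeding the loop a constant drift accumulates a growing counter-command, the integrator behavior that keeps the correction locked to slow eye movements.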

  4. Volumetric Real-Time Imaging Using a CMUT Ring Array

    PubMed Central

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods—flash, classic phased array (CPA), and synthetic phased array (SPA)—were used in the study. For SPA imaging, two techniques to improve the image quality—Hadamard coding and aperture weighting—were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming. PMID:22718870

  5. Volumetric real-time imaging using a CMUT ring array.

    PubMed

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N; O'Donnell, Matthew; Sahn, David J; Khuri-Yakub, Butrus T

    2012-06-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods--flash, classic phased array (CPA), and synthetic phased array (SPA)--were used in the study. For SPA imaging, two techniques to improve the image quality--Hadamard coding and aperture weighting--were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming.
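
Hadamard coding, mentioned in both records above, fires all elements on each transmit with +1/−1 polarity patterns taken from a Hadamard matrix, then recovers the per-element responses by inverting that matrix (since H·Hᵀ = n·I), which raises SNR relative to firing one element at a time. A small pure-Python sketch of the encode/decode round trip, not the authors' implementation:

```python
def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-v for v in row] for row in H]
    return H

def matvec(M, v):
    """Plain matrix-vector product over lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def decode(recorded, H):
    """Recover per-element responses: x = H^T r / n, using H^T H = n I."""
    n = len(H)
    Ht = [list(col) for col in zip(*H)]
    return [s / n for s in matvec(Ht, recorded)]
```

Encoding a vector of element responses with `matvec(H, x)` and decoding with `decode` returns the original responses exactly (up to floating-point error).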

  6. Spherical visual system for real-time virtual reality and surveillance

    NASA Astrophysics Data System (ADS)

    Chen, Su-Shing

    1998-12-01

    A spherical visual system has been developed for full-field, web-based surveillance, virtual reality, and roundtable video conferencing. The hardware is a CycloVision parabolic lens mounted on a video camera. The software was developed at the University of Missouri-Columbia. The mathematical model was developed by Su-Shing Chen and Michael Penna in the 1980s. The parabolic image, capturing the full (360-degree) hemispherical field of view (except the north pole), is transformed into the spherical model of Chen and Penna. In the spherical model, images are invariant under the rotation group and are easily mapped to the image plane tangent to any point on the sphere. The projected image is exactly what the usual camera produces at that angle. Thus a real-time, full-spherical-field video camera is developed using two parabolic lenses.

  7. Real-Time Spaceborne Synthetic Aperture Radar Float-Point Imaging System Using Optimized Mapping Methodology and a Multi-Node Parallel Accelerating Technique

    PubMed Central

    Li, Bingyi; Chen, Liang; Yu, Wenyue; Xie, Yizhuang; Bian, Mingming; Zhang, Qingjun; Pang, Long

    2018-01-01

    With the development of satellite load technology and very large-scale integrated (VLSI) circuit technology, on-board real-time synthetic aperture radar (SAR) imaging systems have facilitated rapid response to disasters. A key goal of on-board SAR imaging system design is to achieve high real-time processing performance under severe size, weight, and power consumption constraints. This paper presents a multi-node prototype system for real-time SAR imaging processing. We decompose the commonly used chirp scaling (CS) SAR imaging algorithm into two parts according to their computing features. The linearization and logic-memory optimum allocation methods are adopted to realize the nonlinear part in a reconfigurable structure, and the two-part bandwidth balance method is used to realize the linear part. Thus, float-point SAR imaging processing can be integrated into a single Field Programmable Gate Array (FPGA) chip instead of relying on distributed technologies. A single processing node requires 10.6 s and consumes 17 W to focus 25-km-swath-width, 5-m-resolution stripmap SAR raw data with a granularity of 16,384 × 16,384. The design methodology of the multi-FPGA parallel accelerating system under the real-time principle is introduced. As a proof of concept, a prototype with four processing nodes and one master node is implemented using a Xilinx xc6vlx315t FPGA. The weight and volume of one single machine are 10 kg and 32 cm × 24 cm × 20 cm, respectively, and the power consumption is under 100 W. The real-time performance of the proposed design is demonstrated on Chinese Gaofen-3 stripmap continuous imaging. PMID:29495637

  8. Ultrasound Picture Archiving And Communication Systems

    NASA Astrophysics Data System (ADS)

    Koestner, Ken; Hottinger, C. F.

    1982-01-01

    The ideal ultrasonic image communication and storage system must be flexible in order to optimize speed and minimize storage requirements. The various ultrasonic imaging modalities differ considerably in data volume and speed requirements. Static imaging, for example B-scanning, involves acquisition of a large amount of data that is averaged or accumulated in a desired manner. The image is then frozen in image memory before transfer and storage. Images are commonly a 512 x 512 point array, each point 6 bits deep. Transfer of such an image over a serial line at 9600 baud would require about three minutes. Faster transfer times are possible; for example, we have developed a parallel image transfer system using direct memory access (DMA) that reduces the time to 16 seconds. Data in this format requires 256K bytes for storage. Data compression can be utilized to reduce these requirements. Real-time imaging has much more stringent requirements for speed and storage. The amount of actual data per frame in real-time imaging is reduced due to physical limitations on ultrasound. For example, 100 scan lines (480 points long, 6 bits deep) can be acquired during a frame at a 30 per second rate. Transmitting and saving this data at a real-time rate requires a transfer rate of 8.6 megabaud. A real-time archiving system would be complicated by the necessity of specialized hardware to interpolate between scan lines and to perform desirable greyscale manipulation on recall. Image archiving for cardiology and radiology would require data transfer at this high rate to preserve temporal (cardiology) and spatial (radiology) information.
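
The transfer-time, storage, and bandwidth figures quoted in this abstract are easy to check with a few lines of arithmetic:

```python
# Static image: 512 x 512 pixels, 6 bits deep
static_bits = 512 * 512 * 6
serial_seconds = static_bits / 9600   # ~164 s at 9600 baud, i.e. "about three minutes"
storage_bytes = 512 * 512             # one byte per pixel -> 256K bytes

# Real-time imaging: 100 scan lines x 480 points x 6 bits, 30 frames per second
realtime_baud = 100 * 480 * 6 * 30    # 8,640,000 bits/s -> ~8.6 megabaud
```

All three values reproduce the numbers in the abstract.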

  9. Real-time processing of interferograms for monitoring protein crystal growth on the Space Station

    NASA Technical Reports Server (NTRS)

    Choudry, A.; Dupuis, N.

    1988-01-01

    The possibility of using microscopic interferometric techniques to monitor the growth of protein crystals on the Space Station is studied. Digital image processing techniques are used to develop a system for the real-time analysis of microscopic interferograms of nucleation sites during protein crystal growth. Features of the optical setup and the image processing system are discussed and experimental results are presented.

  10. Additive Manufacturing Infrared Inspection

    NASA Technical Reports Server (NTRS)

    Gaddy, Darrell; Nettles, Mindy

    2015-01-01

    The Additive Manufacturing Infrared Inspection Task started the development of a real-time dimensional inspection technique and digital quality record for the additive manufacturing process using infrared camera imaging and processing techniques. This project will benefit additive manufacturing by providing real-time inspection of internal geometry that is not currently possible, and will reduce the time and cost of additively manufactured parts through automated real-time dimensional inspections that eliminate the need for post-production inspection.

  11. A Flexible Annular-Array Imaging Platform for Micro-Ultrasound

    PubMed Central

    Qiu, Weibao; Yu, Yanyan; Chabok, Hamid Reza; Liu, Cheng; Tsang, Fu Keung; Zhou, Qifa; Shung, K. Kirk; Zheng, Hairong; Sun, Lei

    2013-01-01

    Micro-ultrasound is an invaluable imaging tool for many clinical and preclinical applications requiring high resolution (approximately several tens of micrometers). Imaging systems for micro-ultrasound, including single-element imaging systems and linear-array imaging systems, have been developed extensively in recent years. Single-element systems are cheaper, but linear-array systems give much better image quality at higher expense. Annular-array-based systems provide a third alternative, striking a balance between image quality and expense. This paper presents the development of a novel programmable and real-time annular-array imaging platform for micro-ultrasound. It supports multi-channel dynamic beamforming techniques for large-depth-of-field imaging. The major image-processing algorithms were implemented with novel field-programmable gate-array technology for high speed and flexibility. Real-time imaging was achieved by fast processing algorithms and a high-speed data transfer interface. The platform utilizes a printed-circuit-board scheme incorporating state-of-the-art electronics for compactness and cost effectiveness. Extensive tests including hardware, algorithm, wire phantom, and tissue-mimicking phantom measurements were conducted to demonstrate the good performance of the platform. The calculated contrast-to-noise ratios (CNR) of the tissue phantom measurements were higher than 1.2 over the 3.8 to 8.7 mm imaging depth range. The platform supported more than 25 images per second for real-time image acquisition. The depth of field showed an approximately 2.5-fold improvement compared to single-element transducer imaging. PMID:23287923

  12. Micromachined silicon parallel acoustic delay lines as time-delayed ultrasound detector array for real-time photoacoustic tomography

    NASA Astrophysics Data System (ADS)

    Cho, Y.; Chang, C.-C.; Wang, L. V.; Zou, J.

    2016-02-01

    This paper reports the development of a new 16-channel parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT). The PADLs were directly fabricated from single-crystalline silicon substrates using deep reactive ion etching. Compared with other acoustic delay lines (e.g., optical fibers), the micromachined silicon PADLs offer higher acoustic transmission efficiency, smaller form factor, easier assembly, and mass production capability. To demonstrate its real-time photoacoustic imaging capability, the silicon PADL array was interfaced with one single-element ultrasonic transducer followed by one channel of data acquisition electronics to receive 16 channels of photoacoustic signals simultaneously. A PAT image of an optically-absorbing target embedded in an optically-scattering phantom was reconstructed, which matched well with the actual size of the imaged target. Because the silicon PADL array allows a signal-to-channel reduction ratio of 16:1, it could significantly simplify the design and construction of ultrasonic receivers for real-time PAT.
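
Because each delay line imposes a known, staggered delay, the 16 photoacoustic signals arrive on the single transducer channel separated in time and can be separated again by simple time gating. A toy demultiplexing sketch with hypothetical sample counts (the real system's delays and windows differ):

```python
def demultiplex(trace, n_channels, window_samples, base_delay_samples=0):
    """Split one serialized trace into per-channel segments by known delays.

    Sketch of the 16:1 signal-to-channel reduction idea: channel k occupies
    the window starting at base_delay + k * window in the combined trace.
    """
    channels = []
    for ch in range(n_channels):
        start = base_delay_samples + ch * window_samples
        channels.append(trace[start:start + window_samples])
    return channels
```

Gating a 32-sample trace into four 8-sample windows recovers each channel's segment in order.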

  13. Real-time high-velocity resolution color Doppler OCT

    NASA Astrophysics Data System (ADS)

    Westphal, Volker; Yazdanfar, Siavash; Rollins, Andrew M.; Izatt, Joseph A.

    2001-05-01

    Color Doppler optical coherence tomography (CDOCT, also called optical Doppler tomography) is a noninvasive optical imaging technique that allows micron-scale physiological flow mapping simultaneous with morphological OCT imaging. Current systems for real-time endoscopic optical coherence tomography (EOCT) would be enhanced by the capability to visualize sub-surface blood flow, for applications in early cancer diagnosis and the management of bleeding ulcers. Unfortunately, previous implementations of CDOCT have either been too computationally expensive (employing Fourier or Hilbert transform techniques) to allow real-time imaging of flow, or have been restricted to imaging of excessively high flow velocities when used in real time. We have developed a novel Doppler OCT signal-processing strategy capable of imaging physiological flow rates in real time. This strategy employs cross-correlation processing of sequential A-scans in an EOCT image, as opposed to the autocorrelation processing described previously. To measure Doppler shifts in the kHz range using this technique, it was necessary to stabilize the EOCT interferometer center frequency, eliminate parasitic phase noise, and construct a digital cross-correlation unit able to correlate signals of megahertz bandwidth at a fixed lag of up to a few ms. The performance of the color Doppler OCT system was evaluated in a flow phantom, demonstrating a minimum detectable flow velocity of ~0.8 mm/s at a data acquisition rate of 8 images/second (480 A-scans/image) using a handheld probe, including freehand operation on dynamic flow. Flow was also detectable in a phantom in combination with a clinically usable endoscopic probe.
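
The cross-correlation strategy estimates the Doppler shift at each depth from the phase of the correlation between sequential A-scans. A minimal sketch on complex analytic signals with synthetic data (this illustrates the phase-estimation principle, not the authors' hardware correlator):

```python
import cmath

def doppler_shift_hz(prev_scan, curr_scan, line_rate_hz):
    """Doppler frequency from the phase of the cross-correlation of two A-scans.

    The phase advance per line interval, scaled by the A-scan line rate,
    gives the Doppler frequency in Hz.
    """
    corr = sum(c * p.conjugate() for p, c in zip(prev_scan, curr_scan))
    return cmath.phase(corr) * line_rate_hz / (2 * cmath.pi)

# Demo: two A-scans differing by a 0.2 rad bulk phase shift, 1 kHz line rate
prev = [cmath.exp(1j * 0.1 * n) for n in range(64)]
curr = [z * cmath.exp(1j * 0.2) for z in prev]
estimated_hz = doppler_shift_hz(prev, curr, 1000.0)   # ~31.83 Hz
```

The recovered frequency equals 0.2 rad x 1000 Hz / 2π, as expected for a uniform phase shift.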

  14. Real time thermal imaging for analysis and control of crystal growth by the Czochralski technique

    NASA Technical Reports Server (NTRS)

    Wargo, M. J.; Witt, A. F.

    1992-01-01

    A real time thermal imaging system with temperature resolution better than +/- 0.5 C and spatial resolution of better than 0.5 mm has been developed. It has been applied to the analysis of melt surface thermal field distributions in both Czochralski and liquid encapsulated Czochralski growth configurations. The sensor can provide single/multiple point thermal information; a multi-pixel averaging algorithm has been developed which permits localized, low noise sensing and display of optical intensity variations at any location in the hot zone as a function of time. Temperature distributions are measured by extraction of data along a user selectable linear pixel array and are simultaneously displayed, as a graphic overlay, on the thermal image.
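
A multi-pixel averaging step of the kind described, averaging a small neighborhood around a user-selected point to suppress noise, can be sketched as follows (the window size and edge-clipping behavior are assumptions, not the authors' algorithm):

```python
def local_average(frame, row, col, half_width=1):
    """Mean over a (2*half_width+1)^2 neighborhood, clipped at the frame edges."""
    r0, r1 = max(0, row - half_width), min(len(frame), row + half_width + 1)
    values = [frame[r][c]
              for r in range(r0, r1)
              for c in range(max(0, col - half_width),
                             min(len(frame[r]), col + half_width + 1))]
    return sum(values) / len(values)
```

At an interior point this averages the full 3x3 window; at a corner only the pixels inside the frame contribute.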

  15. Miniature real-time intraoperative forward-imaging optical coherence tomography probe

    PubMed Central

    Joos, Karen M.; Shen, Jin-Hui

    2013-01-01

    Optical coherence tomography (OCT) has a tremendous global impact upon the ability to diagnose, treat, and monitor eye diseases. A miniature 25-gauge forward-imaging OCT probe with a disposable tip was developed for real-time intraoperative ocular imaging of posterior pole and peripheral structures to improve vitreoretinal surgery. The scanning range was 2 mm when the probe tip was held 3-4 mm from the tissue surface. The axial resolution was 4-6 µm and the lateral resolution was 25-35 µm. The probe was used to image cellophane tape and multiple ocular structures. PMID:24009997

  16. Real-time photoacoustic and ultrasound dual-modality imaging system facilitated with graphics processing unit and code parallel optimization.

    PubMed

    Yuan, Jie; Xu, Guan; Yu, Yao; Zhou, Yu; Carson, Paul L; Wang, Xueding; Liu, Xiaojun

    2013-08-01

    Photoacoustic tomography (PAT) offers structural and functional imaging of living biological tissue with highly sensitive optical absorption contrast and excellent spatial resolution comparable to medical ultrasound (US) imaging. We report the development of a fully integrated PAT and US dual-modality imaging system, which performs signal scanning, image reconstruction, and display for both photoacoustic (PA) and US imaging in a truly real-time manner. The back-projection (BP) algorithm for PA image reconstruction is optimized to reduce the computational cost and facilitate parallel computation on a state-of-the-art graphics processing unit (GPU) card. For the first time, PAT and US imaging of the same object can be conducted simultaneously and continuously at a real-time frame rate, presently limited by the laser repetition rate of 10 Hz. Noninvasive PAT and US imaging of human peripheral joints in vivo was achieved, demonstrating the satisfactory image quality realized with this system. A second experiment, simultaneous PAT and US imaging of contrast agent flowing through an artificial vessel, was conducted to verify the performance of this system for imaging fast biological events. The GPU-based image reconstruction software for this dual-modality system is open source and available for download from http://sourceforge.net/projects/patrealtime.
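    As a rough illustration of the delay-and-sum back-projection that the GPU implementation optimizes, here is a naive, unfiltered CPU sketch (the names, the 2-D geometry, and the absence of the usual filtering weights are our simplifications, not the authors' code):

    ```python
    import numpy as np

    def backproject(signals, sensor_xy, grid_xy, fs, c=1500.0):
        """Unfiltered delay-and-sum back-projection for photoacoustic imaging.

        signals   : (n_sensors, n_samples) recorded PA time series
        sensor_xy : (n_sensors, 2) sensor positions, metres
        grid_xy   : (n_pixels, 2) image pixel positions, metres
        fs        : sampling rate, Hz; c : speed of sound, m/s
        """
        n_sensors, n_samples = signals.shape
        image = np.zeros(len(grid_xy))
        for s in range(n_sensors):
            # time of flight from every pixel to this sensor, as a sample index
            d = np.linalg.norm(grid_xy - sensor_xy[s], axis=1)
            idx = np.clip(np.round(d / c * fs).astype(int), 0, n_samples - 1)
            image += signals[s, idx]               # delay and sum
        return image / n_sensors
    ```

    The per-pixel, per-sensor independence of the inner loop is what makes this algorithm map so naturally onto a GPU, as the abstract describes.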

  17. Novel snapshot hyperspectral imager for fluorescence imaging

    NASA Astrophysics Data System (ADS)

    Chandler, Lynn; Chandler, Andrea; Periasamy, Ammasi

    2018-02-01

    Hyperspectral imaging has emerged as a new technique for the identification and classification of biological tissue. Benefitting from recent developments in sensor technology, the new class of hyperspectral imagers can capture entire hypercubes in single-shot operation, showing great potential for real-time imaging in the biomedical sciences. This paper explores the use of a snapshot imager in fluorescence microscopy for the first time. Built around the latest imaging sensor, the snapshot imager is compact and attaches via C-mount to any commercially available light microscope. Using this setup, fluorescence hypercubes of several cells were generated, containing both spatial and spectral information. The fluorescence images were acquired in a single shot across the entire emission range from visible to near infrared (VIS-IR). The paper presents hypercubes obtained from example tissues (475-630 nm). This study demonstrates the potential of the approach for real-time monitoring in cell biology and biomedical applications.

  18. Switched Antenna Array Tile for Real-Time Microwave Imaging Aperture

    DTIC Science & Technology

    2016-06-26

    Switched Antenna Array Tile for Real-Time Microwave Imaging Aperture William F. Moulder, Janusz J. Majewski, Charles M. Coldwell, James D. Krieger...Fast Imaging Algorithm 10mm 250mm Switched Array Tile Fig. 1. Diagram of real-time imaging array, with fabricated antenna tile. except for antenna...formed. IV. CONCLUSIONS A switched array tile to be used in a real-time imaging aperture has been presented. Design and realization of the tile were

  19. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    NASA Astrophysics Data System (ADS)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aimed at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on rapid estimates of P-wave magnitude, which generally carry large uncertainties and suffer from the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. Subsequent magnitude updates even decreased to M6.3-6.6, and the estimate only stabilized at M8.1 after about two minutes. This consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of real-time analysis on the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture. In general, what had happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missing warnings, and should therefore play an important role in future tsunami early warning and earthquake rapid response systems.

  20. Real-time feedback control of twin-screw wet granulation based on image analysis.

    PubMed

    Madarász, Lajos; Nagy, Zsombor Kristóf; Hoffer, István; Szabó, Barnabás; Csontos, István; Pataki, Hajnalka; Démuth, Balázs; Szabó, Bence; Csorba, Kristóf; Marosi, György

    2018-06-04

    The present paper reports the first dynamic image analysis-based feedback control of a continuous twin-screw wet granulation process. Granulation of a blend of lactose and starch was selected as a model process. The size and size distribution of the obtained particles were successfully monitored by a process camera coupled with image analysis software developed by the authors. Validation of the developed system showed that the particle size analysis tool can determine the size of the granules with an error of less than 5 µm. The next step was to implement real-time feedback control of the process by adjusting the liquid feeding rate of the pump through a PC, based on the particle size results determined in real time. Once the feedback control was established, the system could correct for various real-life disturbances, creating a Process Analytically Controlled Technology (PACT) that guarantees real-time monitoring and control of granule quality. In the event of changes or adverse trends in particle size, the system automatically compensates for the effect of disturbances, ensuring proper product quality. This kind of quality assurance approach is especially important for continuous pharmaceutical technologies.
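    The feedback loop on the pump's liquid feeding rate can be illustrated with a minimal proportional controller; the gain, limits, and units below are hypothetical, since the abstract does not specify the actual control law:

    ```python
    def update_feed_rate(rate, size_measured, size_target, gain=0.002, lo=0.5, hi=5.0):
        """One step of a proportional feedback loop (a hedged sketch, not the
        authors' controller): if granules are too small, feed more liquid;
        if too large, reduce the pump's feed rate. Units are illustrative:
        rate in mL/min, sizes in micrometres."""
        error = size_target - size_measured          # positive -> granules too small
        new_rate = rate + gain * error               # proportional correction
        return min(max(new_rate, lo), hi)            # clamp to pump limits
    ```

    In the described system the "measurement" fed into such a loop is the granule size computed by the camera's image analysis, closing the loop in real time.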

  1. Optoacoustic imaging in five dimensions

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. L.; Gottschalk, Sven; Fehm, Thomas F.; Razansky, Daniel

    2015-03-01

    We report on an optoacoustic imaging system capable of acquiring volumetric multispectral optoacoustic data in real time. The system is based on simultaneous acquisition of optoacoustic signals from 256 different tomographic projections by means of a spherical matrix array. Thereby, volumetric reconstructions can be done at a high frame rate, limited only by the pulse repetition rate of the laser. The developed tomographic approach presents important advantages over previously reported systems that use scanning to attain volumetric optoacoustic data. First, dynamic processes, such as the biodistribution of optical biomarkers, can be monitored in the entire volume of interest. Second, out-of-plane and motion artifacts that could degrade image quality when imaging living specimens can be avoided. Finally, real-time 3D performance saves time required for experimental and clinical observations. The feasibility of optoacoustic imaging in five dimensions, i.e. real-time acquisition of volumetric datasets at multiple wavelengths, is reported. In this way, volumetric images of spectrally resolved chromophores are rendered in real time, offering an imaging performance unparalleled among current bio-imaging modalities. This performance is showcased by video-rate visualization of in vivo hemodynamic changes in the mouse brain and handheld visualization of blood oxygenation in deep human vessels. These new capabilities open prospects for translating optoacoustic technology into a high-performance imaging modality for biomedical research and clinical practice, with multiple applications envisioned, from cardiovascular and cancer diagnostics to neuroimaging and ophthalmology.

  2. Time multiplexing for increased FOV and resolution in virtual reality

    NASA Astrophysics Data System (ADS)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole. Each partial real image uses the full set of physical pixels available in the display. The partial real images are formed in succession and combine spatially and temporally into a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or in the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed, and several optical designs for achieving this time multiplexing scheme in a compact format are shown. The scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.

  3. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  4. HVS: an image-based approach for constructing virtual environments

    NASA Astrophysics Data System (ADS)

    Zhang, Maojun; Zhong, Li; Sun, Lifeng; Li, Yunhao

    1998-09-01

    Virtual reality systems can construct virtual environments that provide an interactive walkthrough experience. Traditionally, walkthrough is performed by modeling and rendering 3D computer graphics in real time. Despite the rapid advance of computer graphics techniques, the rendering engine usually places a limit on scene complexity and rendering quality. This paper presents an approach which uses real-world or synthesized images to compose a virtual environment. These images can be recorded by camera, or synthesized by off-line multispectral image processing of Landsat TM (Thematic Mapper) imagery and SPOT HRV imagery. They are digitally warped on-the-fly to simulate walking forward/backward, moving left/right, and 360-degree looking around. We have developed a system, HVS (Hyper Video System), based on these principles. HVS improves upon QuickTime VR and Surround Video in walking forward/backward.

  5. Development of real-time extensometer based on image processing

    NASA Astrophysics Data System (ADS)

    Adinanta, H.; Puranto, P.; Suryadi

    2017-04-01

    An extensometer system was developed using a high-definition web camera as the main sensor to track object position. The system applies digital image processing techniques to measure changes in object position. The position measurement is done in real time, so the system can directly show the actual position along both the x and y axes. In this research, the relation between pixel and object position changes was characterized. The system was tested by moving the target over a range of 20 cm in intervals of 1 mm. To verify long-run performance, stability, and linearity, continuous measurements on both axes were conducted for 83 hours. The results show that this image processing-based extensometer has both good stability and good linearity.
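    Camera-based position tracking of this kind can be sketched with brute-force template matching plus a calibrated pixel-to-millimetre scale (the function names and the calibration constant are illustrative; the paper's own tracking method is not detailed in the abstract):

    ```python
    import numpy as np

    def track_target(frame, template):
        """Return the (row, col) of the best template match in a frame,
        found by exhaustive sum-of-squared-differences search (slow but simple)."""
        th, tw = template.shape
        best_ssd, best_rc = np.inf, (0, 0)
        for r in range(frame.shape[0] - th + 1):
            for c in range(frame.shape[1] - tw + 1):
                ssd = np.sum((frame[r:r + th, c:c + tw] - template) ** 2)
                if ssd < best_ssd:
                    best_ssd, best_rc = ssd, (r, c)
        return best_rc

    def displacement_mm(rc_now, rc_ref, mm_per_pixel):
        """Convert a pixel displacement into millimetres via a calibrated scale."""
        return ((rc_now[1] - rc_ref[1]) * mm_per_pixel,   # x (columns)
                (rc_now[0] - rc_ref[0]) * mm_per_pixel)   # y (rows)
    ```

    The mm_per_pixel factor is exactly what the paper's 20 cm / 1 mm characterization sweep would establish.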

  6. Imaging of near-Earth space plasma.

    PubMed

    Mitchell, Cathryn N

    2002-12-15

    This paper describes the technique of imaging the ionosphere using tomographic principles. It reports on current developments and speculates on the future of this research area. Recent developments in computing and ionospheric measurement, together with the sharing of data via the internet, now allow us to envisage a time when high-resolution, real-time images and 'movies' of the ionosphere will be possible for radio communications planning. There is great potential to use such images for improving our understanding of the physical processes controlling the behaviour of the ionosphere. While real-time images and movies of the electron concentration are now almost possible, forecasting of ionospheric morphology is still in its early stages. It has become clear that the ionosphere cannot be considered as a system in isolation, and consequently new research projects to link together models of the solar-terrestrial system, including the Sun, solar wind, magnetosphere, ionosphere and thermosphere, are now being proposed. The prospect is now on the horizon of assimilating data from the entire solar-terrestrial system to produce a real-time computer model and 'space weather' forecast. The role of tomography in imaging beyond the ionosphere to include the whole near-Earth space-plasma realm is yet to be realized, and provides a challenging prospect for the future. Finally, exciting possibilities exist in applying such methods to image the atmospheres and ionospheres of other planets.

  7. Real-time Visualization and Quantification of Retrograde Cardioplegia Delivery using Near Infrared Fluorescent Imaging

    PubMed Central

    Rangaraj, Aravind T.; Ghanta, Ravi K.; Umakanthan, Ramanan; Soltesz, Edward G.; Laurence, Rita G.; Fox, John; Cohn, Lawrence H.; Bolman, R. M.; Frangioni, John V.; Chen, Frederick Y.

    2009-01-01

    Background and Aim of the Study Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. Methods A portable, intraoperative, real-time NIR imaging system was utilized. NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in 5 ex-vivo normal porcine hearts and in 5 ex-vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify distribution of fluorophore content. Results The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior left ventricle. This deficiency was compensated for with retrograde cardioplegia supplementation. Conclusions Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated. PMID:19016995

  8. Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation

    NASA Astrophysics Data System (ADS)

    Hakim, P. R.; Permala, R.; Jayani, A. P. S.

    2018-05-01

    The LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing images of Indonesian coverage. To improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. To increase the quantity of LAPAN-A3/IPB multispectral images captured, image acquisition can be executed in real-time mode from the LAPAN ground station in Bogor when the satellite passes over the west Indonesia region. This research analyses the performance of LAPAN-A3/IPB multispectral imager acquisition in real-time mode, in terms of image quality and quantity, under several assumed on-board and ground segment limitations. Results show that in real-time operation mode, the LAPAN-A3/IPB multispectral imager can produce twice as much image coverage as in recorded mode. However, images produced in real-time mode have slightly degraded quality due to the image compression process involved. Based on the analyses conducted in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that strictly do not allow any quality degradation of the images produced.

  9. A real-time device for converting Doppler ultrasound audio signals into fluid flow velocity

    PubMed Central

    Hogeman, Cynthia S.; Koch, Dennis W.; Krishnan, Anandi; Momen, Afsana; Leuenberger, Urs A.

    2010-01-01

    A Doppler signal converter has been developed to facilitate cardiovascular and exercise physiology research. This device directly converts audio signals from a clinical Doppler ultrasound imaging system into a real-time analog signal that accurately represents blood flow velocity and is easily recorded by any standard data acquisition system. This real-time flow velocity signal, when simultaneously recorded with other physiological signals of interest, permits the observation of transient flow response to experimental interventions in a manner not possible when using standard Doppler imaging devices. This converted flow velocity signal also permits a more robust and less subjective analysis of data in a fraction of the time required by previous analytic methods. This signal converter provides this capability inexpensively and requires no modification of either the imaging or data acquisition system. PMID:20173048
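    The audio-to-velocity conversion such a converter performs can be sketched in software; this is a simplification in which a spectral centroid stands in for the device's analog frequency estimator, and the transmit frequency and beam angle are assumed inputs (the abstract does not describe the converter's internal circuitry):

    ```python
    import numpy as np

    def doppler_velocity(audio, fs, f0, angle_deg, c=1540.0):
        """Estimate flow velocity (m/s) from a Doppler audio snippet.

        The intensity-weighted centroid of the magnitude spectrum stands in
        for the mean Doppler frequency, which the Doppler equation
        v = f_d * c / (2 * f0 * cos(theta)) maps to velocity. f0 is the
        ultrasound transmit frequency, theta the beam-to-flow angle, and
        c the assumed speed of sound in tissue.
        """
        spec = np.abs(np.fft.rfft(audio))
        freqs = np.fft.rfftfreq(len(audio), 1.0 / fs)
        f_mean = np.sum(freqs * spec) / np.sum(spec)   # spectral centroid, Hz
        return f_mean * c / (2.0 * f0 * np.cos(np.radians(angle_deg)))
    ```

    Run over short successive windows, this produces the continuous analog-style velocity trace that the converter records alongside other physiological signals.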

  10. Machine vision for real time orbital operations

    NASA Technical Reports Server (NTRS)

    Vinz, Frank L.

    1988-01-01

    Machine vision for automation and robotic operation of Space Station era systems has the potential for increasing the efficiency of orbital servicing, repair, assembly and docking tasks. A machine vision research project is described in which a TV camera is used for inputting visual data to a computer so that image processing may be achieved for real-time control of these orbital operations. This research has produced a technique which reduces computer memory requirements and greatly increases typical computational speed, such that it has the potential for development into a real-time orbital machine vision system. This technique is called AI BOSS (Analysis of Images by Box Scan and Syntax).

  11. Towards real time 2D to 3D registration for ultrasound-guided endoscopic and laparoscopic procedures.

    PubMed

    San José Estépar, Raúl; Westin, Carl-Fredrik; Vosburgh, Kirby G

    2009-11-01

    A method to register endoscopic and laparoscopic ultrasound (US) images in real time with pre-operative computed tomography (CT) data sets has been developed with the goal of improving diagnosis, biopsy guidance, and surgical interventions in the abdomen. The technique, which has the potential to operate in real time, is based on a new phase correlation technique: LEPART, which specifies the location of a plane in the CT data which best corresponds to the US image. Validation of the method was carried out using an US phantom with cyst regions and with retrospective analysis of data sets from animal model experiments. The phantom validation study shows that local translation displacements can be recovered for each US frame with a root mean squared error of 1.56 +/- 0.78 mm in less than 5 sec, using non-optimized algorithm implementations. A new method for multimodality (preoperative CT and intraoperative US endoscopic images) registration to guide endoscopic interventions was developed and found to be efficient using clinically realistic datasets. The algorithm is inherently capable of being implemented in a parallel computing system so that full real time operation appears likely.
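    The phase correlation idea underlying LEPART can be illustrated for the simplest pure-translation 2-D case (LEPART itself selects a best-matching CT plane; this sketch only shows how a shift falls out of the normalized cross-power spectrum):

    ```python
    import numpy as np

    def phase_correlation_shift(img_a, img_b):
        """Integer (dy, dx) such that np.roll(img_a, (dy, dx), axis=(0, 1))
        best matches img_b, recovered by phase correlation: the normalized
        cross-power spectrum inverse-transforms to a sharp peak at the shift."""
        fa = np.fft.fft2(img_a)
        fb = np.fft.fft2(img_b)
        cross = np.conj(fa) * fb
        cross /= np.abs(cross) + 1e-12          # keep only the phase
        corr = np.fft.ifft2(cross).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # indices past the midpoint wrap around to negative shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
    ```

    Because the estimate uses only FFTs and an argmax, it is cheap enough to run per US frame, consistent with the sub-5-second timings reported even for non-optimized implementations.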

  12. Real-time digital signal processing for live electro-optic imaging.

    PubMed

    Sasagawa, Kiyotaka; Kanno, Atsushi; Tsuchiya, Masahiro

    2009-08-31

    We present an imaging system that enables real-time magnitude and phase detection of modulated signals and its application to a Live Electro-optic Imaging (LEI) system, which realizes instantaneous visualization of RF electric fields. The real-time acquisition of magnitude and phase images of a modulated optical signal at 5 kHz is demonstrated by imaging with a Si-based high-speed CMOS image sensor and real-time signal processing with a digital signal processor. In the LEI system, RF electric fields are probed with light via an electro-optic crystal plate and downconverted to an intermediate frequency by parallel optical heterodyning, which can be detected with the image sensor. The artifacts caused by the optics and the image sensor characteristics are corrected by image processing. As examples, we demonstrate real-time visualization of electric fields from RF circuits.

  13. PixonVision real-time Deblurring Anisoplanaticism Corrector (DAC)

    NASA Astrophysics Data System (ADS)

    Hier, R. G.; Puetter, R. C.

    2007-09-01

    DigiVision, Inc. and PixonImaging LLC have teamed to develop a real-time Deblurring Anisoplanaticism Corrector (DAC) for the Army. The DAC measures the geometric image warp caused by anisoplanaticism and removes it to rectify and stabilize (dejitter) the incoming image. Each new geometrically corrected image field is combined into a running-average reference image. The image averager employs a higher-order filter that uses temporal bandpass information to help identify true motion of objects and thereby adaptively moderate the contribution of each new pixel to the reference image. This result is then passed to a real-time PixonVision video processor (see paper 6696-04; note that the DAC also first dehazes the incoming video), where additional blur from high-order seeing effects is removed, the image is spatially denoised, and contrast is adjusted in a spatially adaptive manner. We plan to implement the entire algorithm within a few large modern FPGAs on a circuit board for video use. Obvious applications lie within the DOD, surveillance and intelligence, and security and law enforcement communities. Prototype hardware is scheduled to be available in late 2008. To demonstrate the capabilities of the DAC, we present a software simulation of the algorithm applied to real atmosphere-corrupted video data collected by Sandia Labs.
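    A simplified stand-in for the motion-adaptive running-average reference image can be written as follows; the higher-order temporal bandpass filter is replaced here by a plain frame difference, and all parameter values are illustrative rather than the DAC's:

    ```python
    import numpy as np

    def update_reference(ref, frame, base_alpha=0.1, motion_thresh=0.2):
        """Update a running-average reference image, reducing each new pixel's
        contribution where a large temporal change suggests true object motion
        (a crude stand-in for the higher-order temporal filter described)."""
        diff = np.abs(frame - ref)
        # per-pixel blend weight: full base_alpha for static pixels,
        # tapering toward zero as the temporal difference grows
        alpha = base_alpha * np.clip(1.0 - diff / motion_thresh, 0.0, 1.0)
        return ref + alpha * (frame - ref)
    ```

    Static atmospheric jitter thus averages into the reference, while genuinely moving objects are largely excluded from it, which is the behaviour the abstract attributes to the image averager.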

  14. Development of Targeting UAVs Using Electric Helicopters and Yamaha RMAX

    DTIC Science & Technology

    2007-05-17

    including the QNX real-time operating system. The video overlay board is useful to display the onboard camera’s image with important information such as... real-time operating system. Fully utilizing the built-in multi-processing architecture with inter-process synchronization and communication

  15. Hyperspectral imaging for food processing automation

    NASA Astrophysics Data System (ADS)

    Park, Bosoon; Lawrence, Kurt C.; Windham, William R.; Smith, Doug P.; Feldner, Peggy W.

    2002-11-01

    This paper presents research results demonstrating that hyperspectral imaging can be used effectively for detecting feces (from the duodenum, ceca, and colon) and ingesta on the surface of poultry carcasses, with potential application to real-time, on-line processing of poultry for automatic safety inspection. The hyperspectral imaging system included a line scan camera with a prism-grating-prism spectrograph, fiber optic line lighting, motorized lens control, and hyperspectral image processing software. Hyperspectral image processing algorithms, specifically band ratioing of dual-wavelength (565/517) images followed by thresholding, were effective for identifying fecal and ingesta contamination of poultry carcasses. A multispectral imaging system comprising a common aperture camera with three optical trim filters (515.4 nm with 8.6-nm FWHM, 566.4 nm with 8.8-nm FWHM, and 631 nm with 10.2-nm FWHM), which were selected and validated by the hyperspectral imaging system, was developed for real-time, on-line application. The total image processing time for the multispectral images captured by the common aperture camera was approximately 251 msec, or 3.99 frames/sec. A preliminary test showed that the accuracy of the real-time multispectral imaging system in detecting feces and ingesta on corn/soybean-fed poultry carcasses was 96%. However, many false positive spots that cause system errors were also detected.
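    The dual-wavelength band-ratio test can be sketched directly; the threshold value below is illustrative, since the abstract reports the 565/517 nm ratio but not a specific cutoff:

    ```python
    import numpy as np

    def contamination_mask(band_565, band_517, ratio_thresh=1.05):
        """Flag contaminated pixels by the dual-wavelength band ratio described
        in the text: the 565 nm image divided by the 517 nm image, followed by
        a threshold (the cutoff here is an assumed, illustrative value)."""
        ratio = band_565.astype(float) / np.maximum(band_517.astype(float), 1e-6)
        return ratio > ratio_thresh
    ```

    Being a single per-pixel divide and compare, this operation fits easily inside the ~251 msec per-frame budget the paper reports for the on-line system.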

  16. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation.

    PubMed

    Arujuna, Aruna V; Housden, R James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D; Razavi, Reza; Rhode, Kawal S

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures.

  17. Real-time magnetic resonance imaging of cardiac function and flow—recent progress

    PubMed Central

    Zhang, Shuo; Joseph, Arun A.; Voit, Dirk; Schaetz, Sebastian; Merboldt, Klaus-Dietmar; Unterberg-Buchwald, Christina; Hennemuth, Anja; Lotz, Joachim

    2014-01-01

    Cardiac structure, function and flow are most commonly studied by ultrasound, X-ray and magnetic resonance imaging (MRI) techniques. However, cardiovascular MRI is hitherto limited to electrocardiogram (ECG)-synchronized acquisitions and therefore often results in compromised quality for patients with arrhythmias or inabilities to comply with requested protocols—especially with breath-holding. Recent advances in the development of novel real-time MRI techniques now offer dynamic imaging of the heart and major vessels with high spatial and temporal resolution, so that examinations may be performed without the need for ECG synchronization and during free breathing. This article provides an overview of technical achievements, physiological validations, preliminary patient studies and translational aspects for a future clinical scenario of cardiovascular MRI in real time. PMID:25392819

  18. Two dimensional microcirculation mapping with real time spatial frequency domain imaging

    NASA Astrophysics Data System (ADS)

    Zheng, Yang; Chen, Xinlin; Lin, Weihao; Cao, Zili; Zhu, Xiuwei; Zeng, Bixin; Xu, M.

    2018-02-01

    We present a spatial frequency domain imaging (SFDI) study of local hemodynamics in the human finger cuticle of healthy volunteers performing paced breathing and the forearm of healthy young adults performing normal breathing with our recently developed Real Time Single Snapshot Multiple Frequency Demodulation - Spatial Frequency Domain Imaging (SSMD-SFDI) system. A two-layer model was used to map the concentrations of deoxy-, oxy-hemoglobin, melanin, epidermal thickness and scattering properties at the subsurface of the forearm and the finger cuticle. The oscillations of the concentrations of deoxy- and oxy-hemoglobin at the subsurface of the finger cuticle and forearm induced by paced breathing and normal breathing, respectively, were found to be close to out-of-phase, attributed to the dominance of the blood flow modulation by paced breathing or heartbeat. Our results suggest that the real time SFDI platform may serve as one effective imaging modality for microcirculation monitoring.

  19. Improved image guidance technique for minimally invasive mitral valve repair using real-time tracked 3D ultrasound

    NASA Astrophysics Data System (ADS)

    Rankin, Adam; Moore, John; Bainbridge, Daniel; Peters, Terry

    2016-03-01

    In the past ten years, numerous new surgical and interventional techniques have been developed for treating heart valve disease without the need for cardiopulmonary bypass. Heart valve repair is now being performed in a blood-filled environment, reinforcing the need for accurate and intuitive imaging techniques. Previous work has demonstrated how augmenting ultrasound with virtual representations of specific anatomical landmarks can greatly simplify interventional navigation challenges and increase patient safety. However, these techniques often complicate interventions by requiring additional steps to manually define and initialize virtual models. Furthermore, overlaying virtual elements onto real-time image data can obstruct the view of salient image information. To address these limitations, a system was developed that uses real-time volumetric ultrasound alongside magnetically tracked tools presented in an augmented virtuality environment to provide a streamlined navigation guidance platform. In phantom studies simulating a beating-heart navigation task, procedure duration and tool path metrics achieved performance comparable to previous work in augmented virtuality techniques, and considerable improvement over standard-of-care ultrasound guidance.

  20. Validation of ALFIA: a platform for quantifying near-infrared fluorescent images of lymphatic propulsion in humans

    NASA Astrophysics Data System (ADS)

    Rasmussen, John C.; Bautista, Merrick; Tan, I.-Chih; Adams, Kristen E.; Aldrich, Melissa; Marshall, Milton V.; Fife, Caroline E.; Maus, Erik A.; Smith, Latisha A.; Zhang, Jingdan; Xiang, Xiaoyan; Zhou, Shaohua Kevin; Sevick-Muraca, Eva M.

    2011-02-01

    Recently, we demonstrated near-infrared (NIR) fluorescence imaging for quantifying real-time lymphatic propulsion in humans following intradermal injections of microdose amounts of indocyanine green. However, computational methods for image analysis remain underdeveloped, hindering the translation and clinical adoption of NIR fluorescent lymphatic imaging. In our initial work we used ImageJ and custom MATLAB programs to manually identify lymphatic vessels and individual propulsion events from the temporal transit of the fluorescent dye. In addition, we extracted the apparent velocities of contractile propagation and the time periods between propulsion events. Extensive time and effort were required to analyze the 6-8 gigabytes of NIR fluorescent images obtained for each subject. To alleviate this bottleneck, we commenced development of ALFIA, an integrated software platform that will permit automated, near real-time analysis of lymphatic function using NIR fluorescent imaging. Prior to automation, however, the base algorithms calculating the apparent velocity and period must be validated to verify that they produce results consistent with the proof-of-concept programs. To do this, both methods were used to analyze NIR fluorescent images of two subjects, and the number of propulsive events identified, the average apparent velocities, and the average periods for each subject were compared. Paired Student's t-tests indicate that the differences between their average results are not significant. With the base algorithms validated, further development and automation of ALFIA can proceed, significantly reducing the amount of user interaction required and potentially enabling near real-time clinical evaluation of NIR fluorescent lymphatic imaging.
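
    The validation step described above, comparing manual and automated estimates with a paired Student's t-test, can be sketched as follows. The velocity values here are illustrative placeholders, not the subjects' data, and the function is a textbook paired t statistic rather than the authors' actual analysis code:

```python
from math import sqrt

def paired_t_statistic(xs, ys):
    """Paired Student's t statistic for two matched samples."""
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    mean_d = sum(diffs) / n
    # Sample variance of the differences (n - 1 denominator).
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    return mean_d / sqrt(var_d / n)

# Hypothetical apparent propulsion velocities (cm/s) from the
# manual workflow and the automated ALFIA-style workflow.
manual = [0.82, 1.05, 0.97, 1.20, 0.88]
auto = [0.80, 1.08, 0.95, 1.22, 0.90]
t = paired_t_statistic(manual, auto)
```

    With these toy numbers |t| falls well below the critical value for 4 degrees of freedom, i.e. the two workflows would not differ significantly, mirroring the abstract's conclusion.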

  1. SU-G-JeP3-07: Real-Time Image Guided Radiation Therapy for Heterotopic Ossification in Patients After Hip Replacement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, A; Jiang, S; Timmerman, R

    Purpose: To demonstrate the feasibility of using CBCT in a real-time image guided radiation therapy (IGRT) procedure for single-fraction treatment of heterotopic ossification (HO) in patients after hip replacement. In this real-time procedure, all steps, from simulation, imaging, and planning to treatment delivery, are performed at the treatment unit in one appointment time slot. This work promotes real-time treatment to create a paradigm shift in single-fraction radiation therapy. Methods: An integrated real-time IGRT workflow was developed and tested for radiation treatment of HO in patients after hip replacement. After CBCT images are acquired at the linac and sent to the treatment planning system, the physician determines the field and/or draws a block. Subsequently, a simple 2D AP/PA plan with a prescription of 700 cGy is created on-the-fly for the physician to review. Once the physician approves the plan, the patient is treated in the same simulation position. This real-time treatment requires the team of attending physician, physicist, therapists, and dosimetrist to work in harmony to complete all steps in a timely manner. Results: Ten patients have been treated with this real-time procedure, using the same beam arrangement and prescription as our clinically routine CT-based 2D plans. The average time for these procedures was 52.9 ± 10.7 minutes from the time the patient entered the treatment room until he or she exited, and 37.7 ± 8.6 minutes from the start of CBCT until the last beam was delivered. Conclusion: The real-time IGRT procedure for HO has been tested and implemented as a clinically accepted procedure. This one-time appointment greatly reduces waiting time, which is especially important for patients in high levels of pain, and provides a convenient approach for the whole clinical staff. Other disease sites will also be tested with this new technology.

  2. Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance

    PubMed Central

    Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang

    2015-01-01

    We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems, entitled Integrated Imaging Goggles, for guiding surgeries. The prototype systems offer real-time stereoscopic fluorescence imaging and color reflectance imaging, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. Real-time ultrasound images can also be presented in the goggle display. Furthermore, real-time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized, and tested in simulated surgeries on biological tissues ex vivo. We have found that the system can detect fluorescent targets at concentrations as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large-FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken tissue. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249

  3. A noninvasive technique for real-time detection of bruises in apple surface based on machine vision

    NASA Astrophysics Data System (ADS)

    Zhao, Juan; Peng, Yankun; Dhakal, Sagar; Zhang, Leilei; Sasao, Akira

    2013-05-01

    The apple is one of the most widely consumed fruits in daily life. However, because bruising strongly affects taste and export value, apple quality must be assessed before the fruit reaches the consumer. This study aimed to develop a hardware and software unit for real-time detection of apple bruises based on machine vision technology. The hardware unit consisted of a light shield with two monochrome cameras installed at different angles, an LED light source to illuminate the sample, and sensors at the entrance of the box to signal the position of the sample. A Graphical User Interface (GUI) was developed on the VS2010 platform to control the overall hardware and display the image processing results. The hardware-software system acquires images of three samples from each camera and displays the image processing results in real time. An image processing algorithm was developed in OpenCV and C++. The software controls the hardware system and classifies each apple into one of two grades based on the presence or absence of surface bruises as small as 5 mm. The experimental results are promising, and with further modification the system could be applied to industrial production in the near future.
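
    The two-grade classification described above can be sketched in a few lines. The paper's actual OpenCV/C++ pipeline is not public, so the threshold values and the simple dark-pixel-count rule below are illustrative assumptions, written in NumPy for brevity:

```python
import numpy as np

def grade_apple(gray, dark_thresh=80, min_bruise_px=25):
    """Toy grading rule: reject the apple if enough dark pixels
    (candidate bruise area) appear in the monochrome frame.

    gray: 2-D uint8 array from one of the cameras.
    The threshold values are illustrative, not from the paper.
    """
    bruise_px = int(np.count_nonzero(gray < dark_thresh))
    return "reject" if bruise_px >= min_bruise_px else "accept"

# Synthetic frame: bright apple surface with a 6x6 dark patch.
frame = np.full((64, 64), 200, dtype=np.uint8)
frame[20:26, 30:36] = 40  # simulated bruise, 36 px

grade = grade_apple(frame)
```

    A production system would first segment the fruit from the background and size-filter connected bruise regions rather than counting raw pixels; this sketch only illustrates the accept/reject decision step.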

  4. Novel techniques of real-time blood flow and functional mapping: technical note.

    PubMed

    Kamada, Kyousuke; Ogawa, Hiroshi; Saito, Masato; Tamura, Yukie; Anei, Ryogo; Kapeller, Christoph; Hayashi, Hideaki; Prueckl, Robert; Guger, Christoph

    2014-01-01

    There are two main approaches to intraoperative monitoring in neurosurgery. One approach is related to fluorescent phenomena and the other is related to oscillatory neuronal activity. We developed novel techniques to visualize blood flow (BF) conditions in real time, based on indocyanine green videography (ICG-VG) and the electrophysiological phenomenon of high gamma activity (HGA). We investigated the use of ICG-VG in four patients with moyamoya disease and two with arteriovenous malformation (AVM), and we investigated the use of real-time HGA mapping in four patients with brain tumors who underwent lesion resection with awake craniotomy. Real-time data processing of ICG-VG was based on perfusion imaging, which generated parameters including arrival time (AT), mean transit time (MTT), and BF of brain surface vessels. During awake craniotomy, we analyzed the frequency components of brain oscillation and performed real-time HGA mapping to identify functional areas. Processed results were projected on a wireless monitor linked to the operating microscope. After revascularization for moyamoya disease, AT and BF were significantly shortened and increased, respectively, suggesting hyperperfusion. Real-time fusion images on the wireless monitor provided anatomical, BF, and functional information simultaneously, and allowed the resection of AVMs under the microscope. Real-time HGA mapping during awake craniotomy rapidly indicated the eloquent areas of motor and language function and significantly shortened the operation time. These novel techniques might improve the reliability of intraoperative monitoring and enable the development of rational and objective surgical strategies.
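
    The perfusion parameters named above (AT and MTT) are derived from time-intensity curves of the ICG bolus. A minimal sketch follows, using two common conventions as assumptions: AT as the first crossing of a fraction of the peak intensity, and MTT as the intensity-weighted mean time of the curve; the paper's exact definitions may differ:

```python
import numpy as np

def perfusion_params(t, intensity, frac=0.1):
    """Arrival time (AT) and mean transit time (MTT) from a
    time-intensity curve of a dye bolus.

    AT:  first time the signal reaches `frac` of its peak.
    MTT: intensity-weighted mean time of the curve.
    Both are common conventions, assumed here for illustration.
    """
    i = np.asarray(intensity, dtype=float)
    t = np.asarray(t, dtype=float)
    at = t[np.argmax(i >= frac * i.max())]
    mtt = float(np.sum(t * i) / np.sum(i))
    return float(at), mtt

# Synthetic bolus curve: gamma-like shape peaking near t = 4 s.
t = np.linspace(0.0, 20.0, 201)
curve = t ** 2 * np.exp(-t / 2.0)
at, mtt = perfusion_params(t, curve)
```

    For this synthetic curve the threshold is first crossed at about 0.6 s and the weighted mean time sits near 6 s, illustrating how a shortened AT after revascularization would show up directly in these numbers.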

  5. Novel Techniques of Real-time Blood Flow and Functional Mapping: Technical Note

    PubMed Central

    KAMADA, Kyousuke; OGAWA, Hiroshi; SAITO, Masato; TAMURA, Yukie; ANEI, Ryogo; KAPELLER, Christoph; HAYASHI, Hideaki; PRUECKL, Robert; GUGER, Christoph

    2014-01-01

    There are two main approaches to intraoperative monitoring in neurosurgery. One approach is related to fluorescent phenomena and the other is related to oscillatory neuronal activity. We developed novel techniques to visualize blood flow (BF) conditions in real time, based on indocyanine green videography (ICG-VG) and the electrophysiological phenomenon of high gamma activity (HGA). We investigated the use of ICG-VG in four patients with moyamoya disease and two with arteriovenous malformation (AVM), and we investigated the use of real-time HGA mapping in four patients with brain tumors who underwent lesion resection with awake craniotomy. Real-time data processing of ICG-VG was based on perfusion imaging, which generated parameters including arrival time (AT), mean transit time (MTT), and BF of brain surface vessels. During awake craniotomy, we analyzed the frequency components of brain oscillation and performed real-time HGA mapping to identify functional areas. Processed results were projected on a wireless monitor linked to the operating microscope. After revascularization for moyamoya disease, AT and BF were significantly shortened and increased, respectively, suggesting hyperperfusion. Real-time fusion images on the wireless monitor provided anatomical, BF, and functional information simultaneously, and allowed the resection of AVMs under the microscope. Real-time HGA mapping during awake craniotomy rapidly indicated the eloquent areas of motor and language function and significantly shortened the operation time. These novel techniques might improve the reliability of intraoperative monitoring and enable the development of rational and objective surgical strategies. PMID:25263624

  6. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System.

    PubMed

    Butler, Mark C; Sullivan, Jack M

    2015-11-01

    To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optimization of the optical train. We therefore designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control the spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although the field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by positioning the eye so as to observe the entire retina. The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. A novel device is thus established for real-time viewing and image capture of the small-animal retina during subretinal injections for preclinical gene therapy studies.

  7. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    NASA Astrophysics Data System (ADS)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide a higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors, focusing on sensor design and detection results.

  8. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  9. Integrated optical 3D digital imaging based on DSP scheme

    NASA Astrophysics Data System (ADS)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure that uses a DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition, and fringe pattern analysis, we developed a multi-threaded application program under the DSP/BIOS RTOS (real-time operating system), whose preemptive kernel and powerful configuration tool allow real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we applied software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are presented to show the validity of the proposed scheme.
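
    The core of phase measurement profilometry is recovering a wrapped phase map from several phase-shifted fringe images. A minimal sketch of the standard four-step formula follows (the paper's DSP/FPGA pipeline and its unwrapping stage are not reproduced; the synthetic phase ramp is kept inside (-pi, pi) so no unwrapping is needed):

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images with pi/2 phase steps,
    I_k = A + B*cos(phi + k*pi/2):  phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(i3 - i1, i0 - i2)

# Synthetic fringes along one scan line with a known phase ramp.
x = np.linspace(0.0, 1.0, 100)
phi_true = 1.8 * np.pi * x - 0.9 * np.pi   # stays within (-pi, pi)
A, B = 120.0, 80.0
frames = [A + B * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = four_step_phase(*frames)
```

    Because the four-step differences cancel both the background A and the modulation B, the recovered phase matches the known ramp to numerical precision; in a real system this step is followed by phase unwrapping and a phase-to-height calibration.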

  10. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation produces an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they generally do not run in real time and need at least one frame memory when implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method uses histogram matching between two preview images, exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT), and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before the captured image is processed, the method requires no frame memory to buffer image data and therefore greatly reduces the cost of the CIS. The method supports not only single image capture but also bracketing of three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5-megapixel CIS. Simulations show that the system performs in real time at low cost and corrects the color of under-exposed images well.
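
    The LUT construction described above rests on histogram matching between the short- and long-exposure previews. The sketch below shows the classic CDF-matching construction in NumPy; the paper's "improved" ILUT adds refinements that are not reproduced here:

```python
import numpy as np

def histogram_match_lut(short_img, long_img):
    """256-entry LUT mapping the short-exposure image's tone
    distribution onto the long-exposure image's, via CDF matching.
    This is the textbook construction, assumed as the basis of
    the paper's ILUT."""
    bins = np.arange(257)
    h_s, _ = np.histogram(short_img, bins=bins)
    h_l, _ = np.histogram(long_img, bins=bins)
    cdf_s = np.cumsum(h_s) / h_s.sum()
    cdf_l = np.cumsum(h_l) / h_l.sum()
    # For each source level, pick the reference level whose CDF
    # first reaches the source CDF value.
    lut = np.searchsorted(cdf_l, cdf_s).clip(0, 255).astype(np.uint8)
    return lut

# The under-exposed preview is roughly half the brightness of the
# properly exposed one, so the LUT should roughly double the levels.
rng = np.random.default_rng(0)
long_prev = rng.integers(0, 200, size=(64, 64)).astype(np.uint8)
short_prev = (long_prev // 2).astype(np.uint8)
lut = histogram_match_lut(short_prev, long_prev)
corrected = lut[short_prev]
```

    Applying the 256-entry table is a single memory lookup per pixel, which is what makes the hardware implementation possible without buffering a full frame.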

  11. Real time in vivo imaging and measurement of serine protease activity in the mouse hippocampus using a dedicated complementary metal-oxide semiconductor imaging device.

    PubMed

    Ng, David C; Tamura, Hideki; Tokuda, Takashi; Yamamoto, Akio; Matsuo, Masamichi; Nunoshita, Masahiro; Ishikawa, Yasuyuki; Shiosaka, Sadao; Ohta, Jun

    2006-09-30

    The aim of the present study is to demonstrate the application of complementary metal-oxide semiconductor (CMOS) imaging technology for studying the mouse brain. By using a dedicated CMOS image sensor, we have successfully imaged and measured brain serine protease activity in vivo, in real time, and over extended periods. We developed a biofluorescence imaging device by packaging the CMOS image sensor for an on-chip imaging configuration. In this configuration no imaging optics are required; instead, an excitation filter applied directly onto the sensor replaces the filter cube block found in conventional fluorescence microscopes. The fully packaged device measures 350 μm thick × 2.7 mm wide, contains an array of 176 × 144 pixels, and is small enough for measurement inside a single hemisphere of the mouse brain while still providing sufficient imaging resolution. In the experiment, intraperitoneally injected kainic acid induced upregulation of serine protease activity in the brain. These events were captured in real time by imaging and measuring the fluorescence of a fluorogenic substrate sensitive to this activity. The entire device, which weighs less than 1% of the body weight of the mouse, holds promise for studying freely moving animals.

  12. Ames Lab 101: Real-Time 3D Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Song

    2010-08-02

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  13. Ames Lab 101: Real-Time 3D Imaging

    ScienceCinema

    Zhang, Song

    2017-12-22

    Ames Laboratory scientist Song Zhang explains his real-time 3-D imaging technology. The technique can be used to create high-resolution, real-time, precise, 3-D images for use in healthcare, security, and entertainment applications.

  14. Feasibility study: real-time 3-D ultrasound imaging of the brain.

    PubMed

    Smith, Stephen W; Chu, Kengyeh; Idriss, Salim F; Ivancevich, Nikolas M; Light, Edward D; Wolf, Patrick D

    2004-10-01

    We tested the feasibility of real-time, 3-D ultrasound (US) imaging in the brain. The 3-D scanner uses a matrix phased-array transducer of 512 transmit channels and 256 receive channels operating at 2.5 MHz with a 15-mm diameter footprint. The real-time system scans a 65 degrees pyramid, producing up to 30 volumetric scans per second, and features up to five image planes as well as 3-D rendering, 3-D pulsed-wave and color Doppler. In a human subject, the real-time 3-D scans produced simultaneous transcranial horizontal (axial), coronal and sagittal image planes and real-time volume-rendered images of the gross anatomy of the brain. In a transcranial sheep model, we obtained real-time 3-D color flow Doppler scans and perfusion images using bolus injection of contrast agents into the internal carotid artery.

  15. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing

    PubMed Central

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-01-01

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge volume of remote sensing data poses challenges for real-time imaging processing. High performance computing (HPC) methods have therefore been proposed to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU accelerates image processing through massive parallel computing, while the CPU only performs auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underutilized. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, the bottlenecks of limited memory and frequent data transfers are removed, and several optimization strategies, such as streaming and parallel pipelining, are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate. PMID:27070606

  16. Accelerating Spaceborne SAR Imaging Using Multiple CPU/GPU Deep Collaborative Computing.

    PubMed

    Zhang, Fan; Li, Guojun; Li, Wei; Hu, Wei; Hu, Yuxin

    2016-04-07

    With the development of synthetic aperture radar (SAR) technologies in recent years, the huge volume of remote sensing data poses challenges for real-time imaging processing. High performance computing (HPC) methods have therefore been proposed to accelerate SAR imaging, especially GPU-based methods. In the classical GPU-based imaging algorithm, the GPU accelerates image processing through massive parallel computing, while the CPU only performs auxiliary work such as data input/output (IO); the computing capability of the CPU is thus ignored and underutilized. In this work, a new deep collaborative SAR imaging method based on multiple CPUs/GPUs is proposed to achieve real-time SAR imaging. Through the proposed task partitioning and scheduling strategy, the whole image can be generated by deep collaborative multi-CPU/GPU computing. For the CPU parallel imaging part, the advanced vector extensions (AVX) method is introduced into the multi-core CPU parallel method for higher efficiency. For the GPU parallel imaging part, the bottlenecks of limited memory and frequent data transfers are removed, and several optimization strategies, such as streaming and parallel pipelining, are applied. Experimental results demonstrate that the deep CPU/GPU collaborative imaging method improves the efficiency of SAR imaging by 270 times over a single-core CPU and achieves real-time imaging, in that the imaging rate exceeds the raw data generation rate.

  17. Real-time reconstruction of three-dimensional brain surface MR image using new volume-surface rendering technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watanabe, T.; Momose, T.; Oku, S.

    It is essential to obtain realistic brain surface images, in which sulci and gyri are easily recognized, when examining the correlation between functional (PET or SPECT) and anatomical (MRI) brain studies. The volume rendering technique (VRT) is commonly employed to make three-dimensional (3D) brain surface images. This technique, however, takes considerable time to produce even one 3D image, so it has not been practical to render brain surface images in arbitrary directions in real time on ordinary workstations or personal computers. The surface rendering technique (SRT), on the other hand, is much less computationally demanding, but the quality of the resulting images is not satisfactory for our purpose. A new computer algorithm has been developed to make 3D brain surface MR images very quickly using a volume-surface rendering technique (VSRT), in which the quality of the resulting images is comparable to that of VRT and the computation time to that of SRT. In VSRT, the volume rendering step is performed only once, along the normal vector of each surface point, rather than every time a new viewpoint is chosen as in VRT. Subsequent reconstruction of the 3D image uses an algorithm similar to that of SRT. We can thus obtain brain surface MR images of sufficient quality, viewed from any direction, in real time on an easily available personal computer (Macintosh Quadra 800). The calculation time to make a 3D image is less than 1 s with VSRT, compared with more than 15 s for conventional VRT, and the difference in resulting image quality between VSRT and VRT is almost imperceptible. In conclusion, our new technique for real-time reconstruction of 3D brain surface MR images is useful and practical for functional-anatomical correlation studies.

  18. A Miniature Forward-imaging B-scan Optical Coherence Tomography Probe to Guide Real-time Laser Ablation

    PubMed Central

    Li, Zhuoyan; Shen, Jin H.; Kozub, John A.; Prasad, Ratna; Lu, Pengcheng; Joos, Karen M.

    2014-01-01

    Background and Objective Investigations have shown that pulsed lasers tuned to 6.1 μm in wavelength are capable of ablating ocular and neural tissue with minimal collateral damage. This study investigated whether a miniature B-scan forward-imaging optical coherence tomography (OCT) probe can be combined with the laser to provide real-time visual feedback during laser incisions. Study Design/Methods and Materials A miniature 25-gauge B-scan forward-imaging OCT probe was developed and combined with a 250 μm hollow-glass waveguide to permit delivery of 6.1 μm laser energy. A gelatin mixture and both porcine corneal and retinal tissues were simultaneously imaged and lased (6.1 μm, 10 Hz, 0.4-0.7 mJ) through air. The ablation studies were observed and recorded in real time. The crater dimensions were measured using OCT imaging software (Bioptigen, Durham, NC). Histological analysis was performed on the ocular tissues. Results The combined miniature forward-imaging OCT and mid-infrared laser-delivery probe successfully imaged real-time tissue ablation in gelatin, corneal tissue, and retinal tissue. Application of a constant number of 60 pulses at 0.5 mJ/pulse to the gelatin resulted in a mean crater depth of 123 ± 15 μm. For the corneal tissue, there was a significant correlation between the number of pulses used and the depth of the lased hole (Pearson correlation coefficient = 0.82; P = 0.0002). Histological analysis of the cornea and retina tissues showed discrete holes with minimal thermal damage. Conclusions A combined miniature OCT and laser-delivery probe can monitor real-time tissue laser ablation. With additional testing and improvements, this novel instrument has the future possibility of effectively guiding surgeries by simultaneously imaging and ablating tissue. PMID:24648326

  19. Real-time intravital imaging of pH variation associated with osteoclast activity.

    PubMed

    Maeda, Hiroki; Kowada, Toshiyuki; Kikuta, Junichi; Furuya, Masayuki; Shirazaki, Mai; Mizukami, Shin; Ishii, Masaru; Kikuchi, Kazuya

    2016-08-01

    Intravital imaging by two-photon excitation microscopy (TPEM) has been widely used to visualize cell functions. However, small molecular probes (SMPs), commonly used for cell imaging, cannot be simply applied to intravital imaging because of the challenge of delivering them into target tissues, as well as their undesirable physicochemical properties for TPEM imaging. Here, we designed and developed a functional SMP with an active-targeting moiety, higher photostability, and a fluorescence switch and then imaged target cell activity by injecting the SMP into living mice. The combination of the rationally designed SMP with a fluorescent protein as a reporter of cell localization enabled quantitation of osteoclast activity and time-lapse imaging of its in vivo function associated with changes in cell deformation and membrane fluctuations. Real-time imaging revealed heterogenic behaviors of osteoclasts in vivo and provided insights into the mechanism of bone resorption.

  20. Real-time image mosaicing for medical applications.

    PubMed

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

In this paper we describe the development of a robotically assisted image mosaicing system for medical applications. The processing occurs in real time thanks to a fast initial image alignment provided by robotic position sensing. Near-field imaging, characterized by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model but can be extended to other medical images.
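The sensor-seeded alignment described above can be sketched in two steps: the sensed camera translation predicts an approximate pixel offset between consecutive frames, and a small local search then refines it. The following Python/NumPy sketch is illustrative only; the pinhole projection, SSD refinement, and all function names are assumptions, not the authors' implementation.

```python
import numpy as np

def predicted_pixel_offset(dx_mm, dy_mm, depth_mm, focal_px):
    """Project a sensed camera translation into an expected pixel
    offset between consecutive frames (pinhole approximation)."""
    return np.array([focal_px * dx_mm / depth_mm,
                     focal_px * dy_mm / depth_mm])

def refine_offset(prev, curr, guess, search=4):
    """Refine the sensor-predicted offset (dx, dy) with a small SSD
    search, assuming prev[y, x] == curr[y + dy, x + dx] in overlap."""
    h, w = prev.shape
    gx, gy = int(round(guess[0])), int(round(guess[1]))
    best, best_err = np.array([gx, gy]), np.inf
    for dy in range(gy - search, gy + search + 1):
        for dx in range(gx - search, gx + search + 1):
            # Overlapping region of both frames under shift (dx, dy).
            x0, y0 = max(0, -dx), max(0, -dy)
            x1, y1 = min(w, w - dx), min(h, h - dy)
            if x1 - x0 < 8 or y1 - y0 < 8:
                continue
            a = prev[y0:y1, x0:x1].astype(float)
            b = curr[y0 + dy:y1 + dy, x0 + dx:x1 + dx].astype(float)
            err = np.mean((a - b) ** 2)
            if err < best_err:
                best_err, best = err, np.array([dx, dy])
    return best
```

The point of the robotic seed is that it replaces the expensive global search of pure image-based mosaicing with a cheap local refinement, which is what makes real-time operation plausible.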

  1. Real-time three-dimensional digital image correlation for biomedical applications

    NASA Astrophysics Data System (ADS)

    Wu, Rong; Wu, Hua; Arola, Dwayne; Zhang, Dongsheng

    2016-10-01

Digital image correlation (DIC) has been successfully applied for evaluating the mechanical behavior of biological tissues. A three-dimensional (3-D) DIC system has been developed and applied to examining the motion of bones in the human foot. To achieve accurate, real-time displacement measurements, an algorithm including matching between sequential images and image pairs has been developed. The system was used to monitor the movement of markers attached to a precisely motorized stage. The accuracy of the proposed technique for in-plane and out-of-plane measurements was found to be -0.25% and 1.17%, respectively. Two biomedical applications are presented. In the experiment involving the foot arch, a human cadaver lower leg and foot specimen was subjected to vertical compressive loads up to 700 N at a rate of 10 N/s, and the 3-D motions of bones in the foot were monitored in real time. In the experiment involving the distal tibiofibular syndesmosis, a human cadaver lower leg and foot specimen was subjected to a monotonic rotational torque up to 5 Nm at a speed of 5 deg per min, and the relative displacements of the tibia and fibula were monitored in real time. Results showed that the system could reach a frequency of up to 16 Hz with 6 points measured simultaneously. This technique sheds new light on measuring 3-D motion of bones in biomechanical studies.
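The subset matching at the core of DIC can be illustrated with a minimal integer-pixel sketch using zero-normalized cross-correlation (ZNCC). The function names and brute-force search are illustrative assumptions; a real-time 3-D DIC system adds sub-pixel interpolation and stereo matching between the image pair on top of this.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def track_subset(ref, cur, center, half=8, search=5):
    """Integer-pixel DIC: find the displacement (dy, dx) of a square
    subset centered at `center` (y, x) that maximizes ZNCC in `cur`."""
    cy, cx = center
    tmpl = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best, best_c = (0, 0), -1.0
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            win = cur[y - half:y + half + 1, x - half:x + half + 1]
            if win.shape != tmpl.shape:
                continue
            c = zncc(tmpl, win)
            if c > best_c:
                best_c, best = c, (dy, dx)
    return best, best_c
```

ZNCC is preferred over plain SSD in DIC because it is insensitive to uniform brightness and contrast changes between frames, which matters when tracking tissue under varying illumination.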

  2. Ultrasound biomicroscopy in mouse cardiovascular development

    NASA Astrophysics Data System (ADS)

    Turnbull, Daniel H.

    2004-05-01

    The mouse is the preferred animal model for studying mammalian cardiovascular development and many human congenital heart diseases. Ultrasound biomicroscopy (UBM), utilizing high-frequency (40-50-MHz) ultrasound, is uniquely capable of providing in vivo, real-time microimaging and Doppler blood velocity measurements in mouse embryos and neonates. UBM analyses of normal and abnormal mouse cardiovascular function will be described to illustrate the power of this microimaging approach. In particular, real-time UBM images have been used to analyze dimensional changes in the mouse heart from embryonic to neonatal stages. UBM-Doppler has been used recently to examine the precise timing of onset of a functional circulation in early-stage mouse embryos, from the first detectable cardiac contractions. In other experiments, blood velocity waveforms have been analyzed to characterize the functional phenotype of mutant mouse embryos having defects in cardiac valve formation. Finally, UBM has been developed for real-time, in utero image-guided injection of mouse embryos, enabling cell transplantation and genetic gain-of-function experiments with transfected cells and retroviruses. In summary, UBM provides a unique and powerful approach for in vivo analysis and image-guided manipulation in normal and genetically engineered mice, over a wide range of embryonic to neonatal developmental stages.

  3. Far-field photostable optical nanoscopy (PHOTON) for real-time super-resolution single-molecular imaging of signaling pathways of single live cells

    NASA Astrophysics Data System (ADS)

    Huang, Tao; Browning, Lauren M.; Xu, Xiao-Hong Nancy

    2012-04-01

Cellular signaling pathways play crucial roles in cellular functions and design of effective therapies. Unfortunately, study of cellular signaling pathways remains formidably challenging because sophisticated cascades are involved, and a few molecules are sufficient to trigger signaling responses of a single cell. Here we report the development of far-field photostable-optical-nanoscopy (PHOTON) with photostable single-molecule-nanoparticle-optical-biosensors (SMNOBS) for mapping dynamic cascades of apoptotic signaling pathways of single live cells in real-time at single-molecule (SM) and nanometer (nm) resolutions. We have quantitatively imaged single ligand molecules (tumor necrosis factor α, TNFα) and their binding kinetics with their receptors (TNFR1) on single live cells; tracked formation and internalization of their clusters and their initiation of intracellular signaling pathways in real-time; and studied apoptotic signaling dynamics and mechanisms of single live cells with sufficient temporal and spatial resolutions. This study provides new insights into complex real-time dynamic cascades and molecular mechanisms of apoptotic signaling pathways of single live cells. PHOTON provides superior imaging and sensing capabilities and SMNOBS offer unrivaled biocompatibility and photostability, which enable probing of signaling pathways of single live cells in real-time at SM and nm resolutions. Electronic supplementary information (ESI) available. See DOI: 10.1039/c2nr11739h

  4. An automatic detection method for the boiler pipe header based on real-time image acquisition

    NASA Astrophysics Data System (ADS)

    Long, Yi; Liu, YunLong; Qin, Yongliang; Yang, XiangWei; Li, DengKe; Shen, DingJie

    2017-06-01

Generally, an endoscope is used to inspect the inside of a thermal power plant's boiler pipe header. However, because the endoscope hose is operated manually, the length and angle of the inserted probe cannot be controlled, and the inspection has a large blind spot limited by the length of the endoscope wire. To solve these problems, an automatic detection method for the boiler pipe header based on real-time image acquisition and simulation comparison techniques was proposed. A magnetic crawler with permanent-magnet wheels carries the real-time image acquisition device through the header and collects live scene images. Using the location obtained from a positioning auxiliary device, the position of each real-time image is calibrated within a virtual 3-D model. By comparing the real-time images with computer-simulated images, defects or fallen-in foreign objects can be accurately located, facilitating repair and clean-up.

  5. Real-Time Two-Dimensional Magnetic Particle Imaging for Electromagnetic Navigation in Targeted Drug Delivery.

    PubMed

    Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon

    2017-09-07

    Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible.

  6. Real-Time Two-Dimensional Magnetic Particle Imaging for Electromagnetic Navigation in Targeted Drug Delivery

    PubMed Central

    Le, Tuan-Anh; Zhang, Xingming; Hoshiar, Ali Kafash; Yoon, Jungwon

    2017-01-01

    Magnetic nanoparticles (MNPs) are effective drug carriers. By using electromagnetic actuated systems, MNPs can be controlled noninvasively in a vascular network for targeted drug delivery (TDD). Although drugs can reach their target location through capturing schemes of MNPs by permanent magnets, drugs delivered to non-target regions can affect healthy tissues and cause undesirable side effects. Real-time monitoring of MNPs can improve the targeting efficiency of TDD systems. In this paper, a two-dimensional (2D) real-time monitoring scheme has been developed for an MNP guidance system. Resovist particles 45 to 65 nm in diameter (5 nm core) can be monitored in real-time (update rate = 2 Hz) in 2D. The proposed 2D monitoring system allows dynamic tracking of MNPs during TDD and renders magnetic particle imaging-based navigation more feasible. PMID:28880220

  7. Gamma-ray imaging system for real-time measurements in nuclear waste characterisation

    NASA Astrophysics Data System (ADS)

    Caballero, L.; Albiol Colomer, F.; Corbi Bellot, A.; Domingo-Pardo, C.; Leganés Nieto, J. L.; Agramunt Ros, J.; Contreras, P.; Monserrate, M.; Olleros Rodríguez, P.; Pérez Magán, D. L.

    2018-03-01

A compact, portable and large field-of-view gamma camera that is able to identify, locate and quantify gamma-ray emitting radioisotopes in real-time has been developed. The device delivers spectroscopic and imaging capabilities that enable its use in a variety of nuclear waste characterisation scenarios, such as radioactivity monitoring in nuclear power plants and, more specifically, the decommissioning of nuclear facilities. The technical development of this apparatus and some examples of its application in field measurements are reported in this article. The performance of the presented gamma camera is also benchmarked against other conventional techniques.

  8. A 3D THz image processing methodology for a fully integrated, semi-automatic and near real-time operational system

    NASA Astrophysics Data System (ADS)

    Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.

    2012-05-01

The present study proposes a fully integrated, semi-automatic and near real-time mode-operated image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautic composite multi-layered materials and structures using Non-Destructive Testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data is processed by extracting areas of interest. The detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.

  9. A Star Image Extractor for the Nano-JASMINE satellite

    NASA Astrophysics Data System (ADS)

    Yamauchi, M.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Yano, T.; Suganuma, M.; Yamada, Y.; Nakasuka, S.; Sako, N.

    2008-07-01

We have developed Star-Image-Extractor (SIE) software, which works as an on-board real-time image processor. It detects and extracts only the object data from raw image data. SIE has two functions: reducing image data and providing data for the satellite's high-accuracy attitude control system.

  10. Preliminary results of real-time in-vitro electronic speckle pattern interferometry (ESPI) measurements in otolaryngology

    NASA Astrophysics Data System (ADS)

    Conerty, Michelle D.; Castracane, James; Cacace, Anthony T.; Parnes, Steven M.; Gardner, Glendon M.; Miller, Mitchell B.

    1995-05-01

Electronic Speckle Pattern Interferometry (ESPI) is a nondestructive optical evaluation technique that is capable of determining surface and subsurface integrity through the quantitative evaluation of static or vibratory motion. By utilizing state-of-the-art developments in lasers, fiber optics, and solid-state detector technology, this technique has become applicable to medical research and diagnostics. Based on initial support from NIDCD and continued support from InterScience, Inc., we have been developing a range of instruments for improved diagnostic evaluation in otolaryngological applications based on the technique of ESPI. These compact fiber-optic instruments are capable of making real-time interferometric measurements of the target tissue. Image post-processing software under ongoing development is currently capable of extracting the desired quantitative results from the acquired interferometric images. The goal of the research is to develop a fully automated system in which the image processing and quantification are performed in hardware in near real-time. Subsurface details of both tympanic membrane and vocal cord dynamics could speed the diagnosis of otosclerosis and laryngeal tumors, and aid in the evaluation of surgical procedures.

  11. Flexible real-time magnetic resonance imaging framework.

    PubMed

    Santos, Juan M; Wright, Graham A; Pauly, John M

    2004-01-01

The extension of MR imaging to new applications has demonstrated the limitations of the architecture of current real-time systems. Traditional real-time implementations provide continuous acquisition of data and modification of basic sequence parameters on the fly. We have extended the concept of real-time MRI by designing a system that drives the examination from a real-time localizer and is then reconfigured for different imaging modes. Upon operator request or automatic feedback, the system can immediately generate a new pulse sequence or change fundamental aspects of the acquisition such as gradient waveforms, excitation pulses, and scan planes. This framework has been implemented by connecting a data processing and control workstation to a conventional clinical scanner. Key components in the design of this framework are the data communication and control mechanisms, reconstruction algorithms optimized for real-time performance and adaptability, a flexible user interface, and extensible user interaction. In this paper we describe the various components that comprise this system. Applications implemented in this framework include real-time catheter tracking embedded in high-frame-rate real-time imaging and immediate switching between a real-time localizer and high-resolution volume imaging for coronary angiography applications.

  12. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2004-12-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable to monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
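The AM-coding principle above — each spectral channel amplitude-modulated by an orthogonal carrier, with per-pixel synchronous demodulation yielding the correlation against a reference spectrum — can be sketched numerically. The sinusoidal carriers and function names below are illustrative assumptions; in the actual system the correlation image sensor performs this demodulation in hardware at every pixel.

```python
import numpy as np

def am_carriers(n_channels, n_samples):
    """Orthogonal sinusoidal carriers, one per spectral channel.
    Frequencies 1..n_channels cycles per frame are mutually orthogonal
    over the frame as long as n_channels < n_samples / 2."""
    t = np.arange(n_samples)
    return np.array([np.cos(2 * np.pi * (k + 1) * t / n_samples)
                     for k in range(n_channels)])

def demodulate(signal, carriers, reference):
    """Recover the per-channel spectrum of one pixel by synchronous
    demodulation over a frame, then correlate it with a reference."""
    n = carriers.shape[1]
    recovered = 2.0 / n * carriers @ signal   # estimated spectrum per channel
    return float(recovered @ reference)       # inner product with reference
```

Because all channels are modulated simultaneously, the correlation for the whole spectral function is obtained in a single frame, which is what makes the approach suitable for monitoring dynamic natural objects in real time.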

  13. Real-time detection of natural objects using AM-coded spectral matching imager

    NASA Astrophysics Data System (ADS)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation (AM) of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable to monitoring dynamic behavior of natural objects in real-time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.

  14. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging.

    PubMed

    Tremsin, Anton S; Perrodin, Didier; Losko, Adrian S; Vogel, Sven C; Bourke, Mark A M; Bizarri, Gregory A; Bourret, Edith D

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  15. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    NASA Astrophysics Data System (ADS)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A. M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-04-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  16. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    PubMed Central

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; Vogel, Sven C.; Bourke, Mark A.M.; Bizarri, Gregory A.; Bourret, Edith D.

    2017-01-01

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of “blind” processes. This technique is demonstrated for the Bridgman-type crystal growth enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes. PMID:28425461

  17. SU-E-I-37: Low-Dose Real-Time Region-Of-Interest X-Ray Fluoroscopic Imaging with a GPU-Accelerated Spatially Different Bilateral Filtering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, H; Lee, J; Pua, R

    2014-06-01

Purpose: The purpose of our study is to reduce imaging radiation dose while maintaining image quality in the region of interest (ROI) in X-ray fluoroscopy. A low-dose real-time ROI fluoroscopic imaging technique, which includes graphics-processing-unit- (GPU-) accelerated image processing for brightness compensation and noise filtering, was developed in this study. Methods: In our ROI fluoroscopic imaging, a copper filter is placed in front of the X-ray tube. The filter contains a round aperture to reduce radiation dose outside the aperture. To equalize the brightness difference between the inner and outer ROI regions, brightness compensation was performed by use of a simple weighting method that applies selectively to the inner ROI, the outer ROI, and the boundary zone. A bilateral filter was applied to the images to reduce the relatively high noise in the outer ROI region. To speed up the calculation for real-time application, GPU acceleration was applied to the image processing algorithm. We performed a dosimetric measurement using an ion-chamber dosimeter to evaluate the amount of radiation dose reduction. The reduction of calculation time compared to a CPU-only computation was also measured, and image quality was assessed in terms of image noise and spatial resolution. Results: More than 80% of the dose was reduced by use of the ROI filter. The reduction rate depended on the thickness of the filter and the size of the ROI aperture. The image noise outside the ROI was remarkably reduced by the bilateral filtering technique. The computation time for processing each frame was reduced from 3.43 seconds with a single CPU to 9.85 milliseconds with GPU acceleration. Conclusion: The proposed technique for X-ray fluoroscopy can substantially reduce imaging radiation dose to the patient while maintaining image quality, particularly in the ROI, in real-time.
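The two processing steps described — brightness compensation of the attenuated outer region and bilateral filtering of its higher noise — can be sketched as follows. This is a brute-force CPU sketch with illustrative parameter values and function names; the study's actual implementation is GPU-accelerated and uses spatially different filter settings for the inner and outer regions.

```python
import numpy as np

def bilateral(img, half=2, sigma_s=1.5, sigma_r=0.1):
    """Brute-force bilateral filter: edge-preserving noise smoothing
    weighting neighbors by both spatial and intensity distance."""
    h, w = img.shape
    pad = np.pad(img, half, mode='reflect')
    out = np.empty((h, w), dtype=float)
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
            range_w = np.exp(-(win - img[y, x]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * range_w
            out[y, x] = (wgt * win).sum() / wgt.sum()
    return out

def roi_compensate(img, cy, cx, radius, gain):
    """Brighten the filter-attenuated region outside the ROI aperture,
    then smooth its (amplified) noise with the bilateral filter."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    outer = (yy - cy) ** 2 + (xx - cx) ** 2 > radius ** 2
    out = img.astype(float)
    out[outer] *= gain                 # simple brightness equalization
    filtered = bilateral(out)
    out[outer] = filtered[outer]       # denoise only the outer region
    return out
```

The per-pixel double loop is exactly the part that parallelizes trivially on a GPU, since each output pixel depends only on a small fixed neighborhood.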

  18. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.

    PubMed

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-28

Vision navigation determines position and attitude via real-time processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). It is widely used in indoor navigation, deep-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss within 5 min and 1500 m.
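The linear road-segment index described for fast image search can be sketched as a sorted per-segment list keyed by chainage (distance along the segment), giving logarithmic-time retrieval of candidate images near a predicted position. The class, method, and parameter names are illustrative assumptions, not the paper's data model.

```python
import bisect

class GridIndex:
    """Linear index of geo-referenced images along road segments:
    each image is stored under (segment_id, chainage_m), so candidates
    near a predicted position are found by binary search."""

    def __init__(self):
        self.segments = {}   # segment_id -> sorted list of (chainage, image_id)

    def add(self, segment_id, chainage, image_id):
        entries = self.segments.setdefault(segment_id, [])
        bisect.insort(entries, (chainage, image_id))

    def candidates(self, segment_id, chainage, window=25.0):
        """Images within +/- window metres of the query chainage.
        The '' and '~' sentinels bracket ordinary ASCII image ids
        (an assumption of this sketch)."""
        entries = self.segments.get(segment_id, [])
        lo = bisect.bisect_left(entries, (chainage - window, ''))
        hi = bisect.bisect_right(entries, (chainage + window, '~'))
        return [img for _, img in entries[lo:hi]]
```

Only the short candidate list returned here would then go through the expensive robust image matching step, which is what keeps the retrieval stage fast.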

  19. An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database

    PubMed Central

    Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang

    2016-01-01

Vision navigation determines position and attitude via real-time processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or an inertial measurement unit (IMU). It is widely used in indoor navigation, deep-space navigation, and multiple-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image search and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multiple-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height under GPS loss within 5 min and 1500 m. PMID:26828496

  20. Early warning by near-real time disturbance monitoring (Invited)

    NASA Astrophysics Data System (ADS)

    Verbesselt, J.; Zeileis, A.; Herold, M.

    2013-12-01

Near real-time monitoring of ecosystem disturbances is critical for rapidly assessing and addressing impacts on carbon dynamics, biodiversity, and socio-ecological processes. Satellite remote sensing enables cost-effective and accurate monitoring at frequent time steps over large areas. Yet, generic methods to detect disturbances within newly captured satellite images are lacking. We propose a multi-purpose time-series-based disturbance detection approach that identifies and models stable historical variation to enable change detection within newly acquired data. Satellite image time series of vegetation greenness provide a global record of terrestrial vegetation productivity over the past decades. Here, we assess and demonstrate the method by applying it to (1) real-world satellite greenness image time series between February 2000 and July 2011 covering Somalia to detect drought-related vegetation disturbances and (2) Landsat image time series to detect forest disturbances. First, results illustrate that disturbances are successfully detected in near real-time while being robust to seasonality and noise. Second, major drought-related disturbances corresponding with the most drought-stressed regions in Somalia are detected from mid-2010 onwards. Third, the method can be applied to Landsat image time series having a lower temporal data density. Furthermore, the method can analyze in-situ or satellite data time series of biophysical indicators from local to global scale, since it is fast, does not depend on thresholds, and does not require time-series gap filling. While the data and methods used are appropriate for proof-of-concept development of global-scale disturbance monitoring, specific applications (e.g., drought or deforestation monitoring) mandate integration within an operational monitoring framework. Furthermore, the real-time monitoring method is implemented in an open-source environment and is freely available in the BFAST package for R. Information illustrating how to apply the method to satellite image time series is available at http://bfast.R-Forge.R-project.org/ and in the example section of the bfastmonitor() function within the BFAST package.
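The underlying idea — fit a model of stable seasonal and trend variation on the history period, then flag newly acquired observations that deviate from its prediction — can be sketched as a simplified Python stand-in. The real bfastmonitor uses moving-sums (MOSUM) tests on regression residuals; the harmonic regression plus 3-sigma rule below is an illustrative simplification, and all function names are assumptions.

```python
import numpy as np

def fit_history(t, y, period=12):
    """Fit intercept + trend + first-order seasonal harmonic to the
    stable history period of a greenness time series."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, resid.std(ddof=X.shape[1])

def monitor(t_new, y_new, beta, sigma, period=12, k=3.0):
    """Flag new observations deviating more than k*sigma from the
    stable-history prediction (simplified BFAST-style monitoring)."""
    X = np.column_stack([np.ones_like(t_new), t_new,
                         np.sin(2 * np.pi * t_new / period),
                         np.cos(2 * np.pi * t_new / period)])
    pred = X @ beta
    return np.abs(y_new - pred) > k * sigma
```

Because the seasonal harmonic is part of the fitted model, an ordinary dry-season dip is predicted and not flagged, while a drought-induced drop below the expected seasonal cycle is, which is the robustness to seasonality the abstract describes.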

  1. Design and realization of retina-like three-dimensional imaging based on a MOEMS mirror

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Hao, Qun; Xia, Wenze; Peng, Yuxin; Cheng, Yang; Mu, Jiaxing; Wang, Peng

    2016-07-01

To balance the conflicting demands of high-resolution, large-field-of-view, and real-time imaging, a retina-like imaging method based on time-of-flight (TOF) is proposed. Mathematical models of 3D imaging based on a MOEMS mirror are developed. Based on this method, we perform simulations of retina-like scanning properties, including compression of redundant information and rotation and scaling invariance. To validate the theory, we developed a prototype and conducted relevant experiments. The preliminary results agree well with the simulations.
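The rotation and scaling invariance of retina-like sampling can be illustrated with a log-polar grid: ring radii grow geometrically (dense "fovea" near the center, sparse periphery), so scaling the scene maps rings onto rings and rotating it maps wedges onto wedges. A minimal sketch with illustrative parameters and function names:

```python
import numpy as np

def retina_grid(n_rings, n_wedges, r_min, r_max):
    """Retina-like sampling points: geometric ring radii, uniform
    angular wedges. Returns an (n_rings, n_wedges, 2) array of (x, y)."""
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2 * np.pi * np.arange(n_wedges) / n_wedges
    pts = np.array([[r * np.cos(a), r * np.sin(a)]
                    for r in radii for a in angles])
    return pts.reshape(n_rings, n_wedges, 2)
```

Scaling every point by the fixed ring ratio reproduces the next ring exactly, and rotating by one wedge step reproduces the next wedge, so both transformations become simple index shifts in the sampled image — the invariance the simulations above verify.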

  2. [Mobile hospital -real time mobile telehealthcare system with ultrasound and CT van using high-speed satellite communication-].

    PubMed

    Takizawa, Masaomi; Miyashita, Toyohisa; Murase, Sumio; Kanda, Hirohito; Karaki, Yoshiaki; Yagi, Kazuo; Ohue, Toru

    2003-01-01

A real-time telescreening system was developed to detect early disease in rural residents using two types of mobile vans with a portable satellite station. The system consists of a 1.5 Mbps satellite communication link via the JCSAT-1B satellite, a spiral CT van, an ultrasound imaging van with two video conference systems, a DICOM server, and a multicast communication unit. Video and examination image data are transmitted from the van to hospitals and the university simultaneously. A physician in the hospital observes and interprets examination images from the van while watching video of the ultrasound transducer position on the screenee in the van. After reviewing the images, the physician explains the examination results over the video conference system. Seventy lung CT screenings and 203 ultrasound screenings were performed from March to June 2002. This real-time screening trial suggested that rural residents can receive better healthcare without visiting the hospital, and that such systems may open the gateway to reducing medical costs and the medical divide between urban and rural areas.

  3. Real-time hyperspectral imaging for food safety applications

    USDA-ARS's Scientific Manuscript database

    Multispectral imaging systems with selected bands are commonly used for real-time food-processing applications. Recent research has demonstrated that several image processing methods, including binning, noise-removal filtering, and appropriate morphological analysis in real-time mode, can remove most fa...

  4. Development of a high-speed VCSEL OCT system for real-time imaging of conscious patients' larynx using a hand-held probe (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Rangarajan, Swathi; Chou, Li-Dek; Coughlan, Carolyn; Sharma, Giriraj; Wong, Brian J. F.; Ramalingam, Tirunelveli S.

    2016-02-01

    Fourier domain optical coherence tomography (FD-OCT) is a noninvasive imaging modality that has previously been used to image the human larynx. However, differences in anatomical geometry and the short imaging range of conventional OCT limit its application in a clinical setting. To address this issue, we have developed a gradient-index (GRIN) lens rod-based hand-held probe in conjunction with a long-imaging-range 200 kHz vertical-cavity surface-emitting laser (VCSEL) swept-source optical coherence tomography (SS-OCT) system for high-speed real-time imaging of the human larynx in an office setting. This hand-held probe is designed to have a long and dynamically tunable working distance to accommodate differences in the anatomical geometry of human test subjects. A nominal working distance (~6 cm) of the probe is selected to give a lateral resolution <100 µm within a depth of focus of 6.4 mm, which covers more than half of the 12 mm imaging range of the VCSEL laser. The maximum lateral scanning range of the probe at the 6 cm working distance is approximately 8.4 mm, and imaging an area of 8.5 mm by 8.5 mm is accomplished within a second. Using this system, we will demonstrate real-time cross-sectional OCT imaging of the larynx during phonation in vivo in humans and ex vivo in pig vocal folds.

  5. Portable wide-field hand-held NIR scanner

    NASA Astrophysics Data System (ADS)

    Jung, Young-Jin; Roman, Manuela; Carrasquilla, Jennifer; Erickson, Sarah J.; Godavarty, Anuradha

    2013-03-01

    Near-infrared (NIR) optical imaging is one of the widely used medical imaging techniques for breast cancer imaging, functional brain mapping, and many other applications. However, conventional NIR imaging systems are bulky and expensive, thereby limiting their accelerated clinical translation. Herein a new compact (6 × 7 × 12 cm3), cost-effective, wide-field NIR scanner has been developed for contact as well as non-contact real-time imaging in both reflectance and transmission modes. The scanner mainly consists of an NIR light source (700-900 nm), an NIR-sensitive CCD camera, and custom-developed image acquisition and processing software to image an area of 12 cm2. Phantom experiments were conducted to assess the feasibility of diffuse optical imaging, using India ink as an absorption-based contrast agent. The developed NIR system measured light-intensity changes in absorption-contrast targets at depths of up to 4 cm in transillumination mode. Preliminary in-vivo studies demonstrated the feasibility of real-time monitoring of blood flow changes. Extensive in-vivo studies are currently being carried out with the ultra-portable NIR scanner to assess the potential of the imager for breast imaging.

  6. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk; Ringgaard, Steffen; Sørensen, Thomas Sangild

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions.
Results: Axial, sagittal, and coronal 2D MRI series yielded 3D respiratory motion curves for all volunteers. The motion directionality and amplitude were very similar when measured directly as in-plane motion or estimated indirectly as through-plane motion. The mean peak-to-peak breathing amplitude was 1.6 mm (left-right), 11.0 mm (craniocaudal), and 2.5 mm (anterior-posterior). The position of the watermelon structure was estimated in 2D MRI images with a root-mean-square error of 0.52 mm (in-plane) and 0.87 mm (through-plane). Conclusions: A method for 3D tracking in 2D MRI series was developed and demonstrated for liver tracking in volunteers. The method would allow real-time 3D localization with integrated MR-Linac systems.
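The core of step (3) above — picking the library template and in-plane position that maximize the normalized cross-correlation with the live 2D frame — can be sketched with a brute-force search. This is a minimal illustration, not the authors' implementation; the `ncc`/`track` names and the dictionary-keyed template library are hypothetical.

```python
import numpy as np

def ncc(template, patch):
    """Normalized cross-correlation coefficient of two equally sized arrays."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
    return (t * p).sum() / denom if denom > 0 else 0.0

def track(image, library):
    """Find the library template and in-plane offset with the highest NCC.

    `library` maps a known through-plane offset (e.g. in mm) to a 2D
    template. Returns (through_plane_offset, (row, col), score); the
    winning template gives the through-plane position, its location
    gives the in-plane position.
    """
    best = (None, None, -np.inf)
    H, W = image.shape
    for offset, tmpl in library.items():
        th, tw = tmpl.shape
        for r in range(H - th + 1):
            for c in range(W - tw + 1):
                s = ncc(tmpl, image[r:r + th, c:c + tw])
                if s > best[2]:
                    best = (offset, (r, c), s)
    return best
```

A production implementation would compute the correlation in the Fourier domain rather than by exhaustive spatial search.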

  7. An Efficient Framework for Compressed Sensing Reconstruction of Highly Accelerated Dynamic Cardiac MRI

    NASA Astrophysics Data System (ADS)

    Ting, Samuel T.

    The research presented in this work seeks to develop, validate, and deploy practical techniques for improving diagnosis of cardiovascular disease. In the philosophy of biomedical engineering, we seek to identify an existing medical problem having significant societal and economic effects and address this problem using engineering approaches. Cardiovascular disease is the leading cause of mortality in the United States, accounting for more deaths than any other major cause of death in every year since 1900 with the exception of the year 1918. Cardiovascular disease is estimated to account for almost one-third of all deaths in the United States, with more than 2150 deaths each day, or roughly 1 death every 40 seconds. In the past several decades, a growing array of imaging modalities has proven useful in aiding the diagnosis and evaluation of cardiovascular disease, including computed tomography, single photon emission computed tomography, and echocardiography. In particular, cardiac magnetic resonance imaging is an excellent diagnostic tool that can provide within a single exam a high quality evaluation of cardiac function, blood flow, perfusion, viability, and edema without the use of ionizing radiation. The scope of this work focuses on the application of engineering techniques for improving imaging using cardiac magnetic resonance with the goal of improving the utility of this powerful imaging modality. Dynamic cine imaging, or the capturing of movies of a single slice or volume within the heart or great vessel region, is used in nearly every cardiac magnetic resonance imaging exam, and adequate evaluation of cardiac function and morphology for diagnosis and evaluation of cardiovascular disease depends heavily on both the spatial and temporal resolution as well as the image quality of the reconstructed cine images.
This work focuses primarily on image reconstruction techniques utilized in cine imaging; however, the techniques discussed are also relevant to other dynamic and static imaging techniques based on cardiac magnetic resonance. Conventional segmented techniques for cardiac cine imaging require breath-holding as well as regular cardiac rhythm, and can be time-consuming to acquire. Inadequate breath-holding or irregular cardiac rhythm can result in completely non-diagnostic images, limiting the utility of these techniques in a significant patient population. Real-time single-shot cardiac cine imaging enables free-breathing acquisition with significantly shortened imaging time and promises to significantly improve the utility of cine imaging for diagnosis and evaluation of cardiovascular disease. However, the utility of real-time cine images depends heavily on the successful reconstruction of final cine images from undersampled data. Successful reconstruction of images from more highly undersampled data results directly in images exhibiting finer spatial and temporal resolution, provided that image quality is sufficient. This work focuses primarily on the development, validation, and deployment of practical techniques for enabling the reconstruction of real-time cardiac cine images at the spatial and temporal resolutions and image quality needed for diagnostic utility. Particular emphasis is placed on the development of reconstruction approaches with short computation times that can be used in the clinical environment. Specifically, the use of compressed sensing signal recovery techniques is considered; such techniques show great promise in allowing successful reconstruction of highly undersampled data. The scope of this work concerns two primary topics related to signal recovery using compressed sensing: (1) long reconstruction times of these techniques, and (2) improved sparsity models for signal recovery from more highly undersampled data.
Both of these aspects are relevant to the practical application of compressed sensing techniques in the context of improving image reconstruction of real-time cardiac cine images. First, algorithmic and implementational approaches are proposed for reducing the computation time of a compressed sensing reconstruction framework. Specific optimization algorithms based on the fast iterative shrinkage-thresholding algorithm (FISTA) are applied in the context of real-time cine image reconstruction to achieve efficient per-iteration computation time. Implementation within a code framework utilizing commercially available graphics processing units (GPUs) allows for practical and efficient implementation directly within the clinical environment. Second, patch-based sparsity models are proposed to enable compressed sensing signal recovery from highly undersampled data. Numerical studies demonstrate that this approach can help improve image quality at higher undersampling ratios, enabling real-time cine imaging at higher acceleration rates. In this work, it is shown that these techniques yield a holistic framework for achieving efficient reconstruction of real-time cine images with spatial and temporal resolution sufficient for use in the clinical environment. A thorough description of these techniques is provided from both a theoretical and a practical view, both of which may be of interest to the reader in terms of future work.
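The FISTA-type iteration mentioned above alternates a gradient step, a soft-thresholding (shrinkage) step, and a momentum update. A minimal sketch for the generic l1-regularized least-squares problem, not the patch-based cine reconstruction itself; the `fista` function and its arguments are illustrative only.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm: elementwise soft thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the data-fidelity term
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum (acceleration) step
        x, t = x_new, t_new
    return x
```

In the cine-MRI setting, `A` would be the undersampled Fourier-encoding operator and the thresholding would act on a sparsifying (e.g. patch-based) transform of the image series rather than on the image directly.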

  8. A micromachined silicon parallel acoustic delay line (PADL) array for real-time photoacoustic tomography (PAT)

    NASA Astrophysics Data System (ADS)

    Cho, Young Y.; Chang, Cheng-Chung; Wang, Lihong V.; Zou, Jun

    2015-03-01

    To achieve real-time photoacoustic tomography (PAT), massive transducer arrays and data acquisition (DAQ) electronics are needed to receive the PA signals simultaneously, which results in complex and high-cost ultrasound receiver systems. To address this issue, we have developed a new PA data acquisition approach using acoustic time delay. Optical fibers were used as parallel acoustic delay lines (PADLs) to create different time delays in multiple channels of PA signals. This makes the PA signals reach a single-element transducer at different times. As a result, they can be properly received by single-channel DAQ electronics. However, due to their small diameter and fragility, using optical fibers as acoustic delay lines poses a number of challenges in the design, construction and packaging of the PADLs, thereby limiting their performance and use in real imaging applications. In this paper, we report the development of new silicon PADLs, which are directly made from silicon wafers using advanced micromachining technologies. The silicon PADLs have very low acoustic attenuation and distortion. A linear array of 16 silicon PADLs was assembled into a handheld package with one common input port and one common output port. To demonstrate its real-time PAT capability, the silicon PADL array (with its output port interfaced with a single-element transducer) was used to receive 16 channels of PA signals simultaneously from a tissue-mimicking optical phantom sample. The reconstructed PA image matches well with the imaging target. Therefore, the silicon PADL array can provide a 16× reduction in the ultrasound DAQ channels for real-time PAT.
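The delay-line idea — staggering each channel by a known acoustic delay so a single trace carries all 16 channels — can be illustrated in a few lines. This is a toy sketch with hypothetical function names; it assumes the delay increment exceeds the signal duration so the delayed signals do not overlap, which is the condition the PADL lengths are designed to satisfy.

```python
import numpy as np

def multiplex(signals, delays, total_len):
    """Sum per-channel PA signals onto one trace after their known delays
    (what the PADL array does acoustically before the single transducer)."""
    trace = np.zeros(total_len)
    for sig, d in zip(signals, delays):
        trace[d:d + len(sig)] += sig
    return trace

def demultiplex(trace, delays, sig_len):
    """Recover each channel by windowing the trace at its known delay."""
    return [trace[d:d + sig_len].copy() for d in delays]
```

With non-overlapping windows, single-channel DAQ recovers all channels exactly; in practice attenuation and dispersion along each delay line would also have to be compensated.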

  9. Real-Time Monitoring and Evaluation of a Visual-Based Cervical Cancer Screening Program Using a Decision Support Job Aid.

    PubMed

    Peterson, Curtis W; Rose, Donny; Mink, Jonah; Levitz, David

    2016-05-16

    In many developing nations, cervical cancer screening is done by visual inspection with acetic acid (VIA). Monitoring and evaluation (M&E) of such screening programs is challenging. An enhanced visual assessment (EVA) system was developed to augment VIA procedures in low-resource settings. The EVA System consists of a mobile colposcope built around a smartphone, and an online image portal for storing and annotating images. A smartphone app is used to control the mobile colposcope, and upload pictures to the image portal. In this paper, a new app feature that documents clinical decisions using an integrated job aid was deployed in a cervical cancer screening camp in Kenya. Six organizations conducting VIA used the EVA System to screen 824 patients over the course of a week, and providers recorded their diagnoses and treatments in the application. Real-time aggregated statistics were broadcast on a public website. Screening organizations were able to assess the number of patients screened, alongside treatment rates, and the patients who tested positive and required treatment in real time, which allowed them to make adjustments as needed. The real-time M&E enabled by "smart" diagnostic medical devices holds promise for broader use in screening programs in low-resource settings.

  10. Real-time cardiovascular magnetic resonance at high temporal resolution: radial FLASH with nonlinear inverse reconstruction.

    PubMed

    Zhang, Shuo; Uecker, Martin; Voit, Dirk; Merboldt, Klaus-Dietmar; Frahm, Jens

    2010-07-08

    Functional assessments of the heart by dynamic cardiovascular magnetic resonance (CMR) commonly rely on (i) electrocardiographic (ECG) gating yielding pseudo real-time cine representations, (ii) balanced gradient-echo sequences referred to as steady-state free precession (SSFP), and (iii) breath holding or respiratory gating. Problems may therefore be due to the need for a robust ECG signal, the occurrence of arrhythmia and beat to beat variations, technical instabilities (e.g., SSFP "banding" artefacts), and limited patient compliance and comfort. Here we describe a new approach providing true real-time CMR with image acquisition times as short as 20 to 30 ms or rates of 30 to 50 frames per second. The approach relies on a previously developed real-time MR method, which combines a strongly undersampled radial FLASH CMR sequence with image reconstruction by regularized nonlinear inversion. While iterative reconstructions are currently performed offline due to limited computer speed, online monitoring during scanning is accomplished using gridding reconstructions with a sliding window at the same frame rate but with lower image quality. Scans of healthy young subjects were performed at 3 T without ECG gating and during free breathing. The resulting images yield T1 contrast (depending on flip angle) with an opposed-phase or in-phase condition for water and fat signals (depending on echo time). They completely avoid (i) susceptibility-induced artefacts due to the very short echo times, (ii) radiofrequency power limitations due to excitations with flip angles of 10 degrees or less, and (iii) the risk of peripheral nerve stimulation due to the use of normal gradient switching modes. For a section thickness of 8 mm, real-time images offer a spatial resolution and total acquisition time of 1.5 mm at 30 ms and 2.0 mm at 22 ms, respectively. 
Though awaiting thorough clinical evaluation, this work describes a robust and flexible acquisition and reconstruction technique for real-time CMR at the ultimate limit of this technology.
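The online monitoring described above — gridding reconstructions with a sliding window at the same frame rate — can be sketched as view sharing over a stream of radial spokes: each displayed frame reuses the most recent spokes, advancing by a few spokes per frame. This is a schematic illustration only; the actual method grids the spokes and applies an FFT, for which a simple average stands in here, and all names are hypothetical.

```python
from collections import deque
import numpy as np

def sliding_window_frames(spokes, window=15, step=5):
    """Yield view-shared frames from a stream of radial spokes.

    Each output frame combines the most recent `window` spokes and the
    window advances by `step` spokes per frame, so the frame rate is
    (spoke rate / step) while each frame shares data with its neighbors.
    """
    buf = deque(maxlen=window)
    for i, s in enumerate(spokes):
        buf.append(s)
        if len(buf) == window and (i - window + 1) % step == 0:
            yield np.mean(buf, axis=0)   # stand-in for gridding + FFT
```

The iterative nonlinear-inversion reconstruction then replaces this quick preview offline (or, with sufficient compute, online) at full image quality.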

  11. Image segmentation based upon topological operators: real-time implementation case study

    NASA Astrophysics Data System (ADS)

    Mahmoudi, R.; Akil, M.

    2009-02-01

    Thinning and crest restoring are of considerable interest in many image processing applications. The recommended algorithms for these operations are those able to act directly on grayscale images while preserving topology, but their high computational cost remains the major obstacle to their adoption. In this paper we present an efficient implementation, on a RISC processor, of two powerful thinning and crest-restoring algorithms developed by our team. The proposed implementation improves execution time. A segmentation chain applied to medical imaging serves as a concrete example to illustrate the improvements brought by optimization techniques at both the algorithmic and architectural levels. In particular, use of the SSE instruction set of x86-32 processors (Pentium IV, 3.06 GHz) enables real-time processing: a rate of 33 images (512×512) per second is achieved.

  12. A Novel, Real-Time, In Vivo Mouse Retinal Imaging System

    PubMed Central

    Butler, Mark C.; Sullivan, Jack M.

    2015-01-01

    Purpose To develop an efficient, low-cost instrument for robust real-time imaging of the mouse retina in vivo, and to assess system capabilities by evaluating various animal models. Methods Following multiple disappointing attempts to visualize the mouse retina during a subretinal injection using commercially available systems, we identified the key limitation to be inadequate illumination, due to off-axis illumination and poor optical train optimization. We therefore designed a paraxial illumination system for a Greenough-type stereo dissecting microscope, incorporating an optimized optical launch and an efficiently coupled fiber-optic delivery system. Excitation and emission filters control spectral bandwidth. A color charge-coupled device (CCD) camera is coupled to the microscope for image capture. Although field of view (FOV) is constrained by the small pupil aperture, the high optical power of the mouse eye, and the long working distance (needed for surgical manipulations), these limitations can be compensated for by eye positioning in order to observe the entire retina. Results The retinal imaging system delivers an adjustable narrow beam to the dilated pupil with minimal vignetting. The optic nerve, vasculature, and posterior pole are crisply visualized, and the entire retina can be observed through eye positioning. Normal and degenerative retinal phenotypes can be followed over time. Subretinal or intraocular injection procedures are followed in real time. Real-time intravenous fluorescein angiography in the live mouse has been achieved. Conclusions A novel device is established for real-time viewing and image capture of the small animal retina during subretinal injections for preclinical gene therapy studies. PMID:26551329

  13. Single sensor that outputs narrowband multispectral images

    PubMed Central

    Kong, Linghua; Yi, Dingrong; Sprigle, Stephen; Wang, Fengtao; Wang, Chao; Liu, Fuhan; Adibi, Ali; Tummala, Rao

    2010-01-01

    We report the development of a hand-held (miniaturized), low-cost, stand-alone, real-time, narrow-bandwidth multispectral imaging device for the detection of early-stage pressure ulcers. PMID:20210418

  14. A street rubbish detection algorithm based on Sift and RCNN

    NASA Astrophysics Data System (ADS)

    Yu, XiPeng; Chen, Zhong; Zhang, Shuo; Zhang, Ting

    2018-02-01

    This paper presents a street rubbish detection algorithm based on image registration with SIFT features and an R-CNN. First, a CNN is trained on a sample set consisting of rubbish and non-rubbish images. Then, for every clean street image, SIFT features are extracted and used to register the clean image against the real-time street image; the resulting differential image filters out most background information. Rubbish region proposals, i.e., rectangles where rubbish may appear, are obtained from the differential image with the selective search algorithm. The CNN then classifies the pixel data of each region proposal on the real-time street image; according to the output vector of the CNN, each proposal is judged to contain rubbish or not, and positive proposals are marked on the real-time street image. Because the CNN examines only the region proposals that may contain rubbish, rather than the whole image, the algorithm avoids a large number of false detections. Unlike traditional region-proposal-based object detection algorithms, the proposals are obtained on the differential image rather than on the whole real-time street image, which greatly reduces the number of invalid proposals. The algorithm achieves a high mean average precision (mAP).
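The registration-then-difference step — aligning the clean reference to the live frame and thresholding what remains — is the part of the pipeline that suppresses background before any CNN runs. The sketch below substitutes FFT phase correlation for SIFT matching (translation-only) and a single bounding box for selective search, so it illustrates the idea rather than reproducing the paper's method; all names are hypothetical.

```python
import numpy as np

def register_translation(ref, img):
    """Estimate the integer (dy, dx) cyclic shift taking `ref` to `img`
    via FFT phase correlation (a translation-only stand-in for SIFT
    feature matching)."""
    R = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    # wrap peak indices into signed shifts
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def rubbish_proposal(clean, live, thresh=0.5):
    """Align the live frame to the clean reference, threshold the
    differential image, and return one bounding box of changed pixels
    (a real pipeline would run selective search / connected components)."""
    dy, dx = register_translation(clean, live)
    aligned = np.roll(live, (-dy, -dx), axis=(0, 1))
    ys, xs = np.nonzero(np.abs(aligned - clean) > thresh)
    if ys.size == 0:
        return None
    return ys.min(), xs.min(), ys.max(), xs.max()
```

Each returned box would then be cropped from the live frame and passed to the CNN classifier.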

  15. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    PubMed

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17 to 0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real time from a single x-ray image.
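The PCA machinery described above can be sketched in a few lines. The toy below flattens each training DVF into a row, builds the mean-plus-modes model, and recovers the PCA coefficients from a single projection by least squares; it assumes a hypothetical *linear* operator `P` from DVF to projection, whereas the paper optimizes the coefficients against the true (nonlinear) computed projection of the warped reference image.

```python
import numpy as np

def pca_dvf_model(dvfs, n_modes=3):
    """Build a low-dimensional model of deformation vector fields.

    `dvfs` is (N-1, M): each row is a flattened DVF relative to the
    reference phase. Returns (mean, basis) with basis of shape
    (M, n_modes), so any new DVF is modeled as mean + basis @ coeffs.
    """
    mean = dvfs.mean(axis=0)
    # principal components from the SVD of the centered training matrix
    _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
    return mean, vt[:n_modes].T

def reconstruct_dvf(P, measured, mean, basis):
    """Solve for the PCA coefficients whose modeled projection best
    matches the measured x-ray projection, then return the full DVF.
    P is a hypothetical linearized DVF-to-projection operator."""
    coeffs, *_ = np.linalg.lstsq(P @ basis, measured - P @ mean, rcond=None)
    return mean + basis @ coeffs
```

Because only a handful of coefficients are optimized per frame, the per-projection cost is small, which is what makes the GPU implementation real-time capable.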

  16. SU-F-T-41: 3D MTP-TRUS for Prostate Implant

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, P

    Purpose: Prostate brachytherapy is an effective treatment for early prostate cancer. Current prostate implants are limited to using 2D transrectal ultrasound (TRUS) or a mechanical motor-driven 2D array mounted either at the end or on the side of the probe. Real-time 3D images can improve the accuracy of prostate implant guidance. The concept of our system is to allow real-time full visualization of the entire prostate with multiple transverse scans. Methods: The prototype 3D Multiple-Transverse-Plane Transrectal Ultrasound (MTP-TRUS) probe was designed by us and manufactured by Blatek Inc. It has 7 convex linear arrays, each with 96 elements. It is connected to the cQuest Firebird research system (Cephasonics Inc.), a flexible and configurable ultrasound development platform. The cQuest Firebird system is compact and supports real-time wireless image transfer. A relay-based multiplexer board was designed so that the cQuest Firebird system can connect to all 672 elements. Results: The center frequency of the probe is 6 MHz ± 10%. The probe diameter is 3 cm and its length is 20 cm. The element pitch is 0.205 mm. The array focus is 30 mm and the array spacing is 1.6 cm. The beam data for each array were measured and met our expectations. The MTP-TRUS interface board has been built and connects to the cQuest Firebird system. The image display interface is still under development, and our real-time needle tracking algorithm will be implemented as well. Conclusion: Our MTP-TRUS system for prostate implants will be able to acquire real-time 3D images of the prostate and perform real-time needle segmentation and tracking. The system is compact and has wireless capability.

  17. Performance assessment of 3D surface imaging technique for medical imaging applications

    NASA Astrophysics Data System (ADS)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Recent developments in optical 3D surface imaging technologies provide better ways to digitize a 3D surface and its motion in real time. The non-invasive 3D surface imaging approach has great potential for many medical imaging applications, such as motion monitoring in radiotherapy and pre/post evaluation in plastic surgery and dermatology, to name a few. Various commercial 3D surface imaging systems have appeared on the market with different dimensions, speeds, and accuracies. For clinical applications, accuracy, reproducibility, and robustness across widely heterogeneous skin colors, tones, textures, shapes, and ambient lighting conditions are crucial. Until now, a systematic approach for evaluating the performance of different 3D surface imaging systems has not existed. In this paper, we present a systematic performance assessment approach for 3D surface imaging systems in medical applications. We use this approach to examine a new real-time surface imaging system we developed, dubbed the "Neo3D Camera", for image-guided radiotherapy (IGRT). The assessments include accuracy, field of view, coverage, repeatability, speed, and sensitivity to environment, texture, and color.

  18. Parallel-hierarchical processing and classification of laser beam profile images based on the GPU-oriented architecture

    NASA Astrophysics Data System (ADS)

    Yarovyi, Andrii A.; Timchenko, Leonid I.; Kozhemiako, Volodymyr P.; Kokriatskaia, Nataliya I.; Hamdi, Rami R.; Savchuk, Tamara O.; Kulyk, Oleksandr O.; Surtel, Wojciech; Amirgaliyev, Yedilkhan; Kashaganova, Gulzhan

    2017-08-01

    The paper addresses the insufficient performance of existing computing hardware for large-image processing, which does not meet the requirements of resource-intensive laser-beam-profiling tasks. The research concentrates on one profiling problem: real-time processing of spot images of the laser beam profile. A theory of parallel-hierarchical transformation was developed, yielding models of high-performance parallel-hierarchical processes as well as algorithms and software implementing them on a GPU-oriented architecture using GPGPU technologies. Performance analysis of the proposed tools for processing and classifying laser beam profile images shows that dynamic images of various sizes can be processed in real time.

  19. Acoustic radiation force impulse imaging for real-time observation of lesion development during radiofrequency ablation procedures

    NASA Astrophysics Data System (ADS)

    Fahey, Brian J.; Trahey, Gregg E.

    2005-04-01

    When performing radiofrequency ablation (RFA) procedures, physicians currently have little or no feedback concerning the success of the treatment until follow-up assessments are made days to weeks later. To be successful, RFA must induce a thermal lesion of sufficient volume to completely destroy a target tumor or completely isolate an aberrant cardiac pathway. Although ultrasound, computed tomography (CT), and CT-based fluoroscopy have found use in guiding RFA treatments, they are deficient in giving accurate assessments of lesion size or boundaries during procedures. As induced thermal lesion size can vary considerably from patient to patient, the current lack of real-time feedback during RFA procedures is troublesome. We have developed a technique for real-time monitoring of thermal lesion size during RFA procedures utilizing acoustic radiation force impulse (ARFI) imaging. In both ex vivo and in vivo tissues, ARFI imaging provided better thermal lesion contrast and better overall appreciation for lesion size and boundaries relative to conventional sonography. The thermal safety of ARFI imaging for use at clinically realistic depths was also verified through the use of finite element method models. As ARFI imaging is implemented entirely on a diagnostic ultrasound scanner, it is a convenient, inexpensive, and promising modality for monitoring RFA procedures in vivo.

  20. 4-D photoacoustic tomography.

    PubMed

    Xiang, Liangzhong; Wang, Bo; Ji, Lijun; Jiang, Huabei

    2013-01-01

    Photoacoustic tomography (PAT) offers three-dimensional (3D) structural and functional imaging of living biological tissue with label-free, optical absorption contrast. These attributes lend PAT imaging to a wide variety of applications in clinical medicine and preclinical research. Despite advances in live animal imaging with PAT, there is still a need for 3D imaging at centimeter depths in real time. We report the development of four-dimensional (4D) PAT, which integrates time resolution with 3D spatial resolution, obtained using spherical arrays of ultrasonic detectors. The 4D PAT technique generates motion pictures of imaged tissue, enabling real-time tracking of dynamic physiological and pathological processes with hundred-micrometer spatial and millisecond temporal resolution. The 4D PAT technique is used here to image needle-based drug delivery and pharmacokinetics. We also use this technique to monitor 1) fast hemodynamic changes during inter-ictal epileptic seizures and 2) temperature variations during tumor thermal therapy.

  1. UWGSP7: a real-time optical imaging workstation

    NASA Astrophysics Data System (ADS)

    Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.

    1995-04-01

    With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into a subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images.
To accommodate different input devices, the camera interface circuitry is designed on a small mezzanine board that supports the RS-170 standard. The image acquisition board is connected to the image-processing board by a direct connect port which provides a 66 Mbytes/s channel independent of the system bus. The image-processing board utilizes the Texas Instruments TMS320C80 Multimedia Video Processor chip. This chip is capable of 2 billion operations per second, providing the UWGSP7 with the capability to perform real-time image processing functions such as median filtering, convolution, and contrast enhancement. This processing power allows interactive analysis of experiments, in contrast to the current practice of off-line processing and analysis. Due to its flexibility and programmability, the UWGSP7 can be adapted to various research needs in intraoperative optical imaging.
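    The control-subtraction pipeline described above is straightforward to sketch in software. Below is a minimal illustration (our own, not the ICSL board firmware; all function names are hypothetical) of subtracting a stored control image from a data image and mapping the signed reflectance change to a red/blue pseudo-color display:

```python
import numpy as np

def reflectance_change(control, data):
    """Signed change in optical reflectance: data image minus the
    control image acquired before dye injection."""
    return np.asarray(data, dtype=float) - np.asarray(control, dtype=float)

def to_pseudocolor(diff):
    """Map a signed difference image to RGB pseudo-color: red encodes
    increased reflectance, blue encodes decreased reflectance."""
    scale = float(np.max(np.abs(diff))) or 1.0   # avoid divide-by-zero on a flat image
    norm = diff / scale                          # values now in [-1, 1]
    rgb = np.zeros(diff.shape + (3,))
    rgb[..., 0] = np.clip(norm, 0.0, 1.0)        # red channel: positive change
    rgb[..., 2] = np.clip(-norm, 0.0, 1.0)       # blue channel: negative change
    return rgb
```

    In the real system this subtraction would run over a continuously acquired sequence after alignment to the control image, with the frame averaging done on the acquisition board preceding it.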

  2. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide-semiconductor) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single-anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the PMT processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of the PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting momentum-matched co-fragment pairs (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide.
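    The two real-time steps named above (centroiding ion spots on each frame, then correlating spot intensities with PMT peak heights for multi-hit assignment) can be caricatured in a few lines. This is our own toy sketch, not the authors' 1 kHz implementation, and every name in it is hypothetical:

```python
import numpy as np

def centroid_spots(frame, threshold):
    """Label connected above-threshold pixels (4-connectivity, BFS) and
    return, for each spot, (row_centroid, col_centroid, total_intensity)."""
    frame = np.asarray(frame, dtype=float)
    mask = frame > threshold
    visited = np.zeros_like(mask)
    spots = []
    rows, cols = frame.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0, c0] and not visited[r0, c0]:
                visited[r0, c0] = True
                stack, pixels = [(r0, c0)], []
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                                and mask[rr, cc] and not visited[rr, cc]:
                            visited[rr, cc] = True
                            stack.append((rr, cc))
                weights = np.array([frame[p] for p in pixels])
                total = weights.sum()
                r_c = sum(p[0] * w for p, w in zip(pixels, weights)) / total
                c_c = sum(p[1] * w for p, w in zip(pixels, weights)) / total
                spots.append((float(r_c), float(c_c), float(total)))
    return spots

def match_spots_to_tof(spots, peak_heights):
    """Pair camera spots with time-of-flight peaks by intensity rank,
    mimicking the intensity-correlation multi-hit assignment."""
    spot_order = np.argsort([-s[2] for s in spots])
    peak_order = np.argsort([-h for h in peak_heights])
    return [(int(s), int(p)) for s, p in zip(spot_order, peak_order)]
```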

  3. Prototype of a single probe Compton camera for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.

    2017-02-01

    Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source with ARM ≈ 22.1° and the effectiveness of the proposed system.
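    For context, a Compton camera reconstructs a scattering angle per event from the deposited energies alone; the angular resolution measure (ARM) quoted above is the spread between this reconstructed angle and the geometrically known one. A minimal sketch of the kinematics (our illustration, with energies in keV):

```python
import math

def compton_scatter_angle(e1_kev, e2_kev):
    """Compton scattering angle (radians) from the energy e1 deposited
    in the scatterer and the energy e2 absorbed afterwards:
    cos(theta) = 1 - me*c^2 * (1/e2 - 1/(e1 + e2))."""
    me_c2 = 511.0  # electron rest energy in keV
    c = 1.0 - me_c2 * (1.0 / e2_kev - 1.0 / (e1_kev + e2_kev))
    if not -1.0 <= c <= 1.0:
        raise ValueError("kinematically forbidden energy pair")
    return math.acos(c)
```

    The ARM is then the per-event difference between this angle and the angle subtended by the known source position.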

  4. Classification and overview of research in real-time imaging

    NASA Astrophysics Data System (ADS)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  5. The smartphone brain scanner: a portable real-time neuroimaging system.

    PubMed

    Stopczynski, Arkadiusz; Stahlhut, Carsten; Larsen, Jakob Eg; Petersen, Michael Kai; Hansen, Lars Kai

    2014-01-01

    Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. Here we present the technical details and validation of a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system--Smartphone Brain Scanner--combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully portable system for real-time 3D EEG imaging. We discuss the benefits and challenges, including technical limitations as well as details of real-time reconstruction of 3D images of brain activity. We present examples of brain activity captured in a simple experiment involving imagined finger tapping, which shows that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than the signal obtained using high-density standard EEG equipment, we propose that mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings.

  6. Real-time X-ray Diffraction: Applications to Materials Characterization

    NASA Technical Reports Server (NTRS)

    Rosemeier, R. G.

    1984-01-01

    With the high-speed growth of materials it becomes necessary to develop measuring systems which also have the capability of characterizing these materials at high speed. One of the conventional techniques for characterizing materials is X-ray diffraction. Film, which is the oldest method of recording the X-ray diffraction phenomenon, is not adequate in most circumstances to record fast-changing events. Even though conventional proportional counters and scintillation counters can provide the speed necessary to record these changing events, they lack the ability to provide image information, which may be important in some types of experimental or production arrangements. A selected number of novel applications of using X-ray diffraction to characterize materials in real time are discussed. Also, device characteristics of some X-ray intensifiers useful in instantaneous X-ray diffraction applications are briefly presented. Real-time X-ray diffraction experiments with the incorporation of X-ray image intensification add a new dimension to the characterization of materials. The uses of real-time image intensification in laboratory and production arrangements are virtually unlimited, and their application depends chiefly upon the ingenuity of the scientist or engineer.

  7. Determination of Exterior Orientation Parameters Through Direct Geo-Referencing in a Real-Time Aerial Monitoring System

    NASA Astrophysics Data System (ADS)

    Kim, H.; Lee, J.; Choi, K.; Lee, I.

    2012-07-01

    Rapid responses for emergency situations such as natural disasters or accidents often require geo-spatial information describing the on-going status of the affected area. Such geo-spatial information can be promptly acquired by a manned or unmanned aerial vehicle based multi-sensor system that can monitor the emergent situations in near real-time from the air using several kinds of sensors. Thus, we are developing such a real-time aerial monitoring system (RAMS) consisting of both aerial and ground segments. The aerial segment acquires the sensory data about the target areas by a low-altitude helicopter system equipped with sensors such as a digital camera and a GPS/IMU system and transmits them to the ground segment through an RF link in real-time. The ground segment, which is a deployable ground station installed on a truck, receives the sensory data and rapidly processes them to generate ortho-images, DEMs, etc. In order to generate geo-spatial information, in this system, exterior orientation parameters (EOP) of the acquired images are obtained through direct geo-referencing because it is difficult to acquire coordinates of ground points in a disaster area. The main process, from the data acquisition stage to the measurement of EOP, is as follows. First, at the time of data acquisition, the image acquisition time, synchronized to GPS time, is recorded as part of the image file name. Second, the acquired data are transmitted to the ground segment in real-time. Third, using the ground segment's processing software, the positions/attitudes of the acquired images are calculated through linear interpolation using the GPS times of the received position/attitude data and images. Finally, the EOPs of the images are obtained from the position/attitude data by deriving the relationships between the camera coordinate system and the GPS/IMU coordinate system. In this study, we evaluated the accuracy of the EOP determined by direct geo-referencing in our system. 
To do this, we used EOP precisely calculated with a digital photogrammetry workstation (DPW) as reference data. The results of the evaluation indicate that the accuracy of the EOP acquired by our system is reasonable in comparison with the performance of the GPS/IMU system. Our system can also acquire precise multi-sensor data to generate geo-spatial information in emergency situations. In the near future, we plan to complete the development of the rapid generation system of the ground segment. Our system is expected to be able to acquire ortho-images and DEMs of the damaged area in near real-time. Its performance, along with the accuracy of the generated geo-spatial information, will be evaluated and reported in future work.
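    The interpolation step above (third in the processing chain) reduces to one-dimensional linear interpolation of each navigation component at the image exposure times. A minimal sketch under our own naming; note that attitude angles wrapping through 360° would need unwrapping before interpolation, which this sketch omits:

```python
import numpy as np

def interpolate_eop(image_times, nav_times, nav_values):
    """Linearly interpolate GPS/IMU navigation samples (one column per
    position or attitude component) to the image exposure times, as in
    the direct geo-referencing step described above."""
    nav_values = np.asarray(nav_values, dtype=float)
    return np.column_stack([
        np.interp(image_times, nav_times, nav_values[:, k])
        for k in range(nav_values.shape[1])
    ])
```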

  8. EO/IR scene generation open source initiative for real-time hardware-in-the-loop and all-digital simulation

    NASA Astrophysics Data System (ADS)

    Morris, Joseph W.; Lowry, Mac; Boren, Brett; Towers, James B.; Trimble, Darian E.; Bunfield, Dennis H.

    2011-06-01

    The US Army Aviation and Missile Research, Development and Engineering Center (AMRDEC) and the Redstone Test Center (RTC) have formed the Scene Generation Development Center (SGDC) to support the Department of Defense (DoD) open source EO/IR Scene Generation initiative for real-time hardware-in-the-loop and all-digital simulation. Various branches of the DoD have invested significant resources in the development of advanced scene and target signature generation codes. The SGDC goal is to maintain unlimited government rights and controlled access to government open source scene generation and signature codes. In addition, the SGDC provides development support to a multi-service community of test and evaluation (T&E) users, developers, and integrators in a collaborative environment. The SGDC has leveraged the DoD Defense Information Systems Agency (DISA) ProjectForge (https://Project.Forge.mil) which provides a collaborative development and distribution environment for the DoD community. The SGDC will develop and maintain several codes for tactical and strategic simulation, such as the Joint Signature Image Generator (JSIG), the Multi-spectral Advanced Volumetric Real-time Imaging Compositor (MAVRIC), and Office of the Secretary of Defense (OSD) Test and Evaluation Science and Technology (T&E/S&T) thermal modeling and atmospherics packages, such as EOView, CHARM, and STAR. Other utility packages included are the ContinuumCore for real-time messaging and data management and IGStudio for run-time visualization and scenario generation.

  9. Real-time image-processing algorithm for markerless tumour tracking using X-ray fluoroscopic imaging.

    PubMed

    Mori, S

    2014-05-01

    To ensure accuracy in respiratory-gating treatment, X-ray fluoroscopic imaging is used to detect tumour position in real time. Detection accuracy is strongly dependent on image quality, particularly positional differences between the patient and treatment couch. We developed a new algorithm to improve the quality of images obtained in X-ray fluoroscopic imaging and report the preliminary results. Two oblique X-ray fluoroscopic images were acquired using a dynamic flat panel detector (DFPD) for two patients with lung cancer. A weighting factor was applied to each column of the DFPD image, because most anatomical structures, as well as the treatment couch and port cover edge, were aligned in the superior-inferior direction when the patient lay on the treatment couch. The weighting factors for the respective columns were varied until the standard deviation of the pixel values within the image region was minimized. Once the weighting factors were calculated, the quality of the DFPD image was improved by applying the factors to multiframe images. Applying the image-processing algorithm produced substantial improvement in the quality of images, and the image contrast was increased. The treatment couch and irradiation port edge, which were not related to a patient's position, were removed. The average image-processing time was 1.1 ms, showing that this fast image processing can be applied to real-time tumour-tracking systems. These findings indicate that this image-processing algorithm improves the image quality in patients with lung cancer and successfully removes objects not related to the patient. Our image-processing algorithm might be useful in improving gated-treatment accuracy.
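    As a rough stand-in for the iterative search described above, one can compute closed-form per-column factors that flatten the column means, which likewise suppresses column-aligned structures such as the couch edge. This is our simplification, not the published algorithm, and all names are ours:

```python
import numpy as np

def column_weights(image, eps=1e-9):
    """Per-column weighting factors that equalize column means against
    the global mean; a closed-form stand-in for the iterative search
    that minimizes the standard deviation of pixel values."""
    image = np.asarray(image, dtype=float)
    col_mean = image.mean(axis=0)
    return image.mean() / (col_mean + eps)   # eps guards empty (all-zero) columns

def apply_weights(frames, weights):
    """Apply precomputed per-column factors to a stack of frames
    (shape: n_frames x rows x cols), as done for multiframe images."""
    frames = np.asarray(frames, dtype=float)
    return frames * np.asarray(weights)[np.newaxis, np.newaxis, :]
```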

  10. Real time radiotherapy verification with Cherenkov imaging: development of a system for beamlet verification

    NASA Astrophysics Data System (ADS)

    Pogue, B. W.; Krishnaswamy, V.; Jermyn, M.; Bruza, P.; Miao, T.; Ware, William; Saunders, S. L.; Andreozzi, J. M.; Gladstone, D. J.; Jarvis, L. A.

    2017-05-01

    Cherenkov imaging has been shown to allow near real-time imaging of the beam entrance and exit on patient tissue, with the appropriate intensified camera and associated image processing. A dedicated system has been developed for research into full torso imaging of whole breast irradiation, where the dual camera system captures the beam shape for all beamlets used in this treatment protocol. Particularly challenging verification measurements exist in dynamic wedge, field-in-field, and boost delivery, and the system was designed to capture these as they are delivered. Two intensified CMOS (ICMOS) cameras were developed and mounted in a breast treatment room, and pilot studies for intensity and stability were completed. Software tools to contour the treatment area have been developed and are being tested prior to initiation of the full trial. At present, it is possible to record delivery of individual beamlets as small as a single MLC thickness, and readout at 20 frames per second is achieved. Statistical analysis of system repeatability and stability is presented, as well as pilot human studies.

  11. Image-guided laparoscopic surgery in an open MRI operating theater.

    PubMed

    Tsutsumi, Norifumi; Tomikawa, Morimasa; Uemura, Munenori; Akahoshi, Tomohiko; Nagao, Yoshihiro; Konishi, Kozo; Ieiri, Satoshi; Hong, Jaesung; Maehara, Yoshihiko; Hashizume, Makoto

    2013-06-01

    The recent development of open magnetic resonance imaging (MRI) has provided an opportunity for the next stage of image-guided surgical and interventional procedures. The purpose of this study was to evaluate the feasibility of laparoscopic surgery under pneumoperitoneum with the system of an open MRI operating theater. Five patients underwent laparoscopic surgery with a real-time augmented reality navigation system that we previously developed in a horizontal-type 0.4-T open MRI operating theater. All procedures were performed in an open MRI operating theater. During the operations, the laparoscopic monitor clearly showed the augmented reality models of the intraperitoneal structures, such as the common bile ducts and the urinary bladder, as well as the proper positions of the prosthesis. The navigation frame rate was 8 frames per min. The mean fiducial registration error was 6.88 ± 6.18 mm in navigated cases. We were able to use magnetic resonance-incompatible surgical instruments outside the 5-gauss restriction area and to perform conventional laparoscopic surgery with the real-time augmented reality navigation system we developed using open MRI. Laparoscopic surgery with our real-time augmented reality navigation system in the open MRI operating theater is a feasible option.
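    The fiducial registration error reported above follows from a rigid point-set registration between tracked fiducials and their model coordinates. A compact sketch of the standard least-squares (Kabsch/SVD) solution and the resulting RMS error; this is textbook machinery, not the authors' specific navigation code:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) alignment of two
    point sets via the Kabsch/SVD method; returns R, t such that
    dst is approximately src @ R.T + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reflection guard
    D = np.diag([1.0] * (src.shape[1] - 1) + [float(d)])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed source fiducials and their
    measured counterparts (the FRE quoted in the abstract)."""
    mapped = np.asarray(src, float) @ R.T + t
    residual = mapped - np.asarray(dst, float)
    return float(np.sqrt(np.mean(np.sum(residual ** 2, axis=1))))
```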

  12. A method for real-time visual stimulus selection in the study of cortical object perception.

    PubMed

    Leeds, Daniel D; Tarr, Michael J

    2016-06-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit's image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across pre-determined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds et al., 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) real-time estimation of cortical responses to stimuli is reasonably consistent; 3) search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. 
Copyright © 2016 Elsevier Inc. All rights reserved.
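    The closed-loop search described above can be caricatured as hill-climbing over an ordered stimulus space, with a measured response standing in for the real-time BOLD estimate. A deliberately simplified sketch (our own; the actual searches operate in high-dimensional feature spaces with hemodynamic delays and noise):

```python
import numpy as np

def realtime_stimulus_search(respond, candidates, n_trials, step=1, rng=None):
    """Toy closed-loop stimulus selection: starting from a random
    candidate in a 1-D ordered stimulus space, move to a neighboring
    stimulus whenever its response exceeds the best seen so far;
    respond(i) stands in for a per-stimulus response estimate."""
    rng = np.random.default_rng(rng)
    best_i = int(rng.integers(len(candidates)))
    best_r = respond(best_i)
    for _ in range(n_trials - 1):
        # propose a neighbor of the current best stimulus, clipped to the space
        j = int(np.clip(best_i + int(rng.choice([-step, step])), 0, len(candidates) - 1))
        r = respond(j)
        if r > best_r:
            best_i, best_r = j, r
    return candidates[best_i], best_r
```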

  13. A method for real-time visual stimulus selection in the study of cortical object perception

    PubMed Central

    Leeds, Daniel D.; Tarr, Michael J.

    2016-01-01

    The properties utilized by visual object perception in the mid- and high-level ventral visual pathway are poorly understood. To better establish and explore possible models of these properties, we adopt a data-driven approach in which we repeatedly interrogate neural units using functional Magnetic Resonance Imaging (fMRI) to establish each unit’s image selectivity. This approach to imaging necessitates a search through a broad space of stimulus properties using a limited number of samples. To more quickly identify the complex visual features underlying human cortical object perception, we implemented a new functional magnetic resonance imaging protocol in which visual stimuli are selected in real-time based on BOLD responses to recently shown images. Two variations of this protocol were developed, one relying on natural object stimuli and a second based on synthetic object stimuli, both embedded in feature spaces based on the complex visual properties of the objects. During fMRI scanning, we continuously controlled stimulus selection in the context of a real-time search through these image spaces in order to maximize neural responses across predetermined 1 cm3 brain regions. Elsewhere we have reported the patterns of cortical selectivity revealed by this approach (Leeds 2014). In contrast, here our objective is to present more detailed methods and explore the technical and biological factors influencing the behavior of our real-time stimulus search. We observe that: 1) Searches converged more reliably when exploring a more precisely parameterized space of synthetic objects; 2) Real-time estimation of cortical responses to stimuli is reasonably consistent; 3) Search behavior was acceptably robust to delays in stimulus displays and subject motion effects. Overall, our results indicate that real-time fMRI methods may provide a valuable platform for continuing study of localized neural selectivity, both for visual object representation and beyond. PMID:26973168

  14. Low Cost Embedded Stereo System for Underwater Surveys

    NASA Astrophysics Data System (ADS)

    Nawaf, M. M.; Boï, J.-M.; Merad, D.; Royer, J.-P.; Drap, P.

    2017-11-01

    This paper provides details of both the hardware and software conception and realization of a hand-held stereo embedded system for underwater imaging. The designed system can run most image processing techniques smoothly in real time. The developed functions provide direct visual feedback on the quality of the taken images, which helps in taking appropriate actions in terms of movement speed and lighting conditions. The proposed functionalities can be easily customized or upgraded, and new functions can be easily added thanks to the available supported libraries. Furthermore, by connecting the designed system to a more powerful computer, real-time visual odometry can run on the captured images to provide live navigation and a site coverage map. We use a visual odometry method adapted to systems with low computational resources and long autonomy. The system was tested in a real context and showed its robustness and promising prospects for further development.

  15. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  16. Real-time Crystal Growth Visualization and Quantification by Energy-Resolved Neutron Imaging

    DOE PAGES

    Tremsin, Anton S.; Perrodin, Didier; Losko, Adrian S.; ...

    2017-04-20

    Energy-resolved neutron imaging is investigated as a real-time diagnostic tool for visualization and in-situ measurements of "blind" processes. This technique is demonstrated for Bridgman-type crystal growth, enabling remote and direct measurements of growth parameters crucial for process optimization. The location and shape of the interface between liquid and solid phases are monitored in real-time, concurrently with the measurement of elemental distribution within the growth volume and with the identification of structural features with a ~100 μm spatial resolution. Such diagnostics can substantially reduce the development time between exploratory small-scale growth of new materials and their subsequent commercial production. This technique is widely applicable and is not limited to crystal growth processes.

  17. A Real-Time Ultraviolet Radiation Imaging System Using an Organic Photoconductive Image Sensor

    PubMed Central

    Okino, Toru; Yamahira, Seiji; Yamada, Shota; Hirose, Yutaka; Odagawa, Akihiro; Kato, Yoshihisa; Tanaka, Tsuyoshi

    2018-01-01

    We have developed a real-time ultraviolet (UV) imaging system that can visualize both invisible UV light and a visible (VIS) background scene in an outdoor environment. As a UV/VIS image sensor, an organic photoconductive film (OPF) imager is employed. The OPF has an intrinsically higher sensitivity in the UV wavelength region than conventional consumer Complementary Metal Oxide Semiconductor (CMOS) image sensors (CIS) or Charge Coupled Devices (CCD). As particular examples, imaging of a hydrogen flame and of corona discharge is demonstrated. UV images overlapped on background scenes are simply made by on-board background subtraction. The system is capable of imaging UV signals four orders of magnitude weaker than the visible background. It is applicable not only to future hydrogen supply stations but also to other UV/VIS monitoring systems requiring UV sensitivity under strong visible radiation, such as power supply substations. PMID:29361742

  18. Overview of the DART project

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berry, K.R.; Hansen, F.R.; Napolitano, L.M.

    1992-01-01

    DART (DSP Array for Reconfigurable Tasks) is a parallel architecture of two high-performance DSP (digital signal processing) chips with the flexibility to handle a wide range of real-time applications. Each of the 32-bit floating-point DSP processors in DART is programmable in a high-level language (C or Ada). We have added extensions to the real-time operating system used by DART in order to support parallel processing. The combination of high-level language programmability, a real-time operating system, and parallel processing support significantly reduces the development cost of application software for signal processing and control applications. We have demonstrated this capability by using DART to reconstruct images in the prototype VIP (Video Imaging Projectile) ground station.

  20. Associative architecture for image processing

    NASA Astrophysics Data System (ADS)

    Adar, Rutie; Akerib, Avidan

    1997-09-01

    This article presents a new generation of parallel processing architecture for real-time image processing. The approach is implemented in a real-time image processor chip, called the XiumTM-2, based on combining a fully associative array, which provides the parallel engine, with a serial RISC core on the same die. The architecture is fully programmable and can implement a wide range of color image processing, computer vision, and media processing functions in real time. The associative part of the chip is based on the patent-pending methodology of Associative Computing Ltd. (ACL), which condenses 2048 associative processors, each of 128 'intelligent' bits. Each bit can be a processing bit or a memory bit. At only 33 MHz, in a 0.6-micron manufacturing process, the chip has a computational power of 3 billion ALU operations per second and 66 billion string search operations per second. The fully programmable nature of the XiumTM-2 chip enables developers to use ACL tools to write their own proprietary algorithms combined with existing image processing and analysis functions from ACL's extended set of libraries.

  1. Satellite on-board real-time SAR processor prototype

    NASA Astrophysics Data System (ADS)

    Bergeron, Alain; Doucet, Michel; Harnisch, Bernd; Suess, Martin; Marchese, Linda; Bourqui, Pascal; Desnoyers, Nicholas; Legros, Mathieu; Guillot, Ludovic; Mercier, Luc; Châteauneuf, François

    2017-11-01

    A Compact Real-Time Optronic SAR Processor has been successfully developed and tested up to a Technology Readiness Level of 4 (TRL4), the breadboard validation in a laboratory environment. SAR, or Synthetic Aperture Radar, is an active system allowing day and night imaging independent of the cloud coverage of the planet. The SAR raw data is a set of complex data for range and azimuth, which cannot be compressed. Specifically, for planetary missions and unmanned aerial vehicle (UAV) systems with limited communication data rates this is a clear disadvantage. SAR images are typically processed electronically by applying dedicated Fourier transformations. This, however, can also be performed optically in real time; indeed, the first SAR images were processed optically. The optical Fourier processor architecture provides inherent parallel computing capabilities allowing real-time SAR data processing and thus the ability for compression and strongly reduced communication bandwidth requirements for the satellite. SAR signal return data are in general complex data. Both amplitude and phase must be combined optically in the SAR processor for each range and azimuth pixel. Amplitude and phase are generated by dedicated spatial light modulators and superimposed by an optical relay set-up. The spatial light modulators display the full complex raw data information over a two-dimensional format, one for the azimuth and one for the range. Since the entire signal history is displayed at once, the processor operates in parallel, yielding real-time performance, i.e., without a resulting bottleneck. Processing of both azimuth and range information is performed in a single pass. This paper focuses on the onboard capabilities of the compact optical SAR processor prototype that allows in-orbit processing of SAR images. Examples of processed ENVISAT ASAR images are presented. 
Various SAR processor parameters such as processing capabilities, image quality (point target analysis), weight and size are reviewed.
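
    In digital form, the Fourier-transform focusing that this processor performs optically is FFT-based matched filtering. Below is a minimal range-compression sketch on a synthetic point-target echo; all chirp and sampling parameters are made up for illustration and are not taken from the paper or ENVISAT.

```python
import numpy as np

def range_compress(raw, chirp):
    """FFT-based matched filtering: the digital counterpart of the optical
    Fourier transform the processor performs per range line."""
    n = raw.shape[-1]
    ref = np.fft.fft(np.conj(chirp[::-1]), n)   # matched-filter spectrum
    return np.fft.ifft(np.fft.fft(raw, axis=-1) * ref, axis=-1)

# Synthetic single point target: the echo is a delayed copy of the chirp.
fs, T, B = 1e6, 1e-4, 4e5                     # sample rate, pulse length, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)  # linear FM pulse
echo = np.zeros(512, dtype=complex)
echo[100:100 + chirp.size] = chirp             # target echo starting at sample 100
focused = range_compress(echo[None, :], chirp)[0]
peak = int(np.argmax(np.abs(focused)))         # compression peak at 100 + len - 1
```

A full SAR focuser applies the same operation in azimuth as well, which is exactly what the two-dimensional optical layout does in a single pass.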

  2. Robot-assisted real-time magnetic resonance image-guided transcatheter aortic valve replacement.

    PubMed

    Miller, Justin G; Li, Ming; Mazilu, Dumitru; Hunt, Tim; Horvath, Keith A

    2016-05-01

    Real-time magnetic resonance imaging (rtMRI)-guided transcatheter aortic valve replacement (TAVR) offers improved visualization, real-time imaging, and pinpoint accuracy with device delivery. Unfortunately, performing a TAVR in a MRI scanner can be a difficult task owing to limited space and an awkward working environment. Our solution was to design a MRI-compatible robot-assisted device to insert and deploy a self-expanding valve from a remote computer console. We present our preliminary results in a swine model. We used an MRI-compatible robotic arm and developed a valve delivery module. A 12-mm trocar was inserted in the apex of the heart via a subxiphoid incision. The delivery device and nitinol stented prosthesis were mounted on the robot. Two continuous real-time imaging planes provided a virtual real-time 3-dimensional reconstruction. The valve was deployed remotely by the surgeon via a graphic user interface. In this acute nonsurvival study, 8 swine underwent robot-assisted rtMRI TAVR for evaluation of feasibility. Device deployment took a mean of 61 ± 5 seconds. Postdeployment necropsy was performed to confirm correlations between imaging and actual valve positions. These results demonstrate the feasibility of robotic-assisted TAVR using rtMRI guidance. This approach may eliminate some of the challenges of performing a procedure while working inside of an MRI scanner, and may improve the success of TAVR. It provides superior visualization during the insertion process, pinpoint accuracy of deployment, and, potentially, communication between the imaging device and the robotic module to prevent incorrect or misaligned deployment. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  3. Study on real-time images compounded using spatial light modulator

    NASA Astrophysics Data System (ADS)

    Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang

    2007-01-01

    Image compositing technology is often used in film production. Conventionally, compositing relies on image processing algorithms: useful objects, details, backgrounds, or other elements are first extracted from the source images and then combined into one image. With this method the film system needs a powerful processor, because the processing is complex, and the composited image is obtained only after some delay. In this paper, we introduce a new method of real-time image compositing which produces the composite at the same time the scene is shot. The system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin film transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), and so on. First, one lens (the first imaging lens) images the object onto the panel of the spatial light modulator. Second, an image is output to the panel of the spatial light modulator, so that the image of the object and the image displayed by the modulator are spatially combined on the panel. Third, the other lens (the second imaging lens) images the combined scene onto the image sensor. After these three steps, the image sensor captures the composited image. Because the spatial light modulator can output images continuously, the compositing is also continuous, and the procedure is completed in real time.
To place a real object into a virtual background, the virtual background scene is displayed on the spatial light modulator while the real object is imaged by the first lens; the composited images are then captured by the image sensor in real time. In the same way, to place a virtual object into a real background, the virtual object is displayed on the spatial light modulator while the real background is imaged by the first lens. Commonly, most spatial light modulators can only modulate light intensity, so a single panel without a color filter can composite only black-and-white images; a color composite requires a system like a three-panel spatial light modulator projector. The paper presents the framework of the system's optics. In all experiments, the spatial light modulator was a liquid crystal on silicon (LCoS) device. At the end of the paper, some original pictures and composited pictures are given. Although the system has a few shortcomings, we conclude that, because it requires no computational compositing and thus introduces no processing delay, it is a truly real-time image compositing system.

  4. A near-real-time full-parallax holographic display for remote operations

    NASA Technical Reports Server (NTRS)

    Iavecchia, Helene P.; Huff, Lloyd; Marzwell, Neville I.

    1991-01-01

    A near real-time, full parallax holographic display system was developed that has the potential to provide a 3-D display for remote handling operations in hazardous environments. The major components of the system consist of a stack of three spatial light modulators which serves as the object source of the hologram; a near real-time holographic recording material (such as thermoplastic and photopolymer); and an optical system for relaying SLM images to the holographic recording material and to the observer for viewing.

  5. Ultrahigh field magnetic resonance and colour Doppler real-time fusion imaging of the orbit--a hybrid tool for assessment of choroidal melanoma.

    PubMed

    Walter, Uwe; Niendorf, Thoralf; Graessl, Andreas; Rieger, Jan; Krüger, Paul-Christian; Langner, Sönke; Guthoff, Rudolf F; Stachs, Oliver

    2014-05-01

    A combination of magnetic resonance images with real-time high-resolution ultrasound known as fusion imaging may improve ophthalmologic examination. This study was undertaken to evaluate the feasibility of orbital high-field magnetic resonance and real-time colour Doppler ultrasound image fusion and navigation. This case study, performed between April and June 2013, included one healthy man (age, 47 years) and two patients (one woman, 57 years; one man, 67 years) with choroidal melanomas. All cases underwent 7.0-T magnetic resonance imaging using a custom-made ocular imaging surface coil. The Digital Imaging and Communications in Medicine volume data set was then loaded into the ultrasound system for manual registration of the live ultrasound image and fusion imaging examination. Data registration, matching and then volume navigation were feasible in all cases. Fusion imaging provided real-time imaging capabilities and high tissue contrast of choroidal tumour and optic nerve. It also allowed adding a real-time colour Doppler signal on magnetic resonance images for assessment of vasculature of tumour and retrobulbar structures. The combination of orbital high-field magnetic resonance and colour Doppler ultrasound image fusion and navigation is feasible. Multimodal fusion imaging promises to foster assessment and monitoring of choroidal melanoma and optic nerve disorders. • Orbital magnetic resonance and colour Doppler ultrasound real-time fusion imaging is feasible • Fusion imaging combines the spatial and temporal resolution advantages of each modality • Magnetic resonance and ultrasound fusion imaging improves assessment of choroidal melanoma vascularisation.

  6. Breakup phenomena of a coaxial jet in the non-dilute region using real-time X-ray radiography

    NASA Astrophysics Data System (ADS)

    Cheung, F. B.; Kuo, K. K.; Woodward, R. D.; Garner, K. N.

    1990-07-01

    An innovative approach to the investigation of liquid jet breakup processes in the near-injector region has been developed to overcome the experimental difficulties associated with optically opaque, dense sprays. Real-time X-ray radiography (RTR) has been employed to observe the inner structure and breakup phenomena of coaxial jets. In the atomizing regime, droplets much smaller than the exit diameter are formed beginning essentially at the injector exit. Through the use of RTR, the instantaneous contour of the liquid core was visualized. Experimental results consist of controlled-exposure digital video images of the liquid jet breakup process. Time-averaged video images have also been recorded for comparison. A digital image processing system is used to analyze the recorded images by creating radiance level distributions of the jet. A rudimentary method for deducing intact-liquid-core length has been suggested. The technique of real-time X-ray radiography has been shown to be a viable approach to the study of the breakup processes of high-speed liquid jets.

  7. A system for EPID-based real-time treatment delivery verification during dynamic IMRT treatment.

    PubMed

    Fuangrod, Todsaporn; Woodruff, Henry C; van Uytven, Eric; McCurdy, Boyd M C; Kuncic, Zdenka; O'Connor, Daryl J; Greer, Peter B

    2013-09-01

    To design and develop a real-time electronic portal imaging device (EPID)-based delivery verification system for dynamic intensity modulated radiation therapy (IMRT) which enables detection of gross treatment delivery errors before delivery of substantial radiation to the patient. The system utilizes a comprehensive physics-based model to generate a series of predicted transit EPID image frames as a reference dataset and compares these to measured EPID frames acquired during treatment. The two datasets are compared using MLC aperture comparison and cumulative signal checking techniques. The system operation in real-time was simulated offline using previously acquired images for 19 IMRT patient deliveries with both frame-by-frame comparison and cumulative frame comparison. Simulated error case studies were used to demonstrate the system sensitivity and performance. The accuracy of the synchronization method was shown to agree within two control points, which corresponds to approximately 1% of the total MU to be delivered for dynamic IMRT. The system achieved mean real-time gamma results for frame-by-frame analysis of 86.6% and 89.0% for 3%, 3 mm and 4%, 4 mm criteria, respectively, and 97.9% and 98.6% for cumulative gamma analysis. The system can detect a 10% MU error using 3%, 3 mm criteria within approximately 10 s. The EPID-based real-time delivery verification system successfully detected simulated gross errors introduced into patient plan deliveries in near real-time (within 0.1 s). A real-time radiation delivery verification system for dynamic IMRT has been demonstrated that is designed to prevent major mistreatments in modern radiation therapy.
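
    The gamma criteria quoted above (3%, 3 mm and 4%, 4 mm) combine a dose tolerance with a distance-to-agreement search. The following brute-force sketch of a global 2-D gamma pass rate is a plain illustration of the metric, not the authors' EPID pipeline; the test image and pixel spacing are invented.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm=1.0, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global 2-D gamma analysis (3%, 3 mm by default).

    For each measured pixel, search the whole reference image for the point
    minimizing the combined dose-difference / distance metric."""
    ys, xs = np.meshgrid(np.arange(ref.shape[0]), np.arange(ref.shape[1]),
                         indexing="ij")
    dmax = ref.max()
    gamma = np.empty(meas.shape)
    for i in range(meas.shape[0]):
        for j in range(meas.shape[1]):
            dist2 = ((ys - i) ** 2 + (xs - j) ** 2) * spacing_mm ** 2
            dose2 = ((ref - meas[i, j]) / (dose_tol * dmax)) ** 2
            gamma[i, j] = np.sqrt(dist2 / dist_tol_mm ** 2 + dose2).min()
    return 100.0 * np.mean(gamma <= 1.0)

# A measured frame identical to the reference passes everywhere;
# a gross (50%) dose error fails near the peak.
y, x = np.mgrid[0:16, 0:16]
ref = np.exp(-((x - 8) ** 2 + (y - 8) ** 2) / 20.0)
rate = gamma_pass_rate(ref, ref)
```

Production implementations vectorize and restrict the spatial search radius; the quadratic search here is kept for clarity.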

  8. Real-time neutron imaging of gas turbines

    NASA Astrophysics Data System (ADS)

    Stewart, P. A. E.

    1987-06-01

    The current status of real-time neutron radiography imaging is briefly reviewed, and results of tests carried out on cold neutron sources are reported. In particular, attention is given to demonstrations of neutron radiography on a running gas turbine engine. The future role of real-time neutron imaging in engineering diagnostics is briefly discussed.

  9. Advances in real-time millimeter-wave imaging radiometers for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.; Galliano, Joseph A., Jr.

    1995-06-01

    Millimeter-wave imaging has advantages over conventional visible or infrared imaging for many applications because millimeter-wave signals can travel through fog, snow, dust, and clouds with much less attenuation than infrared or visible light waves. Additionally, passive imaging systems avoid many problems associated with active radar imaging systems, such as radar clutter, glint, and multi-path return. ThermoTrex Corporation previously reported on its development of a passive imaging radiometer that uses an array of frequency-scanned antennas coupled to a multichannel acousto-optic spectrum analyzer (Bragg cell) to form visible images of a scene through the acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output from the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. An application of this system is its incorporation as part of an enhanced vision system to provide pilots with a synthetic view of a runway in fog and during other adverse weather conditions. Ongoing improvements to a 94 GHz imaging system and examples of recent images taken with this system will be presented. Additionally, the development of dielectric antennas and an electro-optic-based processor for improved system performance, and the development of an "ultra-compact" 220 GHz imaging system, will be discussed.

  10. Hand-held optoacoustic probe for three-dimensional imaging of human morphology and function

    NASA Astrophysics Data System (ADS)

    Deán-Ben, X. Luís.; Razansky, Daniel

    2014-03-01

    We report on a hand-held imaging probe for real-time optoacoustic visualization of deep tissues in three dimensions. The proposed solution incorporates a two-dimensional array of ultrasonic sensors densely distributed on a spherical surface, whereas illumination is performed coaxially through a cylindrical cavity in the array. Visualization of three-dimensional tomographic data at a frame rate of 10 images per second is enabled by parallel recording of 256 time-resolved signals for each individual laser pulse along with a highly efficient GPU-based real-time reconstruction. A liquid coupling medium (water), enclosed in a transparent membrane, is used to guarantee transmission of the optoacoustically generated waves to the ultrasonic detectors. Excitation at multiple wavelengths further allows imaging spectrally distinctive tissue chromophores such as oxygenated and deoxygenated haemoglobin. The performance is showcased by video-rate tracking of deep tissue vasculature and three-dimensional measurements of blood oxygenation in a healthy human volunteer. The flexibility provided by the hand-held hardware design, combined with the real-time operation, makes the developed platform highly usable for both small animal research and clinical imaging in multiple indications, including cancer, inflammation, skin and cardiovascular diseases, and diagnostics of the lymphatic system and breast

  11. Characterization of lens based photoacoustic imaging system.

    PubMed

    Francis, Kalloor Joseph; Chinni, Bhargava; Channappayya, Sumohana S; Pachamuthu, Rajalakshmi; Dogra, Vikram S; Rao, Navalgund

    2017-12-01

    Some of the challenges in translating photoacoustic (PA) imaging to clinical applications include the limited view of the target tissue, low signal-to-noise ratio, and the high cost of developing real-time systems. Acoustic lens based PA imaging systems, also known as PA cameras, are a potential alternative to conventional imaging systems in these scenarios. The 3D focusing action of the lens enables real-time C-scan imaging with a 2D transducer array. In this paper, we model the underlying physics of a PA camera in the mathematical framework of an imaging system and derive a closed-form expression for the point spread function (PSF). Experimental verification follows, including details on how to design and fabricate the lens inexpensively. The system PSF is evaluated over the 3D volume that can be imaged by this PA camera. Its utility is demonstrated by imaging a phantom and an ex vivo human prostate tissue sample.

  12. Development of Innovative Nondestructive Evaluation Technologies for the Inspection of Cracking and Corrosion Under Coatings

    NASA Astrophysics Data System (ADS)

    Lipetzky, Kirsten G.; Novack, Michele R.; Perez, Ignacio; Davis, William R.

    2001-11-01

    Three different innovative nondestructive evaluation technologies were developed and evaluated for the ability to detect fatigue cracks and corrosion hidden under painted aluminum panels. The three technologies included real-time ultrasound imaging, thermal imaging, and near-field microwave imaging. With each of these nondestructive inspection methods, subtasks were performed in order to optimize each methodology.

  13. Ultrasound of the Thyroid Gland

    MedlinePlus

    ... the patient. Because ultrasound images are captured in real-time, they can show the structure and movement of ... has substantially grown over time Because ultrasound provides real-time images, images that are renewed continuously, it also ...

  14. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive millimeter wave (MMW) detectors that can be used in focal plane arrays (FPAs) in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for varied applications in fields such as homeland security, medicine, communications, military products, and space technology, mainly because this radiation penetrates well through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse other materials. Moreover, atmospheric attenuation in this range of the spectrum is relatively low, and scattering is also low compared to the NIR and VIS bands. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cent) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three generations of GDD lamp-based focal plane arrays (FPAs), which differ in the number of detectors, the scanning operation, and the detection method. The first and second generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to fixed imaging. The most recently designed sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/s and comprehensive 3D MMW imaging. This sensor detects using a frequency modulated continuous wave (FMCW) scheme, with each of the 16 GDD pixel lines sampled simultaneously. Direct detection is also possible through a user-friendly interface. The FPA sensor is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors.
All three sensors are fully supported by a software graphical user interface (GUI). They were tested and characterized with different kinds of optical systems for imaging applications, super resolution, and calibration methods. The 16 × 16 sensor can employ a chirp-radar method to produce depth and reflectance information in the image, enabling 3D MMW imaging in real time at video frame rate. In this work we demonstrate several kinds of optical imaging systems capable of 3D imaging at short range and at longer distances of at least 10-20 meters.
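
    In an FMCW scheme such as the one used by the 16 × 16 sensor, target distance follows from the beat frequency between the transmitted and received sweeps. A minimal sketch of that range relation; the sweep parameters below are illustrative, not the camera's actual values.

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Range from FMCW beat frequency: R = c * f_b * T / (2 * B),
    where B is the sweep bandwidth and T the sweep duration."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Example: a 10 GHz sweep in 1 ms with a 1 MHz beat implies a 15 m target.
r = fmcw_range(1.0e6, 10.0e9, 1.0e-3)
```

The same relation, applied per pixel line, yields the depth map that makes the 3D MMW imaging described above possible.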

  15. High speed, real-time, camera bandwidth converter

    DOEpatents

    Bower, Dan E; Bloom, David A; Curry, James R

    2014-10-21

    Image data from a CMOS sensor with 10 bit resolution is reformatted in real time so that the data can stream through communications equipment designed to transport data with 8 bit resolution. The 10 bit image data is transmitted in real time, without a frame delay, through the 8 bit communication equipment by reformatting the image data.
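
    The abstract does not spell out the reformatting scheme, but a common way to fit 10 bit samples into an 8 bit transport is to pack four samples into five bytes with no padding. A generic sketch of that idea, not the patented format:

```python
def pack10(samples):
    """Pack 10-bit samples, four at a time, into five bytes per group."""
    assert len(samples) % 4 == 0
    out = bytearray()
    for i in range(0, len(samples), 4):
        bits = 0
        for s in samples[i:i + 4]:
            assert 0 <= s < 1024          # must fit in 10 bits
            bits = (bits << 10) | s
        out.extend(bits.to_bytes(5, "big"))  # 40 bits -> 5 bytes
    return bytes(out)

def unpack10(data):
    """Inverse of pack10: recover the 10-bit samples from the byte stream."""
    samples = []
    for i in range(0, len(data), 5):
        bits = int.from_bytes(data[i:i + 5], "big")
        samples += [(bits >> sh) & 0x3FF for sh in (30, 20, 10, 0)]
    return samples
```

Because no bits are discarded, the full 10 bit precision survives the 8 bit channel, at the cost of a 25% increase in byte count.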

  16. Lectures on Advanced Technologies

    DTIC Science & Technology

    1987-01-01

    we are now building, such real-time information will greatly change strategies, tactics, and weapon systems; it will drive the development of a family...in real-time (approximately seven seconds), process a satellite image. The system was recently demonstrated at White Sands Missile Range. This system... time and talents by coming to Annapolis and participating in our Advanced Technologies Seminar program. Arthur E. Bock Professor Emeritus Naval Systems

  17. Development of embedded real-time and high-speed vision platform

    NASA Astrophysics Data System (ADS)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, and a PC's large size makes it unsuitable for compact systems. This paper therefore develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which operates entirely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a DSP and FPGA board is developed to implement parallel image algorithms in the FPGA and sequential image algorithms in the DSP. The resulting ER-HVP Vision platform is compact, measuring 320 mm x 250 mm x 87 mm. Experimental results indicate that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels is feasible on this newly developed platform.

  18. Internet Telepresence by Real-Time View-Dependent Image Generation with Omnidirectional Video Camera

    NASA Astrophysics Data System (ADS)

    Morita, Shinji; Yamazawa, Kazumasa; Yokoya, Naokazu

    2003-01-01

    This paper describes a new networked telepresence system which realizes virtual tours into a visualized dynamic real world without significant time delay. Our system is realized by the following three steps: (1) video-rate omnidirectional image acquisition, (2) transportation of an omnidirectional video stream via the internet, and (3) real-time view-dependent perspective image generation from the omnidirectional video stream. Our system is applicable to real-time telepresence in situations where the real world to be seen is far from the observation site, because the time delay from the change of the user's viewing direction to the change of the displayed image is small and does not depend on the actual distance between the sites. Moreover, multiple users can look around from a single viewpoint in a visualized dynamic real world in different directions at the same time. In experiments, we have proved that the proposed system is useful for internet telepresence.

  19. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuhlemann, I; Graduate School for Computing in Medicine and Life Sciences, University of Luebeck; Jauer, P

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LBR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematics and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework had to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot's controller. Several special functionalities for robotized ultrasound applications are integrated, and the robot can now be used for real-time control of the image quality by adjusting the transducer position and contact pressure. The framework was evaluated by looking at overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features.
With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking applications, including image quality control and target tracking.

  20. Small real time detection satellites for MDA using hyperspectral imaging

    NASA Astrophysics Data System (ADS)

    Nakaya, Daiki; Yanagida, Hiroki; Shin, Satori; Ito, Tomonori; Takeuchi, Yusuke

    2017-10-01

    Hyperspectral images are now used in the fields of agriculture, cosmetics, and space exploration, a result of efforts toward miniaturization and cost reduction. This paper describes a low-cost, small Hyperspectral Camera (HSC) under development and a method of utilizing it. In the proposed Real Time Detection System for MDA (Maritime Domain Awareness), government agencies place these cameras in small satellites and use them for MDA. We target the early detection of unidentified floating objects, such as disguised fishing ships and submarines.

  1. Dedicated hardware processor and corresponding system-on-chip design for real-time laser speckle imaging.

    PubMed

    Jiang, Chao; Zhang, Hongyan; Wang, Jia; Wang, Yaru; He, Heng; Liu, Rui; Zhou, Fangyuan; Deng, Jialiang; Li, Pengcheng; Luo, Qingming

    2011-11-01

    Laser speckle imaging (LSI) is a noninvasive and full-field optical imaging technique which produces two-dimensional blood flow maps of tissues from the raw laser speckle images captured by a CCD camera without scanning. We present a hardware-friendly algorithm for the real-time processing of laser speckle imaging. The algorithm is developed and optimized specifically for LSI processing in the field programmable gate array (FPGA). Based on this algorithm, we designed a dedicated hardware processor for real-time LSI in FPGA. The pipeline processing scheme and parallel computing architecture are introduced into the design of this LSI hardware processor. When the LSI hardware processor is implemented in the FPGA running at the maximum frequency of 130 MHz, up to 85 raw images with the resolution of 640×480 pixels can be processed per second. Meanwhile, we also present a system on chip (SOC) solution for LSI processing by integrating the CCD controller, memory controller, LSI hardware processor, and LCD display controller into a single FPGA chip. This SOC solution also can be used to produce an application specific integrated circuit for LSI processing.
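
    The blood-flow maps come from the spatial laser speckle contrast statistic, K = sigma / mean over a small pixel window, which is the per-window computation the FPGA pipeline accelerates. A plain NumPy sketch of the same statistic; the 5 × 5 window and valid-only boundary handling are illustrative choices, not taken from the paper.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def speckle_contrast(img, win=5):
    """Spatial laser speckle contrast K = sigma / mean over a sliding window.
    Lower K indicates more blurring of the speckle pattern, i.e. faster flow."""
    w = sliding_window_view(img.astype(np.float64), (win, win))
    return w.std(axis=(-2, -1)) / w.mean(axis=(-2, -1))

# A perfectly uniform (static, speckle-free) image has zero contrast;
# fully developed speckle approaches K = 1.
flat = np.full((480, 640), 100.0)
K = speckle_contrast(flat)
```

The FPGA design replaces these floating-point window statistics with a pipelined integer formulation, which is what allows 85 frames of 640×480 pixels per second at 130 MHz.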

  2. Real-time handling of existing content sources on a multi-layer display

    NASA Astrophysics Data System (ADS)

    Singh, Darryl S. K.; Shin, Jung

    2013-03-01

    A Multi-Layer Display (MLD) consists of two or more imaging planes separated by physical depth, where the depth is a key component in creating a glasses-free 3D effect. Its core benefits include being viewable from multiple angles and providing full panel resolution for 3D effects, with no side effects of nausea or eye-strain. However, content typically must be designed for its optical configuration as foreground and background image pairs. A process was designed to give a consistent 3D effect on a 2-layer MLD from existing stereo video content in real time. Optimizations to stereo matching algorithms that generate depth maps in real time were specifically tailored for the optical characteristics and image processing algorithms of a MLD. The end-to-end process included improvements to the Hierarchical Belief Propagation (HBP) stereo matching algorithm, optical flow, and temporal consistency. Imaging algorithms designed for the optical characteristics of a MLD provided some visual compensation for depth map inaccuracies. The result can be demonstrated in a PC environment, displayed on a 22" MLD with 8 mm of panel separation, as used in the casino slot market. Prior to this development, stereo content had not been used to achieve a depth-based 3D effect on a MLD in real time.

  3. The Maia Spectroscopy Detector System: Engineering for Integrated Pulse Capture, Low-Latency Scanning and Real-Time Processing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirkham, R.; Siddons, D.; Dunn, P.A.

    2010-06-23

    The Maia detector system is engineered for energy dispersive x-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10^7/s, integrated scanning of samples with pixel transit times as small as 50 µs, high-definition images of 10^8 pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-element detector array; application-specific integrated circuits for pulse shaping, peak detection and sampling; and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.

  4. Efficient scatter model for simulation of ultrasound images from computed tomography data

    NASA Astrophysics Data System (ADS)

    D'Amato, J. P.; Lo Vercio, L.; Rubi, P.; Fernandez Vera, E.; Barbuzza, R.; Del Fresno, M.; Larrabide, I.

    2015-12-01

    Background and motivation: Real-time ultrasound simulation refers to the process of computationally creating fully synthetic ultrasound images instantly. Due to the high value of specialized low-cost training for healthcare professionals, there is growing interest in the use of this technology and in the development of high-fidelity systems that simulate the acquisition of echographic images. The objective is to create an efficient and reproducible simulator that can run on either notebooks or desktops using low-cost devices. Materials and methods: We present an interactive ultrasound simulator based on CT data. The simulator is based on ray-casting and provides real-time interaction capabilities. The simulation of scattering that is coherent with the transducer position in real time is also introduced. Such noise is produced using a simplified model of multiplicative noise and convolution with point spread functions (PSF) tailored for this purpose. Results: The computational efficiency of scattering map generation was revised, with improved performance. This allowed a more efficient simulation of coherent scattering in the synthetic echographic images while providing highly realistic results. We describe quality and performance metrics to validate these results; a performance of up to 55 fps was achieved. Conclusion: The proposed technique for real-time scattering modeling provides realistic yet computationally efficient scatter distributions. The error between the original image and the simulated scattering image was compared for the proposed method and the state of the art, showing negligible differences in its distribution.
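
    The simplified scatter model described above, multiplicative noise followed by PSF convolution, can be sketched in a few lines. The Rayleigh noise law and the separable Gaussian PSF below are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def simulate_scatter(echo_map, psf, rng):
    """Speckle-like scatter: multiplicative noise on the echogenicity map,
    then convolution with a point spread function (PSF)."""
    noisy = echo_map * rng.rayleigh(scale=1.0, size=echo_map.shape)
    # FFT-based circular convolution, kept at the image size for brevity.
    H = np.fft.fft2(psf, s=echo_map.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(noisy) * H))

rng = np.random.default_rng(42)
g = np.exp(-np.linspace(-2, 2, 7) ** 2)   # 1-D Gaussian profile
psf = np.outer(g, g)                       # separable 2-D PSF
tissue = np.ones((64, 64))                 # uniform echogenicity map from CT
frame = simulate_scatter(tissue, psf, rng)
```

In the real simulator the noise must stay coherent with the transducer position between frames, so the random field is anchored to tissue coordinates rather than redrawn per frame as in this sketch.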

  5. Real time imaging of infrared scene data generated by the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Baca, Michael J.

    1990-09-01

    A system to display images generated by the Naval Postgraduate School Infrared Search and Target Designation (a modified AN/SAR-8 Advanced Development Model) in near real time was developed using a 33 MHz NIC computer as the central controller. This computer was enhanced with a Data Translation DT2861 Frame Grabber for image processing and an interface board designed and constructed at NPS to provide synchronization between the IRSTD and Frame Grabber. Images are displayed in false color in a video raster format on a 512 by 480 pixel resolution monitor. Using FORTRAN, programs have been written to acquire, unscramble, expand and display a 3 deg sector of data. The time line for acquisition, processing and display has been analyzed and repetition periods of less than four seconds for successive screen displays have been achieved. This represents a marked improvement over previous methods necessitating slower Direct Memory Access transfers of data into the Frame Grabber. Recommendations are made for further improvements to enhance the speed and utility of images produced.

  6. Towards real-time medical diagnostics using hyperspectral imaging technology

    NASA Astrophysics Data System (ADS)

    Bjorgan, Asgeir; Randeberg, Lise L.

    2015-07-01

    Hyperspectral imaging provides non-contact, high-resolution spectral images that have substantial diagnostic potential, for example in the diagnosis and early detection of arthritis in finger joints. Processing speed is currently a limitation for clinical use of the technique. A real-time system for analysis and visualization using GPU processing and threaded CPU processing is presented. Images showing blood oxygenation, blood volume fraction, and vessel-enhanced images are among the data calculated in real time. This study shows the potential of real-time processing in this context. A combination of the processing modules will be used in the detection of arthritic finger joints from hyperspectral reflectance and transmittance data.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its advantages of nonscanning operation, large field of view, high sensitivity, and high precision. However, achieving more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for high-efficiency detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying the virtual instrumentation technique. The control system aims to achieve four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time Ethernet upper-level monitoring, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only 4 LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.

  8. Evaluation of Optical Sonography for Real-Time Breast Imaging and Biopsy Guidance

    DTIC Science & Technology

    2002-08-01

    ...supported through images of target standards and subjective validation using images of human anatomy. Keywords: Diffractive Energy Imaging... real-time imaging technology that uses the principles of acoustical holography to produce unique images of the human anatomy. The ADI technology is...

  9. LabVIEW application for motion tracking using USB camera

    NASA Astrophysics Data System (ADS)

    Rob, R.; Tirian, G. O.; Panoiu, M.

    2017-05-01

    The technical state of the contact line, as well as the additional equipment in electric rail transport, is very important for the repair and maintenance of the contact line. During operation, the pantograph motion must stay within standard limits. This paper proposes a LabVIEW application which is able to track the motion of a laboratory pantograph in real time and to acquire the tracking images. A USB webcam connected to a computer acquires the desired images. The laboratory pantograph contains an automatic system which simulates the real motion. The tracking parameters are the horizontal motion (zigzag) and the vertical motion, which can be studied in separate diagrams. The LabVIEW application requires appropriate toolkits for vision development; therefore, the paper describes the subroutines that are specially programmed for real-time image acquisition and for data processing.
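The paper's LabVIEW vision routines are not shown in the record; as a stand-in, a minimal threshold-and-centroid tracker sketches the underlying idea of extracting horizontal (zigzag) and vertical motion as separate time series. The function name, threshold value, and synthetic frames are illustrative assumptions, not the paper's code.

```python
import numpy as np

def track_centroid(frames, threshold=0.5):
    """Return per-frame (x, y) centroids of the bright target.

    frames: iterable of 2-D grayscale arrays scaled to [0, 1].
    A simple intensity threshold isolates the target; the centroid of the
    thresholded mask gives its position, so horizontal (zigzag) and vertical
    motion can then be plotted as separate diagrams.
    """
    xs, ys = [], []
    for f in frames:
        mask = f > threshold
        yy, xx = np.nonzero(mask)
        xs.append(xx.mean())
        ys.append(yy.mean())
    return np.array(xs), np.array(ys)

# synthetic sequence: a bright 5x5 blob moving diagonally across the frame
frames = []
for t in range(5):
    img = np.zeros((64, 64))
    img[10 + 2 * t:15 + 2 * t, 20 + 3 * t:25 + 3 * t] = 1.0
    frames.append(img)
xs, ys = track_centroid(frames)
```

A real pantograph tracker would add camera calibration and robustness to lighting, but the per-frame position extraction reduces to this kind of segmentation-plus-centroid step.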

  10. A modular and programmable development platform for capsule endoscopy system.

    PubMed

    Khan, Tareq Hasan; Shrestha, Ravi; Wahid, Khan A

    2014-06-01

    State-of-the-art capsule endoscopy (CE) technology offers painless examination for patients and gives gastroenterologists the ability to examine the interior of the gastrointestinal tract by a noninvasive procedure. In this work, a modular and flexible CE development platform is designed and developed, consisting of a miniature field-programmable gate array (FPGA) based electronic capsule, a microcontroller-based portable data recorder unit, and computer software. Due to the flexible and reprogrammable nature of the system, various image processing and compression algorithms can be tested in the design without requiring any hardware change. The designed capsule prototype supports various imaging modes, including white-light imaging (WLI) and narrow-band imaging (NBI), and communicates with the data recorder in full-duplex fashion, which enables configuring the image size and imaging mode in real time during examination. A low-complexity image compressor based on a novel color space is implemented inside the capsule to reduce the amount of RF transmission data. The data recorder contains a graphical LCD for real-time image viewing and SD cards for storing image data. Data can be uploaded to a computer or smartphone by SD card, USB interface, or wireless Bluetooth link. Computer software is developed that decompresses and reconstructs the images. The fabricated capsule PCBs have a diameter of 16 mm. Ex vivo animal testing has also been conducted to validate the results.
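The record does not specify the capsule's novel color space. As a stand-in illustration of the kind of losslessly reversible, integer-only transform used in low-complexity compressors, here is the well-known YCoCg-R transform; the choice of transform is an assumption, not the paper's design.

```python
import numpy as np

def rgb_to_ycocg_r(r, g, b):
    """Forward lossless YCoCg-R transform (integer arithmetic only).

    Floor division by 2 is the arithmetic right shift used in the
    standard definition, so the transform is exactly invertible.
    """
    co = r - b
    t = b + (co // 2)
    cg = g - t
    y = t + (cg // 2)
    return y, co, cg

def ycocg_r_to_rgb(y, co, cg):
    """Inverse transform; recovers the original RGB exactly."""
    t = y - (cg // 2)
    g = cg + t
    b = t - (co // 2)
    r = b + co
    return r, g, b

# round-trip check on random 8-bit RGB data
rng = np.random.default_rng(1)
rgb = rng.integers(0, 256, size=(3, 32, 32))
y, co, cg = rgb_to_ycocg_r(*rgb.astype(np.int32))
r2, g2, b2 = ycocg_r_to_rgb(y, co, cg)
```

Transforms of this family decorrelate the color channels with only adds and shifts, which is why they suit FPGA capsules with tight power budgets.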

  11. Monotonicity based imaging method for time-domain eddy current problems

    NASA Astrophysics Data System (ADS)

    Su, Z.; Ventre, S.; Udpa, L.; Tamburrino, A.

    2017-12-01

    Eddy current imaging is an example of inverse problem in nondestructive evaluation for detecting anomalies in conducting materials. This paper introduces the concept of time constants and associated natural modes in eddy current imaging. The monotonicity of time constants is then described and applied to develop a non-iterative imaging method. The proposed imaging method has a low computational cost which makes it suitable for real-time operations. Full 3D numerical examples prove the effectiveness of the method in realistic scenarios. This paper is dedicated to Professor Guglielmo Rubinacci on the occasion of his 65th Birthday.

  12. Real-time Data Access to First Responders: A VORB application

    NASA Astrophysics Data System (ADS)

    Lu, S.; Kim, J. B.; Bryant, P.; Foley, S.; Vernon, F.; Rajasekar, A.; Meier, S.

    2006-12-01

    Getting information to first responders is not an easy task. The sensors that provide the information are diverse in format and come from many disciplines. They are also distributed by location, transmit data at different frequencies, and are managed and owned by autonomous administrative entities. Pulling such data together in real time requires a very robust sensor network with reliable data transport and buffering capabilities. Moreover, the system should be extensible and scalable in both the number and types of sensors. ROADNet is a real-time sensor network project at UCSD gathering diverse environmental data in real time or near real time. VORB (Virtual Object Ring Buffer) is the middleware used in ROADNet, offering simple, uniform, and scalable real-time data management for discovering (through metadata), accessing, and archiving real-time data and data streams. A recent development in VORB, a web API, has enabled quick and simple integration of real-time data with web applications. In this poster, we discuss one application developed as part of ROADNet. SMER (Santa Margarita Ecological Reserve) is located in interior Southern California, a region prone to catastrophic wildfires each summer and fall. To provide data during emergencies, we have applied the VORB framework to develop a web-based application for access to diverse sensor data, including weather data, heat sensor information, and images from cameras. Wildfire fighters have access to real-time data about weather and heat conditions in the area and can view pictures taken from cameras at multiple points in the Reserve to pinpoint problem areas. Moreover, they can browse archived images and sensor data from earlier times to provide a comparison framework.
To show the scalability of the system, we have expanded the sensor network to other areas of Southern California, including sensors accessible by the Los Angeles County Fire Department (LACOFD) and those available through the High Performance Wireless Research and Education Network (HPWREN). The poster will discuss the system architecture and components, the types of sensors being used, and usage scenarios. The system is currently operational through the SMER web site.

  13. The Application of Special Computing Techniques to Speed-Up Image Feature Extraction and Processing Techniques.

    DTIC Science & Technology

    1981-12-01

    ...ocessors has led to the possibility of implementing a large number of image processing functions in near real time... to the possibility of implementing a large number of image processing functions in near "real-time," a result which is essential to establishing a... for example, and 5) rapid image handling for near real-time interaction by a user at a display. For example, for a large resolution image, say

  14. Mobility aid for the blind

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A project to develop an effective mobility aid for blind pedestrians which acquires consecutive images of the scenes before a moving pedestrian, which locates and identifies the pedestrian's path and potential obstacles in the path, which presents path and obstacle information to the pedestrian, and which operates in real-time is discussed. The mobility aid has three principal components: an image acquisition system, an image interpretation system, and an information presentation system. The image acquisition system consists of a miniature, solid-state TV camera which transforms the scene before the blind pedestrian into an image which can be received by the image interpretation system. The image interpretation system is implemented on a microprocessor which has been programmed to execute real-time feature extraction and scene analysis algorithms for locating and identifying the pedestrian's path and potential obstacles. Identity and location information is presented to the pedestrian by means of tactile coding and machine-generated speech.

  15. Real-Time and Post-Processed Georeferencing for Hyperspectral Drone Remote Sensing

    NASA Astrophysics Data System (ADS)

    Oliveira, R. A.; Khoramshahi, E.; Suomalainen, J.; Hakala, T.; Viljanen, N.; Honkavaara, E.

    2018-05-01

    The use of drones and photogrammetric technologies is increasing rapidly in different applications. Currently, the drone processing workflow is in most cases based on sequential image acquisition and post-processing, but there is great interest in real-time solutions. Fast and reliable real-time drone data processing can benefit, for instance, environmental monitoring tasks in precision agriculture and forestry. Recent developments in miniaturized, low-cost inertial measurement systems and GNSS sensors, together with real-time kinematic (RTK) position data, are offering new perspectives for comprehensive remote sensing applications. Combining these sensors with lightweight, low-cost multi- or hyperspectral frame sensors on drones provides the opportunity to create near-real-time or real-time remote sensing data of a target object. We have developed a system with direct georeferencing onboard the drone, to be used with hyperspectral frame cameras in real-time remote sensing applications. The objective of this study is to evaluate the real-time georeferencing in comparison with post-processed solutions. Experimental data sets were captured in agricultural and forested test sites using the system. The accuracy of the onboard georeferencing data was better than 0.5 m. The results showed that real-time remote sensing is promising and feasible in both test sites.
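The geometric core of direct georeferencing can be sketched as rotating a camera ray by the drone's attitude and intersecting it with the ground. This is a hedged illustration only: the Z-Y-X Euler convention, the flat-terrain ground plane at z = 0, and the nadir-looking ray are assumptions, not the paper's method.

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Body-to-world rotation from Z-Y-X Euler angles (radians)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def ground_point(position, yaw, pitch, roll, ray_cam=(0.0, 0.0, -1.0)):
    """Intersect a camera ray with the flat ground plane z = 0.

    position: drone position (x, y, z) from GNSS/RTK, z = height above ground.
    ray_cam:  ray direction in the body frame (default: nadir-looking).
    """
    d = rotation_matrix(yaw, pitch, roll) @ np.asarray(ray_cam, float)
    p = np.asarray(position, float)
    s = -p[2] / d[2]          # scale factor that brings the ray to z = 0
    return p + s * d

# drone at (100 m, 50 m), 40 m above ground, level attitude
pt = ground_point((100.0, 50.0, 40.0), yaw=0.0, pitch=0.0, roll=0.0)
```

A production system would additionally apply boresight calibration, the camera's interior orientation, and a terrain model; the sketch shows only where the 0.5 m accuracy figure comes from, i.e. propagating position and attitude into a ground coordinate.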

  16. Real-time automated thickness measurement of the in vivo human tympanic membrane using optical coherence tomography

    PubMed Central

    Hubler, Zita; Shemonski, Nathan D.; Shelton, Ryan L.; Monroy, Guillermo L.; Nolan, Ryan M.

    2015-01-01

    Background: Otitis media (OM), an infection in the middle ear, is extremely common in the pediatric population. Current gold-standard methods for diagnosis include otoscopy for visualizing the surface features of the tympanic membrane (TM) and making qualitative assessments to determine middle ear content. OM typically presents as an acute infection, but can progress to chronic OM, and after numerous infections and antibiotic treatments over the course of many months, this disease is often treated by surgically inserting small tubes in the TM to relieve pressure, enable drainage, and provide aeration to the middle ear. Diagnosis and monitoring of OM is critical for successful management, but remains largely qualitative. Methods: We have developed an optical coherence tomography (OCT) system for high-resolution, depth-resolved, cross-sectional imaging of the TM and middle ear content, and for the quantitative assessment of in vivo TM thickness including the presence or absence of a middle ear biofilm. A novel algorithm was developed and demonstrated for automatic, real-time, and accurate measurement of TM thickness to aid in the diagnosis and monitoring of OM and other middle ear conditions. The segmentation algorithm applies a Hough transform to the OCT image data to determine the boundaries of the TM to calculate thickness. Results: The use of OCT and this segmentation algorithm is demonstrated first on layered phantoms and then during real-time acquisition of in vivo OCT from humans. For the layered phantoms, measured thicknesses varied by approximately 5 µm over time in the presence of large axial and rotational motion. In vivo data also demonstrated differences in thicknesses both spatially on a single TM, and across normal, acute, and chronic OM cases. 
Conclusions: Real-time segmentation and thickness measurements of image data from both healthy subjects and those with acute and chronic OM demonstrate the use of OCT and this algorithm as a robust, quantitative, and accurate method for use during real-time in vivo human imaging. PMID:25694956
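The segmentation step above applies a Hough transform to locate the TM boundaries. The following is a simplified, self-contained sketch of a line Hough transform on a toy two-boundary cross-section; the paper's algorithm operates on real OCT data with preprocessing not shown here, so treat every name and parameter as an illustrative assumption.

```python
import numpy as np

def hough_lines(edge_img, n_theta=180):
    """Minimal Hough transform for straight lines.

    Each edge pixel votes for (rho, theta) pairs with
    rho = x*cos(theta) + y*sin(theta).  Returns the accumulator
    together with the rho and theta axes.
    """
    h, w = edge_img.shape
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(h, w)))
    rhos = np.arange(-diag, diag + 1)
    acc = np.zeros((len(rhos), n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for theta_idx, th in enumerate(thetas):
        r = np.round(xs * np.cos(th) + ys * np.sin(th)).astype(int) + diag
        np.add.at(acc, (r, theta_idx), 1)
    return acc, rhos, thetas

# synthetic cross-section: two horizontal membrane boundaries 12 px apart
img = np.zeros((100, 200), dtype=int)
img[40, :] = 1   # anterior boundary
img[52, :] = 1   # posterior boundary
acc, rhos, thetas = hough_lines(img)

# horizontal lines have theta = 90 degrees (index 90 when n_theta = 180);
# the two strongest rho bins give the two boundaries, and their separation
# is the membrane thickness in pixels
col = acc[:, 90]
top2 = np.sort(rhos[np.argsort(col)[-2:]])
thickness_px = top2[1] - top2[0]
```

Converting the pixel separation to micrometers then only requires the axial pixel spacing of the OCT system.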

  17. Real-time automated thickness measurement of the in vivo human tympanic membrane using optical coherence tomography.

    PubMed

    Hubler, Zita; Shemonski, Nathan D; Shelton, Ryan L; Monroy, Guillermo L; Nolan, Ryan M; Boppart, Stephen A

    2015-02-01

    Otitis media (OM), an infection in the middle ear, is extremely common in the pediatric population. Current gold-standard methods for diagnosis include otoscopy for visualizing the surface features of the tympanic membrane (TM) and making qualitative assessments to determine middle ear content. OM typically presents as an acute infection, but can progress to chronic OM, and after numerous infections and antibiotic treatments over the course of many months, this disease is often treated by surgically inserting small tubes in the TM to relieve pressure, enable drainage, and provide aeration to the middle ear. Diagnosis and monitoring of OM is critical for successful management, but remains largely qualitative. We have developed an optical coherence tomography (OCT) system for high-resolution, depth-resolved, cross-sectional imaging of the TM and middle ear content, and for the quantitative assessment of in vivo TM thickness including the presence or absence of a middle ear biofilm. A novel algorithm was developed and demonstrated for automatic, real-time, and accurate measurement of TM thickness to aid in the diagnosis and monitoring of OM and other middle ear conditions. The segmentation algorithm applies a Hough transform to the OCT image data to determine the boundaries of the TM to calculate thickness. The use of OCT and this segmentation algorithm is demonstrated first on layered phantoms and then during real-time acquisition of in vivo OCT from humans. For the layered phantoms, measured thicknesses varied by approximately 5 µm over time in the presence of large axial and rotational motion. In vivo data also demonstrated differences in thicknesses both spatially on a single TM, and across normal, acute, and chronic OM cases. 
Real-time segmentation and thickness measurements of image data from both healthy subjects and those with acute and chronic OM demonstrate the use of OCT and this algorithm as a robust, quantitative, and accurate method for use during real-time in vivo human imaging.

  18. Evaluation of microsurgical tasks with OCT-guided and/or robot-assisted ophthalmic forceps

    PubMed Central

    Yu, Haoran; Shen, Jin-Hui; Shah, Rohan J.; Simaan, Nabil; Joos, Karen M.

    2015-01-01

    Real-time intraocular optical coherence tomography (OCT) visualization of tissues with surgical feedback can enhance retinal surgery. An intraocular 23-gauge B-mode forward-imaging co-planar OCT-forceps, coupling connectors, and algorithms were developed to form a unique ophthalmic surgical robotic system. Approach to the surface of a phantom or goat retina by manually or robotically controlled forceps, with and without real-time OCT guidance, was performed. The efficiency of lifting phantom membranes was examined. Placing the co-planar OCT imaging probe internal to the surgical tool reduced instrument shadowing and permitted constant tracking. Robotic assistance together with real-time OCT feedback improved depth perception accuracy. The first-generation integrated OCT-forceps was capable of peeling membrane phantoms despite its smooth tips. PMID:25780736

  19. Stimulated penetrating keratoplasty using real-time virtual intraoperative surgical optical coherence tomography

    PubMed Central

    Lee, Changho; Kim, Kyungun; Han, Seunghoon; Kim, Sehui; Lee, Jun Hoon; Kim, Hong kyun; Kim, Chulhong; Jung, Woonggyu; Kim, Jeehyun

    2014-01-01

    An intraoperative surgical microscope is an essential tool in neuro- and ophthalmological surgical environments. Yet it has an inherent limitation in capturing subsurface information because it provides only surface images. To compensate for this problem, combining the surgical microscope with optical coherence tomography (OCT) has been adopted. We developed a real-time virtual intraoperative surgical OCT (VISOCT) system by adapting a spectral-domain OCT scanner to a commercial surgical microscope. Thanks to our custom-made beam-splitting and image display subsystems, the OCT images and microscopic images are visualized simultaneously through the ocular lens, or eyepiece, of the microscope. This improvement helps surgeons focus on the operation without the distraction of viewing OCT images on a separate display. Moreover, displaying the live OCT images on the eyepiece aids the surgeon's depth perception during surgery. Finally, we successfully performed stimulated penetrating keratoplasty in live rabbits. We believe these technical achievements are crucial to enhancing the usability of the VISOCT system in real surgical operating conditions. PMID:24604471

  20. A near-infrared fluorescence-based surgical navigation system imaging software for sentinel lymph node detection

    NASA Astrophysics Data System (ADS)

    Ye, Jinzuo; Chi, Chongwei; Zhang, Shuang; Ma, Xibo; Tian, Jie

    2014-02-01

    Sentinel lymph node (SLN) detection in vivo is vital in breast cancer surgery. This paper presents new near-infrared fluorescence-based surgical navigation system (SNS) imaging software, developed by our research group, for SLN detection surgery. The software is based on the fluorescence-based surgical navigation hardware system (SNHS) developed in our lab and is designed specifically for intraoperative imaging and postoperative data analysis. The surgical navigation imaging software consists of the following modules: the control module, the image grabbing module, the real-time display module, the data saving module, and the image processing module. Several algorithms have been designed to achieve the required performance, for example an image registration algorithm based on correlation matching. Key features of the software include: setting the control parameters of the SNS; acquiring, displaying, and storing the intraoperative imaging data automatically in real time; and analyzing and processing the saved image data. The developed software has been used to successfully detect the SLNs in 21 breast cancer patients. In the near future, we plan to improve the software performance, and it will be used extensively for clinical purposes.
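The registration algorithm "based on correlation matching" mentioned above can be sketched under the assumption of a pure integer translation between frames, using FFT-based cross-correlation; the paper's exact formulation is not given, so this is an illustration of the general technique only.

```python
import numpy as np

def register_translation(ref, moving):
    """Estimate the integer (dy, dx) shift that aligns `moving` to `ref`.

    Circular cross-correlation is computed in the Fourier domain;
    the peak location gives the shift, mapped back to signed offsets.
    """
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# synthetic pair: the same bright blob, displaced between frames
ref = np.zeros((64, 64))
ref[20:30, 30:40] = 1.0
moving = np.roll(ref, shift=(-5, 3), axis=(0, 1))
dy, dx = register_translation(ref, moving)
# np.roll(moving, (dy, dx), axis=(0, 1)) reproduces ref
```

For fluorescence/white-light overlay, subpixel refinement and normalization against illumination changes would be added on top of this core.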

  1. UTOFIA: an underwater time-of-flight image acquisition system

    NASA Astrophysics Data System (ADS)

    Driewer, Adrian; Abrosimov, Igor; Alexander, Jonathan; Benger, Marc; O'Farrell, Marion; Haugholt, Karl Henrik; Softley, Chris; Thielemann, Jens T.; Thorstensen, Jostein; Yates, Chris

    2017-10-01

    This article describes the development of a newly designed time-of-flight (ToF) image sensor for underwater applications. The sensor is developed as part of the project UTOFIA (underwater time-of-flight image acquisition), funded by the EU within the Horizon 2020 framework. This project aims to develop a camera based on range gating that extends the visible range by a factor of 2 to 3 compared to conventional cameras and delivers real-time range information by means of a 3D video stream. The principle of underwater range gating as well as the concept of the image sensor are presented. Based on measurements on a test image sensor, the pixel structure that best suits the requirements has been selected. An extensive underwater characterization demonstrates the capability of distance measurement in turbid environments.

  2. Optical coherence tomography-guided laser microsurgery for blood coagulation with continuous-wave laser diode.

    PubMed

    Chang, Feng-Yu; Tsai, Meng-Tsan; Wang, Zu-Yi; Chi, Chun-Kai; Lee, Cheng-Kuang; Yang, Chih-Hsun; Chan, Ming-Che; Lee, Ya-Ju

    2015-11-16

    Blood coagulation is the clotting and subsequent dissolution of the clot accompanying repair of damaged tissue. However, inducing blood coagulation is difficult in some patients with hemostasis dysfunction or during surgery. In this study, we propose an integrated system that combines optical coherence tomography (OCT) and laser microsurgery for blood coagulation, together with an algorithm for positioning the treatment location from OCT images. With OCT scanning, 2D/3D OCT images and angiography of tissue can be obtained simultaneously, making it possible to noninvasively reconstruct the morphological and microvascular structures for real-time monitoring of changes in biological tissues during laser microsurgery. Instead of high-cost pulsed lasers, continuous-wave laser diodes (CW-LDs) with central wavelengths of 450 nm and 532 nm are used for blood coagulation, corresponding to the higher absorption coefficients of oxyhemoglobin and deoxyhemoglobin. Experimental results showed that the location of laser exposure can be accurately controlled with the proposed approach of imaging-based feedback positioning. Moreover, blood coagulation can be efficiently induced by the CW-LDs, and the coagulation process can be monitored in real time with OCT. This technology can potentially provide accurate positioning for laser microsurgery and control the laser exposure, with real-time OCT imaging helping to avoid collateral damage.

  3. Optical coherence tomography-guided laser microsurgery for blood coagulation with continuous-wave laser diode

    NASA Astrophysics Data System (ADS)

    Chang, Feng-Yu; Tsai, Meng-Tsan; Wang, Zu-Yi; Chi, Chun-Kai; Lee, Cheng-Kuang; Yang, Chih-Hsun; Chan, Ming-Che; Lee, Ya-Ju

    2015-11-01

    Blood coagulation is the clotting and subsequent dissolution of the clot accompanying repair of damaged tissue. However, inducing blood coagulation is difficult in some patients with hemostasis dysfunction or during surgery. In this study, we propose an integrated system that combines optical coherence tomography (OCT) and laser microsurgery for blood coagulation, together with an algorithm for positioning the treatment location from OCT images. With OCT scanning, 2D/3D OCT images and angiography of tissue can be obtained simultaneously, making it possible to noninvasively reconstruct the morphological and microvascular structures for real-time monitoring of changes in biological tissues during laser microsurgery. Instead of high-cost pulsed lasers, continuous-wave laser diodes (CW-LDs) with central wavelengths of 450 nm and 532 nm are used for blood coagulation, corresponding to the higher absorption coefficients of oxyhemoglobin and deoxyhemoglobin. Experimental results showed that the location of laser exposure can be accurately controlled with the proposed approach of imaging-based feedback positioning. Moreover, blood coagulation can be efficiently induced by the CW-LDs, and the coagulation process can be monitored in real time with OCT. This technology can potentially provide accurate positioning for laser microsurgery and control the laser exposure, with real-time OCT imaging helping to avoid collateral damage.

  4. Real-time image processing for passive mmW imagery

    NASA Astrophysics Data System (ADS)

    Kozacik, Stephen; Paolini, Aaron; Bonnett, James; Harrity, Charles; Mackrides, Daniel; Dillon, Thomas E.; Martin, Richard D.; Schuetz, Christopher A.; Kelmelis, Eric; Prather, Dennis W.

    2015-05-01

    The transmission characteristics of millimeter waves (mmWs) make them suitable for many applications in defense and security, from airport preflight scanning to penetrating degraded visual environments such as brownout or heavy fog. While the cold sky provides sufficient illumination for these images to be taken passively in outdoor scenarios, this utility comes at a cost; the diffraction limit of the longer wavelengths involved leads to lower resolution imagery compared to the visible or IR regimes, and the low power levels inherent to passive imagery allow the data to be more easily degraded by noise. Recent techniques leveraging optical upconversion have shown significant promise, but are still subject to fundamental limits in resolution and signal-to-noise ratio. To address these issues we have applied techniques developed for visible and IR imagery to decrease noise and increase resolution in mmW imagery. We have developed these techniques into fieldable software, making use of GPU platforms for real-time operation of computationally complex image processing algorithms. We present data from a passive, 77 GHz, distributed aperture, video-rate imaging platform captured during field tests at full video rate. These videos demonstrate the increase in situational awareness that can be gained through applying computational techniques in real-time without needing changes in detection hardware.
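The record does not detail the noise-reduction and resolution-enhancement algorithms. As one hedged illustration of the kind of frequency-domain processing applied to diffraction-limited, low-SNR imagery, here is a scalar-regularized Wiener deconvolution; the PSF, noise level, and regularizer are assumptions, not the platform's actual pipeline.

```python
import numpy as np

def wiener_deconvolve(img, psf, nsr=0.01):
    """Frequency-domain Wiener deconvolution.

    The PSF is zero-padded to the image size; nsr is the assumed
    noise-to-signal power ratio, used here as a scalar regularizer.
    """
    H = np.fft.fft2(psf, s=img.shape)
    G = np.fft.fft2(img)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(G * W))

# blur a synthetic scene with a small boxcar PSF, add noise, then restore
rng = np.random.default_rng(0)
scene = np.zeros((128, 128))
scene[60:68, 60:68] = 1.0
psf = np.ones((5, 5)) / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf, s=scene.shape)))
noisy = blurred + 0.01 * rng.standard_normal(scene.shape)
restored = wiener_deconvolve(noisy, psf, nsr=0.01)
```

On a GPU the two FFTs and the pointwise filter run comfortably at video rate, which is the property the platform above exploits for real-time operation.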

  5. MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Redler, G; Bernard, D; Templeton, A

    2015-06-15

    Purpose: A novel scatter imaging modality is developed, and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts with and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat-panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known composition (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating the materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation with the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results for imaging the water, lung, and cortical bone equivalent objects show close agreement, validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue: CNR = 4.1 and CNR = 29.1 for 10 MU and 5000 MU images (equivalent to 0.5- and 250-second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes but depends on accurate target localization.
    A comprehensive approach, employing multiple simulation techniques and experiments, is taken to demonstrate the feasibility of a novel scatter imaging modality for the necessary real-time image guidance.
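The CNR figures quoted above are consistent with the standard definition, |mean of target ROI minus mean of background ROI| divided by the background standard deviation. A minimal sketch of that computation follows; the ROI masks and synthetic numbers are illustrative assumptions, and the paper may define its regions differently.

```python
import numpy as np

def cnr(image, target_mask, background_mask):
    """Contrast-to-noise ratio: |mean_t - mean_b| / std_b."""
    t = image[target_mask]
    b = image[background_mask]
    return abs(t.mean() - b.mean()) / b.std()

# synthetic scatter image: noisy background with a brighter "tumor" region
rng = np.random.default_rng(0)
img = rng.normal(10.0, 2.0, size=(64, 64))   # background: mean 10, sigma 2
img[24:40, 24:40] += 8.0                     # tumor region: +8 contrast
tmask = np.zeros((64, 64), bool)
tmask[24:40, 24:40] = True
bmask = ~tmask
value = cnr(img, tmask, bmask)               # expected to be near 8/2 = 4
```

Longer exposures (more MU) lower the noise term in the denominator, which is why the 5000 MU image reaches a much higher CNR than the 10 MU image.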

  6. Restoration of moving binary images degraded owing to phosphor persistence.

    PubMed

    Cherri, A K; Awwal, A A; Karim, M A; Moon, D L

    1991-09-10

    The degraded images of dynamic objects obtained by using a phosphor-based electro-optical display are analyzed in terms of dynamic modulation transfer function (DMTF) and temporal characteristics of the display system. The direct correspondence between the DMTF and image smear is used in developing real-time techniques for the restoration of degraded images.
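The restoration idea above can be sketched with a first-order persistence model, which is an assumption for illustration (the paper works from the measured DMTF): if each observed frame is the true frame plus a decayed copy of the previous observed frame, the smear is removed exactly by a one-tap recursive filter.

```python
import numpy as np

def add_persistence(frames, a=0.6):
    """Simulate first-order phosphor persistence:
    observed[t] = true[t] + a * observed[t-1]."""
    out = []
    prev = np.zeros_like(frames[0])
    for f in frames:
        prev = f + a * prev
        out.append(prev)
    return out

def remove_persistence(observed, a=0.6):
    """Invert the model: true[t] = observed[t] - a * observed[t-1]."""
    restored = [observed[0].copy()]
    for t in range(1, len(observed)):
        restored.append(observed[t] - a * observed[t - 1])
    return restored

# moving binary bar, smeared by persistence and then restored
frames = []
for t in range(6):
    f = np.zeros((32, 32))
    f[:, 4 * t:4 * t + 4] = 1.0
    frames.append(f)
smeared = add_persistence(frames)
clean = remove_persistence(smeared)
```

Because the inversion uses only the current and previous observed frames, it is suitable for the real-time operation the paper targets; a real display would require the decay constant `a` to be estimated from the phosphor's measured temporal response.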

  7. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    PubMed

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering exchange of code difficult; second, most pipelines are implemented in special hardware, limiting the flexibility of the processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, as confirmed by point spread function measurements, comparable resolution. The run-time analysis shows that all processing stages of a typical ultrasound pipeline can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition: given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it improves the reproducibility of research results, as code can be shared easily and even executed without dedicated ultrasound hardware.
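    The back end of a software-defined pipeline like the one described (envelope detection and log compression of beamformed RF data into a B-mode image) can be sketched as follows. This is an illustrative outline under common textbook conventions, not SUPRA's actual implementation:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def rf_to_bmode(rf, dynamic_range_db=60.0):
        """Convert beamformed RF data (samples x scan lines) to a
        log-compressed B-mode image in the range [0, dynamic_range_db]."""
        env = np.abs(hilbert(rf, axis=0))           # envelope via analytic signal
        env /= env.max()                            # normalize so the peak sits at 0 dB
        bmode = 20.0 * np.log10(env + 1e-12)        # log compression
        return np.clip(bmode, -dynamic_range_db, 0.0) + dynamic_range_db

    # Toy RF data: decaying sinusoids standing in for 128 scan lines.
    t = np.linspace(0.0, 1.0, 2048)[:, None]
    rf = np.exp(-3.0 * t) * np.sin(2.0 * np.pi * 200.0 * t) * np.ones((1, 128))
    img = rf_to_bmode(rf)
    print(img.shape)
    ```

    In a real-time system this per-frame transform would run on the GPU; here NumPy/SciPy keeps the sketch self-contained.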

  8. Real-time photo-magnetic imaging.

    PubMed

    Nouizi, Farouk; Erkol, Hakan; Luk, Alex; Unlu, Mehmet B; Gulsen, Gultekin

    2016-10-01

    We previously introduced a new high-resolution diffuse optical imaging modality termed photo-magnetic imaging (PMI). PMI irradiates the object under investigation with near-infrared light and monitors the variations in temperature using magnetic resonance thermometry (MRT). In this paper, we present a real-time PMI image reconstruction algorithm that uses analytic methods to solve the forward problem and assemble the Jacobian matrix much faster. The new algorithm is validated using real MRT-measured temperature maps. It accelerates the reconstruction process by more than 250 times compared to a single iteration of the FEM-based algorithm, which opens the possibility of real-time PMI.

  9. Achieving real-time capsule endoscopy (CE) video visualization through panoramic imaging

    NASA Astrophysics Data System (ADS)

    Yi, Steven; Xie, Jean; Mui, Peter; Leighton, Jonathan A.

    2013-02-01

    In this paper, we present a novel real-time capsule endoscopy (CE) video visualization concept based on panoramic imaging. Typical CE videos run about 8 hours and are manually reviewed by physicians to locate diseases such as bleeding and polyps. To date, there is no commercially available tool capable of providing stabilized and processed CE video that is easy to analyze in real time, which places a heavy burden on physicians' efforts to find disease. In fact, since the CE camera sensor has a limited forward-looking view, a low frame rate (typically 2 frames per second), and captures very close-range imaging of the GI tract surface, it is no surprise that traditional visualization methods based on tracking and registration often fail. This paper presents a novel concept for real-time CE video stabilization and display: instead of working directly on traditional forward-looking FOV (field of view) images, we work on panoramic images to bypass many problems facing traditional imaging modalities. Methods for panoramic image generation based on optical lens principles, leading to real-time data visualization, are presented. In addition, non-rigid panoramic image registration methods are discussed.

  10. Parallelized multi–graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy

    PubMed Central

    Tankam, Patrice; Santhanam, Anand P.; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P.

    2014-01-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing. PMID:24695868
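    The per-GPU work assignment described above amounts to partitioning A-scans across devices and capping each device's batch size to what its memory and cores can absorb. A minimal partitioning sketch (function and parameter names are ours, not from the paper):

    ```python
    import numpy as np

    def assign_ascans(n_ascans, n_gpus, max_batch):
        """Split A-scan indices evenly across GPUs, then cut each GPU's
        share into memory-sized batches to be processed in parallel on-device."""
        per_gpu = np.array_split(np.arange(n_ascans), n_gpus)
        return [[chunk[i:i + max_batch] for i in range(0, len(chunk), max_batch)]
                for chunk in per_gpu]

    # 1000x1000 A-scans over four GPUs, hypothetical 65536-A-scan batch limit.
    plan = assign_ascans(n_ascans=1_000_000, n_gpus=4, max_batch=65_536)
    print(len(plan), sum(len(batch) for gpu in plan for batch in gpu))
    ```

    Each inner list is one GPU's queue of batches; a dispatcher thread per GPU would then stream batches to the device, mirroring the parallelized control mechanism the paper describes.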

  11. Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

    PubMed

    Tankam, Patrice; Santhanam, Anand P; Lee, Kye-Sung; Won, Jungeun; Canavesi, Cristina; Rolland, Jannick P

    2014-07-01

    Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000×1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1×1×0.6 mm3 skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real-time to the examiner using parallelized GPU processing.

  12. A High-Resolution Minimicroscope System for Wireless Real-Time Monitoring.

    PubMed

    Wang, Zongjie; Boddeda, Akash; Parker, Benjamin; Samanipour, Roya; Ghosh, Sanjoy; Menard, Frederic; Kim, Keekyoung

    2018-07-01

    A compact, cost-effective, high-performance microscope that enables real-time imaging of cells and lab-on-a-chip devices is in high demand in cell biology and biomedical engineering. This paper presents the design and application of an inexpensive wireless minimicroscope with resolution up to 2592 × 1944 pixels and speed up to 90 f/s. The minimicroscope system was built on a commercial embedded system (Raspberry Pi). We modified a camera module and adopted an inverse dual-lens system to obtain a clear field of view and appropriate magnification for objects tens of micrometers in size. The system was capable of capturing time-lapse images and transferring image data wirelessly, and the entire system can be operated wirelessly and cordlessly in a conventional cell-culture incubator. The developed minimicroscope was used to monitor the attachment and proliferation of NIH-3T3 and HEK 293 cells inside an incubator for 50 h. In addition, it was used to monitor a droplet generation process in a microfluidic device. The high-quality images captured by the minimicroscope enabled automated analysis of experimental parameters. These successful applications demonstrate the great potential of the minimicroscope for monitoring various biological samples and microfluidic devices. In summary, this paper presents the design of a high-resolution minimicroscope system that enables wireless real-time imaging of cells inside an incubator, verified as a useful tool for obtaining high-quality images and videos for long-term automated quantitative analysis of biological samples and lab-on-a-chip devices.

  13. Real-time global MHD simulation of the solar wind interaction with the earth’s magnetosphere

    NASA Astrophysics Data System (ADS)

    Shimazu, H.; Kitamura, K.; Tanaka, T.; Fujita, S.; Nakamura, M. S.; Obara, T.

    2008-11-01

    We have developed a real-time global MHD (magnetohydrodynamics) simulation of the solar wind interaction with the earth’s magnetosphere. By adopting the real-time solar wind parameters and interplanetary magnetic field (IMF) observed routinely by the ACE (Advanced Composition Explorer) spacecraft, responses of the magnetosphere are calculated with MHD code. The simulation is carried out routinely on the supercomputer system at the National Institute of Information and Communications Technology (NICT), Japan. Visualized images of the magnetic field lines around the earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be viewed on the web site (http://www2.nict.go.jp/y/y223/simulation/realtime/). The results show that various magnetospheric activities are largely reproduced qualitatively. They also provide information on how geomagnetic disturbances develop in the magnetosphere in relation to the ionosphere. From the viewpoint of space weather, the real-time simulation helps us grasp the overall current state of the magnetosphere. To evaluate the simulation results, we compare AE indices derived from the simulation with observations; in general, they agree well for quiet days and isolated substorms.

  14. On-loom, real-time, noncontact detection of fabric defects by ultrasonic imaging.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, H. T.

    1998-09-08

    A noncontact, on-loom ultrasonic inspection technique was developed for real-time 100% defect inspection of fabrics. A prototype was built and tested successfully on loom. The system is compact, rugged, low cost, requires minimal maintenance, is not sensitive to fabric color and vibration, and can easily be adapted to current loom configurations. Moreover, it can detect defects in both the pick and warp directions. The system is capable of determining the size, location, and orientation of each defect. To further improve the system, air-coupled transducers with higher efficiency and sensitivity need to be developed. Advanced detection algorithms also need to be developed for better classification and categorization of defects in real-time.

  15. A Voluntary Breath-Hold Treatment Technique for the Left Breast With Unfavorable Cardiac Anatomy Using Surface Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gierga, David P., E-mail: dgierga@partners.org; Harvard Medical School, Boston, Massachusetts; Turcotte, Julie C.

    2012-12-01

    Purpose: Breath-hold (BH) treatments can be used to reduce cardiac dose for patients with left-sided breast cancer and unfavorable cardiac anatomy. A surface imaging technique was developed for accurate patient setup and reproducible real-time BH positioning. Methods and Materials: Three-dimensional surface images were obtained for 20 patients. Surface imaging was used to correct the daily setup for each patient. Initial setup data were recorded for 443 fractions and were analyzed to assess random and systematic errors. Real-time monitoring was used to verify surface placement during BH. The radiation beam was not turned on if the BH position difference was greater than 5 mm. Real-time surface data were analyzed for 2398 BHs and 363 treatment fractions. The mean and maximum differences were calculated. The percentage of BHs greater than tolerance was calculated. Results: The mean shifts for initial patient setup were 2.0 mm, 1.2 mm, and 0.3 mm in the vertical, longitudinal, and lateral directions, respectively. The mean 3-dimensional vector shift was 7.8 mm. Random and systematic errors were less than 4 mm. Real-time surface monitoring data indicated that 22% of the BHs were outside the 5-mm tolerance (range, 7%-41%), and there was a correlation with breast volume. The mean difference between the treated and reference BH positions was 2 mm in each direction. For out-of-tolerance BHs, the average difference in the BH position was 6.3 mm, and the average maximum difference was 8.8 mm. Conclusions: Daily real-time surface imaging ensures accurate and reproducible positioning for BH treatment of left-sided breast cancer patients with unfavorable cardiac anatomy.
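    The out-of-tolerance bookkeeping described above reduces to simple vector arithmetic over the monitored breath-hold offsets. A minimal sketch with hypothetical offset data (the 5 mm tolerance is from the study; the data and function names are ours):

    ```python
    import numpy as np

    def bh_statistics(offsets_mm, tol_mm=5.0):
        """offsets_mm: (N, 3) vertical/longitudinal/lateral differences from
        the reference BH surface position, one row per monitored breath-hold.
        Returns the mean 3-D vector shift and the share of BHs out of tolerance."""
        vec = np.linalg.norm(offsets_mm, axis=1)
        return {"mean_3d_mm": float(vec.mean()),
                "pct_out": float(100.0 * (vec > tol_mm).mean())}

    # Hypothetical monitoring data: 2398 BHs, most near the reference position.
    rng = np.random.default_rng(1)
    offsets = rng.normal(0.0, 2.0, (2398, 3))
    stats = bh_statistics(offsets)
    print(stats)
    ```

    In the clinical workflow, a per-BH check of the 3-D vector against the tolerance is what gates the beam; the aggregate statistics correspond to the per-patient percentages reported above.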

  16. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach.

    PubMed

    Salamon, Johannes; Hofmann, Martin; Jung, Caroline; Kaul, Michael Gerhard; Werner, Franziska; Them, Kolja; Reimer, Rudolph; Nielsen, Peter; Vom Scheidt, Annika; Adam, Gerhard; Knopp, Tobias; Ittrich, Harald

    2016-01-01

    In-vitro evaluation of the feasibility of 4D real-time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. A guide wire and angioplasty catheter were labeled with a thin layer of magnetic lacquer. For real-time MPI a custom-made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI-based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real-time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. The magnetic marks consist of micrometer-sized ferromagnetic plates mainly composed of iron and iron oxide. 4D real-time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI-based roadmapping might emerge as a promising tool for radiation-free 4D MPI-guided interventions.

  17. The Smartphone Brain Scanner: A Portable Real-Time Neuroimaging System

    PubMed Central

    Stopczynski, Arkadiusz; Stahlhut, Carsten; Larsen, Jakob Eg; Petersen, Michael Kai; Hansen, Lars Kai

    2014-01-01

    Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. Here we present the technical details and validation of a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction. The system – Smartphone Brain Scanner – combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully portable system for real-time 3D EEG imaging. We discuss the benefits and challenges, including technical limitations as well as details of real-time reconstruction of 3D images of brain activity. We present examples of brain activity captured in a simple experiment involving imagined finger tapping, which shows that the acquired signal in a relevant brain region is similar to that obtained with standard EEG lab equipment. Although the quality of the signal in a mobile solution using an off-the-shelf consumer neuroheadset is lower than the signal obtained using high-density standard EEG equipment, we propose mobile application development may offset the disadvantages and provide completely new opportunities for neuroimaging in natural settings. PMID:24505263

  18. Application test of a Detection Method for the Enclosed Turbine Runner Chamber

    NASA Astrophysics Data System (ADS)

    Liu, Yunlong; Shen, Dingjie; Xie, Yi; Yang, Xiangwei; Long, Yi; Li, Wenbo

    2017-06-01

    At present, the existing methods for testing the key hidden metal components of the turbine runner chamber suffer from poor reliability, inaccurate locating, and large detection blind spots. For inspection during downtime, without opening the cover of the hydropower turbine runner chamber, an automatic detection method based on real-time image acquisition and simulation comparison techniques is proposed. Using permanent-magnet wheels, a magnetic crawler carrying the real-time image acquisition device crawls along the inner surface of the enclosed chamber, while the acquisition device collects images of the scene in real time. Using the location obtained from a positioning auxiliary device, the position of each real-time detection image is calibrated within a virtual 3D model. By comparing the real-time detection images with computer-simulated images, defects or foreign matter that has fallen in can be accurately located, facilitating repair and cleanup.
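    The comparison between a calibrated real-time image and its simulated counterpart can be sketched as a thresholded difference image with connected-component labeling. This is our illustration of the general technique, not the authors' algorithm, and the threshold value is hypothetical:

    ```python
    import numpy as np
    from scipy.ndimage import label

    def locate_defects(measured, reference, threshold=25.0):
        """Flag pixels where the real-time image departs from the simulated
        reference, then label connected regions as candidate defects."""
        diff = np.abs(measured.astype(float) - reference.astype(float))
        labels, n = label(diff > threshold)
        return labels, n

    # Uniform simulated reference vs. a measured frame with one injected defect.
    reference = np.full((100, 100), 128.0)
    measured = reference.copy()
    measured[40:45, 60:70] = 200.0          # injected "defect" patch
    _, n_defects = locate_defects(measured, reference)
    print(n_defects)
    ```

    A real deployment would first register the measured frame to the 3D model pose reported by the positioning device, so that differences reflect surface condition rather than misalignment.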

  19. Fluorescence imaging of angiogenesis in green fluorescent protein-expressing tumors

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Baranov, Eugene; Jiang, Ping; Li, Xiao-Ming; Wang, Jin W.; Li, Lingna; Yagi, Shigeo; Moossa, A. R.; Hoffman, Robert M.

    2002-05-01

    The development of therapeutics for the control of tumor angiogenesis requires a simple, reliable in vivo assay for tumor-induced vascularization. For this purpose, we have adapted the orthotopic implantation model of angiogenesis by using human and rodent tumors genetically tagged with Aequorea victoria green fluorescent protein (GFP) for grafting into nude mice. Genetically-fluorescent tumors can be readily imaged in vivo. The non-luminous induced capillaries are clearly visible against the bright tumor fluorescence examined either intravitally or by whole-body luminance in real time. Fluorescence shadowing replaces the laborious histological techniques for determining blood vessel density. High-level GFP-expressing tumor cell lines made it possible to acquire the high-resolution real-time fluorescent optical images of angiogenesis in both primary tumors and their metastatic lesions in various human and rodent tumor models by means of a light-based imaging system. Intravital images of angiogenesis onset and development were acquired and quantified from a GFP-expressing orthotopically-growing human prostate tumor over a 19-day period. Whole-body optical imaging visualized vessel density increasing linearly over a 20-week period in orthotopically-growing, GFP-expressing human breast tumor MDA-MB-435. Vessels in an orthotopically-growing GFP-expressing Lewis lung carcinoma tumor were visualized through the chest wall via a reversible skin flap. These clinically-relevant angiogenesis mouse models can be used for real-time in vivo evaluation of agents inhibiting or promoting tumor angiogenesis in physiological microenvironments.

  20. A surgical confocal microlaparoscope for real-time optical biopsies

    NASA Astrophysics Data System (ADS)

    Tanbakuchi, Anthony Amir

    The first real-time fluorescence confocal microlaparoscope has been developed that provides instant in vivo cellular images, comparable to those provided by histology, through a nondestructive procedure. The device includes an integrated contrast agent delivery mechanism and a computerized depth scan system. The instrument uses a fiber bundle to relay the image plane of a slit-scan confocal microlaparoscope into tissue. The confocal laparoscope was used to image the ovaries of twenty-one patients in vivo using fluorescein sodium and acridine orange as the fluorescent contrast agents. The results indicate that the device is safe and functions as designed. A Monte Carlo model was developed to characterize the system performance in a scattering media representative of human tissues. The results indicate that a slit aperture has limited ability to image below the surface of tissue. In contrast, the results show that multi-pinhole apertures such as a Nipkow disk or a linear pinhole array can achieve nearly the same depth performance as a single pinhole aperture. The model was used to determine the optimal aperture spacing for the multi-pinhole apertures. The confocal microlaparoscope represents a new type of in vivo imaging device. With its ability to image cellular details in real time, it has the potential to aid in the early diagnosis of cancer. Initially, the device may be used to locate unusual regions for guided biopsies. In the long term, the device may be able to supplant traditional biopsies and allow the surgeon to identify early stage cancer in vivo.

  1. Design of area array CCD image acquisition and display system based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhang, Ning; Li, Tianting; Pan, Yue; Dai, Yuming

    2014-09-01

    With the development of science and technology, the CCD (charge-coupled device) has been widely applied in various fields and plays an important role in modern sensing systems, so researching a real-time image acquisition and display scheme based on a CCD device has great significance. This paper introduces an image data acquisition and display system for an area array CCD based on an FPGA. Several key technical challenges of the system are analyzed and solutions put forward. The FPGA serves as the core processing unit and controls the overall timing sequence. The ICX285AL area array CCD image sensor produced by SONY Corporation is used in the system. The FPGA drives the area array CCD, and an analog front end (AFE) processes the CCD image signal, including amplification, filtering, noise elimination, and correlated double sampling (CDS). An AD9945 produced by ADI Corporation converts the analog signal to digital. A Camera Link high-speed data transmission circuit was developed, PC-side image acquisition software was designed, and real-time display of images was realized. Practical testing indicates that the system is stable and reliable in image acquisition and control, and its indicators meet the actual project requirements.

  2. Integrated Raman spectroscopy and trimodal wide-field imaging techniques for real-time in vivo tissue Raman measurements at endoscopy.

    PubMed

    Huang, Zhiwei; Teh, Seng Khoon; Zheng, Wei; Mo, Jianhua; Lin, Kan; Shao, Xiaozhuo; Ho, Khek Yu; Teh, Ming; Yeoh, Khay Guan

    2009-03-15

    We report integrated Raman spectroscopy and trimodal (white-light reflectance, autofluorescence, and narrow-band) imaging techniques for real-time in vivo tissue Raman measurements at endoscopy. A special 1.8 mm endoscopic Raman probe with filtering modules is developed, permitting effective elimination of interference from the fluorescence background and silica Raman in the fibers while maximizing tissue Raman collection. We demonstrate that high-quality in vivo Raman spectra of the upper gastrointestinal tract can be acquired within 1 s or less under the guidance of wide-field endoscopic imaging modalities, greatly facilitating the adoption of Raman spectroscopy into clinical research and practice during routine endoscopic inspections.

  3. Laser-Generated Ultrasonic Source for a Real-Time Dry-Contact Imaging System

    NASA Astrophysics Data System (ADS)

    Petculescu, G.; Zhou, Y.; Komsky, I.; Krishnaswamy, S.

    2006-03-01

    A laser-generated ultrasonic source, to be used with a real-time imaging device, was developed. The ultrasound is generated in the thermoelastic regime in a composite layer composed of absorbing carbon particles and silicone rubber. The composite layer plays three roles: absorption, constriction, and dry coupling. The central frequency of the generated pulse was controlled by varying the absorption depth of the generation layer; the maximum peak frequency obtained was 4 MHz. When additional constriction was applied to the composite layer, the amplitude of the generated signal increased further, owing to the large thermal expansion coefficient of the silicone. Images were acquired using the laser-generated ultrasonic source.

  4. Label-free real-time imaging of myelination in the Xenopus laevis tadpole by in vivo stimulated Raman scattering microscopy

    NASA Astrophysics Data System (ADS)

    Hu, Chun-Rui; Zhang, Delong; Slipchenko, Mikhail N.; Cheng, Ji-Xin; Hu, Bing

    2014-08-01

    The myelin sheath, like the axon, plays an important role in the functioning of the nervous system, and myelin degradation is a hallmark pathology of multiple sclerosis and spinal cord injury. Electron microscopy, fluorescence microscopy, and magnetic resonance imaging are three major techniques used for myelin visualization; however, microscopic observation of myelin in living organisms remains a challenge. Using a newly developed stimulated Raman scattering microscopy approach, we report noninvasive, label-free, real-time in vivo imaging of myelination by a single Schwann cell, maturation of a single node of Ranvier, and myelin degradation in the transparent body of the Xenopus laevis tadpole.

  5. Real-time computer treatment of THz passive device images with the high image quality

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2012-06-01

    We demonstrate real-time computer code that significantly improves the quality of images captured by a passive THz imaging system. The code is not restricted to one passive THz device: it can be applied to any such device, and to active THz imaging systems as well. We applied the code to images captured by four passive THz imaging devices manufactured by different companies; note that processing images from different devices usually requires different spatial filters. The current version of the code processes more than one image per second for a THz image with more than 5000 pixels and 24-bit number representation. Processing a single THz image produces about 20 output images simultaneously, corresponding to the various spatial filters. The code allows the number of pixels in processed images to be increased without noticeable reduction in image quality, and its performance can be increased many times using parallel image-processing algorithms. We develop original spatial filters that make it possible to see objects smaller than 2 cm. The imagery is produced by passive THz devices that capture images of objects hidden under opaque clothes. For images with high noise, we develop an approach that suppresses the noise during computer processing and yields a good-quality image. To illustrate the efficiency of the developed approach, we demonstrate the detection of liquid explosive, ordinary explosive, a knife, a pistol, a metal plate, a CD, ceramics, chocolate, and other objects hidden under opaque clothes. The results demonstrate the high efficiency of our approach for detecting hidden objects and are a very promising solution for the security problem.
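    Producing ~20 filtered variants per frame, as described, amounts to running the frame through a bank of spatial filters and letting the operator pick the clearest result. A minimal sketch using generic SciPy filters (the paper's own filters are original and unpublished, so these stand in for them):

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, median_filter, uniform_filter

    def filter_bank(img):
        """Apply several spatial filters to one THz frame; the operator
        reviews all variants and keeps the one that reveals hidden objects best."""
        return {
            "gauss_1": gaussian_filter(img, sigma=1.0),   # mild smoothing
            "gauss_2": gaussian_filter(img, sigma=2.0),   # stronger smoothing
            "median_3": median_filter(img, size=3),       # impulse-noise suppression
            "mean_5": uniform_filter(img, size=5),        # local averaging
        }

    frame = np.random.default_rng(2).normal(size=(80, 64))   # stand-in THz frame
    variants = filter_bank(frame)
    print(len(variants))
    ```

    Since each filter is independent, the bank parallelizes trivially across cores or GPU streams, which matches the paper's note that parallel algorithms can multiply throughput.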

  6. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
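    The real-time centroiding step described above can be sketched with standard connected-component labeling: threshold the frame, label each ion spot, and record its centroid and integrated intensity (the intensity is what gets correlated with the PMT time-of-flight peak heights). This is an offline illustration; the actual system runs custom algorithms fast enough for a 1 kHz repetition rate:

    ```python
    import numpy as np
    from scipy import ndimage

    def centroid_spots(frame, threshold):
        """Find ion spots above threshold; return each spot's centroid
        (row, col) and integrated intensity."""
        labels, n = ndimage.label(frame > threshold)
        idx = np.arange(1, n + 1)
        coms = ndimage.center_of_mass(frame, labels, idx)
        intens = ndimage.sum(frame, labels, idx)
        return list(zip(coms, intens))

    # Synthetic frame with two phosphor spots of different brightness.
    frame = np.zeros((32, 32))
    frame[5:8, 5:8] = 10.0      # bright spot
    frame[20:23, 24:27] = 4.0   # dimmer spot
    spots = centroid_spots(frame, threshold=1.0)
    print(len(spots))
    ```

    Sorting spots by integrated intensity and matching them against the ranked TOF peak heights is one way to realize the multi-hit pairing the abstract describes.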

  7. Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping

    NASA Astrophysics Data System (ADS)

    B. Mondal, Suman; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel

    2015-07-01

    The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging.

  8. Imaging the Directed Transport of Single Engineered RNA Transcripts in Real-Time Using Ratiometric Bimolecular Beacons

    PubMed Central

    Zhang, Xuemei; Zajac, Allison L.; Huang, Lingyan; Behlke, Mark A.; Tsourkas, Andrew

    2014-01-01

    The relationship between RNA expression and cell function can often be difficult to decipher due to the presence of both temporal and sub-cellular processing of RNA. These intricacies of RNA regulation can often be overlooked when only acquiring global measurements of RNA expression. This has led to development of several tools that allow for the real-time imaging of individual engineered RNA transcripts in living cells. Here, we describe a new technique that utilizes an oligonucleotide-based probe, ratiometric bimolecular beacon (RBMB), to image RNA transcripts that were engineered to contain 96-tandem repeats of the RBMB target sequence in the 3′-untranslated region. Binding of RBMBs to the target RNA resulted in discrete bright fluorescent spots, representing individual transcripts, that could be imaged in real-time. Since RBMBs are a synthetic probe, the use of photostable, bright, and red-shifted fluorophores led to a high signal-to-background. RNA motion was readily characterized by both mean squared displacement and moment scaling spectrum analyses. These analyses revealed clear examples of directed, Brownian, and subdiffusive movements. PMID:24454933
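    Mean squared displacement analysis, as used above to classify transcript motion as directed, Brownian, or subdiffusive, can be sketched generically (our implementation, not the authors'): the MSD grows linearly with lag for Brownian motion and quadratically for directed transport.

    ```python
    import numpy as np

    def msd(track, max_lag):
        """track: (T, 2) particle positions over time.
        Returns the mean squared displacement at lags 1..max_lag."""
        return np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                         for lag in range(1, max_lag + 1)])

    # Directed transport: constant drift gives quadratic MSD growth.
    t = np.arange(100, dtype=float)
    directed = np.column_stack([0.1 * t, 0.05 * t])   # hypothetical drift track
    m = msd(directed, max_lag=10)
    print(np.all(np.diff(m) > 0))
    ```

    For this drift track the MSD at lag L is exactly 0.0125 L², so doubling the lag quadruples the MSD, which is the quadratic signature of directed motion.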

  9. [Development of fluorescent probes for bone imaging in vivo: fluorescent probes for intravital imaging of osteoclast activity].

    PubMed

    Minoshima, Masafumi; Kikuchi, Kazuya

    Fluorescent molecules are widely used as tools to directly visualize target biomolecules in vivo. Fluorescent probes have the advantage that a desired function can be built in through rational design. Bone-imaging fluorescent probes intended for in vivo use must be delivered to bone tissue upon administration. Recently, a fluorescent probe for detecting osteoclast activity was developed. This probe combines acid-sensitive fluorescence, specific delivery to bone tissue, and durability against laser irradiation, which enabled real-time intravital imaging of bone-resorbing osteoclasts over long periods.

  10. The pivotal role of multimodality reporter sensors in drug discovery: from cell based assays to real time molecular imaging.

    PubMed

    Ray, Pritha

    2011-04-01

    Development and marketing of new drugs require stringent validation that is expensive and time consuming. Non-invasive multimodality molecular imaging using reporter genes holds great potential to expedite these processes at reduced cost. New generations of smarter molecular imaging strategies, such as split-reporter, bioluminescence resonance energy transfer, and multimodality fusion reporter technologies, will further help to streamline and shorten the drug discovery and development process. This review illustrates the importance and potential of molecular imaging using multimodality reporter genes in drug development at preclinical phases.

  11. Canine spontaneous glioma: A translational model system for convection-enhanced delivery

    PubMed Central

    Dickinson, Peter J.; LeCouteur, Richard A.; Higgins, Robert J.; Bringas, John R.; Larson, Richard F.; Yamashita, Yoji; Krauze, Michal T.; Forsayeth, John; Noble, Charles O.; Drummond, Daryl C.; Kirpotin, Dmitri B.; Park, John W.; Berger, Mitchel S.; Bankiewicz, Krystof S.

    2010-01-01

    Canine spontaneous intracranial tumors bear striking similarities to their human tumor counterparts and have the potential to provide a large animal model system for more realistic validation of novel therapies typically developed in small rodent models. We used spontaneously occurring canine gliomas to investigate the use of convection-enhanced delivery (CED) of liposomal nanoparticles containing the topoisomerase inhibitor CPT-11. To facilitate visualization of intratumoral infusions by real-time magnetic resonance imaging (MRI), we included identically formulated liposomes loaded with gadoteridol. Real-time MRI defined the distribution of infusate within both tumor and normal brain tissues. The most important limiting factor for volume of distribution within tumor tissue was the leakage of infusate into ventricular or subarachnoid spaces. Decreased tumor volume, tumor necrosis, and modulation of tumor phenotype correlated with volume of distribution of infusate (Vd), infusion location, and leakage as determined by real-time MRI and histopathology. This study demonstrates the potential for canine spontaneous gliomas as a model system for the validation and development of novel therapeutic strategies for human brain tumors. Data obtained from infusions monitored in real time in a large, spontaneous tumor may provide information allowing more accurate prediction and optimization of infusion parameters. Variability in Vd between tumors strongly suggests that real-time imaging should be an essential component of CED therapeutic trials to allow minimization of inappropriate infusions and accurate assessment of clinical outcomes. PMID:20488958

  12. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm.

    PubMed

    Tehrani, Joubin Nasehi; O'Brien, Ricky T; Poulsen, Per Rugaard; Keall, Paul

    2013-12-07

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated and used as input to the ICP algorithm to compute the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with translation-only estimation to a mean of 0.16 mm when rotation and translation were estimated jointly with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) axes, respectively. Among all six degrees of freedom, the highest correlation was between AP and SI translation (0.67); the second highest was between rotation around RL and rotation around AP (-0.33). Our real-time algorithm also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies requiring real-time measurement of, and adaptation to, tumor rotation.
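    The core computation underlying the method above, estimating a rigid rotation and translation from matched fiducial marker positions, can be sketched with a closed-form least-squares fit (the Kabsch/SVD step that ICP iterates when correspondences are unknown; here the three markers are assumed already matched, and the coordinates are illustrative, not patient data).

```python
import numpy as np

def rigid_fit(ref, cur):
    """Least-squares rotation R and translation t with cur_i ~ R @ ref_i + t.

    ref, cur: (N, 3) matched marker positions, e.g. three prostate
    fiducials at planning time vs. treatment time.
    """
    ref_c, cur_c = ref.mean(axis=0), cur.mean(axis=0)
    H = (ref - ref_c).T @ (cur - cur_c)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cur_c - R @ ref_c
    return R, t

# three fiducials, rotated 5 degrees about the SI axis and shifted 2 mm RL
ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 15.0, 5.0]])
a = np.deg2rad(5.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
cur = ref @ R_true.T + np.array([2.0, 0.0, 0.0])
R, t = rigid_fit(ref, cur)
```

With three non-collinear markers the fit is exact in the noise-free case; in practice the markers' 3D positions are themselves estimated from the 2D kV projections, which adds noise that the least-squares step averages out.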

  13. Real-time estimation of prostate tumor rotation and translation with a kV imaging system based on an iterative closest point algorithm

    NASA Astrophysics Data System (ADS)

    Nasehi Tehrani, Joubin; O'Brien, Ricky T.; Rugaard Poulsen, Per; Keall, Paul

    2013-12-01

    Previous studies have shown that during cancer radiotherapy a small translation or rotation of the tumor can lead to errors in dose delivery. Current best practice in radiotherapy accounts for tumor translations but is unable to address rotation due to the lack of a reliable real-time estimate. We have developed a method based on the iterative closest point (ICP) algorithm that can compute rotation from kilovoltage x-ray images acquired during radiation treatment delivery. A total of 11 748 kilovoltage (kV) images acquired from ten patients (one fraction for each patient) were used to evaluate our tumor rotation algorithm. For each kV image, the three-dimensional coordinates of three fiducial markers inside the prostate were calculated and used as input to the ICP algorithm to compute the real-time tumor rotation and translation around three axes. The results show that the root mean square error for real-time calculation of tumor displacement improved from a mean of 0.97 mm with translation-only estimation to a mean of 0.16 mm when rotation and translation were estimated jointly with the ICP algorithm. The standard deviation (SD) of rotation for the ten patients was 2.3°, 0.89° and 0.72° around the right-left (RL), anterior-posterior (AP) and superior-inferior (SI) axes, respectively. Among all six degrees of freedom, the highest correlation was between AP and SI translation (0.67); the second highest was between rotation around RL and rotation around AP (-0.33). Our real-time algorithm also confirms previous studies that have shown the maximum SD belongs to AP translation and rotation around RL. ICP is a reliable and fast algorithm for estimating real-time tumor rotation, which could create a pathway to investigational clinical treatment studies requiring real-time measurement of, and adaptation to, tumor rotation.

  14. Real time quantitative imaging for semiconductor crystal growth, control and characterization

    NASA Technical Reports Server (NTRS)

    Wargo, Michael J.

    1991-01-01

    A quantitative real-time image processing system has been developed which can be software-reconfigured for semiconductor processing and characterization tasks. In thermal imager mode, 2D temperature distributions of semiconductor melt surfaces (900-1600 °C) can be obtained with temperature and spatial resolutions better than 0.5 °C and 0.5 mm, respectively, as demonstrated by analysis of melt surface thermal distributions. Temporal and spatial image processing techniques and multitasking computational capabilities convert such thermal imaging into a multimode sensor for crystal growth control. A second configuration of the image processing engine, in conjunction with bright- and dark-field transmission optics, is used to nonintrusively determine the microdistribution of free charge carriers and submicron-sized crystalline defects in semiconductors. The IR absorption characteristics of wafers are determined with 10-μm spatial resolution and, after calibration, are converted into charge carrier density.

  15. A Sensitive Measurement for Estimating Impressions of Image-Contents

    NASA Astrophysics Data System (ADS)

    Sato, Mie; Matouge, Shingo; Mori, Toshifumi; Suzuki, Noboru; Kasuga, Masao

    We have investigated Kansei Content, which conveys the maker's intention to the viewer's kansei (sensibility). The semantic differential (SD) method is a good way to evaluate subjective impressions of image contents. However, because the SD method is administered after subjects view the image contents, it is difficult to examine impressions of detailed scenes in real time. To measure viewers' impressions of image contents in real time, we have developed a Taikan sensor. With the Taikan sensor, we investigate relations among the image contents, grip strength, and body temperature. We also explore the interface of the Taikan sensor to make it easy to use. In our experiment, a horror movie was used that strongly affects the subjects' emotions. Our results suggest that grip strength increases when subjects view a tense scene and that the Taikan sensor is easy to use without the circular base that is originally installed.

  16. Induction and imaging of photothrombotic stroke in conscious and freely moving rats

    NASA Astrophysics Data System (ADS)

    Lu, Hongyang; Li, Yao; Yuan, Lu; Li, Hangdao; Lu, Xiaodan; Tong, Shanbao

    2014-09-01

    In experimental stroke research, anesthesia is commonly used and is a major contributor to translational failure. Real-time cerebral blood flow (CBF) monitoring during stroke onset can provide important information for the prediction of brain injury; however, this is difficult to achieve in clinical practice due to various technical problems. We created a photothrombotic focal ischemic stroke model utilizing our self-developed miniature headstage in conscious and freely moving rats. In this model, a high-spatiotemporal-resolution imager using laser speckle contrast imaging technology was integrated to acquire real-time two-dimensional CBF information during thrombosis. The feasibility, stability, and reliability of the system were tested in terms of CBF, behavior, and T2-weighted magnetic resonance imaging (MRI) findings. After completion of occlusion, the CBF in the targeted cortex of the stroke group was reduced to 16±9% of the baseline value. The mean infarct volume measured by MRI 24 h post-modeling was 77±11 mm3 and correlated well with CBF (R2=0.74). This rodent model of focal cerebral ischemia and real-time blood flow imaging opens the possibility of performing various fundamental and translational studies on stroke without the influence of anesthetics.
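    Laser speckle contrast imaging, the CBF readout used above, reduces to computing the local ratio of standard deviation to mean intensity in a raw speckle frame. A minimal spatial-contrast sketch follows; the window size, the synthetic test images, and the exponential-intensity model of fully developed speckle are illustrative assumptions, not details from the paper.

```python
import numpy as np

def speckle_contrast(raw, win=7):
    """Spatial speckle contrast K = std/mean in each win x win neighbourhood.

    Faster-moving scatterers blur the speckle pattern within the camera
    exposure, lowering K; a common relative flow index is 1/K**2.
    """
    raw = np.asarray(raw, dtype=float)
    windows = np.lib.stride_tricks.sliding_window_view(raw, (win, win))
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return std / np.maximum(mean, 1e-12)

rng = np.random.default_rng(3)
# static scatterers: fully developed speckle has exponential intensity, K ~ 1
static = speckle_contrast(rng.exponential(1.0, (64, 64)))
# fast flow: speckle fully blurred to a uniform image, K ~ 0
flowing = speckle_contrast(np.full((64, 64), 1.0))
```

A real pipeline would average several raw frames and map 1/K² across the field of view to produce the 2D CBF images described in the abstract.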

  17. GRAPE: a graphical pipeline environment for image analysis in adaptive magnetic resonance imaging.

    PubMed

    Gabr, Refaat E; Tefera, Getaneh B; Allen, William J; Pednekar, Amol S; Narayana, Ponnada A

    2017-03-01

    We present a platform, GRAphical Pipeline Environment (GRAPE), to facilitate the development of patient-adaptive magnetic resonance imaging (MRI) protocols. GRAPE is an open-source project implemented in the Qt C++ framework that enables graphical creation, execution, and debugging of real-time image analysis algorithms integrated with the MRI scanner. The platform provides the tools and infrastructure to design new algorithms, to build and execute an array of image analysis routines, and to incorporate existing analysis libraries, all within a graphical environment. The application of GRAPE is demonstrated in multiple MRI applications, and the software is described in detail for both the user and the developer. GRAPE was successfully used to implement and execute three applications in MRI of the brain, performed on a 3.0-T MRI scanner: (i) a multi-parametric pipeline for segmenting the brain tissue and detecting lesions in multiple sclerosis (MS), (ii) patient-specific optimization of the 3D fluid-attenuated inversion recovery MRI scan parameters to enhance the contrast of brain lesions in MS, and (iii) an algebraic image method for combining two MR images for improved lesion contrast. GRAPE allows graphical development and execution of image analysis algorithms for inline, real-time, and adaptive MRI applications.
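    GRAPE itself is a Qt C++ application, but the pipeline pattern it embodies, named stages executed in dependency order over a shared set of images, can be sketched compactly. All class names, stage names, and the toy normalize/threshold graph below are illustrative assumptions, not GRAPE's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Stage:
    """One pipeline node: reads named entries from the shared context
    and returns new or updated entries."""
    name: str
    func: Callable[[dict], dict]
    inputs: List[str] = field(default_factory=list)   # upstream stage names

class Pipeline:
    """Minimal dependency-ordered execution of image-analysis stages."""
    def __init__(self):
        self.stages: Dict[str, Stage] = {}

    def add(self, stage: Stage):
        self.stages[stage.name] = stage

    def run(self, context: dict) -> dict:
        done = set()
        def visit(name):
            if name in done:
                return
            for dep in self.stages[name].inputs:
                visit(dep)                      # run upstream stages first
            context.update(self.stages[name].func(context))
            done.add(name)
        for name in self.stages:
            visit(name)
        return context

# toy two-stage graph: normalize an "image", then threshold it
pipe = Pipeline()
pipe.add(Stage("normalize",
               lambda c: {"norm": [v / max(c["image"]) for v in c["image"]]}))
pipe.add(Stage("threshold",
               lambda c: {"mask": [v > 0.5 for v in c["norm"]]},
               inputs=["normalize"]))
result = pipe.run({"image": [1.0, 2.0, 4.0]})
```

A graphical environment like GRAPE adds the visual wiring of such stages plus scanner I/O; the execution model underneath is essentially this dependency walk.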

  18. Development and evaluation of a novel, real time mobile telesonography system in management of patients with abdominal trauma: study protocol.

    PubMed

    Ogedegbe, Chinwe; Morchel, Herman; Hazelwood, Vikki; Chaplin, William F; Feldman, Joseph

    2012-12-18

    Despite the use of e-FAST in the management of patients with abdominal trauma, its utility in the prehospital setting is not widely adopted. The goal of this study is to develop a novel portable telesonography (TS) system and evaluate the comparability of the quality of images obtained via this system among healthy volunteers who undergo e-FAST abdominal examination in a moving ambulance and at the ED. We hypothesize that: (1) real-time ultrasound images of acute trauma patients in the pre-hospital setting can be obtained and transmitted to the ED via the novel TS system; and (2) ultrasound images transmitted to the hospital from the real-time TS system will be comparable in quality to those obtained in the ED. Study participants are three healthy volunteers (one each in the normal, overweight, and obese BMI categories). The ultrasound images will be obtained by two ultrasound-trained physicians (UTPs). The TS system is a portable ultrasound unit (Sonosite) interfaced with a portable broadcast unit (Live-U). The two UTPs will conduct e-FAST examinations on the healthy volunteers in moving ambulances and transmit the images via cellular network to the hospital server, where they are stored. Upon arrival in the ED, the same UTPs will obtain another set of images from the volunteers, which are then compared to those obtained in the moving ambulances by a separate set of blinded UTPs (evaluators) using a validated image quality scale, the Questionnaire for User Interaction Satisfaction (QUIS). Findings from this study will provide needed data on the validity of the novel TS system in transmitting live images from moving ambulances that are comparable to images obtained in the ED, thus providing an opportunity to facilitate the medical care of patients located in remote or austere settings.

  19. Design and characterization of a handheld multimodal imaging device for the assessment of oral epithelial lesions

    NASA Astrophysics Data System (ADS)

    Higgins, Laura M.; Pierce, Mark C.

    2014-08-01

    A compact handpiece combining high resolution fluorescence (HRF) imaging with optical coherence tomography (OCT) was developed to provide real-time assessment of oral lesions. This multimodal imaging device simultaneously captures coregistered en face images with subcellular detail alongside cross-sectional images of tissue microstructure. The HRF imaging acquires a 712×594 μm2 field-of-view at the sample with a spatial resolution of 3.5 μm. The OCT images were acquired to a depth of 1.5 mm with axial and lateral resolutions of 9.3 and 8.0 μm, respectively. HRF and OCT images are simultaneously displayed at 25 fps. The handheld device was used to image a healthy volunteer, demonstrating the potential for in vivo assessment of the epithelial surface for dysplastic and neoplastic changes at the cellular level, while simultaneously evaluating submucosal involvement. We anticipate potential applications in real-time assessment of oral lesions for improved surveillance and surgical guidance.

  20. Three-dimensional optical coherence tomography of the embryonic murine cardiovascular system

    NASA Astrophysics Data System (ADS)

    Luo, Wei; Marks, Daniel L.; Ralston, Tyler S.; Boppart, Stephen A.

    2006-03-01

    Optical coherence tomography (OCT) is an emerging high-resolution real-time biomedical imaging technology that has potential as a novel investigational tool in developmental biology and functional genomics. In this study, murine embryos and embryonic hearts are visualized with an OCT system capable of 2-µm axial and 15-µm lateral resolution and with real-time acquisition rates. We present, to our knowledge, the first sets of high-resolution 2- and 3-D OCT images that reveal the internal structures of the mammalian (murine) embryo (E10.5) and embryonic (E14.5 and E17.5) cardiovascular system. Strong correlations are observed between OCT images and corresponding hematoxylin- and eosin-stained histological sections. Real-time in vivo embryonic (E10.5) heart activity is captured by spectral-domain optical coherence tomography, processed, and displayed at a continuous rate of five frames per second. With the ability to obtain not only high-resolution anatomical data but also functional information during cardiovascular development, the OCT technology has the potential to visualize and quantify changes in murine development and in congenital and induced heart disease, as well as enable a wide range of basic in vitro and in vivo research studies in functional genomics.

  1. Real-time DNA Amplification and Detection System Based on a CMOS Image Sensor.

    PubMed

    Wang, Tiantian; Devadhasan, Jasmine Pramila; Lee, Do Young; Kim, Sanghyo

    2016-01-01

    In the present study, we developed a polypropylene well-integrated complementary metal oxide semiconductor (CMOS) platform to perform the loop-mediated isothermal amplification (LAMP) technique for real-time DNA amplification and detection simultaneously. An amplification-coupled detection system directly measures the photon number changes based on the generation of magnesium pyrophosphate and color changes. The photon number decreases during the amplification process. The CMOS image sensor observes the photons and converts them into digital units with the aid of an analog-to-digital converter (ADC). In addition, UV-spectral studies, optical color intensity detection, pH analysis, and electrophoresis detection were carried out to prove the efficiency of the CMOS sensor-based LAMP system. Moreover, Clostridium perfringens was utilized as a proof-of-concept target for the new system. We anticipate that this CMOS image sensor-based LAMP method will enable the creation of cost-effective, label-free, optical, real-time and portable molecular diagnostic devices.
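    The detection logic described above, flagging amplification when the photon count falls as magnesium pyrophosphate forms, can be sketched as a simple threshold-crossing on the sensor trace. This is a hedged illustration: the baseline window, the 20% drop criterion, and the synthetic sigmoidal trace are assumptions for demonstration, not parameters from the paper.

```python
import numpy as np

def time_to_positive(signal, t, baseline_n=10, drop_frac=0.2):
    """Return the first time the photon signal drops a set fraction
    below its initial baseline (None if it never does)."""
    signal = np.asarray(signal, dtype=float)
    baseline = signal[:baseline_n].mean()          # pre-amplification level
    below = signal < baseline * (1.0 - drop_frac)
    if not below.any():
        return None
    return float(t[np.argmax(below)])              # first crossing

t = np.arange(0.0, 60.0, 1.0)                      # minutes
# synthetic trace: photon count falls sigmoidally as turbidity develops
trace = 1000.0 - 400.0 / (1.0 + np.exp(-(t - 30.0)))
onset = time_to_positive(trace, t)
```

On real sensor data one would typically smooth the trace first; the time-to-positive value then serves as a semi-quantitative readout of starting template concentration.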

  2. Design of point-of-care (POC) microfluidic medical diagnostic devices

    NASA Astrophysics Data System (ADS)

    Leary, James F.

    2018-02-01

    Inexpensive, portable, hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of first patient contact by emergency medical personnel in the field require careful design of power and weight budgets to achieve realistic portability as hand-held, point-of-care diagnostic devices. True portability also requires small micro-pumps for high-throughput capability. Weight and power requirements dictate the use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS, and basic telecommunications can be met simultaneously by the use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be built with multi-platform development systems that are well suited to a variety of currently available cellphone technologies, which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging to reach the statistical significance needed to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or in the field. One or two drops of blood obtained by pin-prick should provide statistically meaningful results for real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
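    The pulsed-LED noise-subtraction idea above amounts to differencing photodiode samples taken with the LED on and off, so that ambient light and dark current (present in both phases) cancel. A minimal sketch, with the toggle period, signal, and noise levels as illustrative assumptions:

```python
import numpy as np

def pulsed_reading(samples, led_on):
    """Average (LED-on minus LED-off) photodiode reading.

    Ambient background and dark current appear in both phases and
    cancel in the difference; only the LED-excited signal remains.
    """
    on = samples[led_on].mean()
    off = samples[~led_on].mean()
    return on - off

rng = np.random.default_rng(1)
n = 2000
led_on = (np.arange(n) // 10) % 2 == 0    # LED toggles every 10 samples
ambient, signal = 5.0, 0.8                # background >> fluorescence signal
samples = ambient + signal * led_on + rng.normal(0.0, 0.5, n)
est = pulsed_reading(samples, led_on)     # recovers ~0.8 despite background
```

Averaging over many on/off cycles shrinks the noise on the estimate by roughly the square root of the sample count, which is what makes a small battery-powered photodiode viable against room light.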

  3. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring.

    PubMed

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J

    2012-11-21

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a Trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and a real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.
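    The state-augmented linear model T(t) = aR(t) + bR(t - τ) + c above is an ordinary least-squares problem once R and T are sampled on a common grid. The sketch below fits a, b, c directly against 1-D internal motion for clarity; the paper instead fits against projected marker positions through the projection operator P(θi), and the signals and lag here are synthetic.

```python
import numpy as np

def fit_motion_model(resp, internal, lag):
    """Least-squares fit of T(t) = a*R(t) + b*R(t - lag) + c.

    resp, internal: 1-D arrays on a common time grid; `lag` in samples.
    Returns (a, b, c).
    """
    R_now = resp[lag:]                     # R(t)
    R_lag = resp[:-lag]                    # R(t - lag)
    A = np.column_stack([R_now, R_lag, np.ones_like(R_now)])
    coeffs, *_ = np.linalg.lstsq(A, internal[lag:], rcond=None)
    return coeffs

# synthetic respiratory signal and internal motion obeying the model
t = np.linspace(0.0, 20.0, 600)            # seconds
resp = np.sin(2 * np.pi * t / 4.0)         # ~4 s breathing period
lag = 15                                   # samples (~0.5 s phase delay)
internal = 3.0 * resp + 1.0 * np.r_[np.zeros(lag), resp[:-lag]] + 2.0
a, b, c = fit_motion_model(resp, internal, lag)
```

In the real system the fit is re-initialized from kV arc images before MV delivery and then updated as each new 1 Hz kV image arrives, while the 30 Hz respiratory signal drives the model between images.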

  4. Experimental investigation of a general real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring

    NASA Astrophysics Data System (ADS)

    Cho, Byungchul; Poulsen, Per; Ruan, Dan; Sawant, Amit; Keall, Paul J.

    2012-11-01

    The goal of this work was to experimentally quantify the geometric accuracy of a novel real-time 3D target localization method using sequential kV imaging combined with respiratory monitoring for clinically realistic arc and static field treatment delivery and target motion conditions. A general method for real-time target localization using kV imaging and respiratory monitoring was developed. Each dimension of internal target motion T(x, y, z; t) was estimated from the external respiratory signal R(t) through the correlation between R(ti) and the projected marker positions p(xp, yp; ti) on kV images by a state-augmented linear model: T(x, y, z; t) = aR(t) + bR(t - τ) + c. The model parameters, a, b, c, were determined by minimizing the squared fitting error ∑‖p(xp, yp; ti) - P(θi) · (aR(ti) + bR(ti - τ) + c)‖2 with the projection operator P(θi). The model parameters were first initialized based on acquired kV arc images prior to MV beam delivery. This method was implemented on a trilogy linear accelerator consisting of an OBI x-ray imager (operating at 1 Hz) and real-time position monitoring (RPM) system (30 Hz). Arc and static field plans were delivered to a moving phantom programmed with measured lung tumour motion from ten patients. During delivery, the localization method determined the target position and the beam was adjusted in real time via dynamic multileaf collimator (DMLC) adaptation. The beam-target alignment error was quantified by segmenting the beam aperture and a phantom-embedded fiducial marker on MV images and analysing their relative position. With the localization method, the root-mean-squared errors of the ten lung tumour traces ranged from 0.7-1.3 mm and 0.8-1.4 mm during the single arc and five-field static beam delivery, respectively. Without the localization method, these errors ranged from 3.1-7.3 mm. 
In summary, a general method for real-time target localization using kV imaging and respiratory monitoring has been experimentally investigated for arc and static field delivery. The average beam-target error was 1 mm.

  5. Real-time imaging for cerebral ischemia in rats using the multi-wavelength handheld photoacoustic system

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Hang; Xu, Yu; Chan, Kim Chuan; Mehta, Kalpesh; Thakor, Nitish; Liao, Lun-De

    2017-02-01

    Stroke is the second leading cause of death worldwide. Rapid and precise diagnosis is essential to expedite clinical decisions and improve functional outcomes in stroke patients; therefore, real-time imaging plays an important role in providing crucial information for post-stroke recovery analysis. In this study, based on a multi-wavelength laser and an 18.5 MHz array-based ultrasound platform, a real-time handheld photoacoustic (PA) system was developed to evaluate cerebrovascular functions pre- and post-stroke in rats. Using this system, hemodynamic information such as cerebral blood volume (CBV) can be acquired for assessment. A rat stroke model (photothrombotic ischemia, PTI) was employed to evaluate the effect of local ischemia. To achieve better intrinsic PA contrast, Vantage and COMSOL simulations were applied to optimize the light delivery (e.g., the interval between the two arms) from the customized fiber bundle, while a phantom experiment was conducted to evaluate the imaging performance of the system. Results of the phantom experiment showed that hairs (~150 μm diameter) and pencil lead (500 μm diameter) could be imaged clearly. Results of in vivo experiments also demonstrated that stroke symptoms can be observed in the PTI model post-stroke. In the near future, with the help of PA-specific contrast agents, the system may be able to achieve blood-brain barrier leakage imaging post-stroke. Overall, the real-time handheld PA system holds great potential for disease models involving impairments in cerebrovascular functions.

  6. Computer-assisted surgical planning and automation of laser delivery systems

    NASA Astrophysics Data System (ADS)

    Zamorano, Lucia J.; Dujovny, Manuel; Dong, Ada; Kadi, A. Majeed

    1991-05-01

    This paper describes a real-time interactive surgical treatment planning workstation, utilizing multimodality imaging (computed tomography, magnetic resonance imaging, digital angiography), that has been developed to provide the neurosurgeon with two-dimensional multiplanar and three-dimensional displays of a patient's lesion.

  7. Fast parallel approach for 2-D DHT-based real-valued discrete Gabor transform.

    PubMed

    Tao, Liang; Kwan, Hon Keung

    2009-12-01

    Two-dimensional fast Gabor transform algorithms are useful for real-time applications due to the high computational complexity of the traditional 2-D complex-valued discrete Gabor transform (CDGT). This paper presents two block time-recursive algorithms for the 2-D DHT-based real-valued discrete Gabor transform (RDGT) and its inverse transform, and develops a fast parallel approach for the implementation of the two algorithms. The computational complexity of the proposed parallel approach is analyzed and compared with that of the existing 2-D CDGT algorithms. The results indicate that the proposed parallel approach is attractive for real-time image processing.
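    The building block of the RDGT above is the discrete Hartley transform (DHT), which keeps all arithmetic real-valued. A minimal 1-D sketch, computing the DHT from one FFT via H = Re(X) - Im(X) (a standard identity, since cas(z) = cos(z) + sin(z)); the 2-D block time-recursive algorithms of the paper are not reproduced here.

```python
import numpy as np

def dht(x):
    """1-D discrete Hartley transform:
    H[k] = sum_n x[n] * cas(2*pi*k*n/N), with cas(z) = cos(z) + sin(z).
    Real input -> real output, obtained here from a single complex FFT.
    """
    X = np.fft.fft(x)
    return X.real - X.imag

def idht(h):
    """The DHT is its own inverse up to a factor of 1/N."""
    return dht(h) / len(h)

x = np.random.default_rng(2).normal(size=64)
H = dht(x)
```

The self-inverse property (and the fact that no complex storage is needed) is what makes DHT-based Gabor transforms cheaper than their complex-valued counterparts for real images.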

  8. Real-time Internet connections: implications for surgical decision making in laparoscopy.

    PubMed

    Broderick, T J; Harnett, B M; Doarn, C R; Rodas, E B; Merrell, R C

    2001-08-01

    To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were "grabbed" from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. 
Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation.

  9. Methods and decision making on a Mars rover for identification of fossils

    NASA Technical Reports Server (NTRS)

    Eberlein, Susan; Yates, Gigi

    1989-01-01

    A system is being developed for automated fusion and interpretation of image data from multiple sensors, including multispectral data from an imaging spectrometer. Classical artificial intelligence techniques and artificial neural networks are employed to make real-time decisions based on current input and known scientific goals. Emphasis is placed on identifying minerals which could indicate past life activity or an environment supportive of life. Multispectral data can be used for geological analysis because different minerals have characteristic spectral reflectances in the visible and near-infrared range. Classification of each spectrum into a broad class, based on overall spectral shape and the locations of absorption bands, is possible in real time using artificial neural networks. The goal of the system is twofold: multisensor and multispectral data must be interpreted in real time so that potentially interesting sites can be flagged and investigated in more detail while the rover is near those sites; and the sensed data must be reduced to the most compact form possible without loss of crucial information. Autonomous decision making will allow a rover to achieve maximum scientific benefit from a mission. Both a classical rule-based approach and a decision neural network for making real-time choices are being considered. Neural nets may work well for adaptive decision making. A neural net can be trained to work in two steps. First, the actual input state is mapped to the closest of a number of memorized states. Then, after weighing the importance of various input parameters, the net produces an output decision based on the matched memory state. Real-time, autonomous image data analysis and decision-making capabilities are required for achieving maximum scientific benefit from a rover mission. The system under development will enhance the chances of identifying fossils or environments capable of supporting life on Mars.
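
    The two-step decision scheme described above can be sketched as follows. The mineral prototypes, reflectance values, and science weights below are invented for illustration; the original system used trained neural networks rather than an explicit nearest-prototype lookup.

```python
import numpy as np

# Step 1: map the input spectrum to the closest memorized state.
# Step 2: weigh the matched class against scientific goals to decide.
# All prototypes, band values, and weights here are hypothetical.

PROTOTYPES = {
    "carbonate": np.array([0.9, 0.7, 0.4, 0.6]),   # made-up 4-band reflectances
    "silicate":  np.array([0.5, 0.5, 0.5, 0.5]),
    "hematite":  np.array([0.2, 0.4, 0.8, 0.9]),
}

SCIENCE_WEIGHTS = {"carbonate": 1.0, "hematite": 0.6, "silicate": 0.1}

def match_memory_state(spectrum):
    """Step 1: nearest memorized state by Euclidean distance."""
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(PROTOTYPES[k] - spectrum))

def decide(spectrum):
    """Step 2: flag the site if the matched class is scientifically important."""
    cls = match_memory_state(spectrum)
    return cls, ("flag_site" if SCIENCE_WEIGHTS[cls] > 0.5 else "continue")

cls, action = decide(np.array([0.85, 0.72, 0.45, 0.55]))
```

    In this toy version, a spectrum close to the carbonate prototype is matched to that class and the site is flagged for closer investigation.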

  10. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function.

    PubMed

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2016-06-01

    MRI-guided interventions demand high frame rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real time to interactively deblur spiral images. Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF-predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF-predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 min of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. This real-time distortion correction framework will enable the use of these high frame rate imaging methods for MRI-guided interventions. Magn Reson Med 75:2278-2285, 2016. © 2015 Wiley Periodicals, Inc.
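
    The trajectory-prediction step can be illustrated with a toy one-dimensional sketch, assuming (as the abstract describes) that the played-out gradient is modeled as the nominal waveform convolved with the measured impulse response, and the k-space trajectory is its time integral. The waveform, raster time, and impulse response below are invented; a real GIRF is measured per gradient axis.

```python
import numpy as np

# GIRF-based trajectory prediction, 1D sketch (gamma folded into units):
# g_actual = g_nominal * girf (convolution), k(t) = integral of g(t).

dt = 4e-6                                   # gradient raster time [s], assumed
t = np.arange(0, 2e-3, dt)
g_nominal = np.sin(2 * np.pi * 1e3 * t)     # toy oscillating gradient waveform

# Toy impulse response: a delayed, smoothed kick (sums to 1, so gain ~1).
girf = np.zeros(50)
girf[5:10] = 0.2                            # 5-sample delay, low-pass behavior

g_actual = np.convolve(g_nominal, girf)[: len(g_nominal)]
k_nominal = np.cumsum(g_nominal) * dt       # trajectory assumed by the sequence
k_predicted = np.cumsum(g_actual) * dt      # trajectory handed to the recon
```

    At reconstruction time, `k_predicted` replaces `k_nominal` when gridding the acquired samples, which is what removes the distortion caused by gradient imperfections.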

  11. Real-time distortion correction of spiral and echo planar images using the gradient system impulse response function

    PubMed Central

    Campbell-Washburn, Adrienne E; Xue, Hui; Lederman, Robert J; Faranesh, Anthony Z; Hansen, Michael S

    2015-01-01

    Purpose MRI-guided interventions demand high frame-rate imaging, making fast imaging techniques such as spiral imaging and echo planar imaging (EPI) appealing. In this study, we implemented a real-time distortion correction framework to enable the use of these fast acquisitions for interventional MRI. Methods Distortions caused by gradient waveform inaccuracies were corrected using the gradient impulse response function (GIRF), which was measured by standard equipment and saved as a calibration file on the host computer. This file was used at runtime to calculate the predicted k-space trajectories for image reconstruction. Additionally, the off-resonance reconstruction frequency was modified in real-time to interactively de-blur spiral images. Results Real-time distortion correction for arbitrary image orientations was achieved in phantoms and healthy human volunteers. The GIRF predicted k-space trajectories matched measured k-space trajectories closely for spiral imaging. Spiral and EPI image distortion was visibly improved using the GIRF predicted trajectories. The GIRF calibration file showed no systematic drift in 4 months and was demonstrated to correct distortions after 30 minutes of continuous scanning despite gradient heating. Interactive off-resonance reconstruction was used to sharpen anatomical boundaries during continuous imaging. Conclusions This real-time distortion correction framework will enable the use of these high frame-rate imaging methods for MRI-guided interventions. PMID:26114951

  12. Research of real-time video processing system based on 6678 multi-core DSP

    NASA Astrophysics Data System (ADS)

    Li, Xiangzhen; Xie, Xiaodan; Yin, Xiaoqiang

    2017-10-01

    In the information age, video processing is developing rapidly in the direction of intelligence, and the resulting complex algorithms pose a powerful challenge to processor performance. In this article, an FPGA + TMS320C6678 frame structure integrates image defogging, image fusion, and image stabilization and enhancement into one organic whole, with good real-time behavior and superior performance. The design overcomes the defects of traditional video processing systems, such as simple functionality and single-purpose products, and addresses video applications such as security and surveillance monitoring, allowing video monitoring to be used to full effect and improving enterprise economic benefits.

  13. Observations of breakup processes of liquid jets using real-time X-ray radiography

    NASA Technical Reports Server (NTRS)

    Char, J. M.; Kuo, K. K.; Hsieh, K. C.

    1988-01-01

    To unravel the liquid-jet breakup process in the nondilute region, a newly developed system of real-time X-ray radiography, an advanced digital image processor, and a high-speed video camera were used. Based upon recorded X-ray images, the inner structure of a liquid jet during breakup was observed. The jet divergence angle, jet breakup length, and fraction distributions along the axial and transverse directions of the liquid jets were determined in the near-injector region. Both wall- and free-jet tests were conducted to study the effect of wall friction on the jet breakup process.

  14. Implementation of a Real-Time Stacking Algorithm in a Photogrammetric Digital Camera for Uavs

    NASA Astrophysics Data System (ADS)

    Audi, A.; Pierrot-Deseilligny, M.; Meynard, C.; Thom, C.

    2017-08-01

    In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality, with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the light-weight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The stacked image obtained on real surveys does not appear visually degraded. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the time needed to write an image to the storage device.
An interesting by-product of this algorithm is the 3D rotation between poses, estimated by a photogrammetric method, which can be used to recalibrate the gyrometers of the IMU in real time.
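
    The register-then-accumulate idea behind stacking can be sketched in a few lines, assuming a translation-only model with integer-pixel phase correlation standing in for the paper's FAST features, IMU-aided matching, and full distortion-aware resampling on the FPGA.

```python
import numpy as np

def phase_correlate(ref, img):
    """Estimate the integer (dy, dx) shift to apply to img to align it to ref."""
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(img))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates into the signed range
    if dy > ref.shape[0] // 2: dy -= ref.shape[0]
    if dx > ref.shape[1] // 2: dx -= ref.shape[1]
    return dy, dx

def stack(frames):
    """Register every short-exposure frame to the first one and average."""
    acc = frames[0].astype(float)
    for f in frames[1:]:
        dy, dx = phase_correlate(frames[0], f)
        acc += np.roll(f, (dy, dx), axis=(0, 1))   # toy resampling step
    return acc / len(frames)
```

    Averaging N registered short exposures raises the signal-to-noise ratio toward that of an N-times longer exposure without accumulating motion blur, which is the point of the stacking approach above.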

  15. TH-EF-BRA-05: A Method of Near Real-Time 4D MRI Using Volumetric Dynamic Keyhole (VDK) in the Presence of Respiratory Motion for MR-Guided Radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, B; Kim, S; Kim, T

    Purpose: To develop a novel method that enables 4D MR imaging in near real-time for continuous monitoring of tumor motion in MR-guided radiotherapy. Methods: This method is based on the idea of expanding the dynamic keyhole technique to full volumetric image acquisition. In the VDK approach introduced in this study, a library of peripheral volumetric k-space data is generated in advance for a given number of respiratory phases (5 and 10 in this study). For 4D MRI at any given time, only the central volumetric k-space data are acquired in real time and combined with the pre-acquired peripheral volumetric k-space data in the library corresponding to the respiratory phase (or amplitude). The combined k-space data are Fourier-transformed to MR images. For the simulation study, the MRXCAT program was used to generate synthetic MR images of the thorax with the desired respiratory motion, contrast levels, and spatial and temporal resolution. 20 phases of volumetric MR images, with 200 ms temporal resolution in a 4 s respiratory period, were generated using a balanced steady-state free precession MR pulse sequence. The total acquisition time was 21.5 s/phase with a voxel size of 3×3×5 mm³ and an image matrix of 128×128×56. Image similarity was evaluated with difference maps between the reference and reconstructed images. The VDK, conventional keyhole, and zero-filling methods were compared in this simulation study. Results: Using 80% of the ky data and 70% of the kz data from the library resulted in a 12.20% average intensity difference from the reference, compared with 21.60% and 28.45% threshold pixel differences for conventional keyhole and zero filling, respectively. The imaging time is reduced from 21.5 s to 1.3 s per volume using the VDK method. Conclusion: Near real-time 4D MR imaging can be achieved using the volumetric dynamic keyhole method, making it possible to utilize 4D MRI during MR-guided radiotherapy.
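
    The keyhole idea at the core of VDK can be sketched in 2D (the actual method is volumetric and binned by respiratory phase): only the central k-space lines are acquired fresh, the periphery is filled from a pre-acquired library frame of the matching phase, and the combined k-space is inverse Fourier-transformed. The keyhole fraction below is an illustrative parameter, not the paper's 80%/70% settings.

```python
import numpy as np

def keyhole_reconstruct(central_kspace, library_kspace, keyhole_fraction=0.2):
    """Replace the central fraction of library k-space rows with fresh data,
    then inverse-FFT the combined k-space to an image."""
    ny = library_kspace.shape[0]
    half = int(ny * keyhole_fraction / 2)
    center = ny // 2
    k = library_kspace.copy()
    k[center - half:center + half, :] = central_kspace[center - half:center + half, :]
    return np.fft.ifft2(np.fft.ifftshift(k))
```

    Because only the small central portion is acquired per time point, the per-volume acquisition time shrinks roughly in proportion to the keyhole fraction, which is how the method gets from 21.5 s to 1.3 s per volume.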

  16. Image processing with cellular nonlinear networks implemented on field-programmable gate arrays for real-time applications in nuclear fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palazzo, S.; Vagliasindi, G.; Arena, P.

    2010-08-15

    In the past years cameras have become increasingly common tools in scientific applications. They are now quite systematically used in magnetic confinement fusion, to the point that infrared imaging is starting to be used systematically for real-time machine protection in major devices. However, in order to guarantee that the control system can always react rapidly in critical situations, the time required for the processing of the images must be as predictable as possible. The approach described in this paper combines the new computational paradigm of cellular nonlinear networks (CNNs) with field-programmable gate arrays and has been tested in an application for the detection of hot spots on the plasma-facing components in JET. The developed system is able to perform real-time hot spot recognition, by processing the image stream captured by JET's wide-angle infrared camera, with the guarantee that computational time is constant and deterministic. The statistical results obtained from a quite extensive set of examples show that this solution approximates very well an ad hoc serial software algorithm, with no false or missed alarms and an almost perfect overlapping of alarm intervals. The computational time can be reduced to a millisecond time scale for 8-bit 496×560 images. Moreover, in our implementation, the computational time, besides being deterministic, is practically independent of the number of iterations performed by the CNN, unlike software CNN implementations.
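
    For readers unfamiliar with cellular nonlinear networks, the standard Chua-Yang CNN state equation can be sketched as below. The templates shown are a toy per-pixel threshold example, not the hot-spot templates used at JET; on the FPGA each cell updates in parallel, which is why the run time is constant.

```python
import numpy as np

def conv3x3(x, T):
    """3x3 template convolution with zero padding (neighborhood coupling)."""
    p = np.pad(x, 1)
    out = np.zeros_like(x, dtype=float)
    for i in range(3):
        for j in range(3):
            out += T[i, j] * p[i:i + x.shape[0], j:j + x.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """One Euler step of dx/dt = -x + A*y + B*u + z,
    with the piecewise-linear output y = 0.5(|x+1| - |x-1|)."""
    y = 0.5 * (np.abs(x + 1) - np.abs(x - 1))
    return x + dt * (-x + conv3x3(y, A) + conv3x3(u, B) + z)
```

    With a feedback template that has only a center entry of 2 and zero control template, each cell's output is driven to +1 or -1 according to the sign of its initial state, i.e. a binary threshold map, the simplest CNN "image processing" operation.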

  17. Near Real-Time Monitoring of Forest Disturbance: A Multi-Sensor Remote Sensing Approach and Assessment Framework

    NASA Astrophysics Data System (ADS)

    Tang, Xiaojing

    Fast and accurate monitoring of tropical forest disturbance is essential for understanding current patterns of deforestation as well as helping eliminate illegal logging. This dissertation explores the use of data from different satellites for near real-time monitoring of forest disturbance in tropical forests, including: development of new monitoring methods; development of new assessment methods; and assessment of the performance and operational readiness of existing methods. Current methods for accuracy assessment of remote sensing products do not address the priority of near real-time monitoring of detecting disturbance events as early as possible. I introduce a new assessment framework for near real-time products that focuses on the timing and the minimum detectable size of disturbance events. The new framework reveals the relationship between change detection accuracy and the time needed to identify events. In regions that are frequently cloudy, near real-time monitoring using data from a single sensor is difficult. This study extends the work by Xin et al. (2013) and develops a new time series method (Fusion2) based on fusion of Landsat and MODIS (Moderate Resolution Imaging Spectroradiometer) data. Results of three test sites in the Amazon Basin show that Fusion2 can detect 44.4% of the forest disturbance within 13 clear observations (82 days) after the initial disturbance. The smallest event detected by Fusion2 is 6.5 ha. Also, Fusion2 detects disturbance faster and has less commission error than more conventional methods. In a comparison of coarse resolution sensors, MODIS Terra and Aqua combined provides faster and more accurate detection of disturbance events than VIIRS (Visible Infrared Imaging Radiometer Suite) and MODIS single sensor data. The performance of near real-time monitoring using VIIRS is slightly worse than MODIS Terra but significantly better than MODIS Aqua. 
New monitoring methods developed in this dissertation provide forest protection organizations the capacity to monitor illegal logging events promptly. In the future, combining two Landsat and two Sentinel-2 satellites will provide global coverage at 30 m resolution every 4 days, and routine monitoring may be possible at high resolution. The methods and assessment framework developed in this dissertation are adaptable to newly available datasets.

  18. Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, obtaining this goal requires several different stages of processing, including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
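
    The enhance-then-fuse chain can be illustrated with a minimal NumPy sketch. A single-scale retinex with a separable box-blur surround stands in for the DSP Retinex implementation, and the fusion is the plain weighted sum the abstract describes; the blur radius and fusion weight are assumed values.

```python
import numpy as np

def box_blur(img, r=5):
    """Separable box blur used as a cheap surround (illumination) estimate."""
    k = np.ones(2 * r + 1) / (2 * r + 1)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def single_scale_retinex(img, eps=1e-6):
    """log(image) - log(surround): compresses illumination, keeps detail."""
    return np.log(img + eps) - np.log(box_blur(img) + eps)

def fuse(enhanced_a, enhanced_b, w=0.5):
    """Weighted-sum fusion of two registered, enhanced sensor images."""
    return w * enhanced_a + (1 - w) * enhanced_b
```

    In the real system the two sensor streams are first brought into alignment by an affine registration before the weighted sum; the sketch assumes already-registered inputs.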

  19. Ultrasound image guidance of cardiac interventions

    NASA Astrophysics Data System (ADS)

    Peters, Terry M.; Pace, Danielle F.; Lang, Pencilla; Guiraudon, Gérard M.; Jones, Douglas L.; Linte, Cristian A.

    2011-03-01

    Surgical procedures often have the unfortunate side-effect of causing the patient significant trauma while accessing the target site. Indeed, in some cases the trauma inflicted on the patient during access to the target greatly exceeds that caused by performing the therapy. Heart disease has traditionally been treated surgically using open chest techniques with the patient being placed "on pump" - i.e. their circulation being maintained by a cardio-pulmonary bypass or "heart-lung" machine. Recently, techniques have been developed for performing minimally invasive interventions on the heart, obviating the formerly invasive procedures. These new approaches rely on pre-operative images, combined with real-time images acquired during the procedure. Our approach is to register intra-operative images to the patient, and use a navigation system that combines intra-operative ultrasound with virtual models of instrumentation that has been introduced into the chamber through the heart wall. This paper illustrates the problems associated with traditional ultrasound guidance, and reviews the state of the art in real-time 3D cardiac ultrasound technology. In addition, it discusses the implementation of an image-guided intervention platform that integrates real-time ultrasound with a virtual reality environment, bringing together the pre-operative anatomy derived from MRI or CT, representations of tracked instrumentation inside the heart chamber, and the intra-operatively acquired ultrasound images.

  20. Recent Developments in Hyperspectral Imaging for Assessment of Food Quality and Safety

    PubMed Central

    Huang, Hui; Liu, Li; Ngadi, Michael O.

    2014-01-01

    Hyperspectral imaging which combines imaging and spectroscopic technology is rapidly gaining ground as a non-destructive, real-time detection tool for food quality and safety assessment. Hyperspectral imaging could be used to simultaneously obtain large amounts of spatial and spectral information on the objects being studied. This paper provides a comprehensive review on the recent development of hyperspectral imaging applications in food and food products. The potential and future work of hyperspectral imaging for food quality and safety control is also discussed. PMID:24759119

  1. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    NASA Technical Reports Server (NTRS)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews the use of "soft computing", which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. The NASA applications reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.

  2. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery

    PubMed Central

    Qi, Baogui; Zhuang, Yin; Chen, He; Chen, Liang

    2018-01-01

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited. PMID:29693585

  3. On-Board, Real-Time Preprocessing System for Optical Remote-Sensing Imagery.

    PubMed

    Qi, Baogui; Shi, Hao; Zhuang, Yin; Chen, He; Chen, Liang

    2018-04-25

    With the development of remote-sensing technology, optical remote-sensing imagery processing has played an important role in many application fields, such as geological exploration and natural disaster prevention. However, relative radiation correction and geometric correction are key steps in preprocessing because raw image data without preprocessing will cause poor performance during application. Traditionally, remote-sensing data are downlinked to the ground station, preprocessed, and distributed to users. This process generates long delays, which is a major bottleneck in real-time applications for remote-sensing data. Therefore, on-board, real-time image preprocessing is greatly desired. In this paper, a real-time processing architecture for on-board imagery preprocessing is proposed. First, a hierarchical optimization and mapping method is proposed to realize the preprocessing algorithm in a hardware structure, which can effectively reduce the computation burden of on-board processing. Second, a co-processing system using a field-programmable gate array (FPGA) and a digital signal processor (DSP; altogether, FPGA-DSP) based on optimization is designed to realize real-time preprocessing. The experimental results demonstrate the potential application of our system to an on-board processor, for which resources and power consumption are limited.
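
    Of the two preprocessing steps named above, relative radiometric correction is the simpler to illustrate: in a pushbroom imager each image column comes from one detector, and each detector's response is normalized with a per-detector gain and offset. The model below is a hypothetical sketch of that step; the on-board calibration tables and the paper's FPGA-DSP mapping are not reproduced here.

```python
import numpy as np

def relative_radiometric_correction(raw, gains, offsets):
    """Normalize each detector (image column) with its own gain and offset,
    so that all detectors respond identically to the same radiance."""
    return (raw - offsets[np.newaxis, :]) / gains[np.newaxis, :]
```

    On hardware this is a single multiply-add per pixel with table-driven coefficients, which is why it maps well onto an FPGA pipeline ahead of the DSP.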

  4. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), and it is widely used in imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve scanning speed tremendously by breaking through the Shannon sampling theorem, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional unique benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation are proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
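
    The block idea can be sketched as follows: the signal is split into blocks, each measured with the same small sensing matrix, and each block is recovered independently, which shortens reconstruction and lets recovered blocks be displayed as soon as they are ready. A basic orthogonal matching pursuit (OMP) stands in for the paper's (unspecified) reconstruction; the block size, sensing matrix, and sparsity level below are invented for illustration.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Recover a sparse x from y = Phi @ x by greedy atom selection."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

def block_cs(signal, Phi, block, sparsity):
    """Measure and reconstruct the signal block by block."""
    out = np.zeros_like(signal, dtype=float)
    for start in range(0, len(signal), block):
        y = Phi @ signal[start:start + block]      # compressive measurement
        out[start:start + block] = omp(Phi, y, sparsity)
    return out
```

    Because each block's recovery is a small independent problem, blocks can be solved (and shown) one after another instead of waiting for a single full-image reconstruction.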

  5. On-line monitoring of fluid bed granulation by photometric imaging.

    PubMed

    Soppela, Ira; Antikainen, Osmo; Sandler, Niklas; Yliruusi, Jouko

    2014-11-01

    This paper introduces and discusses a photometric surface imaging approach for on-line monitoring of fluid bed granulation. Five granule batches consisting of paracetamol and varying amounts of lactose and microcrystalline cellulose were manufactured with an instrumented fluid bed granulator. Photometric images and NIR spectra were continuously captured on-line and particle size information was extracted from them. Key process parameters were also recorded. The images provided direct real-time information on the growth, attrition and packing behaviour of the batches. Moreover, decreasing image brightness in the drying phase was found to indicate granule drying. The changes observed in the image data were also linked to the moisture and temperature profiles of the processes. Combined with complementary process analytical tools, photometric imaging opens up possibilities for improved real-time evaluation of fluid bed granulation. Furthermore, images can give valuable insight into the behaviour of excipients or formulations during product development. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Real-time in vivo imaging of human lymphatic system using an LED-based photoacoustic/ultrasound imaging system

    NASA Astrophysics Data System (ADS)

    Kuniyil Ajith Singh, Mithun; Agano, Toshitaka; Sato, Naoto; Shigeta, Yusuke; Uemura, Tetsuji

    2018-02-01

    Non-invasive in vivo imaging of lymphatic system is of paramount importance for analyzing the functions of lymphatic vessels, and for investigating their contribution to metastasis. Recently, we introduced a multi-wavelength real-time LED-based photoacoustic/ultrasound system (AcousticX). In this work, for the first time, we demonstrate that AcousticX is capable of real-time imaging of human lymphatic system. Results demonstrate the capability of this system to image vascular and lymphatic vessels simultaneously. This could potentially provide detailed information regarding the interconnected roles of lymphatic and vascular systems in various diseases, therefore fostering the growth of therapeutic interventions.

  7. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization.

    PubMed

    Bernal-Rusiel, Jorge L; Rannou, Nicolas; Gollub, Randy L; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E; Pienaar, Rudolph

    2017-01-01

    In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient real-time communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution.

  8. Development of a real-time wave field reconstruction TEM system (II): correction of coma aberration and 3-fold astigmatism, and real-time correction of 2-fold astigmatism.

    PubMed

    Tamura, Takahiro; Kimura, Yoshihide; Takai, Yoshizo

    2018-02-01

    In this study, a function for the correction of coma aberration, 3-fold astigmatism and real-time correction of 2-fold astigmatism was newly incorporated into a recently developed real-time wave field reconstruction TEM system. The aberration correction function was developed by modifying the image-processing software previously designed for auto focus tracking, as described in the first article of this series. Using the newly developed system, the coma aberration and 3-fold astigmatism were corrected using the aberration coefficients obtained experimentally before the processing was carried out. In this study, these aberration coefficients were estimated from an apparent 2-fold astigmatism induced under tilted-illumination conditions. In contrast, 2-fold astigmatism could be measured and corrected in real time from the reconstructed wave field. Here, the measurement precision for 2-fold astigmatism was found to be ±0.4 nm and ±2°. All of these aberration corrections, as well as auto focus tracking, were performed at a video frame rate of 1/30 s. Thus, the proposed novel system is promising for quantitative and reliable in situ observations, particularly in environmental TEM applications.
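
    The correction of a measured 2-fold astigmatism on a reconstructed wave field can be sketched as a Fourier-space phase-plate multiplication: the astigmatism defines an aberration phase chi(k), and multiplying the wave's Fourier transform by exp(+i*chi) cancels it. The chi expression below is the standard 2-fold astigmatism term; all parameter values are invented, and this sketch is not the authors' image-processing pipeline.

```python
import numpy as np

def correct_twofold_astigmatism(wave, A1, phi, wavelength, pixel_size):
    """Apply the conjugate aberration phase exp(+i chi(k)) in Fourier space,
    with chi = pi * lambda * A1 * k^2 * cos(2(theta - phi))."""
    n = wave.shape[0]
    k = np.fft.fftfreq(n, d=pixel_size)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    theta = np.arctan2(ky, kx)
    k2 = kx**2 + ky**2
    chi = np.pi * wavelength * A1 * k2 * np.cos(2 * (theta - phi))
    return np.fft.ifft2(np.fft.fft2(wave) * np.exp(1j * chi))
```

    Since chi is linear in the astigmatism coefficient A1, applying the correction with -A1 exactly undoes one with +A1, a convenient self-consistency check for an implementation.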

  9. MatMRI and MatHIFU: software toolboxes for real-time monitoring and control of MR-guided HIFU

    PubMed Central

    2013-01-01

    Background The availability of open and versatile software tools is a key feature to facilitate pre-clinical research for magnetic resonance imaging (MRI) and magnetic resonance-guided high-intensity focused ultrasound (MR-HIFU) and to expedite clinical translation of diagnostic and therapeutic medical applications. In the present study, two customizable software tools developed at the Thunder Bay Regional Research Institute are presented for use with both MRI and MR-HIFU. Both tools operate in a MATLAB® environment. The first tool, MatMRI, enables real-time, dynamic acquisition of MR images with a Philips MRI scanner. The second tool, MatHIFU, enables the execution and dynamic modification of user-defined treatment protocols with the Philips Sonalleve MR-HIFU therapy system to perform ultrasound exposures in MR-HIFU therapy applications. Methods MatMRI requires four basic steps: initiate communication, subscribe to MRI data, query for new images, and unsubscribe. MatMRI can also pause/resume the imaging and perform real-time updates of the location and orientation of images. MatHIFU requires three basic steps: initiate communication, prepare the treatment protocol, and execute the treatment protocol. MatHIFU can monitor the state of execution and, if required, modify the protocol in real time. Results Four applications were developed to showcase the capabilities of MatMRI and MatHIFU to perform pre-clinical research. First, MatMRI was integrated with an existing small-animal MR-HIFU system (FUS Instruments, Toronto, Ontario, Canada) to provide real-time temperature measurements. Second, MatMRI was used to perform T2-based MR thermometry in the bone marrow. Third, MatHIFU was used to automate acoustic hydrophone measurements on a per-element basis of the 256-element transducer of the Sonalleve system. 
Finally, MatMRI and MatHIFU were combined to produce and image a heating pattern that recreates the word ‘HIFU’ in a tissue-mimicking heating phantom. Conclusions MatMRI and MatHIFU leverage existing MRI and MR-HIFU clinical platforms to facilitate pre-clinical research. MatMRI substantially simplifies the real-time acquisition and processing of MR data. MatHIFU facilitates the testing and characterization of new therapy applications using the Philips Sonalleve clinical MR-HIFU system. Under coordination with Philips Healthcare, both MatMRI and MatHIFU are intended to be freely available as open-source software packages to other research groups. PMID:25512856
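The MatMRI workflow described in the Methods (initiate communication, subscribe to MRI data, query for new images, unsubscribe) can be mocked as a minimal client loop. The class and method names below are purely illustrative; the real toolbox is a MATLAB interface to the Philips scanner, not this Python API:

```python
class MockScannerClient:
    """Toy stand-in for a scanner client following the four-step
    subscribe/query workflow; no real scanner communication occurs."""

    def __init__(self):
        self.connected = False
        self.subscribed = False
        self._pending = []

    def initiate(self):
        self.connected = True

    def subscribe(self):
        assert self.connected, "must initiate communication first"
        self.subscribed = True
        # pretend the scanner pushes two dynamics after subscription
        self._pending = [{"dynamic": 1}, {"dynamic": 2}]

    def query_new_images(self):
        """Return any images that arrived since the last query."""
        images, self._pending = self._pending, []
        return images

    def unsubscribe(self):
        self.subscribed = False
        self.connected = False

client = MockScannerClient()
client.initiate()
client.subscribe()
frames = client.query_new_images()
client.unsubscribe()
print(len(frames))  # 2
```

Polling for new images between processing steps is what lets a monitoring loop (e.g. MR thermometry) run concurrently with the acquisition.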

  10. Magnetic particle imaging: advancements and perspectives for real-time in vivo monitoring and image-guided therapy

    NASA Astrophysics Data System (ADS)

    Pablico-Lansigan, Michele H.; Situ, Shu F.; Samia, Anna Cristina S.

    2013-05-01

    Magnetic particle imaging (MPI) is an emerging biomedical imaging technology that allows the direct quantitative mapping of the spatial distribution of superparamagnetic iron oxide nanoparticles. MPI's increased sensitivity and short image acquisition times foster the creation of tomographic images with high temporal and spatial resolution. The contrast and sensitivity of MPI is envisioned to transcend those of other medical imaging modalities presently used, such as magnetic resonance imaging (MRI), X-ray scans, ultrasound, computed tomography (CT), positron emission tomography (PET) and single photon emission computed tomography (SPECT). In this review, we present an overview of the recent advances in the rapidly developing field of MPI. We begin with a basic introduction of the fundamentals of MPI, followed by some highlights over the past decade of the evolution of strategies and approaches used to improve this new imaging technique. We also examine the optimization of iron oxide nanoparticle tracers used for imaging, underscoring the importance of size homogeneity and surface engineering. Finally, we present some future research directions for MPI, emphasizing the novel and exciting opportunities that it offers as an important tool for real-time in vivo monitoring. All these opportunities and capabilities that MPI presents are now seen as potential breakthrough innovations in timely disease diagnosis, implant monitoring, and image-guided therapeutics.

  11. Real-Time 3D Fluoroscopy-Guided Large Core Needle Biopsy of Renal Masses: A Critical Early Evaluation According to the IDEAL Recommendations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kroeze, Stephanie G. C.; Huisman, Merel; Verkooijen, Helena M.

    2012-06-15

    Introduction: Three-dimensional (3D) real-time fluoroscopy cone beam CT is a promising new technique for image-guided biopsy of solid tumors. We evaluated the technical feasibility, diagnostic accuracy, and complications of this technique for guidance of large-core needle biopsy in patients with suspicious renal masses. Methods: Thirteen patients with 13 suspicious renal masses underwent large-core needle biopsy under 3D real-time fluoroscopy cone beam CT guidance. Image acquisition and subsequent 3D reconstruction were done by a mobile flat-panel detector (FD) C-arm system to plan the needle path. Large-core needle biopsies were taken by the interventional radiologist. Technical success, accuracy, and safety were evaluated according to the Innovation, Development, Exploration, Assessment, Long-term study (IDEAL) recommendations. Results: Median tumor size was 2.6 (range, 1.0-14.0) cm. In ten (77%) patients, the histological diagnosis corresponded to the imaging findings: five were malignancies, five benign lesions. Technical feasibility was 77% (10/13); in three patients biopsy results were inconclusive, and the lesion size in each of these three patients was <2.5 cm. One patient developed a minor complication. Median follow-up was 16.0 (range, 6.4-19.8) months. Conclusions: 3D real-time fluoroscopy cone beam CT-guided biopsy of renal masses is feasible and safe. However, these first results suggest that diagnostic accuracy may be limited in patients with renal masses <2.5 cm.

  12. Real-time PM10 concentration monitoring on Penang Bridge by using traffic monitoring CCTV

    NASA Astrophysics Data System (ADS)

    Low, K. L.; Lim, H. S.; MatJafri, M. Z.; Abdullah, K.; Wong, C. J.

    2007-04-01

    For this study, an algorithm was developed to determine the concentration of particles smaller than 10 μm (PM10) from still images captured by a CCTV camera on the Penang Bridge. The objective was to remotely monitor PM10 concentrations on the Penang Bridge through the internet, so an algorithm was developed based on the relationship between atmospheric reflectance and the corresponding air quality. The still images were separated into three bands, namely red, green and blue, and their digital number values were determined; a special transformation was then applied to the data. Ground PM10 measurements were taken using a DustTrak™ meter. The algorithm was calibrated using regression analysis and produced a high correlation coefficient (R) and a low root-mean-square error (RMS) between the measured and predicted PM10. A program was then written in Microsoft Visual Basic 6.0 to download still images from the camera over the internet and apply the newly developed algorithm. The program runs in real time, so the public can check the air pollution index at any time. This indicates that the technique using CCTV camera images can provide a useful tool for air quality studies.
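The calibration step described above, relating per-band digital numbers to ground-truth PM10 by regression, can be sketched as a multiple linear fit. The coefficients and data below are synthetic, and the paper's "special transformation" of the RGB bands is not reproduced:

```python
import numpy as np

def fit_pm10_model(rgb_means, pm10_truth):
    """Least-squares fit of PM10 ~= a*R + b*G + c*B + d."""
    X = np.column_stack([rgb_means, np.ones(len(rgb_means))])
    coeffs, *_ = np.linalg.lstsq(X, pm10_truth, rcond=None)
    return coeffs

def predict_pm10(coeffs, rgb_mean):
    """Apply the fitted model to one image's mean band values."""
    return float(np.dot(coeffs, np.append(rgb_mean, 1.0)))

# Synthetic calibration set generated from known coefficients.
rng = np.random.default_rng(0)
rgb = rng.uniform(0, 255, size=(20, 3))
true_coeffs = np.array([0.3, -0.1, 0.2, 5.0])
pm10 = rgb @ true_coeffs[:3] + true_coeffs[3]

coeffs = fit_pm10_model(rgb, pm10)
pred = predict_pm10(coeffs, rgb[0])
rmse = float(np.sqrt(np.mean((rgb @ coeffs[:3] + coeffs[3] - pm10) ** 2)))
print(round(rmse, 6))  # noise-free synthetic data, so RMSE rounds to 0.0
```

With real DustTrak readings the fit would of course not be exact; the RMSE computed this way is the same figure of merit the study reports.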

  13. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    NASA Astrophysics Data System (ADS)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting-Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster-hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding in supporting high-quality visual communications in such a demanding context.

  14. Dual-dimensional microscopy: real-time in vivo three-dimensional observation method using high-resolution light-field microscopy and light-field display.

    PubMed

    Kim, Jonghyun; Moon, Seokil; Jeong, Youngmo; Jang, Changwon; Kim, Youngmin; Lee, Byoungho

    2018-06-01

    Here, we present dual-dimensional microscopy, which simultaneously captures both a two-dimensional (2-D) image and a light-field image of an in-vivo sample, synthesizes an upsampled light-field image, and visualizes it with a computational light-field display system in real time. Compared with conventional light-field microscopy, the additional 2-D image greatly enhances the lateral resolution at the native object plane up to the diffraction limit and compensates for the image degradation at that plane. The whole process, from capture to display, runs in real time with a parallel computation algorithm, which enables observation of the sample's three-dimensional (3-D) movement and direct interaction with the in-vivo sample. We demonstrate a real-time 3-D interactive experiment with Caenorhabditis elegans. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  15. Real-time biscuit tile image segmentation method based on edge detection.

    PubMed

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal change detection and contour tracing, with the main goal of separating tile pixels from background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements, and an important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods, and is in use in the biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
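The notion of separating tile pixels from background by detecting signal changes along each scan line can be illustrated very simply. The sketch below is a deliberately naive, assumption-laden take (fixed jump threshold, one span per row); the published BTS method, with its contour tracing and GPU execution, is considerably more involved:

```python
import numpy as np

def segment_rows(image, jump=30):
    """Mark, per row, the span between the first and last strong
    intensity change as foreground (tile). `jump` is an assumed
    minimum step size for a signal change, not a BTS parameter."""
    mask = np.zeros(image.shape, dtype=bool)
    for r, row in enumerate(image):
        edges = np.flatnonzero(np.abs(np.diff(row.astype(int))) >= jump)
        if edges.size >= 2:
            mask[r, edges[0] + 1 : edges[-1] + 1] = True
    return mask

# Synthetic frame: dark background (20) with a bright tile (200).
frame = np.full((6, 12), 20, dtype=np.uint8)
frame[1:5, 3:9] = 200
mask = segment_rows(frame)
print(int(mask.sum()))  # 4 tile rows * 6 tile columns = 24
```

Each row is processed independently, which is exactly the kind of data-parallel structure that maps well onto a GPU.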

  16. Real Time Characterization of Solid/Liquid Interfaces During Directional Solidification

    NASA Technical Reports Server (NTRS)

    Sen, S.; Kaukler, W. K.; Curreri, P. A.; Peters, P.

    1997-01-01

    An X-ray Transmission Microscope (XTM) has been developed to observe, in real time and in situ, solidification phenomena at the solid/liquid (s/l) interface. Recent improvements in the horizontal Bridgman furnace design provide real-time magnification (during solidification) of up to 120X. The increased magnification has enabled, for the first time, XTM imaging of the real-time growth of fibers and particles with diameters of 3-6 micrometers. Morphological transitions from planar to cellular interfaces have also been imaged. Results from recent XTM studies on the Al-Bi monotectic system, the Al-Au eutectic system, and the interaction of insoluble particles with s/l interfaces in composite materials will be presented. An important parameter during directional solidification of molten metal is the interfacial undercooling, which controls the morphology and composition at the s/l interface. Conventional probes such as thermocouples, owing to their large bead size, do not have sufficient resolution for measuring undercooling at the s/l interface; their intrusive nature also distorts the thermal field there. To overcome these inherent problems we have recently developed a compact furnace which utilizes a non-intrusive (Seebeck) technique to measure undercooling at the s/l interface. Recent interfacial undercooling measurements obtained for the Pb-Sn system will be presented. The Seebeck measurement furnace will in the future be integrated with the XTM to provide a comprehensive tool for real-time characterization of s/l interfaces during solidification.

  17. Microbubble responses to a similar mechanical index with different real-time perfusion imaging techniques.

    PubMed

    Porter, Thomas R; Oberdorfer, Joseph; Rafter, Patrick; Lof, John; Xie, Feng

    2003-08-01

    The purpose of this study was to determine differences in contrast enhancement and microbubble destruction rates with current commercially available low-mechanical index (MI) real-time perfusion imaging modalities. A tissue-mimicking phantom was developed that had vessels at 3 cm (near field) and 9 cm (far field) from a real-time transducer. Perfluorocarbon-exposed sonicated dextrose albumin microbubbles (PESDA) were injected proximal to a mixing chamber, and then passed through these vessels while the region was insonified with either pulses of alternating polarity with pulse inversion Doppler (PID) or pulses of alternating amplitude by power modulation (PM) at MIs of 0.1, 0.2 and 0.3. Effluent microbubble concentration, contrast intensity and the slope of digital contrast intensity vs. time were measured. Our results demonstrated that microbubble destruction already occurs with PID at an MI of 0.1. Contrast intensity seen with PID was less than with PM. Therefore, differences in contrast enhancement and microbubble destruction rates occur at a similar MI setting when using different real-time pulse sequence schemes.
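One of the quantities compared above is the slope of digital contrast intensity versus time, which serves as a proxy for the microbubble destruction rate. A straight-line least-squares fit, as sketched here with synthetic numbers not taken from the paper, is one simple way to obtain such a slope:

```python
import numpy as np

def intensity_slope(times_s, intensities):
    """Least-squares slope of contrast intensity vs. time
    (intensity units per second); more negative = faster loss."""
    slope, _intercept = np.polyfit(times_s, intensities, 1)
    return float(slope)

# Synthetic steady decay: intensity drops 10 units per second.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
i = np.array([100.0, 90.0, 80.0, 70.0, 60.0])
print(round(intensity_slope(t, i), 6))  # -10.0
```

Comparing this slope across pulse schemes (PID vs. PM) at a fixed MI is how the differing destruction rates would show up numerically.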

  18. Development of a muon radiographic imaging electronic board system for a stable solar power operation

    NASA Astrophysics Data System (ADS)

    Uchida, T.; Tanaka, H. K. M.; Tanaka, M.

    2010-02-01

    Cosmic-ray muon radiography is a method used to study the internal structure of volcanoes. We have developed a muon radiographic imaging board with a power consumption low enough to be supplied by a small solar power system. The imaging board generates an angular distribution of the detected muons; with real-time readout, the method may facilitate the prediction of eruptions. For real-time observations, Ethernet is employed, and the board works as a web server for remote operation: the angular distribution can be obtained from a remote PC over the network using a standard web browser. We collected and analyzed data from a 3-day field study of cosmic-ray muons at the Satsuma-Iwojima volcano. The data provided a clear image of the mountain ridge as a cosmic-ray muon shadow. The measured performance of the system is sufficient for a stand-alone cosmic-ray muon radiography experiment.
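The angular distribution the board produces amounts to binning detected muon tracks by their horizontal and vertical arrival angles. The sketch below illustrates that binning; the angular range, bin count, and track data are invented for illustration and do not reflect the instrument's actual geometry:

```python
import numpy as np

def angular_distribution(theta_x, theta_y, bins=8, lim=0.4):
    """2-D histogram of track angles (radians) on a fixed angular
    grid; each cell counts muons arriving from that direction."""
    hist, _, _ = np.histogram2d(
        theta_x, theta_y,
        bins=bins, range=[[-lim, lim], [-lim, lim]],
    )
    return hist

# Synthetic tracks uniformly spread over the angular acceptance.
rng = np.random.default_rng(1)
tx = rng.uniform(-0.4, 0.4, 1000)
ty = rng.uniform(-0.4, 0.4, 1000)
hist = angular_distribution(tx, ty)
print(int(hist.sum()))  # all 1000 tracks fall inside the grid
```

A volcano in the field of view would appear as a deficit of counts (a muon shadow) in the corresponding angular cells.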

  19. Real-time compound sonography of the rotator-cuff: evaluation of artefact reduction and image definition.

    PubMed

    De Candia, Alessandro; Doratiotto, Stefsano; Paschina, Elio; Segatto, Enrica; Pelizzo, Francesco; Bazzocchi, Massimo

    2003-04-01

    The aim of this study was to compare real-time compound sonography with conventional sonography in the evaluation of rotator cuff tears. A prospective study was performed on 50 supraspinatus tendons in 101 patients treated by surgical acromioplasty. The surgeon described 33 (66%) full-thickness tears and 17 (34%) partial-thickness tears. All tendons were examined by conventional sonography and real-time compound sonography on the day before surgery. The techniques were compared by evaluating the images for freedom from artefacts, contrast resolution and overall image definition. Real-time compound sonography proved superior to conventional sonography as regards freedom from artefacts in 50 cases out of 50 (100%), in image contrast resolution in 45 cases out of 50 (90%), and in overall image definition in 45 out of 50 cases (90%). Real-time compound sonography reduces the intrinsic artefacts of conventional sonography and allows better overall image definition. In particular, the digital technique allowed us to study the rotator cuff with better contrast resolution and sharper, more detailed images than conventional sonography.

  20. Application of Linear Array Imaging Techniques to the Real-Time Inspection of Airframe Structures and Substructures

    NASA Technical Reports Server (NTRS)

    Miller, James G. (Principal Investigator)

    1996-01-01

    Current concern for ensuring the airworthiness of the aging commercial air fleet has prompted the establishment of broad-agency programs to develop NDT technologies that address specific aging-aircraft issues. One of the crucial technological needs that has been identified is the development of rapid, quantitative systems for depot-level inspection of bonded aluminum lap joints on aircraft. Research results for characterization of disbond and corrosion based on normal-incidence pulse-echo measurement geometries are showing promise, but are limited by the single-site nature of the measurement, which requires manual or mechanical scanning to inspect an area. One approach to developing efficient systems may be to transfer specific aspects of current medical imaging technology to the NDT arena. Ultrasonic medical imaging systems offer many desirable attributes for large-scale inspection. They are portable, provide real-time imaging, and have integrated video tape recorder and printer capabilities available for documentation and post-inspection review. Furthermore, these systems are available at a relatively low cost (approximately $50,000 to $200,000) and can be optimized for use with metals with straightforward modifications.

  1. Non-invasive thermal IR detection of breast tumor development in vivo

    NASA Astrophysics Data System (ADS)

    Case, Jason R.; Young, Madison A.; Dréau, D.; Trammell, Susan R.

    2015-03-01

    Lumpectomy coupled with radiation therapy and/or chemotherapy comprises the treatment of breast cancer for many patients. We are developing an enhanced thermal IR imaging technique that can be used in real time to guide tissue excision during a lumpectomy. This novel enhanced thermal imaging method combines IR imaging (8-10 μm) with selective heating of blood (~0.5 °C) relative to surrounding water-rich tissue using LED sources at low powers. Post-acquisition processing of these images highlights temporal changes in temperature and is sensitive to the presence of vascular structures. In this study, fluorescent and enhanced thermal imaging modalities were used to estimate breast cancer tumor volumes as a function of time in 19 murine subjects over a 30-day study period. Tumor volumes calculated from fluorescent imaging follow an exponential growth curve for the first 22 days of the study; cell necrosis affected the fluorescence-based volume estimates after Day 22. The tumor volumes estimated from enhanced thermal imaging show exponential growth over the entire study period. A strong correlation was found between tumor volumes estimated using fluorescent imaging and the enhanced IR images, indicating that enhanced thermal imaging is capable of monitoring tumor growth. Further, the enhanced IR images reveal a corona of bright emission along the edges of the tumor masses. This novel IR technique could be used to estimate tumor margins in real time during surgical procedures.
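The post-acquisition step that "highlights temporal changes in temperature" can be illustrated, at its most basic, as subtracting a pre-heating reference frame from each later frame. The data below are synthetic and this sketch makes no claim to reproduce the authors' actual processing:

```python
import numpy as np

def temporal_change_maps(frames, reference):
    """Per-frame temperature change relative to a reference frame;
    vessel-rich regions that heat selectively stand out as positive."""
    return [frame - reference for frame in frames]

# Synthetic 4x4 thermal frames: a ~0.5 degC rise over central pixels
# (the assumed vascular region), zero change elsewhere.
reference = np.zeros((4, 4))
heated = np.zeros((4, 4))
heated[1:3, 1:3] = 0.5
maps = temporal_change_maps([heated], reference)
print(float(maps[0].max()))  # 0.5
```

Differencing against a reference is what makes a half-degree blood signal visible against the much larger static background temperature.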

  2. Real-Time Symbol Extraction From Grey-Level Images

    NASA Astrophysics Data System (ADS)

    Massen, R.; Simnacher, M.; Rosch, J.; Herre, E.; Wuhrer, H. W.

    1988-04-01

    A VME-bus image pipeline processor for extracting vectorized contours from grey-level images in real-time is presented. This 3 Giga operation per second processor uses large kernel convolvers and new non-linear neighbourhood processing algorithms to compute true 1-pixel wide and noise-free contours without thresholding even from grey-level images with quite varying edge sharpness. The local edge orientation is used as an additional cue to compute a list of vectors describing the closed and open contours in real-time and to dump a CAD-like symbolic image description into a symbol memory at pixel clock rate.

  3. Low-complexity camera digital signal imaging for video document projection system

    NASA Astrophysics Data System (ADS)

    Hsia, Shih-Chang; Tsai, Po-Shien

    2011-04-01

    We present high-performance, low-complexity algorithms for real-time camera imaging applications. The main functions of the proposed camera digital signal processing (DSP) involve color interpolation, white balance, adaptive binary processing, auto gain control, and edge and color enhancement for video projection systems. A series of simulations demonstrates that the proposed method can achieve good image quality while keeping computation cost and memory requirements low. On the basis of the proposed algorithms, a cost-effective hardware core was developed using Verilog HDL. The prototype chip has been verified with one low-cost programmable device. The real-time camera system can achieve 1270 × 792 resolution with the combination of extra components and can demonstrate each DSP function.
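One of the DSP stages listed above is white balance. The gray-world correction below is a standard textbook version of that stage, shown purely as an illustration; the paper's own white-balance algorithm may well differ:

```python
import numpy as np

def gray_world_balance(rgb):
    """Gray-world white balance: scale each channel so all channel
    means match the global mean, removing a uniform color cast."""
    rgb = rgb.astype(np.float64)
    channel_means = rgb.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(rgb * gains, 0, 255)

# Synthetic 2x2 frame with a red cast: R mean 120, G and B mean 60.
img = np.zeros((2, 2, 3))
img[..., 0] = 120.0
img[..., 1] = 60.0
img[..., 2] = 60.0
balanced = gray_world_balance(img)
print([round(float(v), 6) for v in balanced[0, 0]])  # [80.0, 80.0, 80.0]
```

A per-channel multiply like this is cheap in both computation and memory, which is in keeping with the low-complexity, hardware-oriented goals of the paper.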

  4. MACS-Mar: a real-time remote sensing system for maritime security applications

    NASA Astrophysics Data System (ADS)

    Brauchle, Jörg; Bayer, Steven; Hein, Daniel; Berger, Ralf; Pless, Sebastian

    2018-04-01

    The modular aerial camera system (MACS) is a development platform for optical remote sensing concepts, algorithms and special environments. For real-time maritime security services (EMSec joint project), a new multi-sensor configuration, MACS-Mar, was realized. It consists of four co-aligned sensor heads in the visible RGB, near infrared (NIR, 700-950 nm), hyperspectral (HS, 450-900 nm) and thermal infrared (TIR, 7.5-14 µm) spectral ranges, a mid-cost navigation system, a processing unit and two data links. On-board image projection, cropping of redundant data and compression enable the instant generation of direct-georeferenced high-resolution image mosaics, automatic object detection, and vectorization and annotation of floating objects on the water surface. The results were transmitted over distances of up to 50 km in real time via narrow and broadband data links and were visualized in a maritime situation awareness system. For the automatic onboard detection of floating objects, a segmentation and classification workflow based on RGB, IR and TIR information was developed and tested. The completeness of the object detection in the experiment was 95%, and the correctness 53%. Mostly, bright backwash of ships led to an overestimation of the number of objects; further refinement using water homogeneity in the TIR, as implemented in the workflow, could not be carried out owing to problems with the TIR sensor, otherwise distinctly better results would have been expected. The absolute positional accuracy of the projected real-time imagery was 2 m without postprocessing of images or navigation data; the relative measurement accuracy of distances is in the range of the image resolution, which is about 12 cm for RGB imagery in the EMSec experiment.

  5. Digital Image Support in the ROADNet Real-time Monitoring Platform

    NASA Astrophysics Data System (ADS)

    Lindquist, K. G.; Hansen, T. S.; Newman, R. L.; Vernon, F. L.; Nayak, A.; Foley, S.; Fricke, T.; Orcutt, J.; Rajasekar, A.

    2004-12-01

    The ROADNet real-time monitoring infrastructure has allowed researchers to integrate geophysical monitoring data from a wide variety of signal domains. Antelope-based data transport, relational-database buffering and archiving, backup/replication/archiving through the Storage Resource Broker, and a variety of web-based distribution tools create a powerful monitoring platform. In this work we discuss our use of the ROADNet system for the collection and processing of digital image data. Remote cameras have been deployed at approximately 32 locations as of September 2004, including the SDSU Santa Margarita Ecological Reserve, the Imperial Beach pier, and the Pinon Flats geophysical observatory. Fire monitoring imagery has been obtained through a connection to the HPWREN project. Near-real-time images obtained from the R/V Roger Revelle include records of seafloor operations by the JASON submersible, as part of a maintenance mission for the H2O underwater seismic observatory. We discuss acquisition mechanisms and the packet architecture for image transport via Antelope orbservers, including multi-packet support for arbitrarily large images. Relational database storage supports archiving of timestamped images, image-processing operations, grouping of related images and cameras, support for motion-detect triggers, thumbnail images, pre-computed video frames, support for time-lapse movie generation and storage of time-lapse movies. Available ROADNet monitoring tools include both orbserver-based display of incoming real-time images and web-accessible searching and distribution of images and movies driven by the relational database (http://mercali.ucsd.edu/rtapps/rtimbank.php). An extension to the Kepler Scientific Workflow System also allows real-time image display via the Ptolemy project. Custom time-lapse movies may be made from the ROADNet web pages.

  6. Fast optically sectioned fluorescence HiLo endomicroscopy.

    PubMed

    Ford, Tim N; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.
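The HiLo principle described above, taking in-focus low spatial frequencies from a contrast-weighted (sectioned) estimate and high spatial frequencies from the uniform-illumination image, can be shown schematically. Real HiLo processing involves a calibrated local-contrast measure computed from the structured-illumination image and careful cutoff matching; the sketch below only demonstrates the low/high split-and-merge, with a simple box filter standing in for the real low-pass:

```python
import numpy as np

def box_blur(img, k=3):
    """k x k mean filter with edge padding (numpy only)."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def hilo_merge(uniform_img, sectioned_img):
    """Combine high frequencies of the uniform image with low
    frequencies of a background-rejected (sectioned) estimate."""
    hi = uniform_img - box_blur(uniform_img)   # high-pass, full detail
    lo = box_blur(sectioned_img)               # low-pass, sectioned
    return lo + hi

# Toy frames: uniform image with out-of-focus background level 10,
# sectioned estimate in which that background is suppressed to 4.
uniform = np.full((5, 5), 10.0)
sectioned = np.full((5, 5), 4.0)
merged = hilo_merge(uniform, sectioned)
print(float(merged[2, 2]))  # 4.0
```

Because the merge is two filters and a sum per frame, it is the kind of workload that parallelizes well on a GPU, consistent with the real-time GPU processing reported.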

  7. Fast optically sectioned fluorescence HiLo endomicroscopy

    NASA Astrophysics Data System (ADS)

    Ford, Tim N.; Lim, Daryl; Mertz, Jerome

    2012-02-01

    We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies.

  8. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS camera is developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast frame CMOS camera and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight (TOF) spectrum. Efficient computer algorithms are developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragment pair (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
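Real-time centroiding reduces each electron/ion spot on a camera frame to a sub-pixel position. The intensity-weighted centroid below is the textbook form of that reduction, on an invented single-spot frame; matching spots to TOF peak heights, as the actual system does, is not shown:

```python
import numpy as np

def spot_centroid(frame, threshold=0):
    """Intensity-weighted centroid (row, col) of above-threshold
    pixels; the weighting gives sub-pixel position estimates."""
    f = np.where(frame > threshold, frame.astype(np.float64), 0.0)
    total = f.sum()
    rows, cols = np.indices(f.shape)
    return (float((rows * f).sum() / total),
            float((cols * f).sum() / total))

# Toy frame with one spot spread over two pixels of unequal intensity.
frame = np.zeros((5, 5))
frame[2, 1] = 1.0
frame[2, 3] = 3.0   # brighter side pulls the centroid toward col 3
print(spot_centroid(frame))  # (2.0, 2.5)
```

Collapsing each spot to a few floats per frame, instead of storing full images, is what makes kHz-rate real-time operation tractable.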

  9. Design and evaluation of an intraocular B-scan OCT-guided 36-gauge needle

    NASA Astrophysics Data System (ADS)

    Shen, Jin H.; Joos, Karen M.

    2015-03-01

    Optical coherence tomography imaging is widely used in ophthalmology and optometry clinics for diagnosing retinal disorders. External microscope-mounted OCT operating room systems have imaged retinal changes immediately following surgical manipulations. However, the goal is to image critical surgical maneuvers in real time. External microscope-mounted OCT systems have some limitations with problems tracking constantly moving intraocular surgical instruments, and formation of absolute shadows by the metallic surgical instruments upon the underlying tissues of interest. An intraocular OCT-imaging probe was developed to resolve these problems. A disposable 25-gauge probe tip extended beyond the handpiece, with a 36-gauge needle welded to a disposable tip with its end extending an additional 3.5 mm. A sealed 0.35 mm diameter GRIN lens protected the fiber scanner and focused the scanning beam at a 3 to 4 mm distance. The OCT engine was a very high-resolution spectral-domain optical coherence tomography (SDOCT) system (870 nm, Bioptigen, Inc. Durham, NC) which produced 2000 A-scan lines per B-scan image at a frequency of 5 Hz with the fiber optic oscillations matched to this frequency. Real-time imaging of the needle tip as it touched infrared paper was performed. The B-scan OCT-needle was capable of real-time performance and imaging of the phantom material. In the future, the B-scan OCT-guided needle will be used to perform sub-retinal injections.

  10. Impact of ultrasound video transfer on the practice of ultrasound

    NASA Astrophysics Data System (ADS)

    Duerinckx, Andre J.; Hayrapetian, Alek S.; Grant, Edward G.; Valentino, Daniel J.; Rahbar, Darius; Kiszonas, Mike; Franco, Ricky; Melany, Michelle; Narin, Sherelle L.; Ragavendra, Nagesh

    1996-05-01

    Sonography can be highly dependent on real-time imaging and as such is highly physician intensive. Such situations arise mostly during complicated ultrasound radiology studies or echocardiology examinations. Under those circumstances it would be of benefit to transmit real-time images beyond the immediate area of the ultrasound laboratory when a physician is not on location. We undertook this study to determine whether both static and dynamic image transfer to remote locations could be accomplished using an ultrafast ATM network and PACS. Image management of the local image files was performed by a commercial PACS from the AGFA corporation. The local network was Ethernet based, and the global network was based on Asynchronous Transfer Mode (ATM, rates up to 100 Mbits/sec). Real-time image transfer involved two teaching hospitals, one of which had two separate ultrasound facilities. Radiologists consulted with technologists via telephone while the examinations were being performed. We evaluated the use of an ATM network to provide real-time video for ultrasound imaging in a clinical environment and its potential impact on health care delivery and clinical teaching. This technology increased technologist and physician productivity by eliminating commute time for physicians and waiting time for technologists and patients. Physician confidence in diagnosis increased compared with reviewing static images alone. This system provided radiologists instant access to real-time scans from remote sites. Image quality and frame rate were equivalent to the original. The system further increased productivity by allowing physicians to monitor studies at multiple sites simultaneously.

  11. The Real Time Correction of Stereoscopic Images: From the Serial to a Parallel Treatment

    NASA Astrophysics Data System (ADS)

    Irki, Zohir; Devy, Michel; Achour, Karim; Azzaz, Mohamed Salah

    2008-06-01

    The correction of stereoscopic images is a task that consists of replacing acquired images with other images that have the same properties but are simpler to use in the later stages of stereovision. The use of pre-calculated tables, built during an off-line calibration step, made it possible to carry out off-line rectification of stereoscopic images. An improvement to how these tables are built then made real-time rectification possible. In this paper, we describe an improvement of the real-time correction approach so that it can be exploited for a possible implementation on an FPGA. This improvement takes into account both the real-time requirements of the correction and the resources available on the target FPGA, a Stratix 1S40F780C5.
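
    The two-stage idea, an expensive off-line table build followed by a cheap per-frame lookup, can be sketched as follows. This is a nearest-neighbour sketch under an assumed homography model; the paper's tables come from stereo calibration, and an FPGA version would stream the lookups rather than vectorize them.

```python
import numpy as np

def build_rectification_table(h, w, H):
    """Off-line step: for every rectified pixel, precompute the source
    pixel it samples. H is a 3x3 homography mapping rectified (x, y, 1)
    to original image coordinates (an assumed model for illustration)."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)]).astype(float)
    src = H @ pts
    src /= src[2]                      # perspective divide
    map_x = np.clip(np.rint(src[0]), 0, w - 1).astype(int).reshape(h, w)
    map_y = np.clip(np.rint(src[1]), 0, h - 1).astype(int).reshape(h, w)
    return map_y, map_x

def rectify(image, table):
    """Real-time step: rectification collapses to a pure table lookup,
    which is what makes a streaming FPGA implementation attractive."""
    map_y, map_x = table
    return image[map_y, map_x]
```

    Because the real-time step touches each table entry exactly once, its cost is fixed per frame regardless of the calibration's complexity.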

  12. Automatic image fusion of real-time ultrasound with computed tomography images: a prospective comparison between two auto-registration methods.

    PubMed

    Cha, Dong Ik; Lee, Min Woo; Kim, Ah Yeong; Kang, Tae Wook; Oh, Young-Taek; Jeong, Ja-Yeon; Chang, Jung-Woo; Ryu, Jiwon; Lee, Kyong Joon; Kim, Jaeil; Bang, Won-Chul; Shin, Dong Kuk; Choi, Sung Jin; Koh, Dalkwon; Seo, Bong Koo; Kim, Kyunga

    2017-11-01

    Background A major drawback of conventional manual image fusion is that the process may be complex, especially for less-experienced operators. Recently, two automatic image fusion techniques called Positioning and Sweeping auto-registration have been developed. Purpose To compare the accuracy and required time for image fusion of real-time ultrasonography (US) and computed tomography (CT) images between Positioning and Sweeping auto-registration. Material and Methods Eighteen consecutive patients referred for planning US for radiofrequency ablation or biopsy for focal hepatic lesions were enrolled. Image fusion using both auto-registration methods was performed for each patient. Registration error, time required for image fusion, and number of point locks used were compared using the Wilcoxon signed rank test. Results Image fusion was successful in all patients. Positioning auto-registration was significantly faster than Sweeping auto-registration for both initial (median, 11 s [range, 3-16 s] vs. 32 s [range, 21-38 s]; P < 0.001) and complete (median, 34.0 s [range, 26-66 s] vs. 47.5 s [range, 32-90 s]; P = 0.001) image fusion. Registration error of Positioning auto-registration was significantly higher for initial image fusion (median, 38.8 mm [range, 16.0-84.6 mm] vs. 18.2 mm [range, 6.7-73.4 mm]; P = 0.029), but not for complete image fusion (median, 4.75 mm [range, 1.7-9.9 mm] vs. 5.8 mm [range, 2.0-13.0 mm]; P = 0.338). Number of point locks required to refine the initially fused images was significantly higher with Positioning auto-registration (median, 2 [range, 2-3] vs. 1 [range, 1-2]; P = 0.012). Conclusion Positioning auto-registration offers faster image fusion between real-time US and pre-procedural CT images than Sweeping auto-registration. The final registration error is similar between the two methods.
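
    The Wilcoxon signed-rank test used for these paired comparisons can be sketched in a few lines. This version uses the large-sample normal approximation for the p-value, a simplification: for small samples such as the 18 pairs here, an exact or continuity-corrected computation, as statistics packages provide, is preferable.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test, normal approximation.

    Zero differences are dropped and tied absolute differences get
    average ranks; returns (W+, two-sided p from the large-sample z).
    """
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    absd = sorted((abs(v), i) for i, v in enumerate(d))
    ranks = [0.0] * n
    i, r = 0, 1
    while i < n:
        j = i
        while j + 1 < n and absd[j + 1][0] == absd[i][0]:
            j += 1                      # extend the tie group
        avg = (r + r + (j - i)) / 2.0   # average rank of the group
        for k in range(i, j + 1):
            ranks[absd[k][1]] = avg
        r += j - i + 1
        i = j + 1
    w_plus = sum(rk for rk, v in zip(ranks, d) if v > 0)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p
```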

  13. Meta-image navigation augmenters for unmanned aircraft systems (MINA for UAS)

    NASA Astrophysics Data System (ADS)

    Çelik, Koray; Somani, Arun K.; Schnaufer, Bernard; Hwang, Patrick Y.; McGraw, Gary A.; Nadke, Jeremy

    2013-05-01

    GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.

  14. Multiscale nonlinear microscopy and widefield white light imaging enables rapid histological imaging of surgical specimen margins

    PubMed Central

    Giacomelli, Michael G.; Yoshitake, Tadayuki; Cahill, Lucas C.; Vardeh, Hilde; Quintana, Liza M.; Faulkner-Jones, Beverly E.; Brooker, Jeff; Connolly, James L.; Fujimoto, James G.

    2018-01-01

    The ability to histologically assess surgical specimens in real time is a long-standing challenge in cancer surgery, including applications such as breast-conserving therapy (BCT). Up to 40% of women treated with BCT for breast cancer require a repeat surgery due to postoperative histological findings of close or positive surgical margins using conventional formalin-fixed paraffin-embedded histology. Imaging technologies such as nonlinear microscopy (NLM), combined with exogenous fluorophores, can rapidly provide virtual H&E imaging of surgical specimens without requiring microtome sectioning, facilitating intraoperative assessment of margin status. However, the large volume of typical surgical excisions, combined with the need for rapid assessment, makes comprehensive cellular-resolution margin assessment during surgery challenging. To address this limitation, we developed a multiscale, real-time microscope with variable-magnification NLM and real-time, co-registered position display using a widefield white light imaging system. Margin assessment can be performed rapidly under operator guidance to image specific regions of interest located using widefield imaging. Using simulated surgical margins dissected from human breast excisions, we demonstrate that multi-centimeter margins can be comprehensively imaged at cellular resolution, enabling intraoperative margin assessment. These methods are consistent with pathology assessment performed using frozen section analysis (FSA); however, NLM enables faster and more comprehensive assessment of surgical specimens because imaging can be performed without freezing and cryo-sectioning. Therefore, NLM methods have the potential to be applied to a wide range of intra-operative applications. PMID:29761001

  15. In-vivo, real-time cross-sectional images of retina using a GPU enhanced master slave optical coherence tomography system

    NASA Astrophysics Data System (ADS)

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2016-03-01

    In our previous reports we demonstrated a novel Fourier-domain optical coherence tomography method, Master Slave optical coherence tomography (MS-OCT), that does not require resampling of data and can deliver en-face images from several depths simultaneously. While ideally suited for delivering information from a selected depth, MS-OCT has so far been inferior to conventional FFT-based OCT in terms of the time required to produce cross-sectional images. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time by assembling several T-scans from different depths. We analyze the conditions that ensure real-time B-scan imaging operation, and demonstrate in-vivo real-time images of the human fovea and optic nerve with resolution and sensitivity comparable to those produced using the traditional Fourier-domain method.

  16. Design of a portable imager for near-infrared visualization of cutaneous wounds

    PubMed Central

    Peng, Zhaoqiang; Zhou, Jun; Dacy, Ashley; Zhao, Deyin; Kearney, Vasant; Zhou, Weidong; Tang, Liping; Hu, Wenjing

    2017-01-01

    Abstract. A portable imager developed for real-time imaging of cutaneous wounds in research settings is described. The imager consists of a high-resolution near-infrared CCD camera capable of detecting both bioluminescence and fluorescence, with illumination provided by an LED ring fitted with a rotatable filter wheel. All external components are integrated into a compact camera attachment. The device is demonstrated to have performance competitive with a commercial animal-imaging enclosure-box setup in beam uniformity and sensitivity. Specifically, the device was used to visualize the bioluminescence associated with increased reactive oxygen species activity during the wound-healing process in a cutaneous wound inflammation model. In addition, this device was employed to observe the fluorescence associated with the activity of matrix metalloproteinases in a mouse lipopolysaccharide-induced infection model. Our results support the use of the portable imager design as a noninvasive and real-time imaging tool to assess the extent of wound inflammation and infection. PMID:28114448

  17. Towards Automatic Image Segmentation Using Optimised Region Growing Technique

    NASA Astrophysics Data System (ADS)

    Alazab, Mamoun; Islam, Mofakharul; Venkatraman, Sitalakshmi

    Image analysis is being adopted extensively in many applications such as digital forensics, medical treatment, and industrial inspection, primarily for diagnostic purposes. Hence, there is growing interest among researchers in developing new segmentation techniques to aid the diagnostic process. Manual segmentation of images is labour intensive, extremely time-consuming, and prone to human error, so an automated real-time technique is warranted in such applications. There is no universally applicable automated segmentation technique that will work for all images, as image segmentation is complex and highly dependent on the application domain. Hence, to fill the gap, this paper presents an efficient segmentation algorithm that can segment a digital image of interest into a more meaningful arrangement of regions and objects. Our algorithm combines a region-growing approach with optimised elimination of false boundaries to arrive at more meaningful segments automatically. We demonstrate this using dental X-ray images taken for real-life dental diagnosis.
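
    The region-growing core of such an algorithm can be sketched as a seeded flood fill that admits neighbours close to the running region mean. This is a minimal sketch (the seed, tolerance, and 4-connectivity are our choices) without the paper's optimised false-boundary elimination.

```python
import numpy as np

def region_grow(image, seed, tol):
    """Grow a region from `seed` by adding 4-connected pixels whose
    intensity is within `tol` of the running region mean."""
    rows, cols = image.shape
    mask = np.zeros(image.shape, dtype=bool)
    stack = [seed]
    mask[seed] = True
    total, count = float(image[seed]), 1
    while stack:
        y, x = stack.pop()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols and not mask[ny, nx]:
                # admit the neighbour if it is close to the region mean
                if abs(float(image[ny, nx]) - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    stack.append((ny, nx))
    return mask
```

    Growing against the running mean (rather than the seed value) lets the region track gradual intensity drift, at the cost of making the result order-dependent, which is one reason a post-pass over the resulting boundaries is useful.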

  18. Beyond the margins: real-time detection of cancer using targeted fluorophores

    PubMed Central

    Zhang, Ray R.; Schroeder, Alexandra B.; Grudzinski, Joseph J.; Rosenthal, Eben L.; Warram, Jason M.; Pinchuk, Anatoly N.; Eliceiri, Kevin W.; Kuo, John S.; Weichert, Jamey P.

    2017-01-01

    Over the past two decades, synergistic innovations in imaging technology have resulted in a revolution in which a range of biomedical applications are now benefiting from fluorescence imaging. Specifically, advances in fluorophore chemistry and imaging hardware, and the identification of targetable biomarkers have now positioned intraoperative fluorescence as a highly specific real-time detection modality for surgeons in oncology. In particular, the deeper tissue penetration and limited autofluorescence of near-infrared (NIR) fluorescence imaging improves the translational potential of this modality over visible-light fluorescence imaging. Rapid developments in fluorophores with improved characteristics, detection instrumentation, and targeting strategies led to the clinical testing in the early 2010s of the first targeted NIR fluorophores for intraoperative cancer detection. The foundations for the advances that underline this technology continue to be nurtured by the multidisciplinary collaboration of chemists, biologists, engineers, and clinicians. In this Review, we highlight the latest developments in NIR fluorophores, cancer-targeting strategies, and detection instrumentation for intraoperative cancer detection, and consider the unique challenges associated with their effective application in clinical settings. PMID:28094261

  19. Instant Grainification: Real-Time Grain-Size Analysis from Digital Images in the Field

    NASA Astrophysics Data System (ADS)

    Rubin, D. M.; Chezar, H.

    2007-12-01

    Over the past few years, digital cameras and underwater microscopes have been developed to collect in-situ images of sand-sized bed sediment, and software has been developed to measure grain size from those digital images (Chezar and Rubin, 2004; Rubin, 2004; Rubin et al., 2006). Until now, all image processing and grain-size analysis was done back in the office, where images were uploaded from cameras and processed on desktop computers. Computer hardware has become small and rugged enough to process images in the field, which for the first time allows real-time grain-size analysis of sand-sized bed sediment. We present such a system consisting of a weatherproof tablet computer, open-source image-processing software (autocorrelation code of Rubin, 2004, running under Octave and Cygwin), and a digital camera with a macro lens. Chezar, H., and Rubin, D., 2004, Underwater microscope system: U.S. Patent and Trademark Office, patent number 6,680,795, January 20, 2004. Rubin, D.M., 2004, A simple autocorrelation algorithm for determining grain size from digital images of sediment: Journal of Sedimentary Research, v. 74, p. 160-165. Rubin, D.M., Chezar, H., Harney, J.N., Topping, D.J., Melis, T.S., and Sherwood, C.R., 2006, Underwater microscope for measuring spatial and temporal changes in bed-sediment grain size: USGS Open-File Report 2006-1360.
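
    The autocorrelation idea behind Rubin's (2004) algorithm can be sketched as follows: coarse sediment stays correlated over more pixels than fine sediment, so the decay of the autocorrelation curve with lag encodes grain size. This is a sketch of the principle only; the published method calibrates such curves against images of known size fractions.

```python
import numpy as np

def autocorrelation_curve(image, max_lag):
    """Mean normalized autocorrelation of an image versus horizontal
    pixel offset. Coarse sand decorrelates over more pixels than fine
    sand, so this curve sits higher for coarser sediment."""
    img = image.astype(float)
    img -= img.mean()
    var = (img * img).mean()
    curve = []
    for lag in range(1, max_lag + 1):
        a = img[:, :-lag]      # pixels
        b = img[:, lag:]       # the same pixels shifted by `lag`
        curve.append(float((a * b).mean() / var))
    return curve
```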

  20. Novel wavelength diversity technique for high-speed atmospheric turbulence compensation

    NASA Astrophysics Data System (ADS)

    Arrasmith, William W.; Sullivan, Sean F.

    2010-04-01

    The defense, intelligence, and homeland security communities are driving a need for software-dominant, real-time or near-real-time atmospheric-turbulence-compensated imagery. Parallel processing capabilities are finding application in diverse areas including image processing, target tracking, pattern recognition, and image fusion, to name a few. A novel approach to the computationally intensive case of software-dominant optical and near-infrared imaging through atmospheric turbulence is addressed in this paper. The conventional wavelength diversity method has previously been used to compensate for atmospheric turbulence with great success. We apply a new correlation-based approach to the wavelength diversity methodology using a parallel processing architecture, enabling high-speed atmospheric turbulence compensation. Methods for optical imaging through distributed turbulence are discussed, simulation results are presented, and computational and performance assessments are provided.

  1. Ultrashort Microwave-Pumped Real-Time Thermoacoustic Breast Tumor Imaging System.

    PubMed

    Ye, Fanghao; Ji, Zhong; Ding, Wenzheng; Lou, Cunguang; Yang, Sihua; Xing, Da

    2016-03-01

    We report the design of a real-time thermoacoustic (TA) scanner dedicated to imaging deep breast tumors and investigate its imaging performance. The TA imaging system is composed of an ultrashort microwave pulse generator and a ring transducer array with 384 elements. By vertically scanning the transducer array that encircles the breast phantom, we achieve real-time, 3D thermoacoustic imaging (TAI) with an imaging speed of 16.7 frames per second. The stability of the microwave energy and its distribution in the cling-skin acoustic coupling cup are measured. The results indicate that there is a nearly uniform electromagnetic field in each XY-imaging plane. Three plastic tubes filled with salt water are imaged dynamically to evaluate the real-time performance of our system, followed by 3D imaging of an excised breast tumor embedded in a breast phantom. Finally, to demonstrate the potential for clinical applications, the excised breast of a ewe embedded with an ex vivo human breast tumor is imaged clearly with a contrast of about 1:2.8. The high imaging speed, large field of view, and 3D imaging performance of our dedicated TAI system provide the potential for clinical routine breast screening.

  2. Real-time Enhancement, Registration, and Fusion for an Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.

    2006-01-01

    Over the last few years NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real-time and displayed on monitors on-board the aircraft. With proper processing the camera system can provide better-than-human-observed imagery particularly during poor visibility conditions. However, to obtain this goal requires several different stages of processing including enhancement, registration, and fusion, and specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests.
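
    The registration and fusion stages named above reduce to a resampling step and a per-pixel weighted sum. The sketch below shows both in their simplest form; nearest-neighbour sampling and fixed weights are our simplifications, not the DSP flight implementation.

```python
import numpy as np

def affine_warp_nn(image, A, t):
    """Registration step: nearest-neighbour affine resampling. Output
    pixel (y, x) samples the input at A @ (x, y) + t; samples that fall
    outside the frame are left at 0."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src = np.asarray(A, float) @ np.stack([xs.ravel(), ys.ravel()]).astype(float)
    src += np.asarray(t, float)[:, None]
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[ok] = image[sy[ok], sx[ok]]
    return out.reshape(h, w)

def fuse(images, weights):
    """Fusion step: a per-pixel weighted sum of co-registered images,
    with the weights normalized to preserve overall brightness."""
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    return sum(wt * img for wt, img in zip(weights, images))
```

    Keeping registration and fusion as separate, simple per-pixel passes is what makes a fixed-budget DSP implementation tractable alongside the (far costlier) enhancement stage.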

  3. GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method

    PubMed Central

    Kim, Byungyeon; Park, Byungjun; Lee, Seungrag; Won, Youngjae

    2016-01-01

    We demonstrated GPU accelerated real-time confocal fluorescence lifetime imaging microscopy (FLIM) based on the analog mean-delay (AMD) method. Our algorithm was verified for various fluorescence lifetimes and photon numbers. The GPU processing time was faster than the physical scanning time for images up to 800 × 800, and more than 149 times faster than a single core CPU. The frame rate of our system was demonstrated to be 13 fps for a 200 × 200 pixel image when observing maize vascular tissue. This system can be utilized for observing dynamic biological reactions, medical diagnosis, and real-time industrial inspection. PMID:28018724
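
    The analog mean-delay principle is simple enough to state in code: for an exponential decay convolved with the instrument response function (IRF), the lifetime equals the mean delay of the measured fluorescence minus that of the IRF. The sketch below illustrates the estimator on toy waveforms of our own; it is not the system's confocal data path.

```python
import numpy as np

def mean_delay(t, signal):
    """Intensity-weighted mean arrival time (the 'analog mean delay')."""
    s = np.asarray(signal, dtype=float)
    return float((t * s).sum() / s.sum())

def amd_lifetime(t, fluorescence, irf):
    """AMD lifetime estimate: because convolution adds mean delays,
    tau = mean_delay(fluorescence) - mean_delay(IRF). This is why the
    method needs only moments of the waveform, not a curve fit, which
    is what makes real-time GPU evaluation cheap."""
    return mean_delay(t, fluorescence) - mean_delay(t, irf)
```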

  4. Rapid Diagnosis of Tuberculosis by Real-Time High-Resolution Imaging of Mycobacterium tuberculosis Colonies.

    PubMed

    Ghodbane, Ramzi; Asmar, Shady; Betzner, Marlena; Linet, Marie; Pierquin, Joseph; Raoult, Didier; Drancourt, Michel

    2015-08-01

    Culture remains the cornerstone of diagnosis for pulmonary tuberculosis, but the fastidiousness of Mycobacterium tuberculosis may delay culture-based diagnosis for weeks. We evaluated the performance of real-time high-resolution imaging for the rapid detection of M. tuberculosis colonies growing on a solid medium. A total of 50 clinical specimens, including 42 sputum specimens, 4 stool specimens, 2 bronchoalveolar lavage fluid specimens, and 2 bronchial aspirate fluid specimens were prospectively inoculated into (i) a commercially available Middlebrook broth and evaluated for mycobacterial growth indirectly detected by measuring oxygen consumption (standard protocol) and (ii) a home-made solid medium incubated in an incubator featuring real-time high-resolution imaging of colonies (real-time protocol). Isolates were identified by Ziehl-Neelsen staining and matrix-assisted laser desorption ionization-time of flight mass spectrometry. Use of the standard protocol yielded 14/50 (28%) M. tuberculosis isolates, which is not significantly different from the 13/50 (26%) M. tuberculosis isolates found using the real-time protocol (P = 1.00 by Fisher's exact test), and the contamination rate of 1/50 (2%) was not significantly different from the contamination rate of 2/50 (4%) using the real-time protocol (P = 1.00). The real-time imaging protocol showed a 4.4-fold reduction in time to detection, 82 ± 54 h versus 360 ± 142 h (P < 0.05). These preliminary data give the proof of concept that real-time high-resolution imaging of M. tuberculosis colonies is a new technology that shortens the time to growth detection and the laboratory diagnosis of pulmonary tuberculosis. Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  5. Virtual guidance as a tool to obtain diagnostic ultrasound for spaceflight and remote environments.

    PubMed

    Martin, David S; Caine, Timothy L; Matz, Timothy; Lee, Stuart M C; Stenger, Michael B; Sargsyan, Ashot E; Platts, Steven H

    2012-10-01

    With missions planned to travel greater distances from Earth at ranges that make real-time two-way communication impractical, astronauts will be required to perform autonomous medical diagnostic procedures during future exploration missions. Virtual guidance is a form of just-in-time training developed to allow novice ultrasound operators to acquire diagnostically-adequate images of clinically relevant anatomical structures using a prerecorded audio/visual tutorial viewed in real-time. Individuals without previous experience in ultrasound were recruited to perform carotid artery (N = 10) and ophthalmic (N = 9) ultrasound examinations using virtual guidance as their only training tool. In the carotid group, each untrained operator acquired two-dimensional, pulsed and color Doppler of the carotid artery. In the ophthalmic group, operators acquired representative images of the anterior chamber of the eye, retina, optic nerve, and nerve sheath. Ultrasound image quality was evaluated by independent imaging experts. Of the studies, 8 of the 10 carotid and 17 of 18 of the ophthalmic images (2 images collected per study) were judged to be diagnostically adequate. The quality of all but one of the ophthalmic images ranged from adequate to excellent. Diagnostically-adequate carotid and ophthalmic ultrasound examinations can be obtained by previously untrained operators with assistance from only an audio/video tutorial viewed in real time while scanning. This form of just-in-time training, which can be applied to other examinations, represents an opportunity to acquire important information for NASA flight surgeons and researchers when trained medical personnel are not available or when remote guidance is impractical.

  6. Real-Time Three-Dimensional Cell Segmentation in Large-Scale Microscopy Data of Developing Embryos.

    PubMed

    Stegmaier, Johannes; Amat, Fernando; Lemon, William C; McDole, Katie; Wan, Yinan; Teodoro, George; Mikut, Ralf; Keller, Philipp J

    2016-01-25

    We present the Real-time Accurate Cell-shape Extractor (RACE), a high-throughput image analysis framework for automated three-dimensional cell segmentation in large-scale images. RACE is 55-330 times faster and 2-5 times more accurate than state-of-the-art methods. We demonstrate the generality of RACE by extracting cell-shape information from entire Drosophila, zebrafish, and mouse embryos imaged with confocal and light-sheet microscopes. Using RACE, we automatically reconstructed cellular-resolution tissue anisotropy maps across developing Drosophila embryos and quantified differences in cell-shape dynamics in wild-type and mutant embryos. We furthermore integrated RACE with our framework for automated cell lineaging and performed joint segmentation and cell tracking in entire Drosophila embryos. RACE processed these terabyte-sized datasets on a single computer within 1.4 days. RACE is easy to use, as it requires adjustment of only three parameters, takes full advantage of state-of-the-art multi-core processors and graphics cards, and is available as open-source software for Windows, Linux, and Mac OS. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Real-time Avatar Animation from a Single Image.

    PubMed

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  8. Real-time Avatar Animation from a Single Image

    PubMed Central

    Saragih, Jason M.; Lucey, Simon; Cohn, Jeffrey F.

    2014-01-01

    A real time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames-per-second), and requires only a single image of the avatar and user. The user’s facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters. PMID:24598812

  9. Terrain modeling for real-time simulation

    NASA Astrophysics Data System (ADS)

    Devarajan, Venkat; McArthur, Donald E.

    1993-10-01

    There are many applications, such as pilot training, mission rehearsal, and hardware-in-the-loop simulation, which require the generation of realistic images of terrain and man-made objects in real time. One approach to meeting this requirement is to drape photo-texture over a planar-polygon model of the terrain. The real-time system then computes, for each pixel of the output image, the address in a texture map based on the intersection of the line-of-sight vector with the terrain model. High-quality image generation requires that the terrain be modeled with a fine mesh of polygons, while hardware costs limit the number of polygons that may be displayed for each scene. The trade-off between these conflicting requirements must be made in real time because it depends on the changing position and orientation of the pilot's eye point or simulated sensor. The traditional approach is to develop a database consisting of multiple levels of detail (LOD), and then to select LODs for display as a function of range. This approach can lead to both anomalies in the displayed scene and inefficient use of resources. An approach has been developed in which the terrain is modeled with a set of nested polygons and organized as a tree with each node corresponding to a polygon. This tree is pruned to select the optimum set of nodes for each eye-point position. As the point of view moves, the visibility of some nodes drops below the limit of perception and those nodes may be deleted, while new points must be added in regions near the eye point. An analytical model has been developed to determine the number of polygons required for display. This model leads to quantitative performance measures of the triangulation algorithm, which are useful for optimizing system performance with a limited display capability.
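
    The tree-pruning idea can be sketched as a recursion that keeps a node's polygon when its geometric error, viewed from the current eye point, is imperceptible, and otherwise descends into its children. The node layout and the error/range metric below are our assumptions for illustration, not the paper's data structure.

```python
import math

def select_lod(node, eye, threshold):
    """Traverse a terrain LOD tree, returning the list of nodes to
    display for the given eye point. A node is kept whole when its
    geometric error divided by its range falls below `threshold`;
    otherwise the recursion refines it into its children."""
    dx = node["center"][0] - eye[0]
    dy = node["center"][1] - eye[1]
    rng = max(math.hypot(dx, dy), 1e-6)
    if not node["children"] or node["error"] / rng < threshold:
        return [node]
    out = []
    for child in node["children"]:
        out.extend(select_lod(child, eye, threshold))
    return out
```

    Because the test is range-dependent, detail concentrates near the eye point and falls off with distance, exactly the behaviour the abstract requires of a per-eye-point pruning.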

  10. Real-time Enhanced Vision System

    NASA Technical Reports Server (NTRS)

    Hines, Glenn D.; Rahman, Zia-Ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-01-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  11. Real-time enhanced vision system

    NASA Astrophysics Data System (ADS)

    Hines, Glenn D.; Rahman, Zia-ur; Jobson, Daniel J.; Woodell, Glenn A.; Harrah, Steven D.

    2005-05-01

    Flying in poor visibility conditions, such as rain, snow, fog or haze, is inherently dangerous. However these conditions can occur at nearly any location, so inevitably pilots must successfully navigate through them. At NASA Langley Research Center (LaRC), under support of the Aviation Safety and Security Program Office and the Systems Engineering Directorate, we are developing an Enhanced Vision System (EVS) that combines image enhancement and synthetic vision elements to assist pilots flying through adverse weather conditions. This system uses a combination of forward-looking infrared and visible sensors for data acquisition. A core function of the system is to enhance and fuse the sensor data in order to increase the information content and quality of the captured imagery. These operations must be performed in real-time for the pilot to use while flying. For image enhancement, we are using the LaRC patented Retinex algorithm since it performs exceptionally well for improving low-contrast range imagery typically seen during poor visibility conditions. In general, real-time operation of the Retinex requires specialized hardware. To date, we have successfully implemented a single-sensor real-time version of the Retinex on several different Digital Signal Processor (DSP) platforms. In this paper we give an overview of the EVS and its performance requirements for real-time enhancement and fusion and we discuss our current real-time Retinex implementations on DSPs.

  12. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used via the RenderScript API for high-performance frame-by-frame image acquisition and computation to obtain the PPG signal and the PP interval time series. The relative error of mean heart rate is negligible. In addition, the influence of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices has been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time processing of Android camera frames.
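
    The frame-to-PP-interval pipeline can be approximated as: average each frame to one PPG sample, detect pulse peaks, and difference the peak times. The detector below is a deliberately simple threshold-plus-refractory sketch; the parameters are illustrative, not the paper's, and real applications interpolate around each peak for sub-frame timing.

```python
import numpy as np

def frames_to_ppg(frames):
    """Mean brightness per frame approximates the PPG signal: blood volume
    at the fingertip modulates the light reaching the camera."""
    return np.array([f.mean() for f in frames])

def pp_intervals(ppg, fs=30.0, refractory_s=0.3):
    """Detect pulse peaks and return pulse-to-pulse intervals in seconds."""
    thresh = ppg.mean() + 0.5 * ppg.std()
    min_gap = int(refractory_s * fs)        # refuse physiologically impossible gaps
    peaks, last = [], -min_gap
    for i in range(1, len(ppg) - 1):
        if (ppg[i] > thresh and ppg[i] >= ppg[i - 1] and ppg[i] > ppg[i + 1]
                and i - last >= min_gap):
            peaks.append(i)
            last = i
    return np.diff(peaks) / fs

# Synthetic 75 bpm pulse sampled at the camera's 30 Hz: the expected
# PP interval is 0.8 s.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t)
pp = pp_intervals(ppg, fs)
```

    HRV indices such as SDNN are then just statistics of the `pp` series; the 30 Hz frame clock is what bounds the beat-to-beat timing error the abstract quantifies.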

  13. Image processing for improved eye-tracking accuracy

    NASA Technical Reports Server (NTRS)

    Mulligan, J. B.; Watson, A. B. (Principal Investigator)

    1997-01-01

    Video cameras provide a simple, noninvasive method for monitoring a subject's eye movements. An important concept is that of the resolution of the system, which is the smallest eye movement that can be reliably detected. While hardware systems are available that estimate direction of gaze in real-time from a video image of the pupil, such systems must limit image processing to attain real-time performance and are limited to a resolution of about 10 arc minutes. Two ways to improve resolution are discussed. The first is to improve the image processing algorithms that are used to derive an estimate. Off-line analysis of the data can improve resolution by at least one order of magnitude for images of the pupil. A second avenue by which to improve resolution is to increase the optical gain of the imaging setup (i.e., the amount of image motion produced by a given eye rotation). Ophthalmoscopic imaging of retinal blood vessels provides increased optical gain and improved immunity to small head movements but requires a highly sensitive camera. The large number of images involved in a typical experiment imposes great demands on the storage, handling, and processing of data. A major bottleneck had been the real-time digitization and storage of large amounts of video imagery, but recent developments in video compression hardware have made this problem tractable at a reasonable cost. Images of both the retina and the pupil can be analyzed successfully using a basic toolbox of image-processing routines (filtering, correlation, thresholding, etc.), which are, for the most part, well suited to implementation on vectorizing supercomputers.
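
    As a minimal example of the pupil-image toolbox mentioned above (thresholding plus a weighted centroid), the sketch below estimates the pupil center with sub-pixel precision; the darkness threshold and weighting scheme are illustrative assumptions, not the study's calibrated pipeline.

```python
import numpy as np

def pupil_center(image, dark_threshold=50):
    """Estimate the pupil position as the intensity-weighted centroid of
    dark pixels. Sub-pixel centroids are one reason off-line processing
    can beat resolution-limited real-time hardware."""
    mask = image < dark_threshold
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    # Weight by darkness so deep-pupil pixels dominate boundary pixels.
    w = (dark_threshold - image[mask]).astype(float)
    return (np.average(xs, weights=w), np.average(ys, weights=w))

# Bright frame with a dark disc (the pupil) centred at (40, 25).
img = np.full((60, 80), 200, dtype=np.uint8)
yy, xx = np.mgrid[0:60, 0:80]
img[(xx - 40) ** 2 + (yy - 25) ** 2 < 100] = 10
cx, cy = pupil_center(img)
```

    Tracking the centroid frame to frame, optionally after correlation-based registration, yields the gaze trace whose resolution the abstract discusses.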

  14. Facial Expression Presentation for Real-Time Internet Communication

    NASA Astrophysics Data System (ADS)

    Dugarry, Alexandre; Berrada, Aida; Fu, Shan

    2003-01-01

    Text, voice and video images are the most common forms of media content for instant communication on the Internet. Studies have shown that facial expressions convey much richer information than text and voice during a face-to-face conversation. The currently available real-time means of communication (instant text messages, chat programs and videoconferencing), however, have major drawbacks in terms of exchanging facial expressions. The first two do not involve image transmission, whilst videoconferencing requires a large bandwidth that is not always available, and the transmitted image sequence is neither smooth nor free of delay. The objective of the work presented here is to develop a technique that overcomes these limitations by extracting the facial expressions of speakers to realise real-time communication. In order to get the facial expressions, the main characteristics of the image are emphasized. Interpolation is performed on edge points previously detected to create geometric shapes such as arcs, lines, etc. The regional dominant colours of the pictures are also extracted and the combined results are subsequently converted into Scalable Vector Graphics (SVG) format. The application based on the proposed technique aims at being used simultaneously with chat programs and being able to run on any platform.

  15. Binocular Goggle Augmented Imaging and Navigation System provides real-time fluorescence image guidance for tumor resection and sentinel lymph node mapping

    PubMed Central

    Mondal, Suman B.; Gao, Shengkui; Zhu, Nan; Sudlow, Gail P.; Liang, Kexian; Som, Avik; Akers, Walter J.; Fields, Ryan C.; Margenthaler, Julie; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel

    2015-01-01

    The inability to identify microscopic tumors and assess surgical margins in real-time during oncologic surgery leads to incomplete tumor removal, increases the chances of tumor recurrence, and necessitates costly repeat surgery. To overcome these challenges, we have developed a wearable goggle augmented imaging and navigation system (GAINS) that can provide accurate intraoperative visualization of tumors and sentinel lymph nodes in real-time without disrupting normal surgical workflow. GAINS projects both near-infrared fluorescence from tumors and the natural color images of tissue onto a head-mounted display without latency. Aided by tumor-targeted contrast agents, the system detected tumors in subcutaneous and metastatic mouse models with high accuracy (sensitivity = 100%, specificity = 98% ± 5% standard deviation). Human pilot studies in breast cancer and melanoma patients using a near-infrared dye show that the GAINS detected sentinel lymph nodes with 100% sensitivity. Clinical use of the GAINS to guide tumor resection and sentinel lymph node mapping promises to improve surgical outcomes, reduce rates of repeat surgery, and improve the accuracy of cancer staging. PMID:26179014

  16. A real time quality control application for animal production by image processing.

    PubMed

    Sungur, Cemil; Özkan, Halil

    2015-11-01

    Standards of hygiene and health are of major importance in food production, and quality control has become obligatory in this field. Thanks to rapidly developing technologies, it is now possible for automatic and safe quality control of food production. For this purpose, image-processing-based quality control systems used in industrial applications are being employed to analyze the quality of food products. In this study, quality control of chicken (Gallus domesticus) eggs was achieved using a real time image-processing technique. In order to execute the quality control processes, a conveying mechanism was used. Eggs passing on a conveyor belt were continuously photographed in real time by cameras located above the belt. The images obtained were processed by various methods and techniques. Using digital instrumentation, the volume of the eggs was measured, broken/cracked eggs were separated and dirty eggs were determined. In accordance with international standards for classifying the quality of eggs, the class of separated eggs was determined through a fuzzy implication model. According to tests carried out on thousands of eggs, a quality control process with an accuracy of 98% was possible. © 2014 Society of Chemical Industry.
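
    A fuzzy implication step of the kind described might look like the toy sketch below. The membership functions, thresholds, and grade names are invented for illustration and are not the paper's calibrated model.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def grade_egg(volume_ml, dirt_fraction, cracked):
    """Toy fuzzy implication: class A requires nominal volume AND a clean,
    intact shell; otherwise fall through to class B or reject."""
    if cracked:                                   # broken/cracked eggs are separated
        return "reject"
    nominal = tri(volume_ml, 45, 58, 75)          # membership in 'nominal size'
    clean = max(0.0, 1.0 - dirt_fraction / 0.05)  # clean if < 5% dirty pixels
    score = min(nominal, clean)                   # fuzzy AND as minimum
    if score >= 0.5:
        return "class A"
    return "class B" if nominal > 0 else "reject"

g_clean = grade_egg(58, 0.01, False)
g_dirty = grade_egg(58, 0.20, False)
g_broken = grade_egg(58, 0.00, True)
```

    In the deployed system the inputs (volume from digital instrumentation, dirt and crack detections from image processing) feed such rules per egg as it passes on the conveyor.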

  17. DSP Implementation of the Retinex Image Enhancement Algorithm

    NASA Technical Reports Server (NTRS)

    Hines, Glenn; Rahman, Zia-Ur; Jobson, Daniel; Woodell, Glenn

    2004-01-01

    The Retinex is a general-purpose image enhancement algorithm that is used to produce good visual representations of scenes. It performs a non-linear spatial/spectral transform that synthesizes strong local contrast enhancement and color constancy. A real-time, video frame rate implementation of the Retinex is required to meet the needs of various potential users. Retinex processing contains a relatively large number of complex computations, thus to achieve real-time performance using current technologies requires specialized hardware and software. In this paper we discuss the design and development of a digital signal processor (DSP) implementation of the Retinex. The target processor is a Texas Instruments TMS320C6711 floating point DSP. NTSC video is captured using a dedicated frame-grabber card, Retinex processed, and displayed on a standard monitor. We discuss the optimizations used to achieve real-time performance of the Retinex and also describe our future plans on using alternative architectures.

  18. Fluorescence-Raman Dual Modal Endoscopic System for Multiplexed Molecular Diagnostics

    NASA Astrophysics Data System (ADS)

    Jeong, Sinyoung; Kim, Yong-Il; Kang, Homan; Kim, Gunsung; Cha, Myeong Geun; Chang, Hyejin; Jung, Kyung Oh; Kim, Young-Hwa; Jun, Bong-Hyun; Hwang, Do Won; Lee, Yun-Sang; Youn, Hyewon; Lee, Yoon-Sik; Kang, Keon Wook; Lee, Dong Soo; Jeong, Dae Hong

    2015-03-01

    Optical endoscopic imaging, which was recently equipped with bioluminescence, fluorescence, and Raman scattering, allows minimally invasive real-time detection of pathologies on the surface of hollow organs. To characterize pathologic lesions in a multiplexed way, we developed a dual modal fluorescence-Raman endomicroscopic system (FRES), which used fluorescence and surface-enhanced Raman scattering nanoprobes (F-SERS dots). Real-time, in vivo, and multiple-target detection of a specific cancer was successful, based on the fast imaging capability of fluorescence signals and the multiplex capability of simultaneously detected SERS signals using an optical fiber bundle for an intraoperative endoscopic system. Human epidermal growth factor receptor 2 (HER2) and epidermal growth factor receptor (EGFR) on the breast cancer xenografts in a mouse orthotopic model were successfully detected in a multiplexed way, illustrating the potential of FRES as a molecular diagnostic instrument that enables real-time tumor characterization of receptors during routine endoscopic procedures.

  19. Detecting tympanostomy tubes from otoscopic images via offline and online training.

    PubMed

    Wang, Xin; Valdez, Tulio A; Bi, Jinbo

    2015-06-01

    Tympanostomy tube placement has been commonly used nowadays as a surgical treatment for otitis media. Following the placement, regular scheduled follow-ups for checking the status of the tympanostomy tubes are important during the treatment. The complexity of performing the follow up care mainly lies on identifying the presence and patency of the tympanostomy tube. An automated tube detection program will largely reduce the care costs and enhance the clinical efficiency of the ear nose and throat specialists and general practitioners. In this paper, we develop a computer vision system that is able to automatically detect a tympanostomy tube in an otoscopic image of the ear drum. The system comprises an offline classifier training process followed by a real-time refinement stage performed at the point of care. The offline training process constructs a three-layer cascaded classifier with each layer reflecting specific characteristics of the tube. The real-time refinement process enables the end users to interact and adjust the system over time based on their otoscopic images and patient care. The support vector machine (SVM) algorithm has been applied to train all of the classifiers. Empirical evaluation of the proposed system on both high quality hospital images and low quality internet images demonstrates the effectiveness of the system. The offline classifier trained using 215 images could achieve a 90% accuracy in terms of classifying otoscopic images with and without a tympanostomy tube, and then the real-time refinement process could improve the classification accuracy by 3-5% based on additional 20 images. Copyright © 2015 Elsevier Ltd. All rights reserved.
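
    The three-layer cascade's control flow can be sketched as follows. The per-layer score functions stand in for the trained SVM layers, and the feature names are hypothetical; what the sketch shows is the early-rejection property that makes cascades cheap at the point of care.

```python
def cascade_classify(features, layers):
    """Cascaded classification: every layer must accept for a positive call.
    An image rejected at an early, cheap layer never reaches the costlier
    later layers."""
    for score_fn, threshold in layers:
        if score_fn(features) < threshold:
            return False        # rejected: no tympanostomy tube detected
    return True                 # accepted by all layers: tube present

# Hypothetical per-layer scores keyed on tube characteristics (circular
# rim, lumen contrast, colour); real layers would be trained SVMs, and a
# refinement stage would adjust the thresholds from new clinic images.
layers = [
    (lambda f: f["rim_circularity"], 0.6),
    (lambda f: f["lumen_contrast"], 0.4),
    (lambda f: f["color_score"], 0.5),
]
with_tube = {"rim_circularity": 0.9, "lumen_contrast": 0.7, "color_score": 0.8}
no_tube = {"rim_circularity": 0.9, "lumen_contrast": 0.1, "color_score": 0.9}
res_pos = cascade_classify(with_tube, layers)
res_neg = cascade_classify(no_tube, layers)
```

    The online refinement described in the abstract amounts to re-fitting the layer models (or their thresholds) as end users label additional otoscopic images over time.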

  20. Augmented microscopy with near-infrared fluorescence detection

    NASA Astrophysics Data System (ADS)

    Watson, Jeffrey R.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael; Anton, Rein; Romanowski, Marek

    2015-03-01

    Near-infrared (NIR) fluorescence has become a frequently used intraoperative technique for image-guided surgical interventions. In procedures such as cerebral angiography, surgeons use the optical surgical microscope for the color view of the surgical field, and then switch to an electronic display for the NIR fluorescence images. However, the lack of stereoscopic, real-time, and on-site coregistration adds time and uncertainty to image-guided surgical procedures. To address these limitations, we developed the augmented microscope, whereby the electronically processed NIR fluorescence image is overlaid with the anatomical optical image in real-time within the optical path of the microscope. In vitro, the augmented microscope can detect and display indocyanine green (ICG) concentrations down to 94.5 nM, overlaid with the anatomical color image. We prepared polyacrylamide tissue phantoms with embedded polystyrene beads, yielding scattering properties similar to brain matter. In this model, 194 μM solution of ICG was detectable up to depths of 5 mm. ICG angiography was then performed in anesthetized rats. A dynamic process of ICG distribution in the vascular system overlaid with anatomical color images was observed and recorded. In summary, the augmented microscope demonstrates NIR fluorescence detection with superior real-time coregistration displayed within the ocular of the stereomicroscope. In comparison to other techniques, the augmented microscope retains full stereoscopic vision and optical controls including magnification and focus, camera capture, and multiuser access. Augmented microscopy may find application in surgeries where the use of traditional microscopes can be enhanced by contrast agents and image guided delivery of therapeutics, including oncology, neurosurgery, and ophthalmology.

  1. External optical imaging of freely moving mice with green fluorescent protein-expressing metastatic tumors

    NASA Astrophysics Data System (ADS)

    Yang, Meng; Baranov, Eugene; Shimada, Hiroshi; Moossa, A. R.; Hoffman, Robert M.

    2000-04-01

    We report here a new approach to genetically engineering tumors to become fluorescent such that they can be imaged externally in freely moving animals. We describe external high-resolution real-time fluorescent optical imaging of metastatic tumors in live mice. Stable high-level green fluorescent protein (GFP)-expressing human and rodent cell lines enable tumors, and the metastases formed from them, to be imaged externally in freely moving mice. Real-time tumor and metastatic growth was quantitated from whole-body real-time imaging in GFP-expressing melanoma and colon carcinoma models. This GFP optical imaging system is highly appropriate for high-throughput in vivo drug screening.

  2. Real-time myocardium segmentation for the assessment of cardiac function variation

    NASA Astrophysics Data System (ADS)

    Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja

    2017-03-01

    Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating and triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences requiring new adapted analysis methods. We suggest a novel segmentation approach utilizing contrast invariant polar scanning techniques. It has been tested with 20 datasets of arrhythmia patients. The results do not differ significantly more between automatic and manual segmentations than between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.

  3. Stent deployment protocol for optimized real-time visualization during endovascular neurosurgery.

    PubMed

    Silva, Michael A; See, Alfred P; Dasenbrock, Hormuzdiyar H; Ashour, Ramsey; Khandelwal, Priyank; Patel, Nirav J; Frerichs, Kai U; Aziz-Sultan, Mohammad A

    2017-05-01

    Successful application of endovascular neurosurgery depends on high-quality imaging to define the pathology and the devices as they are being deployed. This is especially challenging in the treatment of complex cases, particularly in proximity to the skull base or in patients who have undergone prior endovascular treatment. The authors sought to optimize real-time image guidance using a simple algorithm that can be applied to any existing fluoroscopy system. Exposure management (exposure level, pulse management) and image post-processing parameters (edge enhancement) were modified from traditional fluoroscopy to improve visualization of device position and material density during deployment. Examples include the deployment of coils in small aneurysms, coils in giant aneurysms, the Pipeline embolization device (PED), the Woven EndoBridge (WEB) device, and carotid artery stents. The authors report on the development of the protocol and their experience using representative cases. The stent deployment protocol is an image capture and post-processing algorithm that can be applied to existing fluoroscopy systems to improve real-time visualization of device deployment without hardware modifications. Improved image guidance facilitates aneurysm coil packing and proper positioning and deployment of carotid artery stents, flow diverters, and the WEB device, especially in the context of complex anatomy and an obscured field of view.

  4. Completely automated estimation of prostate volume for 3-D side-fire transrectal ultrasound using shape prior approach

    NASA Astrophysics Data System (ADS)

    Li, Lu; Narayanan, Ramakrishnan; Miller, Steve; Shen, Feimo; Barqawi, Al B.; Crawford, E. David; Suri, Jasjit S.

    2008-02-01

    Real-time knowledge of the capsule volume of an organ provides a valuable clinical tool for 3D biopsy applications. It is challenging to estimate this capsule volume in real-time due to the presence of speckles, shadow artifacts, partial volume effects and patient motion during image scans, which are all inherent in medical ultrasound imaging. The volumetric ultrasound prostate images are sliced in a rotational manner every three degrees. The automated segmentation method employs a shape model, which is obtained from training data, to delineate the middle slices of volumetric prostate images. Then a "DDC" algorithm is applied to the rest of the images with the initial contour obtained. The volume of the prostate is estimated from the segmentation results. Our database consists of 36 prostate volumes acquired on a Philips ultrasound machine with a side-fire transrectal ultrasound (TRUS) probe. We compare our automated method with the semi-automated approach. The mean volumes using the semi-automated and completely automated techniques were 35.16 cc and 34.86 cc, with errors of 7.3% and 7.6% relative to the volume obtained from the human-estimated boundary (ideal boundary), respectively. The overall system, which was developed using Microsoft Visual C++, is real-time and accurate.
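
    Volume estimation from rotationally sliced segmentations follows from cylindrical coordinates (dV = r dr dz dθ): each half-plane slice contributes Δθ times the first radial moment of its binary mask. A sketch, assuming for illustration that the rotation axis lies along the last image column:

```python
import numpy as np

def volume_from_rotational_slices(masks, pixel_mm=0.5, dtheta_deg=3.0):
    """Volume of an organ segmented on half-plane slices rotated about a
    central axis. Each slice contributes
        dtheta * sum over segmented pixels of (r * pixel_area),
    with r the distance of the pixel from the rotation axis."""
    dtheta = np.deg2rad(dtheta_deg)
    vol = 0.0
    for mask in masks:
        h, w = mask.shape
        # Pixel-centre distances from the axis (taken as the last column).
        r = (np.arange(w)[::-1] + 0.5) * pixel_mm
        vol += dtheta * np.sum(mask * r[None, :]) * pixel_mm ** 2
    return vol  # mm^3

# Sanity check: full rectangular masks swept through 360 degrees form a
# cylinder, so the estimate should match pi * R^2 * H.
w, h, pixel_mm = 40, 50, 0.5
masks = [np.ones((h, w)) for _ in range(120)]   # 120 slices x 3 deg = 360 deg
est = volume_from_rotational_slices(masks, pixel_mm)
R, H = w * pixel_mm, h * pixel_mm
```

    With real DDC contours, each mask is simply the filled segmented region of that 3-degree slice, so the volume updates as fast as the slices are delineated.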

  5. In-line monitoring of pellet coating thickness growth by means of visual imaging.

    PubMed

    Oman Kadunc, Nika; Sibanc, Rok; Dreu, Rok; Likar, Boštjan; Tomaževič, Dejan

    2014-08-15

    Coating thickness is the most important attribute of coated pharmaceutical pellets as it directly affects release profiles and stability of the drug. Quality control of the coating process of pharmaceutical pellets is thus of utmost importance for assuring the desired end product characteristics. A visual imaging technique is presented and examined as a process analytic technology (PAT) tool for noninvasive continuous in-line and real time monitoring of coating thickness of pharmaceutical pellets during the coating process. Images of pellets were acquired during the coating process through an observation window of a Wurster coating apparatus. Image analysis methods were developed for fast and accurate determination of pellets' coating thickness during a coating process. The accuracy of the results for pellet coating thickness growth obtained in real time was evaluated through comparison with an off-line reference method and a good agreement was found. Information about the inter-pellet coating uniformity was gained from further statistical analysis of the measured pellet size distributions. Accuracy and performance analysis of the proposed method showed that visual imaging is feasible as a PAT tool for in-line and real time monitoring of the coating process of pharmaceutical pellets. Copyright © 2014 Elsevier B.V. All rights reserved.
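
    The thickness readout itself is simple once pellet diameters are measured in-line: coating thickness is half the growth of the pellet diameter, and its spread across pellets indicates inter-pellet coating uniformity. A sketch with illustrative numbers (not data from the study):

```python
import numpy as np

def coating_thickness(diameters_now_um, diameters_start_um):
    """Per-pellet coating thickness estimated from in-line size
    measurements: half the diameter growth relative to the mean core size.
    Returns (mean thickness, spread) in micrometres."""
    d0 = np.mean(diameters_start_um)
    per_pellet = (np.asarray(diameters_now_um, dtype=float) - d0) / 2.0
    return per_pellet.mean(), per_pellet.std()

# Illustrative numbers: ~1000 um cores grown to ~1060 um during coating.
start = [998, 1001, 1000, 1002, 999]
now = [1058, 1061, 1063, 1060, 1059]
mean_t, spread = coating_thickness(now, start)
```

    Tracking `mean_t` over successive image batches gives the real-time thickness-growth curve that is compared against the off-line reference method.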

  6. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography

    NASA Technical Reports Server (NTRS)

    Qin, J. X.; Shiota, T.; Thomas, J. D.

    2000-01-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  7. Determination of left ventricular volume, ejection fraction, and myocardial mass by real-time three-dimensional echocardiography.

    PubMed

    Qin, J X; Shiota, T; Thomas, J D

    2000-11-01

    Reconstructed three-dimensional (3-D) echocardiography is an accurate and reproducible method of assessing left ventricular (LV) functions. However, it has limitations for clinical study due to the requirement of complex computer and echocardiographic analysis systems, electrocardiographic/respiratory gating, and prolonged imaging times. Real-time 3-D echocardiography has a major advantage of conveniently visualizing the entire cardiac anatomy in three dimensions and of potentially accurately quantifying LV volumes, ejection fractions, and myocardial mass in patients even in the presence of an LV aneurysm. Although the image quality of the current real-time 3-D echocardiographic methods is not optimal, its widespread clinical application is possible because of the convenient and fast image acquisition. We review real-time 3-D echocardiographic image acquisition and quantitative analysis for the evaluation of LV function and LV mass.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, H; Chen, Z; Nath, R

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty by analyzing acquired data in real-time. Methods: We simulated motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding the 2.5 mm threshold.
Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the tumor is within the margin or initialize motion compensation if it is out of the margin.
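
    A logistic-regression uncertainty model of the kind described can be sketched in a few lines of NumPy; the features, training data, and probabilities below are synthetic stand-ins for the study's real tracking data.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=2000):
    """Plain gradient-descent logistic regression: model the probability
    that the tracking error exceeds the threshold as a function of
    per-time-point features."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))          # predicted exceed probability
        w -= lr * Xb.T @ (p - y) / len(y)          # cross-entropy gradient step
    return w

def p_exceeds(w, x):
    """P(tracking error > threshold) for one feature vector."""
    return 1.0 / (1.0 + np.exp(-(np.append(x, 1.0) @ w)))

# Synthetic data: the exceed probability grows with the previous tracking
# error and the tumor motion speed (the kind of features the abstract lists).
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(400, 2))               # [prev_error, speed], scaled to [0, 1]
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.1, 400) > 1.0).astype(float)
w = train_logistic(X, y)
p_low = p_exceeds(w, [0.1, 0.1])                   # calm, well-predicted motion
p_high = p_exceeds(w, [0.9, 0.9])                  # fast, poorly predicted motion
```

    In the adaptive strategy, a high `p_exceeds` at the current time point is what triggers an additional kV image, spending imaging dose only when the position uncertainty warrants it.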

  9. Artificial Intelligence in Autonomous Telescopes

    NASA Astrophysics Data System (ADS)

    Mahoney, William; Thanjavur, Karun

    2011-03-01

    Artificial Intelligence (AI) is key to the natural evolution of today's automated telescopes to fully autonomous systems. Based on its rapid development over the past five decades, AI offers numerous, well-tested techniques for knowledge based decision making essential for real-time telescope monitoring and control, with minimal - and eventually no - human intervention. We present three applications of AI developed at CFHT for monitoring instantaneous sky conditions, assessing quality of imaging data, and a prototype for scheduling observations in real-time. Closely complementing the current remote operations at CFHT, we foresee further development of these methods and full integration in the near future.

  10. Real-Time Internet Connections: Implications for Surgical Decision Making in Laparoscopy

    PubMed Central

    Broderick, Timothy J.; Harnett, Brett M.; Doarn, Charles R.; Rodas, Edgar B.; Merrell, Ronald C.

    2001-01-01

    Objective To determine whether a low-bandwidth Internet connection can provide adequate image quality to support remote real-time surgical consultation. Summary Background Data Telemedicine has been used to support care at a distance through the use of expensive equipment and broadband communication links. In the past, the operating room has been an isolated environment that has been relatively inaccessible for real-time consultation. Recent technological advances have permitted videoconferencing over low-bandwidth, inexpensive Internet connections. If these connections are shown to provide adequate video quality for surgical applications, low-bandwidth telemedicine will open the operating room environment to remote real-time surgical consultation. Methods Surgeons performing a laparoscopic cholecystectomy in Ecuador or the Dominican Republic shared real-time laparoscopic images with a panel of surgeons at the parent university through a dial-up Internet account. The connection permitted video and audio teleconferencing to support real-time consultation as well as the transmission of real-time images and store-and-forward images for observation by the consultant panel. A total of six live consultations were analyzed. In addition, paired local and remote images were “grabbed” from the video feed during these laparoscopic cholecystectomies. Nine of these paired images were then placed into a Web-based tool designed to evaluate the effect of transmission on image quality. Results The authors showed for the first time the ability to identify critical anatomic structures in laparoscopy over a low-bandwidth connection via the Internet. The consultant panel of surgeons correctly remotely identified biliary and arterial anatomy during six laparoscopic cholecystectomies. Within the Web-based questionnaire, 15 surgeons could not blindly distinguish the quality of local and remote laparoscopic images. 
Conclusions Low-bandwidth, Internet-based telemedicine is inexpensive, effective, and almost ubiquitous. Use of these inexpensive, portable technologies will allow sharing of surgical procedures and decisions regardless of location. Internet telemedicine consistently supported real-time intraoperative consultation in laparoscopic surgery. The implications are broad with respect to quality improvement and diffusion of knowledge as well as for basic consultation. PMID:11505061

  11. A Review on Real-Time 3D Ultrasound Imaging Technology

    PubMed Central

    Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback to help clinicians acquire high-quality images as well as timely spatial information of the scanned area, and hence is necessary in intraoperative ultrasound examinations. Many publications have claimed to achieve real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms remains missing, resulting in the lack of a systematic understanding of the relevant technology. In this article, previous and recent work on designing a real-time or near real-time 3D ultrasound imaging system is reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail. PMID:28459067

  12. A Review on Real-Time 3D Ultrasound Imaging Technology.

    PubMed

    Huang, Qinghua; Zeng, Zhaozheng

    2017-01-01

    Real-time three-dimensional (3D) ultrasound (US) has attracted much attention in medical research because it provides interactive feedback that helps clinicians acquire high-quality images as well as timely spatial information of the scanned area, and is therefore essential in intraoperative ultrasound examinations. Many publications have reported real-time or near real-time visualization of 3D ultrasound using volumetric probes or the routinely used two-dimensional (2D) probes. So far, a review on how to design an interactive system with appropriate processing algorithms has been missing, resulting in a lack of systematic understanding of the relevant technology. In this article, previous and the latest work on designing a real-time or near real-time 3D ultrasound imaging system are reviewed. Specifically, the data acquisition techniques, reconstruction algorithms, volume rendering methods, and clinical applications are presented. Moreover, the advantages and disadvantages of state-of-the-art approaches are discussed in detail.

  13. MO-DE-BRA-04: Hands-On Fluoroscopy Safety Training with Real-Time Patient and Staff Dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vanderhoek, M; Bevins, N

    Purpose: Fluoroscopically guided interventions (FGI) are routinely performed across many different hospital departments. However, many involved staff members have minimal training regarding safe and optimal use of fluoroscopy systems. We developed and taught a hands-on fluoroscopy safety class incorporating real-time patient and staff dosimetry in order to promote safer and more optimal use of fluoroscopy during FGI. Methods: The hands-on fluoroscopy safety class is taught in an FGI suite, unique to each department. A patient-equivalent phantom is set on the patient table with an ion chamber positioned at the x-ray beam entrance to the phantom. This provides a surrogate measure of patient entrance dose. Multiple solid-state dosimeters (RaySafe i2 dosimetry system™) are deployed at different distances from the phantom (0.1, 1, 3 meters), which provide surrogate measures of staff dose. Instructors direct participating clinical staff to operate the fluoroscopy system as they view live fluoroscopic images, patient entrance dose, and staff doses in real time. During class, instructors work with clinical staff to investigate how patient entrance dose, staff doses, and image quality are affected by different parameters, including pulse rate, magnification, collimation, beam angulation, imaging mode, system geometry, distance, and shielding. Results: Real-time dose visualization enables clinical staff to directly see and learn how to optimize their use of their own fluoroscopy system to minimize patient and staff dose, yet maintain sufficient image quality for FGI. As a direct result of the class, multiple hospital departments have implemented changes to their imaging protocols, including reduction of the default fluoroscopy pulse rate and increased use of collimation and lower-dose fluoroscopy modes. Conclusion: Hands-on fluoroscopy safety training substantially benefits from real-time patient and staff dosimetry incorporated into the class. 
Real-time dose display helps clinical staff visualize, internalize, and ultimately utilize the safety techniques learned during the training. RaySafe/Unfors/Fluke lent us a portable version of their RaySafe i2 Dosimetry System for 6 months.

  14. High-resolution ultrasound imaging and noninvasive optoacoustic monitoring of blood variables in peripheral blood vessels

    NASA Astrophysics Data System (ADS)

    Petrov, Irene Y.; Petrov, Yuriy; Prough, Donald S.; Esenaliev, Rinat O.

    2011-03-01

    Ultrasound imaging is being widely used in clinics to obtain diagnostic information non-invasively and in real time. A high-resolution ultrasound imaging platform, Vevo (VisualSonics, Inc.) provides in vivo, real-time images with exceptional resolution (up to 30 microns) using high-frequency transducers (up to 80 MHz). Recently, we built optoacoustic systems for probing radial artery and peripheral veins that can be used for noninvasive monitoring of total hemoglobin concentration, oxyhemoglobin saturation, and concentration of important endogenous and exogenous chromophores (such as ICG). In this work we used the high-resolution ultrasound imaging system Vevo 770 for visualization of the radial artery and peripheral veins and acquired corresponding optoacoustic signals from them using the optoacoustic systems. Analysis of the optoacoustic data with a specially developed algorithm allowed for measurement of blood oxygenation in the blood vessels as well as for continuous, real-time monitoring of arterial and venous blood oxygenation. Our results indicate that: 1) the optoacoustic technique (unlike pure optical approaches and other noninvasive techniques) is capable of accurate peripheral venous oxygenation measurement; and 2) peripheral venous oxygenation is dependent on skin temperature and local hemodynamics. Moreover, we performed for the first time (to the best of our knowledge) a comparative study of optoacoustic arterial oximetry and a standard pulse oximeter in humans and demonstrated superior performance of the optoacoustic arterial oximeter, in particular at low blood flow.
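Optoacoustic oximetry of the kind described here rests on a two-wavelength spectroscopic inversion: the measured absorption is a linear mix of oxy- and deoxyhemoglobin. A minimal sketch under assumed conditions; the extinction-coefficient matrix below contains made-up illustrative values, not the calibrated coefficients of the authors' algorithm.

```python
import numpy as np

# Hypothetical molar extinction coefficients at two near-infrared
# wavelengths (rows: wavelength 1, wavelength 2; cols: [HbO2, Hb]).
# Illustrative numbers only -- real values come from tabulated spectra.
E = np.array([[0.6, 1.6],
              [1.1, 0.8]])

def so2_from_mua(mua):
    """Invert mu_a = E @ [C_HbO2, C_Hb] for the two concentrations and
    report oxygen saturation SO2 = C_HbO2 / (C_HbO2 + C_Hb)."""
    c = np.linalg.solve(E, mua)
    return c[0] / c.sum()

# Forward-simulate a 75%-saturated blood sample, then recover it.
c_true = np.array([0.75, 0.25])
so2 = so2_from_mua(E @ c_true)
```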

  15. Image denoising for real-time MRI.

    PubMed

    Klosowski, Jakob; Frahm, Jens

    2017-03-01

    To develop an image noise filter suitable for real-time MRI (acquisition and display) that preserves small isolated details and efficiently removes background noise without introducing blur, smearing, or patch artifacts. The proposed method extends the nonlocal means algorithm to adapt the influence of the original pixel value according to a simple measure of patch regularity. Detail preservation is improved by a compactly supported weighting kernel that closely approximates the commonly used exponential weight, while an oracle step ensures efficient background noise removal. Denoising experiments were conducted on real-time images of healthy subjects reconstructed by regularized nonlinear inversion from radial acquisitions with pronounced undersampling. The filter leads to a signal-to-noise ratio (SNR) improvement of at least 60% without noticeable artifacts or loss of detail. The method is visually comparable to more complex state-of-the-art filters such as the block-matching three-dimensional (BM3D) filter and in certain cases better matches the underlying noise model. Acceleration of the computation to more than 100 complex frames per second using graphics processing units is straightforward. The sensitivity of nonlocal means to small details can be significantly increased by the simple strategies presented here, which allows partial restoration of SNR in iteratively reconstructed images without introducing a noticeable time delay or image artifacts. Magn Reson Med 77:1340-1352, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
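The baseline the paper extends is classic nonlocal means: each pixel becomes a weighted average of pixels whose surrounding patches look similar. A deliberately small, unoptimized sketch of that baseline (exponential weight, no oracle step or compact kernel, so this is not the paper's filter):

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.5):
    """Minimal nonlocal means: weight each candidate pixel by the
    similarity of its surrounding patch to the patch of the pixel
    being denoised, then take the weighted average."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    H, W = img.shape
    half = search // 2
    out = np.zeros_like(img, dtype=float)
    for y in range(H):
        for x in range(W):
            p = padded[y:y + patch, x:x + patch]      # reference patch
            weights, values = [], []
            for dy in range(-half, half + 1):
                for dx in range(-half, half + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < H and 0 <= xx < W:
                        q = padded[yy:yy + patch, xx:xx + patch]
                        d2 = np.mean((p - q) ** 2)    # patch distance
                        weights.append(np.exp(-d2 / h ** 2))
                        values.append(img[yy, xx])
            out[y, x] = np.average(values, weights=weights)
    return out
```

The paper's contributions modify exactly the pieces this sketch leaves naive: the weight given to the original pixel (adapted by patch regularity) and the weighting kernel itself (compactly supported instead of exponential).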

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    NASA Astrophysics Data System (ADS)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects associated with stereoscopic viewing. To create compelling three-dimensional television programs, a virtual studio that performs the task of generating, editing, and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors, and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion focuses on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
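Depth from disparity in an integral image follows the same pinhole relation as stereo, with the micro-lens pitch playing the role of the baseline; the multiple-baseline idea averages estimates over several lens pairs. A sketch of both relations (function names and units are our own):

```python
def depth_from_disparity(focal_px, baseline, disparity_px):
    """Pinhole relation z = f * B / d. In integral imaging the
    'baseline' B is the pitch between the two micro-lenses whose
    elemental images are compared; d is the measured disparity."""
    return focal_px * baseline / disparity_px

def depth_multi_baseline(focal_px, baselines, disparities_px):
    """Multiple-baseline refinement: average the depth estimates from
    several lens pairs to reduce the error of any single disparity
    measurement (a simple stand-in for the paper's SSD-based search)."""
    ests = [depth_from_disparity(focal_px, b, d)
            for b, d in zip(baselines, disparities_px)]
    return sum(ests) / len(ests)
```

For consistent measurements the two estimators agree: a 2 mm pitch with a 10-pixel disparity and a 4 mm pitch with a 20-pixel disparity imply the same depth.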

  17. Continuous Rapid Quantification of Stroke Volume Using Magnetohydrodynamic Voltages in 3T Magnetic Resonance Imaging.

    PubMed

    Gregory, T Stan; Oshinski, John; Schmidt, Ehud J; Kwong, Raymond Y; Stevenson, William G; Ho Tse, Zion Tsz

    2015-12-01

    To develop a technique to noninvasively estimate stroke volume in real time during magnetic resonance imaging (MRI)-guided procedures, based on induced magnetohydrodynamic voltages (VMHD) that occur in ECG recordings during MRI exams, leaving the MRI scanner free to perform other imaging tasks. Because of the relationship between blood flow (BF) and VMHD, we hypothesized that a method to obtain stroke volume could be derived from extracted VMHD vectors in the vectorcardiogram (VCG) frame of reference (VMHD-VCG). To estimate a subject-specific BF-VMHD model, VMHD-VCG was acquired during a 20-s breath-hold and calibrated versus aortic BF measured using phase-contrast magnetic resonance in 10 subjects (n=10) and 1 subject diagnosed with premature ventricular contractions. Beat-to-beat validation of VMHD-VCG-derived BF was performed using real-time phase-contrast imaging in 7 healthy subjects (n=7) during 15-minute cardiac exercise stress tests and 30 minutes after stress relaxation in 3T MRIs. Subject-specific equations were derived to correlate VMHD-VCG with BF at rest and validated using real-time phase-contrast. An average error of 7.22% and 3.69% in stroke volume estimation, respectively, was found during peak stress and after complete relaxation. Measured beat-to-beat BF time history derived from real-time phase-contrast and VMHD was highly correlated using a Spearman rank correlation coefficient during stress tests (0.89) and after stress relaxation (0.86). Accurate beat-to-beat stroke volume and BF were estimated using VMHD-VCG extracted from intra-MRI 12-lead ECGs, providing a means to enhance patient monitoring during MR imaging and MR-guided interventions. © 2015 American Heart Association, Inc.
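The calibration step amounts to fitting a subject-specific mapping from MHD voltage to flow against a phase-contrast reference, then integrating the modeled flow over a beat to get stroke volume. A sketch on a synthetic beat; the linear model and every number below are assumptions for illustration, not the paper's calibration equations.

```python
import numpy as np

# One synthetic beat: reference aortic flow from phase-contrast MR (mL/s),
# a half-sine ejection during systole and zero flow in diastole.
t = np.linspace(0.0, 1.0, 101)                      # seconds
flow_ref = 400.0 * np.clip(np.sin(2 * np.pi * t), 0.0, None)

# Hypothetical MHD response: voltage (mV) linear in instantaneous flow.
v_mhd = 0.005 * flow_ref + 0.02

# Subject-specific calibration: fit flow = a * v_mhd + b.
a, b = np.polyfit(v_mhd, flow_ref, 1)

# Apply the model beat-to-beat and integrate flow (trapezoidal rule)
# to obtain stroke volume in mL.
flow_est = a * v_mhd + b
sv = np.sum(0.5 * (flow_est[1:] + flow_est[:-1]) * np.diff(t))
```

With the synthetic half-sine beat the integral recovers the analytic stroke volume 400/pi ≈ 127 mL, so the pipeline (calibrate, apply, integrate) is self-consistent.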

  18. High-sensitivity, real-time, ratiometric imaging of surface-enhanced Raman scattering nanoparticles with a clinically translatable Raman endoscope device.

    PubMed

    Garai, Ellis; Sensarn, Steven; Zavaleta, Cristina L; Van de Sompel, Dominique; Loewke, Nathan O; Mandella, Michael J; Gambhir, Sanjiv S; Contag, Christopher H

    2013-09-01

    Topical application and quantification of targeted, surface-enhanced Raman scattering (SERS) nanoparticles offer a new technique that has the potential for early detection of epithelial cancers of hollow organs. Although less toxic than intravenous delivery, the additional washing required to remove unbound nanoparticles cannot necessarily eliminate nonspecific pooling. Therefore, we developed a real-time, ratiometric imaging technique to determine the relative concentrations of at least two spectrally unique nanoparticle types, where one serves as a nontargeted control. This approach improves the specific detection of bound, targeted nanoparticles by adjusting for working distance and for any nonspecific accumulation following washing. We engineered hardware and software to acquire SERS signals and ratios in real time and display them via a graphical user interface. We report quantitative, ratiometric imaging with nanoparticles at pM and sub-pM concentrations and at varying working distances, up to 50 mm. Additionally, we discuss optimization of a Raman endoscope by evaluating the effects of lens material and fiber coating on background noise, and theoretically modeling and simulating collection efficiency at various working distances. This work will enable the development of a clinically translatable, noncontact Raman endoscope capable of rapidly scanning large, topographically complex tissue surfaces for small and otherwise hard to detect lesions.
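The key property of the ratiometric approach is that factors affecting both channels equally, such as working distance and nonspecific pooling after washing, cancel in the per-pixel ratio. A minimal numeric sketch (array contents are invented for illustration):

```python
import numpy as np

def ratiometric_map(targeted, control, eps=1e-12):
    """Per-pixel ratio of the targeted-nanoparticle channel to the
    nontargeted control channel. Losses that scale both channels by
    the same factor (e.g. working distance) cancel in the ratio."""
    return targeted / (control + eps)

# Same scene imaged at two working distances: the raw signal halves,
# but the targeted/control ratio is unchanged.
near = ratiometric_map(np.array([4.0]), np.array([2.0]))
far = ratiometric_map(np.array([2.0]), np.array([1.0]))
```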

  19. Signal-to-noise ratio enhancement on SEM images using a cubic spline interpolation with Savitzky-Golay filters and weighted least squares error.

    PubMed

    Kiani, M A; Sim, K S; Nia, M E; Tso, C P

    2015-05-01

    A new technique based on cubic spline interpolation with Savitzky-Golay smoothing and a weighted least squares error filter is presented for scanning electron microscope (SEM) images. A diversity of sample images was captured, and the performance is found to be better than that of the moving average and standard median filters with respect to eliminating noise. The technique can be implemented efficiently on real-time SEM images, with all data mandatory for processing obtained from a single image. Noise in images, and particularly in SEM images, is undesirable. A new noise reduction technique, based on cubic spline interpolation with the Savitzky-Golay and weighted least squares error method, is developed. We apply the combined technique to single-image signal-to-noise ratio estimation and noise reduction for the SEM imaging system. This autocorrelation-based technique requires image details to be correlated over a few pixels, whereas the noise is assumed to be uncorrelated from pixel to pixel. The noise component is derived from the difference between the image autocorrelation at zero offset and the estimate of the corresponding original autocorrelation. In the test cases involving different images, the efficiency of the developed noise reduction filter proves to be significantly better than that obtained from the other methods. Noise can be reduced efficiently, with an appropriate choice of scan rate, from real-time SEM images, without generating corruption or increasing scanning time. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.
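The single-image SNR idea described here exploits that white noise contributes only at zero lag of the autocorrelation, while image content stays correlated over a few pixels. A sketch using a simple linear extrapolation of the noise-free autocorrelation back to zero lag (the extrapolation rule is our simplification, not the paper's fitted estimator):

```python
import numpy as np

def estimate_noise_var(img):
    """Autocorrelation-based noise estimate: the sample variance (ACF at
    lag 0) contains signal + noise; lags 1 and 2 contain only signal,
    so extrapolating them back to lag 0 isolates the noise variance."""
    x = img - img.mean()
    acf0 = np.mean(x * x)                     # variance = ACF at lag 0
    acf1 = np.mean(x[:, :-1] * x[:, 1:])      # horizontal lag 1
    acf2 = np.mean(x[:, :-2] * x[:, 2:])      # horizontal lag 2
    # Linear extrapolation of the noise-free ACF to lag 0.
    signal_var = acf1 + (acf1 - acf2)
    return max(acf0 - signal_var, 0.0)
```

On pure white noise the estimate converges to the true noise variance, and on a constant (noise-free) image it is zero, the two sanity checks the method must pass.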

  20. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2009-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  1. Virtual Diagnostic Interface: Aerospace Experimentation in the Synthetic Environment

    NASA Technical Reports Server (NTRS)

    Schwartz, Richard J.; McCrea, Andrew C.

    2010-01-01

    The Virtual Diagnostics Interface (ViDI) methodology combines two-dimensional image processing and three-dimensional computer modeling to provide comprehensive in-situ visualizations commonly utilized for in-depth planning of wind tunnel and flight testing, real time data visualization of experimental data, and unique merging of experimental and computational data sets in both real-time and post-test analysis. The preparation of such visualizations encompasses the realm of interactive three-dimensional environments, traditional and state of the art image processing techniques, database management and development of toolsets with user friendly graphical user interfaces. ViDI has been under development at the NASA Langley Research Center for over 15 years, and has a long track record of providing unique and insightful solutions to a wide variety of experimental testing techniques and validation of computational simulations. This report will address the various aspects of ViDI and how it has been applied to test programs as varied as NASCAR race car testing in NASA wind tunnels to real-time operations concerning Space Shuttle aerodynamic flight testing. In addition, future trends and applications will be outlined in the paper.

  2. Development of a real-time, high-frequency ultrasound digital beamformer for high-frequency linear array transducers.

    PubMed

    Hu, Chang-Hong; Xu, Xiao-Chen; Cannata, Jonathan M; Yen, Jesse T; Shung, K Kirk

    2006-02-01

    A real-time digital beamformer for high-frequency (>20 MHz) linear ultrasonic arrays has been developed. The system can handle up to 64-element linear array transducers, exciting 16 channels and receiving simultaneously at a 100 MHz sampling frequency with 8-bit precision. Radio frequency (RF) signals are digitized, delayed, and summed through a real-time digital beamformer implemented using a field-programmable gate array (FPGA). Using fractional delay filters, fine delays as small as 2 ns can be implemented. A frame rate of 30 frames per second is achieved. Wire phantom (20 μm tungsten) images were obtained and -6 dB axial and lateral widths were measured. The results showed that a 30 MHz, 48-element array with a pitch of 100 μm produced a -6 dB width of 68 μm in the axial and 370 μm in the lateral direction at 6.4 mm range. Images from an excised rabbit eye sample were also acquired, and fine anatomical structures, such as the cornea and lens, were resolved.
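The digitize-delay-sum pipeline of such a beamformer can be sketched in a few lines: compute each element's extra path length to the focus, advance its RF trace by the corresponding delay, and average across channels. This software sketch uses linear interpolation for the fractional delays where the FPGA uses fractional-delay FIR filters; array geometry and sampling numbers are illustrative.

```python
import numpy as np

C = 1540.0  # speed of sound in tissue (m/s)

def das_beamform(rf, fs, pitch, focus_depth):
    """Delay-and-sum for a linear array focused on-axis at focus_depth.
    rf: (n_channels, n_samples) RF traces sampled at fs. Each channel
    is advanced by its extra round-trip delay, then channels are
    averaged so a coherent echo sums to full amplitude."""
    n_ch, n_samp = rf.shape
    xs = (np.arange(n_ch) - (n_ch - 1) / 2) * pitch   # element positions
    t = np.arange(n_samp) / fs
    delays = (np.sqrt(xs ** 2 + focus_depth ** 2) - focus_depth) / C
    out = np.zeros(n_samp)
    for ch in range(n_ch):
        # Fractional delay via linear interpolation of the sampled trace.
        out += np.interp(t + delays[ch], t, rf[ch])
    return out / n_ch
```

Simulating a point echo whose arrival on each channel is staggered by exactly these geometric delays, the beamformed output peaks at the echo time with near-unit amplitude, confirming coherent summation.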

  3. A Microplate Reader-Based System for Visualizing Transcriptional Activity During in vivo Microbial Interactions in Space and Time.

    PubMed

    Hennessy, Rosanna C; Stougaard, Peter; Olsson, Stefan

    2017-03-21

    Here, we report the development of a microplate reader-based system for visualizing gene expression dynamics in living bacterial cells in response to a fungus, in space and in real time. A bacterium expressing the red fluorescent protein mCherry fused to the promoter region of the regulator gene nunF, indicating activation of an antifungal secondary metabolite gene cluster, was used as a reporter system. Time-lapse image recordings of the reporter red signal and a green signal from fluorescent metabolites, combined with microbial growth measurements, showed that nunF-regulated gene transcription is switched on when the bacterium enters the deceleration growth phase and upon physical encounter with fungal hyphae. This novel technique enables real-time live imaging of samples by time-series multi-channel automatic recordings, using a microplate reader as both incubator and image recorder, and is of general use to researchers. The technique can aid in deciding when to destructively sample for other methods, e.g., transcriptomics and mass spectrometry imaging, to study gene expression and metabolites exchanged during the interaction.

  4. In-vivo gingival sulcus imaging using full-range, complex-conjugate-free, endoscopic spectral domain optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Huang, Yong; Zhang, Kang; Yi, WonJin; Kang, Jin U.

    2012-01-01

    Frequent monitoring of the gingival sulcus provides valuable information for judging the presence and severity of periodontal disease. Optical coherence tomography, as a high-resolution, high-speed 3D imaging modality, is able to provide information on pocket depth, gum contour, gum texture, and gum recession simultaneously. A handheld, forward-viewing miniature resonant fiber-scanning probe was developed for in-vivo gingival sulcus imaging. The fiber cantilever, driven by magnetic force, vibrates at its resonant frequency. A synchronized linear phase modulation was applied in the reference arm by the galvanometer-driven reference mirror. Full-range, complex-conjugate-free, real-time endoscopic SD-OCT was achieved by accelerating the data processing using a graphics processing unit. Preliminary results showed real-time in-vivo imaging at 33 fps with an imaging range of 2 mm laterally by 3 mm in depth. The gap between the tooth and gum area was clearly visualized. Further quantitative analysis of the gingival sulcus will be performed on the acquired images.

  5. Real-time hyperspectral fluorescence imaging of pancreatic β-cell dynamics with the image mapping spectrometer

    PubMed Central

    Elliott, Amicia D.; Gao, Liang; Ustione, Alessandro; Bedard, Noah; Kester, Robert; Piston, David W.; Tkaczyk, Tomasz S.

    2012-01-01

    Summary The development of multi-colored fluorescent proteins, nanocrystals and organic fluorophores, along with the resulting engineered biosensors, has revolutionized the study of protein localization and dynamics in living cells. Hyperspectral imaging has proven to be a useful approach for such studies, but this technique is often limited by low signal and insufficient temporal resolution. Here, we present an implementation of a snapshot hyperspectral imaging device, the image mapping spectrometer (IMS), which acquires full spectral information simultaneously from each pixel in the field without scanning. The IMS is capable of real-time signal capture from multiple fluorophores with high collection efficiency (∼65%) and image acquisition rate (up to 7.2 fps). To demonstrate the capabilities of the IMS in cellular applications, we have combined fluorescent protein (FP)-FRET and [Ca2+]i biosensors to measure simultaneously intracellular cAMP and [Ca2+]i signaling in pancreatic β-cells. Additionally, we have compared quantitatively the IMS detection efficiency with a laser-scanning confocal microscope. PMID:22854044

  6. Real-time intravascular photoacoustic-ultrasound imaging of lipid-laden plaque at speed of video-rate level

    NASA Astrophysics Data System (ADS)

    Hui, Jie; Cao, Yingchun; Zhang, Yi; Kole, Ayeeshik; Wang, Pu; Yu, Guangli; Eakins, Gregory; Sturek, Michael; Chen, Weibiao; Cheng, Ji-Xin

    2017-03-01

    Intravascular photoacoustic-ultrasound (IVPA-US) imaging is an emerging hybrid modality for the detection of lipid-laden plaques, providing simultaneous morphological and lipid-specific chemical information of an artery wall. The clinical utility of IVPA-US technology requires real-time imaging and display at video rate. Here, we demonstrate a compact and portable IVPA-US system capable of imaging at up to 25 frames per second in real-time display mode. This unprecedented imaging speed was achieved by concurrent innovations in the excitation laser source, rotary joint assembly, 1 mm IVPA-US catheter, differentiated A-line strategy, and real-time image processing and display algorithms. By imaging pulsatile motion at different imaging speeds, 16 frames per second was deemed adequate to suppress motion artifacts from cardiac pulsation for in vivo applications. Our lateral resolution results further verified the number of A-lines used for cross-sectional IVPA image reconstruction. The translational capability of this system for the detection of lipid-laden plaques was validated by ex vivo imaging of an atherosclerotic human coronary artery at 16 frames per second, which showed strong correlation to gold-standard histopathology.
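The frame-rate budget of a rotational photoacoustic catheter is simple arithmetic: one laser pulse yields one A-line, so frames per second equals pulse repetition frequency divided by A-lines per cross-sectional image. A sketch with illustrative numbers (the actual PRF and A-line count of this system are not stated here and are assumptions):

```python
def frames_per_second(prf_hz, a_lines_per_frame):
    """Rotational-catheter frame rate: each laser pulse produces one
    photoacoustic A-line, so fps = PRF / A-lines per revolution."""
    return prf_hz / a_lines_per_frame

# Illustrative: a 4 kHz pulse repetition rate with 250 A-lines per
# revolution would yield the 16 fps the authors found adequate to
# suppress cardiac-pulsation motion artifacts.
fps = frames_per_second(4000, 250)
```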

  7. Passive lighting responsive three-dimensional integral imaging

    NASA Astrophysics Data System (ADS)

    Lou, Yimin; Hu, Juanmei

    2017-11-01

    A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independent of the diffused illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting responsive II system with a 0.4 mm thickness was fabricated. This technique provides additional degrees of freedom in designing II systems and enables the virtual 3D image to interact with the real illumination environment in real time.

  8. The ultrasound brain helmet: feasibility study of multiple simultaneous 3D scans of cerebral vasculature.

    PubMed

    Smith, Stephen W; Ivancevich, Nikolas M; Lindsey, Brooks D; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A; Laskowitz, Daniel T

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time three-dimensional (3D) scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging (Durham, NC, USA) real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64 degrees pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128 degrees sector, two simultaneous parasagittal images merged into a 128 degrees x 64 degrees C-mode plane and a simultaneous 64 degrees axial image. Real-time 3D color Doppler scans from a skull phantom with latex blood vessel were obtained after contrast agent injection as a proof of concept. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  9. A Real Time System for Multi-Sensor Image Analysis through Pyramidal Segmentation

    DTIC Science & Technology

    1992-01-30

    A real-time system for multi-sensor image analysis through pyramidal segmentation. L. Rudin, S. Osher, G. Koepfler, J.-M. Morel. Experiments with reconnaissance photography, multi-sensor satellite imagery, and medical CT and MRI multi-band data have shown great practical potential.

  10. Real-time implementations of image segmentation algorithms on shared memory multicore architecture: a survey (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Akil, Mohamed

    2017-05-01

    Real-time processing is becoming increasingly important in many image processing applications. Image segmentation is one of the most fundamental tasks in image analysis, and many different approaches for it have been proposed. The watershed transform is a well-known image segmentation tool, and a very data-intensive task. To accelerate watershed algorithms and obtain real-time processing, parallel architectures and programming models for multicore computing have been developed. This paper surveys approaches for parallel implementation of sequential watershed algorithms on multicore general-purpose CPUs: homogeneous multicore processors with shared memory. To achieve an efficient parallel implementation, it is necessary to explore different strategies (parallelization/distribution/distributed scheduling) combined with different acceleration and optimization techniques to enhance parallelism. In this paper, we compare various parallelizations of sequential watershed algorithms on shared-memory multicore architectures. We analyze the performance measurements of each parallel implementation and the impact of the different sources of overhead on its performance. In this comparison study, we also discuss the advantages and disadvantages of the parallel programming models, comparing OpenMP (an application programming interface for multiprocessing) with Pthreads (POSIX Threads) to illustrate the impact of each parallel programming model on the performance of the parallel implementations.
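The shared-memory strategies surveyed here typically decompose the image into strips or tiles processed by separate threads. A Python analogue of that decomposition is sketched below (the surveyed systems use OpenMP/Pthreads in C; the per-strip kernel here is a trivial threshold stand-in, since a real watershed must also merge catchment-basin labels across strip borders):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_strips(img, kernel, n_workers=4):
    """Domain decomposition: split the image into horizontal strips,
    run the per-strip kernel in parallel, then stitch the results.
    Cross-border label merging, required by a real watershed, is
    deliberately omitted from this sketch."""
    strips = np.array_split(img, n_workers, axis=0)
    with ThreadPoolExecutor(max_workers=n_workers) as ex:
        return np.vstack(list(ex.map(kernel, strips)))

def threshold(strip):
    """Stand-in per-strip kernel: border-free, so the parallel result
    matches the serial one exactly."""
    return (strip > 0.5).astype(np.uint8)
```

Because the stand-in kernel is pixel-local, the parallel output equals the serial output, which is exactly the correctness condition the border-merging step must restore for watershed flooding.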

  11. Telerobotic system concept for real-time soft-tissue imaging during radiotherapy beam delivery.

    PubMed

    Schlosser, Jeffrey; Salisbury, Kenneth; Hristov, Dimitre

    2010-12-01

    The curative potential of external beam radiation therapy is critically dependent on having the ability to accurately aim radiation beams at intended targets while avoiding surrounding healthy tissues. However, existing technologies are incapable of real-time, volumetric, soft-tissue imaging during radiation beam delivery, when accurate target tracking is most critical. The authors address this challenge in the development and evaluation of a novel, minimally interfering, telerobotic ultrasound (U.S.) imaging system that can be integrated with existing medical linear accelerators (LINACs) for therapy guidance. A customized human-safe robotic manipulator was designed and built to control the pressure and pitch of an abdominal U.S. transducer while avoiding LINAC gantry collisions. A haptic device was integrated to remotely control the robotic manipulator motion and U.S. image acquisition outside the LINAC room. The ability of the system to continuously maintain high quality prostate images was evaluated in volunteers over extended time periods. Treatment feasibility was assessed by comparing a clinically deployed prostate treatment plan to an alternative plan in which beam directions were restricted to sectors that did not interfere with the transabdominal U.S. transducer. To demonstrate imaging capability concurrent with delivery, robot performance and U.S. target tracking in a phantom were tested with a 15 MV radiation beam active. Remote image acquisition and maintenance of image quality with the haptic interface was successfully demonstrated over 10 min periods in representative treatment setups of volunteers. Furthermore, the robot's ability to maintain a constant probe force and desired pitch angle was unaffected by the LINAC beam. For a representative prostate patient, the dose-volume histogram (DVH) for a plan with restricted sectors remained virtually identical to the DVH of a clinically deployed plan. 
With reduced margins, as would be enabled by real-time imaging, gross tumor volume coverage was identical while notable reductions of bladder and rectal volumes exposed to large doses were possible. The quality of U.S. images obtained during beam operation was not appreciably degraded by radiofrequency interference and 2D tracking of a phantom object in U.S. images obtained with the beam on/off yielded no significant differences. Remotely controlled robotic U.S. imaging is feasible in the radiotherapy environment and for the first time may offer real-time volumetric soft-tissue guidance concurrent with radiotherapy delivery.

  12. Simultaneous mapping of pan and sentinel lymph nodes for real-time image-guided surgery.

    PubMed

    Ashitate, Yoshitomo; Hyun, Hoon; Kim, Soon Hee; Lee, Jeong Heon; Henary, Maged; Frangioni, John V; Choi, Hak Soo

    2014-01-01

    The resection of regional lymph nodes in the basin of a primary tumor is of paramount importance in surgical oncology. Although sentinel lymph node mapping is now the standard of care in breast cancer and melanoma, over 20% of patients require a completion lymphadenectomy. Yet, there is currently no technology available that can image all lymph nodes in the body in real time, or assess both the sentinel node and all nodes simultaneously. In this study, we report an optical fluorescence technology that is capable of simultaneous mapping of pan lymph nodes (PLNs) and sentinel lymph nodes (SLNs) in the same subject. We developed near-infrared fluorophores, which have fluorescence emission maxima either at 700 nm or at 800 nm. One was injected intravenously for identification of all regional lymph nodes in a basin, and the other was injected locally for identification of the SLN. Using the dual-channel FLARE intraoperative imaging system, we could identify and resect all PLNs and SLNs simultaneously. The technology we describe enables simultaneous, real-time visualization of both PLNs and SLNs in the same subject.

  13. Assessment of cardiac time intervals using high temporal resolution real-time spiral phase contrast with UNFOLDed-SENSE.

    PubMed

    Kowalik, Grzegorz T; Knight, Daniel S; Steeden, Jennifer A; Tann, Oliver; Odille, Freddy; Atkinson, David; Taylor, Andrew; Muthurangu, Vivek

    2015-02-01

To develop a real-time phase contrast MR sequence with high enough temporal resolution to assess cardiac time intervals. The sequence utilized spiral trajectories with an acquisition strategy that allowed a combination of temporal encoding (Unaliasing by Fourier-encoding the overlaps using the temporal dimension; UNFOLD) and parallel imaging (Sensitivity encoding; SENSE) to be used (UNFOLDed-SENSE). An in silico experiment was performed to determine the optimum UNFOLD filter. In vitro experiments were carried out to validate the accuracy of time-interval calculation and peak mean velocity quantification. In addition, 15 healthy volunteers were imaged with the new sequence, and cardiac time intervals were compared to reference-standard Doppler echocardiography measures. For comparison, in silico, in vitro, and in vivo experiments were also carried out using sliding window reconstructions. The in vitro experiments demonstrated good agreement between real-time spiral UNFOLDed-SENSE phase contrast MR and the reference-standard measurements of velocity and time intervals. The protocol was successfully performed in all volunteers. Subsequent measurement of time intervals produced values in keeping with literature values and good agreement with gold-standard echocardiography. Importantly, the proposed UNFOLDed-SENSE sequence outperformed the sliding window reconstructions. Cardiac time intervals can be successfully assessed with UNFOLDed-SENSE real-time spiral phase contrast. Real-time MR assessment of cardiac time intervals may be beneficial in assessment of patients with cardiac conditions such as diastolic dysfunction. © 2014 Wiley Periodicals, Inc.
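The UNFOLD component of this approach can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): interleaved undersampling modulates aliased energy to the temporal Nyquist frequency, so filtering each pixel's time series around Nyquist suppresses the aliasing while preserving the slower true signal dynamics.

```python
import numpy as np

def unfold_filter(pixel_ts, width=0.2):
    """Suppress aliased energy near the temporal Nyquist frequency.

    In UNFOLD, interleaved undersampling shifts aliasing to the band
    edges of the temporal spectrum; removing that band recovers the
    slowly varying true signal. The filter here is illustrative, not
    the optimized filter determined by the paper's in silico study.
    """
    spec = np.fft.fftshift(np.fft.fft(pixel_ts))
    n = len(spec)
    band = max(1, int(width * n / 2))
    win = np.ones(n)
    win[:band] = 0.0    # band at the -Nyquist edge of the shifted spectrum
    win[-band:] = 0.0   # band at the +Nyquist edge
    return np.real(np.fft.ifft(np.fft.ifftshift(spec * win)))
```

Applied to a pixel time series containing a slow oscillation plus a Nyquist-rate alternation, the filter returns essentially the slow component alone.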

  14. On the possibility of producing true real-time retinal cross-sectional images using a graphics processing unit enhanced master-slave optical coherence tomography system.

    PubMed

    Bradu, Adrian; Kapinchev, Konstantin; Barnes, Frederick; Podoleanu, Adrian

    2015-07-01

In a previous report, we demonstrated master-slave optical coherence tomography (MS-OCT), an OCT method that does not need resampling of data and can be used to deliver en face images from several depths simultaneously. In a separate report, we have also demonstrated MS-OCT's capability of producing cross-sectional images of a quality similar to those provided by the traditional Fourier domain (FD) OCT technique, but at a much slower rate. Here, we demonstrate that by taking advantage of the parallel processing capabilities offered by the MS-OCT method, cross-sectional OCT images of the human retina can be produced in real time. We analyze the conditions that ensure a true real-time B-scan imaging operation and demonstrate in vivo real-time images of the human fovea and optic nerve, with resolution and sensitivity comparable to those produced using the traditional FD-based method, but without the need for data resampling.

  15. Detection of grapes in natural environment using HOG features in low resolution images

    NASA Astrophysics Data System (ADS)

    Škrabánek, Pavel; Majerík, Filip

    2017-07-01

Detection of grapes in real-life images has importance in various viticulture applications. A grape detector based on an SVM classifier, in combination with a HOG descriptor, has proven very efficient in the detection of white varieties in high-resolution images. However, its high computational cost made real-time application impractical, even when a detector with a simplified structure was used. We therefore examined applying the simplified version to images of lower resolutions. For this purpose, we designed a method to search for the detector setting that gives the best trade-off between time complexity and detection performance. To provide precise evaluation results, we formed new extended datasets. We discovered that, even when applied to low-resolution images, the simplified detector, with an appropriate setting of all tuneable parameters, was competitive with other state-of-the-art solutions. We concluded that the detector is qualified for real-time detection of grapes in real-life images.
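The HOG side of such a detector can be sketched in a few lines (a simplified illustration with assumed parameters, not the authors' tuned configuration): gradients are binned into per-cell orientation histograms, and the concatenated histograms form the feature vector an SVM would then classify in a sliding window.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG sketch: per-cell histograms of gradient orientations,
    weighted by gradient magnitude (block normalization omitted for brevity)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientations
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    f = np.concatenate(feats)
    n = np.linalg.norm(f)
    return f / n if n > 0 else f
```

For a 32×32 patch with 8×8 cells and 9 bins this yields a 144-dimensional unit-norm descriptor; lowering the image resolution shrinks the number of windows to scan, which is the source of the speed-up the paper investigates.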

  16. Grayscale image segmentation for real-time traffic sign recognition: the hardware point of view

    NASA Astrophysics Data System (ADS)

    Cao, Tam P.; Deng, Guang; Elton, Darrell

    2009-02-01

In this paper, we study several grayscale-based image segmentation methods for real-time road sign recognition applications on an FPGA hardware platform. The performance of different image segmentation algorithms under different lighting conditions is initially compared using PC simulation. Based on these results and analysis, suitable algorithms are implemented and tested on a real-time FPGA speed sign detection system. Experimental results show that the system using segmented images uses significantly fewer hardware resources on the FPGA while maintaining comparable detection performance. The system is capable of processing 60 live video frames per second.
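As a generic example of the kind of low-cost grayscale segmentation such a pipeline might compare (a sketch in Python, not the FPGA implementation), Otsu's method selects the global threshold that maximizes between-class variance:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: pick the grey level maximizing between-class variance,
    a common cheap segmentation step amenable to hardware pipelines."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    levels = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability up to each level
    mu = np.cumsum(p * levels)        # class-0 cumulative mean
    mu_t = mu[-1]                     # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    sigma_b = np.nan_to_num(sigma_b)  # degenerate splits score zero
    return levels[np.argmax(sigma_b)]
```

On a bimodal image (e.g., dark road vs. bright sign pixels), the returned threshold falls between the two modes; an FPGA version would typically compute the histogram in a streaming pass.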

  17. A fiducial detection algorithm for real-time image guided IMRT based on simultaneous MV and kV imaging

    PubMed Central

    Mao, Weihua; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Xing, Lei

    2008-01-01

The advantage of highly conformal dose techniques such as 3DCRT and IMRT is limited by intrafraction organ motion. A new approach to gain near real-time 3D positions of internally implanted fiducial markers is to analyze simultaneous onboard kV beam and treatment MV beam images (from fluoroscopic or electronic portal image devices). Before we can use this real-time image guidance for clinical 3DCRT and IMRT treatments, four outstanding issues need to be addressed. (1) How will fiducial motion blur the image and hinder fiducial tracking? kV and MV images are acquired while the tumor is moving at various speeds. We find that a fiducial can be successfully detected at a maximum linear speed of 1.6 cm/s. (2) How does MV beam scattering affect kV imaging? We investigate this by varying MV field size and kV source-to-imager distance, and find that common treatment MV beams do not hinder fiducial detection in simultaneous kV images. (3) How can one detect fiducials on images from 3DCRT and IMRT treatment beams when the MV fields are modified by a multileaf collimator (MLC)? The presented analysis is capable of segmenting an MV field from the blocking MLC and detecting visible fiducials. This enables the calculation of nearly real-time 3D positions of markers during a real treatment. (4) Is the analysis fast enough to track fiducials in nearly real time? Multiple methods are adopted to predict marker positions and reduce search regions. The average detection time per frame for three markers in a 1024×768 image was reduced to 0.1 s or less. Solving these four issues paves the way to tracking moving fiducial markers throughout a 3DCRT or IMRT treatment. Altogether, these four studies demonstrate that our algorithm can track fiducials in real time, on degraded kV images (MV scatter), in rapidly moving tumors (fiducial blurring), and even provide useful information in the case when some fiducials are blocked from view by the MLC.
This technique can provide a gating signal or be used for intra-fractional tumor tracking on a Linac equipped with a kV imaging system. Any motion exceeding a preset threshold can warn the therapist to suspend a treatment session and reposition the patient. PMID:18777916
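The predict-and-restrict strategy in point (4) can be sketched generically (hypothetical minimal code, not the authors' algorithm): given a marker position predicted from previous frames, only a small window of candidate positions is scored, here with a zero-mean template correlation.

```python
import numpy as np

def detect_marker(frame, template, center, search=10):
    """Find a marker near a predicted position.

    Scores only positions inside a small search window around `center`
    (the prediction), keeping per-frame cost low enough for near
    real-time tracking. Zero-mean correlation is used as a simple,
    illustrative match score.
    """
    th, tw = template.shape
    t0 = template - template.mean()
    cy, cx = center
    best, best_pos = -np.inf, center
    for y in range(cy - search, cy + search + 1):
        for x in range(cx - search, cx + search + 1):
            if y < 0 or x < 0:
                continue
            patch = frame[y:y + th, x:x + tw]
            if patch.shape != template.shape:
                continue
            score = np.sum((patch - patch.mean()) * t0)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Restricting the correlation to a small window keeps the per-frame cost roughly constant regardless of image size, which is the kind of saving that makes detection times on the order of 0.1 s per frame achievable.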

  18. Three-dimensional online surface reconstruction of augmented fluorescence lifetime maps using photometric stereo (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Unger, Jakob; Lagarto, Joao; Phipps, Jennifer; Ma, Dinglong; Bec, Julien; Sorger, Jonathan; Farwell, Gregory; Bold, Richard; Marcu, Laura

    2017-02-01

Multi-Spectral Time-Resolved Fluorescence Spectroscopy (ms-TRFS) can provide label-free real-time feedback on tissue composition and pathology during surgical procedures by resolving the fluorescence decay dynamics of the tissue. Recently, an ms-TRFS system has been developed in our group, allowing for either point-spectroscopy fluorescence lifetime measurements or dynamic raster tissue scanning by merging a 450 nm aiming beam with the pulsed fluorescence excitation light in a single collection fiber. To facilitate an augmented real-time display of fluorescence decay parameters, the lifetime values are back-projected onto the white light video. The goal of this study is to develop a 3D real-time surface reconstruction aiming for a comprehensive visualization of the decay parameters and providing enhanced navigation for the surgeon. Using a stereo camera setup, we combine image feature matching and aiming-beam stereo segmentation to establish a 3D surface model of the decay parameters. After camera calibration, texture-related features are extracted from both camera images and matched, providing a rough estimation of the surface. During raster scanning, the rough estimation is successively refined in real time by tracking the aiming beam positions using an advanced segmentation algorithm. The method is evaluated on excised breast tissue specimens, showing high precision and running in real time at approximately 20 frames per second. The proposed method shows promising potential for intraoperative navigation, e.g., tumor margin assessment. Furthermore, it provides the basis for registering the fluorescence lifetime maps to the tissue surface, adapting to possible tissue deformations.

  19. Single DMD time-multiplexed 64-views autostereoscopic 3D display

    NASA Astrophysics Data System (ADS)

    Loreti, Luigi

    2013-03-01

Based on the previous prototype of the real-time 3D holographic display developed last year, we developed a new concept of auto-stereoscopic multiview display (64 views): a wide-angle (90°), 3D, full-color display. The display is based on an RGB laser light source illuminating a DMD (Discovery 4100 0.7") at 24,000 fps and an image deflection system made with an AOD (acousto-optic deflector) driven by a piezo-electric transducer, which generates a variable standing acoustic wave on the crystal that acts as a phase grating. The DMD projects in fast sequence 64 points of view of the image onto the crystal cube. Depending on the frequency of the standing wave, the input picture sent by the DMD is deflected into a different angle of view. A holographic screen at a proper distance diffuses the rays in the vertical direction (60°) and horizontally selects (1°) only the rays directed to the observer. A telescope optical system then enlarges the image to the right dimension. A VHDL firmware to render in real time (16 ms) 64 views (16 bit 4:2:2) of a CAD model (obj, dxf or 3Ds) and depth-map-encoded video images was developed in the resident Virtex5 FPGA of the Discovery 4100 SDK, thus eliminating the need for image transfer and high-speed links.

  20. The ultrasound brain helmet: early human feasibility study of multiple simultaneous 3D scans of cerebral vasculature

    NASA Astrophysics Data System (ADS)

    Lindsey, Brooks D.; Ivancevich, Nikolas M.; Whitman, John; Light, Edward; Fronheiser, Matthew; Nicoletto, Heather A.; Laskowitz, Daniel T.; Smith, Stephen W.

    2009-02-01

    We describe early stage experiments to test the feasibility of an ultrasound brain helmet to produce multiple simultaneous real-time 3D scans of the cerebral vasculature from temporal and suboccipital acoustic windows of the skull. The transducer hardware and software of the Volumetrics Medical Imaging real-time 3D scanner were modified to support dual 2.5 MHz matrix arrays of 256 transmit elements and 128 receive elements which produce two simultaneous 64° pyramidal scans. The real-time display format consists of two coronal B-mode images merged into a 128° sector, two simultaneous parasagittal images merged into a 128° × 64° C-mode plane, and a simultaneous 64° axial image. Real-time 3D color Doppler images acquired in initial clinical studies after contrast injection demonstrate flow in several representative blood vessels. An offline Doppler rendering of data from two transducers simultaneously scanning via the temporal windows provides an early visualization of the flow in vessels on both sides of the brain. The long-term goal is to produce real-time 3D ultrasound images of the cerebral vasculature from a portable unit capable of internet transmission, thus enabling interactive 3D imaging, remote diagnosis and earlier therapeutic intervention. We are motivated by the urgency for rapid diagnosis of stroke due to the short time window of effective therapeutic intervention.

  1. Reusable Client-Side JavaScript Modules for Immersive Web-Based Real-Time Collaborative Neuroimage Visualization

    PubMed Central

    Bernal-Rusiel, Jorge L.; Rannou, Nicolas; Gollub, Randy L.; Pieper, Steve; Murphy, Shawn; Robertson, Richard; Grant, Patricia E.; Pienaar, Rudolph

    2017-01-01

In this paper we present a web-based software solution to the problem of implementing real-time collaborative neuroimage visualization. In both clinical and research settings, simple and powerful access to imaging technologies across multiple devices is becoming increasingly useful. Prior technical solutions have used a server-side rendering and push-to-client model wherein only the server has the full image dataset. We propose a rich client solution in which each client has all the data and uses the Google Drive Realtime API for state synchronization. We have developed a small set of reusable client-side object-oriented JavaScript modules that make use of the XTK toolkit, a popular open-source JavaScript library also developed by our team, for the in-browser rendering and visualization of brain image volumes. Efficient real-time communication among the remote instances is achieved by using just a small JSON object, comprising a representation of the XTK image renderers' state, as the Google Drive Realtime collaborative data model. The developed open-source JavaScript modules have already been instantiated in a web-app called MedView, a distributed collaborative neuroimage visualization application that is delivered to the users over the web without requiring the installation of any extra software or browser plugin. This responsive application allows multiple physically distant physicians or researchers to cooperate in real time to reach a diagnosis or scientific conclusion. It also serves as a proof of concept for the capabilities of the presented technological solution. PMID:28507515
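The core idea, synchronizing only a compact renderer-state model rather than the image data itself, is language-agnostic; the Python sketch below illustrates it with hypothetical field names (the actual system is JavaScript and uses the Google Drive Realtime API and XTK renderer state).

```python
import json

# Illustrative view-state fields; the real XTK renderer state differs.
SYNC_KEYS = ("volume_id", "slice_index", "camera", "window_level")

def encode_state(renderer_state):
    # Share only the small state needed to reproduce the view remotely;
    # every client already holds the full image data locally.
    return json.dumps({k: renderer_state[k] for k in SYNC_KEYS
                       if k in renderer_state})

def apply_remote_state(local_state, payload):
    # Merge a collaborator's state update into the local renderer state.
    local_state.update(json.loads(payload))
    return local_state
```

Because the payload is a small JSON object rather than a volume, updates can propagate at interactive rates even over modest connections.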

  2. PMMW Camera TRP. Phase 1

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Passive millimeter wave (PMMW) sensors have the ability to see through fog, clouds, dust and sandstorms and thus have the potential to support all-weather operations, both military and commercial. Many of the applications, such as military transport or commercial aircraft landing, are technologically stressing in that they require imaging of a scene with a large field of view in real time and with high spatial resolution. The development of a low cost PMMW focal plane array camera is essential to obtain real-time video images to fulfill the above needs. The overall objective of this multi-year project (Phase 1) was to develop and demonstrate the capabilities of a W-band PMMW camera with a microwave/millimeter wave monolithic integrated circuit (MMIC) focal plane array (FPA) that can be manufactured at low cost for both military and commercial applications. This overall objective was met in July 1997 when the first video images from the camera were generated of an outdoor scene. In addition, our consortium partner McDonnell Douglas was to develop a real-time passive millimeter wave flight simulator to permit pilot evaluation of a PMMW-equipped aircraft in a landing scenario. A working version of this simulator was completed. This work was carried out under the DARPA-funded PMMW Camera Technology Reinvestment Project (TRP), also known as the PMMW Camera DARPA Joint Dual-Use Project. In this final report for the Phase 1 activities, a year by year description of what the specific objectives were, the approaches taken, and the progress made is presented, followed by a description of the validation and imaging test results obtained in 1997.

  3. The development of confocal arthroscopy as optical histology for rotator cuff tendinopathy.

    PubMed

    Wu, J-P; Walton, M; Wang, A; Anderson, P; Wang, T; Kirk, T B; Zheng, M H

    2015-09-01

MRI, ultrasound and video arthroscopy are traditional imaging technologies for the noninvasive or minimally invasive assessment of rotator cuff tendon pathology. However, these imaging modalities do not have sufficient resolution to demonstrate the pathology of rotator cuff tendons at a microstructural level and are therefore insensitive to low-level tendon disease. Although traditional histology can be used to analyze the physiology of rotator cuff tendons, it requires a biopsy that traumatizes the rotator cuff, potentially compromising the mechanical properties of the tendons. Moreover, it cannot provide real-time histological information. Confocal endoscopy offers a way to assess microstructural disorder in tissues without biopsy. However, the application of this useful technique to detecting low-level tendon disease has been restricted by the need for a clinical-grade fluorescent contrast agent to acquire high-resolution microstructural images of tendons. In this study, using a clinical-grade sodium fluorescein contrast agent, we report the development of confocal arthroscopy for optical histological assessment without biopsy. The confocal arthroscopic technique was able to demonstrate rotator cuff tendinopathy in human cadavers that appeared macroscopically normal under video arthroscopic examination. The tendinopathy status of the rotator cuff tendons was confirmed by corresponding traditional histology. The development of confocal arthroscopy may provide a minimally invasive imaging technique for real-time histology of the rotator cuff without the need for tissue biopsy. This technique has the potential to give surgeons real-time histological information on rotator cuff tendons, which may assist in planning repair strategies and potentially improve intervention outcomes. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  4. Ultrasound - Breast

    MedlinePlus

    ... the patient. Because ultrasound images are captured in real-time, they can show the structure and movement of ... perform an ultrasound-guided biopsy . Because ultrasound provides real-time images, it is often used to guide biopsy ...

  5. The Effects of Real-Time Interactive Multimedia Teleradiology System

    PubMed Central

    Al-Safadi, Lilac

    2016-01-01

    This study describes the design of a real-time interactive multimedia teleradiology system and assesses how the system is used by referring physicians in point-of-care situations and supports or hinders aspects of physician-radiologist interaction. We developed a real-time multimedia teleradiology management system that automates the transfer of images and radiologists' reports and surveyed physicians to triangulate the findings and to verify the realism and results of the experiment. The web-based survey was delivered to 150 physicians from a range of specialties. The survey was completed by 72% of physicians. Data showed a correlation between rich interactivity, satisfaction, and effectiveness. The results of our experiments suggest that real-time multimedia teleradiology systems are valued by referring physicians and may have the potential for enhancing their practice and improving patient care and highlight the critical role of multimedia technologies to provide real-time multimode interactivity in current medical care. PMID:27294118

  6. A bright future for bioluminescent imaging in viral research

    PubMed Central

    Coleman, Stewart M; McGregor, Alistair

    2015-01-01

Bioluminescence imaging (BLI) has emerged as a powerful tool in the study of animal models of viral disease. BLI enables real-time in vivo study of viral infection, host immune response and the efficacy of intervention strategies. A substrate-dependent, light-emitting luciferase enzyme, incorporated into a virus as a reporter gene, enables detection of bioluminescence from infected cells using sensitive charge-coupled device (CCD) camera systems. Advantages of BLI include low background, real-time tracking of infection in the same animal and a reduction in the number of animals required. Transgenic luciferase-tagged mice enable the use of pre-existing nontagged viruses in BLI studies. Continued development of luciferase reporter genes, substrates, transgenic animals and imaging systems will greatly enhance future BLI strategies in viral research. PMID:26413138

  7. Characterization techniques for incorporating backgrounds into DIRSIG

    NASA Astrophysics Data System (ADS)

    Brown, Scott D.; Schott, John R.

    2000-07-01

The appearance of operational hyperspectral imaging spectrometers in both the solar and thermal regions has led to the development of a variety of spectral detection algorithms. The development and testing of these algorithms require well-characterized field collection campaigns that can be time- and cost-prohibitive. Radiometrically robust synthetic image generation (SIG) environments that can generate appropriate images under a variety of atmospheric conditions and with a variety of sensors offer an excellent supplement to reduce the scope of expensive field collections. In addition, SIG image products provide the algorithm developer with per-pixel truth, allowing for improved characterization of algorithm performance. To meet the needs of the algorithm development community, the image modeling community needs to supply synthetic image products that contain all the spatial and spectral variability present in real-world scenes, and that provide the large-area coverage typically acquired with actual sensors. This places a heavy burden on synthetic scene builders to construct well-characterized scenes that span large areas. Several SIG models have demonstrated the ability to accurately model targets (vehicles, buildings, etc.) using well-constructed target geometry (from CAD packages) and robust thermal and radiometry models. However, background objects (vegetation, infrastructure, etc.) dominate the percentage of real-world scene pixels, and applying target-building techniques to them is time- and resource-prohibitive. This paper discusses new methods that have been integrated into the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model to characterize backgrounds. The new suite of scene construct types allows the user to incorporate both terrain and surface properties to obtain wide-area coverage.
The terrain can be incorporated using a triangular irregular network (TIN) derived from elevation data or digital elevation model (DEM) data from actual sensors, temperature maps, spectral reflectance cubes (possibly derived from actual sensors), and/or material and mixture maps. Descriptions and examples of each new technique are presented, as well as hybrid methods that demonstrate target embedding in real-world imagery.
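For the regular-grid case, deriving a TIN from DEM data can be sketched very simply (an illustrative construction, not DIRSIG's actual terrain pipeline): each grid cell is split into two triangles over the DEM heights.

```python
import numpy as np

def dem_to_tin(dem, spacing=1.0):
    """Triangulate a regular-grid DEM into a TIN.

    Each grid cell is split into two triangles, a standard regular-grid
    triangulation. `spacing` is the ground distance between DEM posts.
    Returns (vertices as Nx3 [x, y, z], triangles as Mx3 vertex indices).
    """
    h, w = dem.shape
    ys, xs = np.mgrid[0:h, 0:w]
    verts = np.column_stack(
        [xs.ravel() * spacing, ys.ravel() * spacing, dem.ravel()])
    tris = []
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x
            tris.append((i, i + 1, i + w))          # upper-left triangle
            tris.append((i + 1, i + w + 1, i + w))  # lower-right triangle
    return verts, np.array(tris)
```

A 3×3 DEM yields 9 vertices and 8 triangles; for scattered (non-grid) elevation points a Delaunay triangulation would be used instead.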

  8. Design and assessment of compact optical systems towards special effects imaging

    NASA Astrophysics Data System (ADS)

    Shaoulov, Vesselin Iossifov

A main challenge in the field of special effects is to create effects in real time, in a way that lets the user preview the effect before taking the actual picture or movie sequence. Many techniques are currently used to create computer-simulated special effects; however, current techniques in computer graphics do not allow real-time texture synthesis. Thus, while computer graphics is a powerful tool in the field of special effects, it is neither portable nor capable of working in real time. Real-time special effects may, however, be created optically. Such an approach provides not only real-time image processing at the speed of light but also a preview option, allowing the user or the artist to preview the effect on various parts of the object in order to optimize the outcome. The work presented in this dissertation was inspired by the idea of optically created special effects, such as painterly effects, encoded in images captured by photographic or motion picture cameras. As part of the presented work, compact relay optics was assessed and developed, and a working prototype was built. It was concluded that even though compact relay optics can be achieved, a further push for compactness and cost-effectiveness was impossible in the paradigm of bulk macro-optics systems. Thus, a paradigm for imaging with multi-aperture micro-optics was proposed and demonstrated for the first time, which constitutes one of the key contributions of this work. This new paradigm was further extended to the most general case of magnifying multi-aperture micro-optical systems. Such a paradigm allows an extreme reduction in the size of the imaging optics, by a factor of about 10, and a reduction in weight by a factor of about 500.
Furthermore, an experimental quantification of the feasibility of optically created special effects was completed, and consequently raytracing software was developed, which was later commercialized by SmARTLens(TM). While the art forms created via raytracing were powerful, they did not predict all effects acquired experimentally. Thus, finally, as a key contribution of this work, the principles of scalar diffraction theory were applied to optical imaging of extended objects under quasi-monochromatic incoherent illumination in order to more accurately model the proposed optical imaging process for special effects obtained in hardware. The existing theoretical framework was generalized to non-paraxial in- and out-of-focus imaging, and results were obtained to verify the generalized framework. In the generalized non-paraxial framework, even the most complex linear systems, without any assumptions of shift invariance, can be modeled and analyzed.
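In the paraxial, shift-invariant special case (the dissertation's framework generalizes beyond it), quasi-monochromatic incoherent imaging reduces to convolving the object intensity with the intensity PSF |h|², where h is the Fourier transform of the pupil function. A minimal sketch under those assumptions:

```python
import numpy as np

def incoherent_image(obj_intensity, pupil):
    """Paraxial incoherent imaging sketch.

    The amplitude PSF h is the (inverse) Fourier transform of the pupil;
    incoherent systems are linear in intensity, so the image is the
    object intensity convolved with the intensity PSF |h|^2. The pupil
    is assumed centered in its array; FFT convolution implies periodic
    boundary conditions.
    """
    h = np.fft.ifft2(np.fft.ifftshift(pupil))
    psf = np.abs(h) ** 2
    psf /= psf.sum()   # normalize so total intensity is conserved
    return np.real(np.fft.ifft2(np.fft.fft2(obj_intensity) * np.fft.fft2(psf)))
```

Shrinking the pupil radius broadens the PSF and blurs the image, which is the basic mechanism by which aperture design can encode an optical "effect" into the captured picture.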

  9. Real time quantitative phase microscopy based on single-shot transport of intensity equation (ssTIE) method

    NASA Astrophysics Data System (ADS)

    Yu, Wei; Tian, Xiaolin; He, Xiaoliang; Song, Xiaojun; Xue, Liang; Liu, Cheng; Wang, Shouyu

    2016-08-01

Microscopy based on the transport of intensity equation provides quantitative phase distributions, opening another perspective for cellular observation. However, it requires multi-focal image capture, and the mechanical or electrical scanning involved limits its real-time capability in sample detection. Here, in order to break through this restriction, real-time quantitative phase microscopy based on the single-shot transport of intensity equation (ssTIE) method is proposed. A programmed phase mask is designed to realize simultaneous multi-focal image recording without any scanning; thus, phase distributions can be quantitatively retrieved in real time. It is believed the proposed method can be potentially applied in various biological and medical applications, especially live cell imaging.
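Once the two defocused images are captured in a single shot, the phase retrieval itself is a standard TIE inversion. Under a uniform-intensity approximation, the TIE ∂I/∂z = -(1/k)∇·(I∇φ) reduces to ∂I/∂z = -(I₀/k)∇²φ with k = 2π/λ, a Poisson equation solvable with FFTs. The sketch below is a generic inversion of this kind, not the authors' exact algorithm.

```python
import numpy as np

def tie_phase(I_minus, I_plus, dz, wavelength, I0=1.0):
    """Recover phase from two defocused intensity images via the TIE.

    Uniform-intensity approximation: dI/dz = -(I0/k) * laplacian(phi),
    inverted as a Poisson equation with FFTs (periodic boundaries).
    The recovered phase is defined up to an additive constant.
    """
    k = 2 * np.pi / wavelength
    dIdz = (I_plus - I_minus) / (2 * dz)       # axial intensity derivative
    h, w = dIdz.shape
    FY, FX = np.meshgrid(np.fft.fftfreq(h), np.fft.fftfreq(w), indexing="ij")
    lap = -4 * np.pi**2 * (FX**2 + FY**2)      # Fourier symbol of laplacian
    lap[0, 0] = 1.0   # avoid division by zero at DC; offset is arbitrary
    phi = np.real(np.fft.ifft2(np.fft.fft2(-k * dIdz / I0) / lap))
    return phi - phi.mean()
```

In the ssTIE setting, the two defocused images come from a single camera frame split by the phase mask, so this inversion can run at video rate.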

  10. Vision-based overlay of a virtual object into real scene for designing room interior

    NASA Astrophysics Data System (ADS)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, an interior simulator, in which a virtual (CG) object can be overlaid onto a real-world scene. The interior simulator is developed as an example AR application of the proposed method. Using it, users can visually simulate the placement of virtual furniture and articles in a living room, so that they can easily design the room interior without placing real furniture and articles, viewing the scene from many different locations and orientations in real time. In our system, two base images of the real scene are captured from two different views to define a projective coordinate frame for the 3D space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real scene is captured by a hand-held camera while tracking non-metric feature points for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position-tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room at nearly video rate (20 frames per second).
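The projective relationship between views that underpins this kind of registration can be illustrated with the standard direct linear transform (DLT) for a homography; the sketch below is a generic textbook implementation, not the paper's full tracking pipeline.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points.

    Direct Linear Transform: each correspondence contributes two rows to
    a homogeneous system A h = 0; the SVD's smallest right singular
    vector gives H up to scale. Needs >= 4 correspondences.
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply a homography to a 2D point (homogeneous normalization)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Given tracked feature correspondences between a base image and the current frame, the estimated H lets a virtual object's reference points be warped into each new frame without any external position tracker.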

  11. All Sky Imager Network for Science and Education

    NASA Astrophysics Data System (ADS)

    Bhatt, A.; Kendall, E. A.; Zalles, D. R.; Baumgardner, J. L.; Marshall, R. A.; Kaltenbacher, E.

    2012-12-01

A new all-sky imager network for space weather monitoring and education outreach has been developed by SRI International. The goal of this program is to install sensitive, low-light all-sky imagers across the continental United States to observe upper-atmospheric airglow and aurora in near real time. While the aurora borealis is often associated with high latitudes, during intense geomagnetic storms it can extend well into continental United States latitudes. Observing auroral processes is instrumental in understanding space weather, especially at a time of increasing societal dependence on space-based technologies. Under the THEMIS satellite program, Canada has installed a network of all-sky imagers across the country to monitor aurora in real time. However, no comparable effort exists in the United States. Knowledge of the aurora and airglow across the entire United States in near real time would allow scientists to quickly assess the impact of a geomagnetic storm in concert with data from GPS networks, ionosondes, radars, and magnetometers. What makes this effort unique is that we intend to deploy these imagers at high schools across the country. Selected high schools will necessarily be in rural areas, as the instrument requires dark night skies. At the commencement of the school year, we plan to give an introductory seminar on space weather at each of these schools. Science nuggets developed by SRI International in collaboration with the Center for GeoSpace Studies and the Center for Technology in Learning will be available for high school teachers to use during their science classes. Teachers can use these nuggets as desired within their own curricula. We intend to develop a comprehensive web-based interface that will be available for students and the scientific community alike to observe data across the network in near real time and to guide students toward complementary space weather data sets.
This interface will show the real-time extent of auroral precipitation. The all-sky imager package is designed to be a low-budget, self-contained scientific instrument; the schools need only provide power and an internet connection. The external package is an insulated, heat-controlled box roughly 2'x2'x1' in dimension. Inside, an astronomy-grade monochromatic camera is coupled with telecentric optics and a narrowband filter designed for the wavelength of the airglow or auroral phenomenon of interest. Thus far, a prototype instrument has been installed at Pescadero High School in Pescadero, CA, after testing and calibration at the McDonald Observatory in Texas. A science seminar was delivered, and science nuggets are being tested in an introductory science class as well as an upper-level astronomy course. This poster will show all of the above-mentioned aspects of this project.

  12. Development of a viability standard curve for microencapsulated probiotic bacteria using confocal microscopy and image analysis software.

    PubMed

    Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R

    2015-07-01

    Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. Copyright © 2015 Elsevier B.V. All rights reserved.
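The counting-based idea behind such a viability standard curve can be sketched in Python (hypothetical data; the study's actual dyes, software, and calibration values are not given in the abstract):

```python
import numpy as np

def viability_from_counts(green, red):
    """Percent viable = live (green-stained) cells over total stained cells."""
    return 100.0 * green / (green + red)

def fit_standard_curve(image_viability, plate_count_viability):
    """Least-squares line relating image-derived viability to plate counts."""
    slope, intercept = np.polyfit(image_viability, plate_count_viability, 1)
    return slope, intercept

# Hypothetical calibration points (image-derived vs. plate-count viability, %)
img = np.array([10.0, 30.0, 55.0, 80.0, 95.0])
plate = np.array([12.0, 33.0, 57.0, 78.0, 96.0])
m, b = fit_standard_curve(img, plate)
print(round(viability_from_counts(90, 10), 1))  # 90.0
```

A slope near 1 on such data would indicate the image counts track the reference method closely.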

  13. Carotid Ultrasound Imaging

    MedlinePlus

    ... the patient. Because ultrasound images are captured in real-time, they can show the structure and movement of ... by a computer, which in turn creates a real-time picture on the monitor. One or more frames ...

  14. Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process

    NASA Astrophysics Data System (ADS)

    Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.

    2015-02-01

    This paper presents the development of methods for real-time fine-tuning of a high power laser welding process of thick steel by using a compact smart camera system. When performing welding in butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab by using image data captured with adaptive integration time. The simulations are performed in a hardware oriented way to allow real-time implementation of the algorithms on the smart camera system.
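The temporal-windowing filter described for rejecting unrealistic frame-to-frame variations in the Hough line candidates can be sketched as follows (a minimal illustration; the window size and thresholds are assumptions, not values from the paper):

```python
from collections import deque
import math

class SeamTracker:
    """Reject frame-to-frame jumps in detected seam-line parameters by
    comparing each new (rho, theta) Hough candidate against the median
    of a short temporal window of accepted candidates."""

    def __init__(self, window=5, max_drho=10.0, max_dtheta=math.radians(5)):
        self.hist = deque(maxlen=window)
        self.max_drho = max_drho
        self.max_dtheta = max_dtheta

    def update(self, rho, theta):
        if self.hist:
            med_rho = sorted(h[0] for h in self.hist)[len(self.hist) // 2]
            med_theta = sorted(h[1] for h in self.hist)[len(self.hist) // 2]
            # Outlier: keep the smoothed track instead of the jump
            if (abs(rho - med_rho) > self.max_drho or
                    abs(theta - med_theta) > self.max_dtheta):
                return med_rho, med_theta
        self.hist.append((rho, theta))
        return rho, theta
```

A candidate that jumps far from the recent median is discarded and the tracked value is held, which mimics the paper's suppression of unrealistic tracking variations.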

  15. In situ real-time imaging of self-sorted supramolecular nanofibres

    NASA Astrophysics Data System (ADS)

    Onogi, Shoji; Shigemitsu, Hajime; Yoshii, Tatsuyuki; Tanida, Tatsuya; Ikeda, Masato; Kubota, Ryou; Hamachi, Itaru

    2016-08-01

    Self-sorted supramolecular nanofibres—a multicomponent system that consists of several types of fibre, each composed of distinct building units—play a crucial role in complex, well-organized systems with sophisticated functions, such as living cells. Designing and controlling self-sorting events in synthetic materials and understanding their structures and dynamics in detail are important elements in developing functional artificial systems. Here, we describe the in situ real-time imaging of self-sorted supramolecular nanofibre hydrogels consisting of a peptide gelator and an amphiphilic phosphate. The use of appropriate fluorescent probes enabled the visualization of self-sorted fibres entangled in two and three dimensions through confocal laser scanning microscopy and super-resolution imaging, with 80 nm resolution. In situ time-lapse imaging showed that the two types of fibre have different formation rates and that their respective physicochemical properties remain intact in the gel. Moreover, we directly visualized stochastic non-synchronous fibre formation and observed a cooperative mechanism.

  16. DDGIPS: a general image processing system in robot vision

    NASA Astrophysics Data System (ADS)

    Tian, Yuan; Ying, Jun; Ye, Xiuqing; Gu, Weikang

    2000-10-01

    Real-time image processing is the key task in robot vision. Owing to past hardware limitations, many algorithm-oriented firmware systems were designed, but their architectures were not flexible enough to serve as multi-algorithm development systems. With the rapid development of microelectronics, many high-performance DSP chips and high-density FPGA chips have become available, making it possible to construct a more flexible real-time image processing architecture. In this paper, a Double DSP General Image Processing System (DDGIPS) is presented. We construct a two-DSP, FPGA-based computational system with two TMS320C6201s. The TMS320C6x devices are fixed-point processors based on an advanced VLIW CPU with eight functional units, including two multipliers and six arithmetic logic units. These features make the C6x a good candidate for a general-purpose system. In our system, each of the two TMS320C6201s has a local memory space, and they also share a system memory space that enables them to intercommunicate and exchange data efficiently. At the same time, they can be directly interconnected in a star-shaped architecture. All of this is under the control of an FPGA group. As the core of the system, the FPGA plays a very important role: it takes charge of DSP control, DSP communication, memory space access arbitration, and communication between the system and the host machine. By reconfiguring the FPGA, all of the interconnections between the two DSPs, or between DSP and FPGA, can be changed. In this way, users can easily rebuild the real-time image processing system according to the data stream and the task of the application, gaining great flexibility.

  18. Toward real-time tumor margin identification in image-guided robotic brain tumor resection

    NASA Astrophysics Data System (ADS)

    Hu, Danying; Jiang, Yang; Belykh, Evgenii; Gong, Yuanzheng; Preul, Mark C.; Hannaford, Blake; Seibel, Eric J.

    2017-03-01

    For patients with malignant brain tumors (glioblastomas), safe maximal resection of the tumor is critical for an increased survival rate. However, complete resection of the cancer is hard to achieve due to the invasive nature of these tumors, whose margins blur from frank tumor into more normal-appearing brain tissue that single cells or clusters of malignant cells may nevertheless have invaded. Recent developments in fluorescence imaging techniques have shown great potential for improved surgical outcomes by providing surgeons with intraoperative, contrast-enhanced visualization of the tumor in neurosurgery. Current near-infrared (NIR) fluorophores, such as indocyanine green (ICG), cyanine5.5 (Cy5.5), and 5-aminolevulinic acid (5-ALA)-induced protoporphyrin IX (PpIX), are showing clinical potential for targeting and guiding resections of such tumors. Real-time tumor margin identification in NIR imaging could benefit both surgeons and patients by reducing the operating time and space required by other imaging modalities such as intraoperative MRI, and has the potential to integrate with robotically assisted surgery. In this paper, a segmentation method based on the Chan-Vese model was developed for identifying the tumor boundaries in an ex-vivo mouse brain from relatively noisy fluorescence images acquired by a multimodal scanning fiber endoscope (mmSFE). Tumor contours were obtained iteratively by minimizing an energy function formed by a level set function and the segmentation model. Quantitative segmentation metrics based on the tumor-to-background (T/B) ratio were evaluated. The results demonstrated the feasibility of detecting brain tumor margins at quasi-real-time rates, with the potential to yield improved-precision brain tumor resection techniques or even robotic interventions in the future.
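The region-competition core of the Chan-Vese model the authors build on can be sketched in a few lines (this keeps only the data terms and omits the level-set curvature regularization of the full model):

```python
import numpy as np

def chan_vese_regions(img, n_iter=50):
    """Minimal two-phase Chan-Vese-style segmentation keeping only the
    region data terms: pixels are assigned to whichever region mean
    (c1 inside, c2 outside) they are closer to, and the means are
    re-estimated until the partition stops changing."""
    mask = img > img.mean()  # crude initial contour interior
    for _ in range(n_iter):
        c1 = img[mask].mean() if mask.any() else 0.0
        c2 = img[~mask].mean() if (~mask).any() else 0.0
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return mask

# Synthetic noisy fluorescence-like image: bright square on dark background
rng = np.random.default_rng(0)
im = rng.normal(0.2, 0.05, (32, 32))
im[8:24, 8:24] += 0.6
seg = chan_vese_regions(im)
```

The full model adds a length penalty on the contour, which is what gives it robustness to the noise the abstract mentions.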

  19. WE-AB-303-06: Combining DAO with MV + KV Optimization to Improve Skin Dose Sparing with Real-Time Fluoroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grelewicz, Z; Wiersma, R

    Purpose: Real-time fluoroscopy may allow for improved patient positioning and tumor tracking, particularly in the treatment of lung tumors. In order to mitigate the effects of the imaging dose, previous studies have demonstrated the effect of including both imaging dose and imaging constraints in the inverse treatment planning objective function. That method of combined MV+kV optimization may result in plans whose treatment beams allow for more gentle imaging beam-on times. Direct-aperture optimization (DAO) is also known to produce treatment plans with fluence maps more conducive to lower beam-on times. Therefore, in this work we demonstrate the feasibility of a combination of DAO and MV+kV optimization for further optimized real-time kV imaging. Methods: Therapeutic and imaging beams were modeled in the EGSnrc Monte Carlo environment and applied to a patient model for a previously treated lung patient to provide dose influence matrices from DOSXYZnrc. An MV+kV IMRT DAO treatment planning system was developed to compare DAO treatment plans with and without MV+kV optimization. The objective function was optimized using simulated annealing. In order to allow for comparisons between different cases of the stochastically optimized plans, the optimization was repeated twenty times. Results: Across twenty optimizations, combined MV+kV IMRT resulted in an average 12.8% reduction in peak skin dose. Both non-optimized and MV+kV optimized imaging beams delivered, on average, a mean dose of approximately 1 cGy per fraction to the target, with peak doses to target of approximately 6 cGy per fraction. Conclusion: When using DAO, MV+kV optimization is shown to result in improvements to plan quality in terms of skin dose, compared to MV optimization with non-optimized kV imaging. The combination of DAO and MV+kV optimization may allow for real-time imaging without excessive imaging dose.
    Financial support for the work has been provided in part by NIH Grant T32 EB002103, ACS RSG-13-313-01-CCE, and NIH S10 RR021039 and P30 CA14599 grants. The contents of this submission do not necessarily represent the official views of any of the supporting organizations.
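The simulated-annealing optimization named in the Methods follows the standard accept/reject scheme, sketched here on a toy one-dimensional objective (the actual planning objective involves the dose influence matrices and is far larger):

```python
import math
import random

def simulated_annealing(objective, state, neighbor, t0=1.0, cooling=0.95,
                        steps=500):
    """Generic simulated annealing: always accept improvements, accept
    worse candidates with probability exp(-dE/T), and cool T each step."""
    random.seed(1)  # deterministic for the example
    best = cur = state
    best_e = cur_e = objective(cur)
    t = t0
    for _ in range(steps):
        cand = neighbor(cur)
        e = objective(cand)
        if e < cur_e or random.random() < math.exp(-(e - cur_e) / t):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand, e
        t *= cooling
    return best, best_e

# Toy objective: quadratic with minimum at x = 3
obj = lambda x: (x - 3.0) ** 2
nb = lambda x: x + random.uniform(-0.5, 0.5)
x, e = simulated_annealing(obj, 0.0, nb)
```

Because the search is stochastic, repeating the optimization (as the authors did twenty times) is the natural way to compare plan variants.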

  20. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    NASA Astrophysics Data System (ADS)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they promise a much more time-efficient method of developing imagery under different environmental conditions and of investigating the dynamic aspect of military operations that is currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation and evaluating how equivalent synthetic imagery is to real photographs.
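Of the metrics listed, the RSS metric is commonly written as the root sum of squares of the target-background differences in mean and standard deviation; a sketch under that assumption (the paper's exact definitions of the Δμ family may differ):

```python
import numpy as np

def rss_metric(target, background):
    """One common form of the RSS visible-signature metric:
    RSS = sqrt((mu_t - mu_b)^2 + (sigma_t - sigma_b)^2),
    computed over target and local-background pixel intensities."""
    dmu = target.mean() - background.mean()
    dsig = target.std() - background.std()
    return float(np.hypot(dmu, dsig))

t = np.full(100, 0.8)   # uniform bright target pixels
b = np.full(100, 0.3)   # uniform darker background pixels
print(rss_metric(t, b))  # 0.5
```

Comparing such metric values between a synthetic scene and a photograph of the same scene is the kind of equivalence test the abstract describes.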

  1. Integrated test system of infrared and laser data based on USB 3.0

    NASA Astrophysics Data System (ADS)

    Fu, Hui Quan; Tang, Lin Bo; Zhang, Chao; Zhao, Bao Jun; Li, Mao Wen

    2017-07-01

    Based on USB 3.0, this paper presents the design of an integrated test system for an infrared image and laser signal data processing module. The core of the design is FPGA logic control: the design uses dual-chip DDR3 SDRAM for high-speed laser data caching, receives parallel LVDS image data through a serial-to-parallel conversion chip, and achieves high-speed data communication between the system and the host computer over the USB 3.0 bus. The experimental results show that the developed PC software realizes real-time display of the original 14-bit LVDS image after 14-to-8 bit conversion, of the JPEG2000-compressed image after software decompression, and of the acquired laser signal data. The correctness of the test system design is thereby verified, indicating that the interface link works normally.
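The 14-to-8 bit conversion mentioned for display can be as simple as discarding the six least significant bits; a sketch of that mapping (the paper's actual conversion method is not specified):

```python
import numpy as np

def bits14_to_8(img14):
    """Linear 14-to-8 bit reduction for display: right-shift by 6,
    i.e. drop the six least significant bits of each pixel."""
    return (np.asarray(img14, dtype=np.uint16) >> 6).astype(np.uint8)

raw = np.array([0, 8191, 16383], dtype=np.uint16)  # min, mid, max of 14-bit range
print(bits14_to_8(raw))  # [  0 127 255]
```

Real systems often use histogram-based scaling instead of a fixed shift to preserve contrast in dim scenes.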

  2. Advances in real-time magnetic resonance imaging of the vocal tract for speech science and technology research.

    PubMed

    Toutios, Asterios; Narayanan, Shrikanth S

    2016-01-01

    Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development.

  3. Advances in real-time magnetic resonance imaging of the vocal tract for speech science and technology research

    PubMed Central

    TOUTIOS, ASTERIOS; NARAYANAN, SHRIKANTH S.

    2016-01-01

    Real-time magnetic resonance imaging (rtMRI) of the moving vocal tract during running speech production is an important emerging tool for speech production research providing dynamic information of a speaker's upper airway from the entire mid-sagittal plane or any other scan plane of interest. There have been several advances in the development of speech rtMRI and corresponding analysis tools, and their application to domains such as phonetics and phonological theory, articulatory modeling, and speaker characterization. An important recent development has been the open release of a database that includes speech rtMRI data from five male and five female speakers of American English each producing 460 phonetically balanced sentences. The purpose of the present paper is to give an overview and outlook of the advances in rtMRI as a tool for speech research and technology development. PMID:27833745

  4. Digital Image Processing Overview For Helmet Mounted Displays

    NASA Astrophysics Data System (ADS)

    Parise, Michael J.

    1989-09-01

    Digital image processing provides a means to manipulate an image and presents a user with a variety of display formats that are not available in the analog image processing environment. When performed in real time and presented on a Helmet Mounted Display, system capability and flexibility are greatly enhanced. The information content of a display can be increased by the addition of real time insets and static windows from secondary sensor sources, near real time 3-D imaging from a single sensor can be achieved, graphical information can be added, and enhancement techniques can be employed. Such increased functionality is generating a considerable amount of interest in the military and commercial markets. This paper discusses some of these image processing techniques and their applications.

  5. Diffraction-limited real-time terahertz imaging by optical frequency up-conversion in a DAST crystal.

    PubMed

    Fan, Shuzhen; Qi, Feng; Notake, Takashi; Nawata, Kouji; Takida, Yuma; Matsukawa, Takeshi; Minamide, Hiroaki

    2015-03-23

    Real-time terahertz (THz) wave imaging has wide applications in areas such as security, industry, biology, medicine, pharmacy, and the arts. This report describes real-time room-temperature THz imaging by nonlinear optical frequency up-conversion in an organic 4-dimethylamino-N'-methyl-4'-stilbazolium tosylate (DAST) crystal, with high resolution reaching the diffraction limit. THz-wave images were converted to the near infrared region and then captured using an InGaAs camera in a tandem imaging system. The resolution of the imaging system was analyzed. Diffraction and interference of THz wave were observed in the experiments. Videos are supplied to show the interference pattern variation that occurs with sample moving and tilting.
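The diffraction limit referenced here follows from the Rayleigh criterion; a quick calculation for a 1 THz wave (the numerical aperture value is illustrative, not from the paper):

```python
def rayleigh_limit(wavelength_m, numerical_aperture):
    """Rayleigh criterion for the diffraction-limited resolution of an
    imaging system: d = 0.61 * lambda / NA."""
    return 0.61 * wavelength_m / numerical_aperture

c = 299_792_458.0          # speed of light, m/s
lam = c / 1e12             # 1 THz -> wavelength ~ 300 micrometres
d = rayleigh_limit(lam, 0.5)  # ~0.37 mm for an assumed NA of 0.5
```

The sub-millimetre result shows why THz imaging resolution is fundamentally coarser than that of the near-infrared camera used for detection after up-conversion.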

  6. In vivo imaging of human photoreceptor mosaic with wavefront sensorless adaptive optics optical coherence tomography.

    PubMed

    Wong, Kevin S K; Jian, Yifan; Cua, Michelle; Bonora, Stefano; Zawadzki, Robert J; Sarunic, Marinko V

    2015-02-01

    Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel imaging technique for in vivo high-resolution depth-resolved imaging that mitigates some of the challenges encountered with the use of sensor-based adaptive optics designs. This technique replaces the Hartmann Shack wavefront sensor used to measure aberrations with a depth-resolved image-driven optimization algorithm, with the metric based on the OCT volumes acquired in real-time. The custom-built ultrahigh-speed GPU processing platform and fast modal optimization algorithm presented in this paper was essential in enabling real-time, in vivo imaging of human retinas with wavefront sensorless AO correction. WSAO-OCT is especially advantageous for developing a clinical high-resolution retinal imaging system as it enables the use of a compact, low-cost and robust lens-based adaptive optics design. In this report, we describe our WSAO-OCT system for imaging the human photoreceptor mosaic in vivo. We validated our system performance by imaging the retina at several eccentricities, and demonstrated the improvement in photoreceptor visibility with WSAO compensation.
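The image-driven optimization that replaces the wavefront sensor can be illustrated with a coordinate-wise hill climb over modal coefficients (a deliberately simplified stand-in for the paper's fast modal algorithm; the toy metric and aberration vector are assumptions):

```python
import numpy as np

def sensorless_ao(metric, n_modes=3, perturb=0.1, iters=20):
    """Wavefront-sensorless AO in outline: perturb one modal coefficient
    at a time and keep the change whenever the image-quality metric
    improves (coordinate-wise hill climbing)."""
    coeffs = np.zeros(n_modes)
    best = metric(coeffs)
    for _ in range(iters):
        for i in range(n_modes):
            for step in (+perturb, -perturb):
                trial = coeffs.copy()
                trial[i] += step
                m = metric(trial)
                if m > best:
                    coeffs, best = trial, m
    return coeffs, best

# Toy metric: image is sharpest when the correction matches a
# hypothetical aberration vector [0.3, -0.2, 0.1]
true_ab = np.array([0.3, -0.2, 0.1])
metric = lambda c: -float(np.sum((c - true_ab) ** 2))
c, m = sensorless_ao(metric)
```

In the real system the metric is computed from OCT volumes acquired in real time, which is why the GPU processing platform is essential.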

  7. Near Real-Time Image Reconstruction

    NASA Astrophysics Data System (ADS)

    Denker, C.; Yang, G.; Wang, H.

    2001-08-01

    In recent years, post-facto image-processing algorithms have been developed to achieve diffraction-limited observations of the solar surface. We present a combination of frame selection, speckle-masking imaging, and parallel computing which provides real-time, diffraction-limited, 256×256 pixel images at a 1-minute cadence. Our approach to achieve diffraction-limited observations is complementary to adaptive optics (AO). At the moment, AO is limited by the fact that it corrects wavefront aberrations only for a field of view comparable to the isoplanatic patch. This limitation does not apply to speckle-masking imaging. However, speckle-masking imaging relies on short-exposure images, which limits its spectroscopic applications. The parallel processing of the data is performed on a Beowulf-class computer which utilizes off-the-shelf, mass-market technologies to provide high computational performance for scientific calculations and applications at low cost. Beowulf computers have great potential, not only for image reconstruction, but for any kind of complex data reduction. Immediate access to high-level data products and direct visualization of dynamic processes on the Sun are two of the advantages to be gained.
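The frame-selection step can be sketched as ranking short-exposure frames by a sharpness proxy and keeping the best few (the gradient-based proxy here is an assumption; speckle masking itself is a far more involved bispectrum reconstruction):

```python
import numpy as np

def select_sharpest(frames, keep=10):
    """Frame selection ('lucky imaging'): rank short-exposure frames by a
    sharpness proxy, here the RMS image gradient, and keep the best."""
    def sharpness(f):
        gy, gx = np.gradient(f.astype(float))
        return float(np.sqrt(np.mean(gx ** 2 + gy ** 2)))
    ranked = sorted(frames, key=sharpness, reverse=True)
    return ranked[:keep]
```

Selecting the least-blurred frames before reconstruction is what makes the 1-minute cadence achievable with modest parallel hardware.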

  8. Transvaginal photoacoustic imaging probe and system based on a multiport fiber-optic beamsplitter and a real time imager for ovarian cancer detection

    NASA Astrophysics Data System (ADS)

    Kumavor, Patrick D.; Alqasemi, Umar; Tavakoli, Behnoosh; Li, Hai; Yang, Yi; Zhu, Quing

    2013-03-01

    This paper presents a real-time transvaginal photoacoustic imaging probe for imaging human ovaries in vivo. The probe consists of a high-throughput (up to 80%) 1 x 19 fiber-optic beamsplitter, a commercial array ultrasound transducer, and a fiber protective sheath. The beamsplitter has a 940-micron core diameter input fiber and 36 output fibers of 240-micron core diameter. The 36 small-core output fibers surround the ultrasound transducer and deliver light to the tissue during imaging. A protective sheath, modeled on the form of the transducer using a 3-D printer, encloses the transducer and the array of fibers. A real-time image acquisition system collects and processes the photoacoustic RF signals from the transducer and displays the formed images on a monitor in real time. Additionally, the system is capable of coregistered pulse-echo ultrasound imaging. In this way, we obtain both morphological and functional information from the ovarian tissue. Photoacoustic images of malignant human ovaries taken ex vivo with the probe revealed vascular networks that were distinguishable from those of normal ovaries, making the probe potentially useful for characterizing ovarian tissue.

  9. Magnetic resonance-guided prostate interventions.

    PubMed

    Haker, Steven J; Mulkern, Robert V; Roebuck, Joseph R; Barnes, Agnieska Szot; Dimaio, Simon; Hata, Nobuhiko; Tempany, Clare M C

    2005-10-01

    We review our experience using an open 0.5-T magnetic resonance (MR) interventional unit to guide procedures in the prostate. This system allows access to the patient and real-time MR imaging simultaneously and has made it possible to perform prostate biopsy and brachytherapy under MR guidance. We review MR imaging of the prostate and its use in targeted therapy, and describe our use of image processing methods such as image registration to further facilitate precise targeting. We describe current developments with a robot assist system being developed to aid radioactive seed placement.

  10. A multimedia electronic patient record (ePR) system for image-assisted minimally invasive spinal surgery.

    PubMed

    Documet, Jorge; Le, Anh; Liu, Brent; Chiu, John; Huang, H K

    2010-05-01

    This paper presents the concept of bridging the gap between diagnostic images and image-assisted surgical treatment through the development of a one-stop multimedia electronic patient record (ePR) system that manages and distributes the real-time multimodality imaging and informatics data that assist the surgeon during all clinical phases of the operation, from planning, through the intra-operative (Intra-Op) phase, to post-care follow-up. We present the concept of this multimedia ePR for surgery by first focusing on image-assisted minimally invasive spinal surgery as a clinical application. The three clinical phases of the minimally invasive spinal surgery workflow, Pre-Op, Intra-Op, and Post-Op, are discussed. The ePR architecture was developed based on this three-phase workflow and includes the Pre-Op, Intra-Op, and Post-Op modules and four components: the input integration unit, the fault-tolerant gateway server, the fault-tolerant ePR server, and the visualization and display. A prototype was built and deployed to a minimally invasive spinal surgery clinical site with user training and support for daily use. A step-by-step approach was introduced to develop a multimedia ePR system for image-assisted minimally invasive spinal surgery that includes images, clinical forms, waveforms, and textual data for planning the surgery, two real-time imaging techniques (digital fluoroscopy, DF, and endoscopic video, Endo), and more than half a dozen live vital signs of the patient during surgery. Clinical implementation experiences and challenges are also discussed.

  11. The iQID Camera: An Ionizing-Radiation Quantum Imaging Detector

    DOE PAGES

    Miller, Brian W.; Gregory, Stephanie J.; Fuller, Erin S.; ...

    2014-06-11

    We have developed and tested a novel ionizing-radiation Quantum Imaging Detector (iQID). This scintillation-based detector was originally developed as a high-resolution gamma-ray imager, called BazookaSPECT, for use in single-photon emission computed tomography (SPECT). Recently, we have investigated the detector's response and imaging potential with other forms of ionizing radiation, including alpha, neutron, beta, and fission fragment particles. The detector's response to this broad range of ionizing radiation has prompted its new title. The principle of operation of the iQID camera involves coupling a scintillator to an image intensifier. The scintillation light generated by particle interactions is optically amplified by the intensifier and then re-imaged onto a CCD/CMOS camera sensor. The intensifier provides sufficient optical gain that practically any CCD/CMOS camera can be used to image ionizing radiation. Individual particles are identified, and their spatial position (to sub-pixel accuracy) and energy are estimated on an event-by-event basis in real time using image analysis algorithms on high-performance graphics processing hardware. Distinguishing features of the iQID camera include portability, large active areas, high sensitivity, and high spatial resolution (tens of microns). Although modest, the iQID's energy resolution is sufficient to discriminate between particles. Additionally, spatial features of individual events can be used for particle discrimination. An important iQID imaging application that has recently been developed is single-particle, real-time digital autoradiography. In conclusion, we present the latest results and discuss potential applications.

  12. Preliminary analysis for integration of spot-scanning proton beam therapy and real-time imaging and gating.

    PubMed

    Shimizu, S; Matsuura, T; Umezawa, M; Hiramoto, K; Miyamoto, N; Umegaki, K; Shirato, H

    2014-07-01

    Spot-scanning proton beam therapy (PBT) can create good dose distributions for static targets. However, greater uncertainty exists for tumors that move due to respiration, bowel gas or other internal circumstances within the patient. We developed a real-time tumor-tracking radiation therapy (RTRT) system, introduced in the late 1990s, that uses an X-ray linear accelerator gated to the motion of internal fiducial markers. Relying on more than 10 years of clinical experience and extensive log data, we established a real-time image-gated proton beam therapy system dedicated to spot scanning. Using log data and clinical outcomes derived from the clinical usage of the RTRT system since 1999, we established a library to be used for in-house simulation of tumor targeting and evaluation. Factors considered to be the dominant causes of the interplay effects related to the spot-scanning-dedicated proton therapy system are listed and discussed. Total facility design, synchrotron operation cycle, and gating windows were identified as the important factors causing the interplay effects that contribute to the irradiation time and motion-induced dose error. The fiducial markers that we have developed and used for RTRT in X-ray therapy were suggested to have the capacity to improve dose distribution. Accumulated internal motion data in the RTRT system enable us to improve the operation and function of a spot-scanning proton beam therapy (SSPT) system. A real-time image-gated SSPT system can increase accuracy for treating moving tumors. The system will start clinical service in early 2014. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
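The gating-window concept central to the RTRT system reduces to enabling the beam only while the tracked marker sits within a tolerance of its planned position; a minimal sketch (the 2 mm window and coordinates are illustrative, not values from the paper):

```python
def beam_on(marker_pos_mm, planned_pos_mm=(0.0, 0.0, 0.0), window_mm=2.0):
    """Gated delivery in outline: the beam is enabled only while the
    tracked fiducial marker lies within a per-axis tolerance window
    around its planned position."""
    return all(abs(p - q) <= window_mm
               for p, q in zip(marker_pos_mm, planned_pos_mm))

print(beam_on((0.5, -1.2, 0.3)))   # True
print(beam_on((0.5, -2.5, 0.3)))   # False
```

Narrower windows reduce motion-induced dose error at the cost of longer irradiation time, which is exactly the interplay trade-off the abstract discusses.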

  13. Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery.

    PubMed

    Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F

    2015-04-01

    Brain shift and tissue deformation during surgery for intracranial lesions are the main current limitations of neuro-navigation (NN), which relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulties recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore, real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction has been successfully employed in 42 cases to restore the co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks and the error was < 3 mm in all cases (100 %).
Neuro-navigation using intraoperative US integrated with preoperative MRI is reliable, accurate and user-friendly. Moreover, the adjustments are very helpful in correcting brain shift and tissue distortion. This integrated system allows true real-time feedback during surgery and is less expensive and time-consuming than other intraoperative imaging techniques, offering high precision and orientation. © Georg Thieme Verlag KG Stuttgart · New York.
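    The landmark-based registration step whose error is reported above can be sketched in code. Below is a generic least-squares rigid alignment of paired anatomical landmarks (the classic Kabsch/Procrustes solution), an illustration only, not the algorithm of any commercial navigation system; all function names are hypothetical.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src landmarks onto dst.

    Classic Kabsch/Procrustes solution via SVD of the cross-covariance;
    illustrative only -- commercial fusion-imaging systems do not publish
    their exact registration algorithms.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)           # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if the SVD produces one
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS residual distance of the registered landmarks (the '< 2 mm' figure)."""
    resid = (R @ np.asarray(src, float).T).T + t - np.asarray(dst, float)
    return float(np.sqrt((resid ** 2).sum(axis=1).mean()))
```

In practice the source points would be landmarks picked on the patient's head and the destination points the same landmarks in MRI coordinates.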

  14. Near Real-Time Georeference of Unmanned Aerial Vehicle Images for Post-Earthquake Response

    NASA Astrophysics Data System (ADS)

    Wang, S.; Wang, X.; Dou, A.; Yuan, X.; Ding, L.; Ding, X.

    2018-04-01

    The rapid collection of Unmanned Aerial Vehicle (UAV) remote sensing images plays an important role in quickly reporting disaster information and identifying seriously damaged objects after an earthquake. However, the traditional methods for processing the hundreds of UAV images collected in one flight sortie are image stitching and three-dimensional reconstruction, which take one to several hours and slow disaster response. If images are instead searched manually, selection takes far longer and the selected images carry no spatial reference. Therefore, a near-real-time rapid georeferencing method for UAV remote sensing disaster data is proposed in this paper. The UAV images are georeferenced using the position and attitude data collected by the UAV flight control system, and the georeferenced data are organized by means of the world file format developed by ESRI. Rapid georeferencing software for UAV images was written in C# using the Geospatial Data Abstraction Library (GDAL). The results show that up to one thousand UAV images can be georeferenced within one minute, which meets the demand of rapid disaster response and is of great value in disaster emergency applications.
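    The world-file organization mentioned above is simple enough to sketch: a world file is six plain-text lines defining the affine pixel-to-map transform. The sketch below (Python rather than the paper's C#) writes one for a single nadir UAV frame under a flat-terrain assumption; the parameter names and the sign convention for the heading rotation are illustrative assumptions, and real pipelines also correct for pitch and roll.

```python
import math

def write_world_file(path, center_x, center_y, gsd, heading_deg, width, height):
    """Write an ESRI world file (.jgw/.tfw) for one nadir UAV frame.

    Flat-terrain approximation (an assumption): the affine transform is
    built from the projected camera ground position, the ground sample
    distance gsd (map units per pixel), and the flight heading.
    World-file line order is A, D, B, E, C, F for
    X = A*col + B*row + C,  Y = D*col + E*row + F.
    """
    th = math.radians(heading_deg)
    A = gsd * math.cos(th)
    D = gsd * math.sin(th)
    B = gsd * math.sin(th)
    E = -gsd * math.cos(th)            # negative: row index grows downward
    cc, rc = (width - 1) / 2.0, (height - 1) / 2.0
    C = center_x - A * cc - B * rc     # x of the upper-left pixel centre
    F = center_y - D * cc - E * rc     # y of the upper-left pixel centre
    with open(path, "w") as fh:
        fh.write("\n".join(f"{v:.10f}" for v in (A, D, B, E, C, F)) + "\n")
    return A, D, B, E, C, F
```

Because only six numbers per image are written, thousands of frames can be "georeferenced" this way in seconds, which is the essence of the speed claim in the abstract.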

  15. SU-F-J-54: Towards Real-Time Volumetric Imaging Using the Treatment Beam and KV Beam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, M; Rozario, T; Liu, A

    Purpose: Existing real-time imaging uses dual (orthogonal) kV beam fluoroscopies and may deliver a significant amount of extra radiation to patients, especially in prolonged treatments. In addition, kV projections only provide 2D information, which is insufficient for in vivo dose reconstruction. We propose real-time volumetric imaging using prior knowledge of pre-treatment 4D images and real-time 2D transit data from the treatment beam and kV beam. Methods: The pre-treatment multi-snapshot volumetric images are used to simulate 2D projections of both the treatment beam and the kV beam for each treatment field defined by the control point. During radiation delivery, the transit signals acquired by the electronic portal imaging device (EPID) are processed for every projection and compared with the pre-calculation by cross-correlation for phase matching, and thus 3D snapshot identification, i.e., real-time volumetric imaging. The data processing involves taking logarithmic ratios of EPID signals with respect to the air scan to reduce modeling uncertainties in head scatter fluence and EPID response. Simulated 2D projections are also used to pre-calculate confidence levels in phase matching. Treatment beam projections that have a low confidence level, either in pre-calculation or real-time acquisition, trigger kV beams so that complementary information can be exploited. In case both the treatment beam and kV beam return low confidence in phase matching, a predicted phase based on linear regression is generated. Results: Simulation studies indicated that treatment beams provide sufficient confidence in phase matching for most cases. At times of low confidence from treatment beams, kV imaging provides sufficient confidence in phase matching due to its complementary configuration. 
    Conclusion: The proposed real-time volumetric imaging utilizes the treatment beam and triggers kV beams for complementary information when the treatment beam alone does not provide sufficient confidence for phase matching. This strategy minimizes the use of extra radiation to patients. This project is partially supported by a Varian MRA grant.
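    The log-ratio cross-correlation phase matching described in the Methods could look roughly like the sketch below. It assumes a library of pre-computed transit templates, one per respiratory phase; the confidence rule (accept if the best normalised correlation clears a threshold, otherwise trigger kV imaging) and the threshold value are simplified stand-ins for the paper's pre-calculated confidence levels.

```python
import numpy as np

def phase_match(epid_frame, air_scan, templates, conf_threshold=0.9):
    """Identify the respiratory phase of one EPID transit frame.

    Sketch of the strategy in the abstract: log-ratio normalisation against
    the air scan, zero-mean normalised cross-correlation against each
    pre-computed phase template, and a confidence test whose failure would
    trigger the complementary kV beam. The threshold is hypothetical.
    """
    air = np.asarray(air_scan, float)
    sig = np.log(air / np.asarray(epid_frame, float)).ravel()
    sig -= sig.mean()
    scores = []
    for tmpl in templates:
        t = np.log(air / np.asarray(tmpl, float)).ravel()
        t -= t.mean()
        denom = np.linalg.norm(sig) * np.linalg.norm(t)
        scores.append(float(sig @ t / denom) if denom > 0 else 0.0)
    best = int(np.argmax(scores))
    return best, scores[best], scores[best] >= conf_threshold
```

The log-ratio step mirrors the abstract's point that dividing out the air scan suppresses head-scatter and detector-response modeling uncertainty before correlation.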

  16. Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery.

    PubMed

    Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang

    2015-09-01

    A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system is especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, see-through goggle, autofocusing, and auto-contrast tuning modules. The paper will discuss the system design and demonstrate the system performance.

  17. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery.

    PubMed

    Li, Ruijiang; Fahimian, Benjamin P; Xing, Lei

    2011-07-01

    Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time, full 3D tumor localization with a single x-ray imager during treatment delivery. First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images acquired during treatment delivery are used for 3D localization. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error between scenarios is small and not statistically significant. 
The proposed algorithm eliminates the need for any population based model parameters in monoscopic image guided radiotherapy and allows accurate and real-time 3D tumor localization on current standard LINACs with a single x-ray imager.
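    The core Bayesian step, resolving the coordinate along the imaging ray from a prior, reduces to the mode of a conditional Gaussian under a simple Gaussian model. Below is a sketch under that assumption, not the authors' exact formulation: the 3D prior (mean and covariance) would come from the setup projections, the imager fixes the two in-plane coordinates, and the out-of-plane coordinate is the conditional mode.

```python
import numpy as np

def map_depth(mu, Sigma, meas_xy):
    """MAP estimate of the unresolved coordinate from a monoscopic view.

    Illustrative Gaussian version of the Bayesian idea: given prior
    (mu, Sigma) over the 3D tumor position (learned from setup images)
    and a measurement fixing the two in-plane coordinates, the depth is
    the mode of the conditional Gaussian. A sketch, not the paper's
    algorithm.
    """
    mu = np.asarray(mu, float)
    Sigma = np.asarray(Sigma, float)
    xy = np.asarray(meas_xy, float)
    S_xy = Sigma[:2, :2]          # covariance of the measured (in-plane) coords
    S_zx = Sigma[2, :2]           # cross-covariance of depth with the plane
    z = mu[2] + S_zx @ np.linalg.solve(S_xy, xy - mu[:2])
    return np.array([xy[0], xy[1], z])
```

The correlation between the in-plane motion and depth (the off-diagonal covariance terms) is what makes the unresolved dimension recoverable at all, which matches the paper's reliance on prior projections.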

  18. Biomimetic machine vision system.

    PubMed

    Harman, William M; Barrett, Steven F; Wright, Cameron H G; Wilcox, Michael

    2005-01-01

    Real-time application of digital imaging in machine vision systems has proven prohibitive within control systems that employ low-power single processors, unless the scope of vision or the resolution of captured images is compromised. Development of a real-time analog machine vision system is the focus of research taking place at the University of Wyoming. This new vision system is based upon the biological vision system of the common house fly. A single sensor has been developed, representing a single facet of the fly's eye. This sensor is then incorporated into an array of sensors capable of detecting objects and tracking motion in 2-D space. The system "preprocesses" incoming image data, so that minimal data processing is needed to determine the location of a target object. Due to the nature of the sensors in the array, hyperacuity is achieved, thereby eliminating the resolution issues found in digital vision systems. In this paper, we discuss the biological traits of the fly eye and the specific traits that led to the development of this machine vision system. We also discuss the process of developing an analog-based sensor that mimics the characteristics of interest in the biological vision system. The paper concludes with a discussion of how an array of these sensors can be applied toward solving real-world machine vision problems.

  19. Intraoperative brain hemodynamic response assessment with real-time hyperspectral optical imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Laurence, Audrey; Pichette, Julien; Angulo-Rodríguez, Leticia M.; Saint Pierre, Catherine; Lesage, Frédéric; Bouthillier, Alain; Nguyen, Dang Khoa; Leblond, Frédéric

    2016-03-01

    Following normal neuronal activity, there is an increase in cerebral blood flow and cerebral blood volume to provide oxygenated hemoglobin to active neurons. For abnormal activity such as epileptiform discharges, this hemodynamic response may be inadequate to meet the high metabolic demands. To verify this hypothesis, we developed a novel hyperspectral imaging system able to monitor cortical hemodynamic changes in real time during brain surgery. The imaging system is directly integrated into a surgical microscope, using its white-light source for illumination. A snapshot hyperspectral camera is used for detection (4x4 mosaic filter array detecting 16 wavelengths simultaneously). We present calibration experiments in which phantoms made of intralipid and food dyes were imaged. Relative concentrations of three dyes were recovered at a video rate of 30 frames per second. We also present hyperspectral recordings during brain surgery of epileptic patients with concurrent electrocorticography recordings. Relative concentration maps of oxygenated and deoxygenated hemoglobin were extracted from the data, allowing real-time studies of hemodynamic changes with good spatial resolution. Finally, we present preliminary results on phantoms obtained with an integrated spatial frequency domain imaging system to recover tissue optical properties. This additional module, used together with the hyperspectral imaging system, will allow quantification of hemoglobin concentration maps. Our hyperspectral imaging system offers a new tool to analyze hemodynamic changes, especially in the case of epileptiform discharges. It also offers an opportunity to study brain connectivity by analyzing correlations between hemodynamic responses of different tissue regions.
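    Extracting oxy-/deoxy-hemoglobin concentration maps from 16-wavelength mosaic data is, at its simplest, a per-pixel linear unmixing. Below is a modified Beer-Lambert sketch that assumes a unit differential pathlength and uses stand-in extinction spectra, not calibrated coefficients or the authors' exact model.

```python
import numpy as np

def unmix_hemoglobin(delta_od, eps_hbo, eps_hbr):
    """Per-pixel least-squares unmixing of HbO2/HbR concentration changes.

    Modified Beer-Lambert sketch assuming a unit differential pathlength
    (an assumption): delta-OD at each of the camera's 16 wavelengths is
    modelled as a linear mix of the two extinction spectra and solved by
    least squares for every pixel at once.
    """
    E = np.column_stack([eps_hbo, eps_hbr])        # (n_wavelengths, 2)
    od = np.asarray(delta_od, float).reshape(len(eps_hbo), -1)
    conc, *_ = np.linalg.lstsq(E, od, rcond=None)  # (2, n_pixels)
    return conc
```

Because the solve is one small least-squares problem broadcast over all pixels, this kind of unmixing is cheap enough to run at the 30 fps video rate quoted in the abstract.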

  20. A Low-Cost Tele-Imaging Platform for Developing Countries

    PubMed Central

    Adambounou, Kokou; Adjenou, Victor; Salam, Alex P.; Farin, Fabien; N’Dakena, Koffi Gilbert; Gbeassor, Messanvi; Arbeille, Philippe

    2014-01-01

    Purpose: To design a "low-cost" tele-imaging method allowing real-time tele-ultrasound expertise, delayed tele-ultrasound diagnosis, and tele-radiology between remote peripheral hospitals and clinics (patient centers) and university hospital centers (expert center). Materials and methods: A system of communication via the internet (IP camera and remote access software) enabling transfer of ultrasound videos and images between two centers allows real-time tele-radiology expertise in the presence of a junior sonographer or radiologist at the patient center. In the absence of a sonographer or radiologist at the patient center, a 3D reconstruction program allows a delayed tele-ultrasound diagnosis with images acquired by a lay operator (e.g., midwife, nurse, technician). The system was tested with both high and low bandwidth. The system can further accommodate non-ultrasound tele-radiology (conventional radiography, mammography, and computed tomography, for example). The system was tested on 50 patients between CHR Tsevie in Togo (40 km from Lomé, Togo, and 4500 km from Tours, France) and CHU Campus in Lomé and CHU Trousseau in Tours. Results: Real-time tele-expertise was successfully performed with a delay of approximately 1.5 s at an internet bandwidth of around 1 Mbps (IP camera) and 512 kbps (remote access software). A delayed tele-ultrasound diagnosis was also performed with satisfactory results. The transmission of radiological images from the patient center to the expert center was of adequate quality. Delayed tele-ultrasound and tele-radiology were possible even with a low-bandwidth internet connection. Conclusion: This tele-imaging method, requiring nothing but readily available and inexpensive technology and equipment, offers a major opportunity for telemedicine in developing countries. PMID:25250306

  1. Synthetic Foveal Imaging Technology

    NASA Technical Reports Server (NTRS)

    Hoenk, Michael; Monacos, Steve; Nikzad, Shouleh

    2009-01-01

    Synthetic Foveal imaging Technology (SyFT) is an emerging discipline of image capture and image-data processing that offers the prospect of greatly increased capabilities for real-time processing of large, high-resolution images (including mosaic images) for such purposes as automated recognition and tracking of moving objects of interest. SyFT offers a solution to the image-data processing problem arising from the proposed development of gigapixel mosaic focal-plane image-detector assemblies for very wide field-of-view imaging with high resolution for detecting and tracking sparse objects or events within narrow subfields of view. In order to identify and track the objects or events without the means of dynamic adaptation to be afforded by SyFT, it would be necessary to post-process data from an image-data space consisting of terabytes of data. Such post-processing would be time-consuming and, as a consequence, could result in missing significant events that could not be observed at all due to the time evolution of such events or could not be observed at required levels of fidelity without such real-time adaptations as adjusting focal-plane operating conditions or aiming of the focal plane in different directions to track such events. The basic concept of foveal imaging is straightforward: In imitation of a natural eye, a foveal-vision image sensor is designed to offer higher resolution in a small region of interest (ROI) within its field of view. Foveal vision reduces the amount of unwanted information that must be transferred from the image sensor to external image-data-processing circuitry. The aforementioned basic concept is not new in itself: indeed, image sensors based on these concepts have been described in several previous NASA Tech Briefs articles. Active-pixel integrated-circuit image sensors that can be programmed in real time to effect foveal artificial vision on demand are one such example. 
What is new in SyFT is a synergistic combination of recent advances in foveal imaging, computing, and related fields, along with a generalization of the basic foveal-vision concept to admit a synthetic fovea that is not restricted to one contiguous region of an image.
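    The data-reduction argument behind a (synthetic) fovea can be illustrated numerically: read the region of interest at full resolution and the periphery decimated. A toy sketch with hypothetical parameter names and decimation factor:

```python
import numpy as np

def foveal_readout(frame, roi, decimate=8):
    """Emulate a foveal readout: full resolution inside the ROI, a
    decimated periphery elsewhere. Toy illustration of why the data
    volume transferred off the sensor drops; all parameters are
    hypothetical, and a synthetic fovea could use several disjoint ROIs.
    """
    r0, r1, c0, c1 = roi
    fovea = frame[r0:r1, c0:c1].copy()             # full-resolution window
    periphery = frame[::decimate, ::decimate].copy()  # coarse context
    reduction = frame.size / (fovea.size + periphery.size)
    return fovea, periphery, reduction
```

For a gigapixel mosaic the same arithmetic scales up: a handful of small full-resolution windows plus a coarse periphery is orders of magnitude less data than the full frame, which is the processing problem SyFT targets.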

  2. Electron-bombarded CCD detectors for ultraviolet atmospheric remote sensing

    NASA Technical Reports Server (NTRS)

    Carruthers, G. R.; Opal, C. B.

    1983-01-01

    Electronic image sensors based on charge coupled devices operated in electron-bombarded mode, yielding real-time, remote-readout, photon-limited UV imaging capability, are being developed. The sensors also incorporate fast-focal-ratio Schmidt optics and opaque photocathodes, giving nearly the ultimate possible diffuse-source sensitivity. They can be used for direct imagery of atmospheric emission phenomena, and for imaging spectrography with moderate spatial and spectral resolution. The current state of instrument development, laboratory results, planned future developments, and proposed applications of the sensors in space flight instrumentation are described.

  3. 1.0 T open-configuration magnetic resonance-guided microwave ablation of pig livers in real time

    PubMed Central

    Dong, Jun; Zhang, Liang; Li, Wang; Mao, Siyue; Wang, Yiqi; Wang, Deling; Shen, Lujun; Dong, Annan; Wu, Peihong

    2015-01-01

    The fastest acquisition time currently achievable for a single image slice in MR-guided ablation is 1.3 seconds, which means the display lags well behind the average human reaction time of 0.33 seconds. This delayed imaging greatly limits the accuracy of puncture and ablation, and can result in puncture injury or incomplete ablation. To overcome delayed imaging and obtain real-time imaging, this study was performed in the livers of 10 Wuzhishan pigs using a 1.0-T whole-body open-configuration MR scanner. A respiratory-triggered liver matrix array was used to guide and monitor microwave ablation in real time. We successfully performed the entire ablation procedure under real-time MR guidance at 0.202 s per image slice, the fastest acquisition time for a single slice. The puncture time ranged from 3 min to 23 min; the mean puncture time was shortened to 4.75 minutes and the mean ablation time was 11.25 minutes at a power of 70 W. The mean ablation zone length and width were 4.62 ± 0.24 cm and 2.64 ± 0.13 cm, respectively. No complications or ablation-related deaths were observed during or after ablation. In the current study, MR was able to guide microwave ablation in real time in the same way as ultrasound, showing great potential for the treatment of liver tumors. PMID:26315365

  4. Emergency product generation for disaster management using RISAT and DMSAR quick look SAR processors

    NASA Astrophysics Data System (ADS)

    Desai, Nilesh; Sharma, Ritesh; Kumar, Saravana; Misra, Tapan; Gujraty, Virendra; Rana, SurinderSingh

    2006-12-01

    In recent years, ISRO has embarked upon the development of two complex Synthetic Aperture Radar (SAR) missions, viz. the Spaceborne Radar Imaging Satellite (RISAT) and the Airborne SAR for Disaster Management (DMSAR), as a capacity-building measure under the country's Disaster Management Support (DMS) Program, for estimating the extent of damage over large areas (~75 km) and assessing the effectiveness of relief measures undertaken during natural disasters such as cyclones, epidemics, earthquakes, floods, landslides, forest fires, and crop diseases. SAR has a unique role to play in mapping and monitoring large areas affected by natural disasters, especially floods, owing to its ability to see through clouds and image in all weather. The generation of SAR images with quick turnaround time is essential to meet the above DMS objectives, so the development of SAR processors for these two systems poses considerable design challenges. Considering the growing user demand and the necessity of a full-fledged high-throughput processor to process SAR data and generate images in real or near-real time, the design and development of a generic SAR processor has been taken up, which will meet the SAR processing requirements of both airborne and spaceborne SAR systems. This hardware SAR processor is being built, to the extent possible, using only Commercial-Off-The-Shelf (COTS) DSP and other hardware plug-in modules on a Compact PCI (cPCI) platform. Thus, the major thrust has been on the multi-processor Digital Signal Processor (DSP) architecture and on algorithm development and optimization, rather than on hardware design and fabrication. For DMSAR, this generic SAR processor operates as a Quick Look SAR Processor (QLP) on board the aircraft to produce real-time full-swath DMSAR images, and as a ground-based Near-Real Time high-precision full-swath Processor (NRTP). 
    It will generate full-swath (6 to 75 km) DMSAR images in the 1 m / 3 m / 5 m / 10 m / 30 m resolution SAR operating modes. For the RISAT mission, this generic Quick Look SAR Processor will be mainly used for browse product generation at the NRSA-Shadnagar (SAN) ground receive station. The RISAT QLP/NRTP is also proposed to provide an alternative emergency SAR product generation chain. For this, the spacecraft auxiliary data appended in the onboard SAR frame format (x, y, z, x', y', z', roll, pitch, yaw) and the predicted orbit from the previous day's orbit determination data will be used. The QLP/NRTP will produce ground range images in real / near-real time. For emergency data product generation, additional off-line tasks such as geo-tagging, masking, and QC need to be performed on the processed image. The QLP/NRTP would generate geo-tagged images from the annotation data available in the SAR payload data itself. Since the orbit and attitude information are taken as is, the location accuracy will be poorer than that of the product generated using ADIF, where smoothened attitude and orbit data are made available. Additional tasks such as masking, output formatting, and quality checking of the data product will be carried out at Balanagar, NRSA, after the image-annotated data from the QLP/NRTP are sent to Balanagar. The necessary interfaces to the QLP/NRTP for emergency product generation are also being worked out. As is widely acknowledged, the QLP/NRTP for RISAT and DMSAR is an ambitious effort and a technology of the future. It is expected that by the middle of the next decade, next-generation SAR missions worldwide will have onboard SAR processors of varying capabilities and will generate SAR data products and information products onboard instead of SAR raw data. 
Thus, it is also envisaged that these activities related to QLP/NRTP implementation for RISAT ground segment and DMSAR will be a significant step which will directly feed into the development of onboard real time processing systems for ISRO's future space borne SAR missions. This paper describes the design requirements, configuration details and salient features, apart from highlighting the utility of these Quick Look SAR processors for RISAT and DMSAR, for generation of emergency products for Disaster management.
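    The first stage of any quick-look SAR processor, range compression by matched filtering, is sketched below with an illustrative linear-FM chirp. None of the waveform parameters are actual RISAT or DMSAR values; they are made up to show the principle.

```python
import numpy as np

def range_compress(echo, chirp):
    """Range compression by frequency-domain matched filtering -- the
    first stage of a quick-look SAR processor. Multiplying by the
    conjugate chirp spectrum implements circular cross-correlation."""
    n = len(echo)
    return np.fft.ifft(np.fft.fft(echo, n) * np.conj(np.fft.fft(chirp, n)))

# a linear-FM chirp and a single point target delayed by 300 samples
fs, T, B = 1.0e6, 50e-6, 2.0e5           # sample rate, pulse length, bandwidth
t = np.arange(int(T * fs)) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)
echo = np.zeros(1024, dtype=complex)
echo[300:300 + len(chirp)] = chirp
profile = np.abs(range_compress(echo, chirp))  # peak marks the target range bin
```

An operational processor repeats this per pulse (followed by azimuth compression) and is dominated by FFTs, which is why the paper's multi-DSP architecture maps well onto the problem.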

  5. Development of an in vitro diaphragm motion reproduction system.

    PubMed

    Liao, Ai-Ho; Chuang, Ho-Chiao; Shih, Ming-Chih; Hsu, Hsiao-Yu; Tien, Der-Chi; Kuo, Chia-Chun; Jeng, Shiu-Chen; Chiou, Jeng-Fong

    2017-07-01

    This study developed an in vitro diaphragm motion reproduction system (IVDMRS) based on noninvasive, real-time ultrasound imaging to track the internal displacement of the human diaphragm and of diaphragm phantoms driven by a respiration simulation system (RSS). An ultrasound image tracking algorithm (UITA) was used to retrieve the displacement data of the tracking target and reproduce the diaphragm motion in real time, using a red laser to irradiate the diaphragm phantom in vitro. The study also recorded the respiration patterns of 10 volunteers. Both simulated signals and the respiration patterns recorded from the 10 volunteers were input to the RSS for experiments on reproducing diaphragm motion in vitro using the IVDMRS, and the reproduction accuracy of the IVDMRS was calculated and analyzed. The results indicate that the respiration frequency substantially affects the correlation between ultrasound and kV images, as well as the reproduction accuracy of the IVDMRS, owing to the system delay time (0.35 s) of ultrasound imaging and signal transmission. The use of a phase lead compensator (PLC) reduced the error caused by this delay, thereby improving the reproduction accuracy of the IVDMRS by 14.09-46.98%. Applying the IVDMRS in clinical treatments will allow medical staff to monitor target displacements in real time by observing the movement of the laser beam. If the target moves outside the planning target volume (PTV), the treatment can be stopped immediately to ensure that healthy tissues do not receive high doses of radiation. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
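    The phase lead compensator used to counter the 0.35 s delay can be sketched as a discrete first-order lead filter, C(s) = (1 + aTs)/(1 + Ts) with a > 1. The values of T and a below, and the backward-Euler discretisation, are hypothetical choices tuned so that at a typical respiration frequency (~0.25 Hz) the filter advances the signal by roughly the system delay; the paper's actual compensator design is not reproduced here.

```python
import numpy as np

def lead_compensate(x, dt, T=0.5, a=3.0):
    """Discrete first-order phase-lead filter C(s) = (1 + aTs)/(1 + Ts).

    Illustrative stand-in for the paper's PLC (hypothetical T and a):
    at ~0.25 Hz these values advance the signal by about 0.3 s, close
    to the 0.35 s ultrasound/transmission delay. Backward-Euler
    discretisation of the transfer function.
    """
    y = np.zeros_like(x, dtype=float)
    k1, k2 = a * T / dt, T / dt
    for i in range(1, len(x)):
        y[i] = (x[i] + k1 * (x[i] - x[i - 1]) + k2 * y[i - 1]) / (1.0 + k2)
    return y
```

The trade-off of any lead filter is gain amplification at high frequency, so measurement noise on the tracked displacement is amplified along with the phase advance.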

  6. Development of intelligent surveillance system (ISS) in region of interest (ROI) using Kalman filter and camshift on Raspberry Pi 2

    NASA Astrophysics Data System (ADS)

    Park, Junghun; Hong, Kicheon

    2017-06-01

    As the picture quality of closed-circuit television (CCTV) has improved, demand for CCTV has increased rapidly and its market size has grown accordingly. In the current CCTV system structure, compressed images are transferred from the cameras to a control center without analysis. Such compressed images are suitable as evidence for a criminal arrest, but they cannot prevent crime in real time, which has been considered a limitation. Thus, the present paper proposes a system implementation that prevents crime efficiently by applying a situation-awareness system at the back end of the CCTV cameras used for image acquisition. In the implemented system, a region of interest (ROI) is set virtually within the image data when a barrier, such as a fence, cannot be installed on site; unauthorized intruders are tracked continuously through data analysis and recognized in the ROI by the developed algorithm. Additionally, a searchlight or alarm sound is activated to prevent crime in real time, and the urgent information is transferred to the control center. The system was implemented on the Raspberry Pi 2 board to run in real time. The experimental results showed that the recognition success rate was 85% or higher and the tracking accuracy was 90% or higher. By utilizing the system, crime prevention can be achieved through a social safety network.
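    The intruder-tracking stage can be sketched with a constant-velocity Kalman filter plus a virtual-fence test. This is a generic stand-in for the paper's Kalman/CamShift pipeline (which in practice would use OpenCV on the Raspberry Pi); the noise levels, frame rate, and ROI below are hypothetical.

```python
import numpy as np

class CVKalman:
    """Constant-velocity Kalman filter for one image-plane track.

    State is (x, y, vx, vy); measurements are detected centroids.
    A minimal stand-in for the paper's tracker; noise levels are
    hypothetical tuning values.
    """
    def __init__(self, x0, y0, dt=1 / 15, q=1.0, r=4.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4) * 100.0            # large initial uncertainty
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = dt      # constant-velocity dynamics
        self.H = np.eye(2, 4)                 # we observe position only
        self.Q = np.eye(4) * q
        self.R = np.eye(2) * r

    def step(self, z):
        # predict, then correct with the measured centroid z = (x, y)
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

def in_roi(pt, roi):
    """roi = (x0, y0, x1, y1) virtual fence; True would trigger the alarm."""
    x, y = pt
    x0, y0, x1, y1 = roi
    return x0 <= x <= x1 and y0 <= y <= y1
```

Each frame, a detected centroid updates the filter and the smoothed estimate is tested against the virtual ROI, triggering the searchlight/alarm path described in the abstract.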

  7. CRionScan: A stand-alone real time controller designed to perform ion beam imaging, dose controlled irradiation and proton beam writing

    NASA Astrophysics Data System (ADS)

    Daudin, L.; Barberet, Ph.; Serani, L.; Moretto, Ph.

    2013-07-01

    High-resolution ion microbeams, typically used for elemental mapping, low-dose targeted irradiation, or ion beam lithography, need a very flexible beam control system. For this purpose, we have developed a dedicated system (called "CRionScan") on the AIFIRA facility (Applications Interdisciplinaires des Faisceaux d'Ions en Région Aquitaine). It is a stand-alone real-time scanning and imaging instrument based on a Compact Reconfigurable Input/Output (CompactRIO) device from National Instruments™, comprising a real-time controller, a Field Programmable Gate Array (FPGA), input/output modules, and Ethernet connectivity. We have implemented a fast and deterministic beam scanning system interfaced with our commercial data acquisition system without any hardware development. CRionScan is built under LabVIEW™ and has been used on AIFIRA's nanobeam line since 2009 (Barberet et al., 2009, 2011) [1,2]. A Graphical User Interface (GUI), embedded in the CompactRIO as a web page, is used to control the scanning parameters. In addition, a fast electrostatic beam blanking trigger has been included in the FPGA, and high-speed counters (15 MHz) have been implemented to perform dose-controlled irradiation and display on-line images on the GUI. Analog-to-digital converters are used for beam current measurement and, in the near future, for secondary electron imaging. Other functionalities have been integrated into this controller, such as LED lighting using pulse width modulation and "NIM Wilkinson ADC" data acquisition.
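    The dose-controlled irradiation mentioned above amounts, logically, to a per-pixel preset counter: count detector pulses at the current scan position, then blank the beam and advance once the preset dose is reached. A software sketch of that FPGA logic (all numbers hypothetical, and the real implementation runs in gateware, not Python):

```python
def scan_with_dose_control(events, n_pixels, target_counts):
    """Per-pixel dose control as a scanning FPGA would implement it:
    count detector pulses at the current pixel and, once the preset
    dose is reached, blank the beam and advance to the next position.
    `events` is an iterable of detected-ion pulses; numbers are
    hypothetical."""
    doses = [0] * n_pixels
    pixel = 0
    for _ in events:
        if pixel >= n_pixels:
            break                      # scan finished, beam stays blanked
        doses[pixel] += 1
        if doses[pixel] >= target_counts:
            pixel += 1                 # blanking trigger + next scan position
    return doses
```

Running this logic in the FPGA rather than in software is what makes the blanking latency deterministic, which matters when each pixel must receive an exact small number of ions.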

  8. Quantitative real-time analysis of collective cancer invasion and dissemination

    NASA Astrophysics Data System (ADS)

    Ewald, Andrew J.

    2015-05-01

    A grand challenge in biology is to understand the cellular and molecular basis of tissue and organ level function in mammals. The ultimate goals of such efforts are to explain how organs arise in development from the coordinated actions of their constituent cells and to determine how molecularly regulated changes in cell behavior alter the structure and function of organs during disease processes. Two major barriers stand in the way of achieving these goals: the relative inaccessibility of cellular processes in mammals and the daunting complexity of the signaling environment inside an intact organ in vivo. To overcome these barriers, we have developed a suite of tissue isolation, three dimensional (3D) culture, genetic manipulation, nanobiomaterials, imaging, and molecular analysis techniques to enable the real-time study of cell biology within intact tissues in physiologically relevant 3D environments. This manuscript introduces the rationale for 3D culture, reviews challenges to optical imaging in these cultures, and identifies current limitations in the analysis of complex experimental designs that could be overcome with improved imaging, imaging analysis, and automated classification of the results of experimental interventions.

  9. MIRIADS: miniature infrared imaging applications development system description and operation

    NASA Astrophysics Data System (ADS)

    Baxter, Christopher R.; Massie, Mark A.; McCarley, Paul L.; Couture, Michael E.

    2001-10-01

    A cooperative effort between the U.S. Air Force Research Laboratory, Nova Research, Inc., the Raytheon Infrared Operations (RIO) and Optics 1, Inc. has successfully produced a miniature infrared camera system that offers significant real-time signal and image processing capabilities by virtue of its modular design. This paper will present an operational overview of the system as well as results from initial testing of the 'Modular Infrared Imaging Applications Development System' (MIRIADS) configured as a missile early-warning detection system. The MIRIADS device can operate virtually any infrared focal plane array (FPA) that currently exists. Programmable on-board logic applies user-defined processing functions to the real-time digital image data for a variety of functions. Daughterboards may be plugged onto the system to expand the digital and analog processing capabilities of the system. A unique full hemispherical infrared fisheye optical system designed and produced by Optics 1, Inc. is utilized by the MIRIADS in a missile warning application to demonstrate the flexibility of the overall system to be applied to a variety of current and future AFRL missions.

  10. Real-time global MHD simulation of the solar wind interaction with the earth's magnetosphere

    NASA Astrophysics Data System (ADS)

    Shimazu, H.; Tanaka, T.; Fujita, S.; Nakamura, M.; Obara, T.

    We have developed a real-time global MHD simulation of the solar wind interaction with the Earth's magnetosphere. By adopting the real-time solar wind parameters, including the IMF, observed routinely by the ACE spacecraft, the responses of the magnetosphere are calculated with the MHD code. We adopted modified spherical coordinates, and the mesh point numbers for this simulation are 56, 58, and 40 in the r, theta, and phi directions, respectively. The simulation is carried out routinely on the NEC SX-6 supercomputer system at the National Institute of Information and Communications Technology, Japan. The visualized images of the magnetic field lines around the Earth, the pressure distribution on the meridian plane, and the conductivity of the polar ionosphere can be viewed on the Web site http://www.nict.go.jp/dk/c232/realtime. The results show that various magnetospheric activities are almost reproduced qualitatively. They also give us information on how geomagnetic disturbances develop in the magnetosphere in relation to the ionosphere. From the viewpoint of space weather, the real-time simulation helps us to understand the overall current condition of the magnetosphere. To evaluate the simulation results, we compare the AE index derived from the simulation with observations. In the case of isolated substorms, the indices agreed well in both timing and intensity. In other cases, the simulation can predict general activity, although the exact timing of substorm onset and the intensities did not always agree. By analyzing

  11. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. 15. SUBJECT TERMS: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera 16. SECURITY

  12. Rational Design and Synthesis of γFe2O3@Au Magnetic Gold Nanoflowers for Efficient Cancer Theranostics.

    PubMed

    Huang, Jie; Guo, Miao; Ke, Hengte; Zong, Cheng; Ren, Bin; Liu, Gang; Shen, He; Ma, Yufei; Wang, Xiaoyong; Zhang, Hailu; Deng, Zongwu; Chen, Huabing; Zhang, Zhijun

    2015-09-09

    A γFe2O3@Au core/shell-type magnetic gold nanoflower-based theranostic nanoplatform is developed. It integrates ultrasensitive surface-enhanced Raman scattering imaging, high-resolution photoacoustic imaging, real-time magnetic resonance imaging, and photothermal therapy capabilities. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer.
The correct corresponding image is then found among those candidates via real-time 2D registration based on a gradient-based similarity measure. Finally, if needed, they obtain the position information of the liver lesion using the 3D preoperative image to which the registered 2D preoperative slice belongs. Results: The proposed method was applied to 23 clinical datasets and quantitative evaluations were conducted. With the exception of one clinical dataset that included US images of extremely low quality, 22 datasets of various liver status were successfully applied in the evaluation. Experimental results showed that the registration error between the anatomical features of US and preoperative MR images is less than 3 mm on average. The lesion tracking error was also found to be less than 5 mm at maximum. Conclusions: A new system has been proposed for real-time registration between 2D US and successive multiple 3D preoperative MR/CT images of the liver and was applied for indirect lesion tracking for image-guided intervention. The system is fully automatic and robust even with images that had low quality due to patient status. Through visual examinations and quantitative evaluations, it was verified that the proposed system can provide high lesion tracking accuracy as well as high registration accuracy, at performance levels which were acceptable for various clinical applications.
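The slice-selection step above relies on a gradient-based similarity measure between the live 2D US image and candidate preoperative slices. A minimal sketch of one such measure, assuming zero-mean normalized correlation of gradient magnitudes (the authors' exact formulation is not given in the abstract):

```python
import numpy as np

def grad_mag(img):
    """Per-pixel gradient magnitude of a 2-D image."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def gradient_similarity(a, b):
    """Zero-mean normalized correlation of gradient magnitudes."""
    ga, gb = grad_mag(a).ravel(), grad_mag(b).ravel()
    ga -= ga.mean()
    gb -= gb.mean()
    denom = np.linalg.norm(ga) * np.linalg.norm(gb)
    return float(ga @ gb / denom) if denom else 0.0

def best_slice(us_image, candidates):
    """Index of the candidate slice most similar to the US image."""
    scores = [gradient_similarity(us_image, c) for c in candidates]
    return int(np.argmax(scores))
```

Identical images score 1.0, so the candidate slice derived from the true respiratory phase should dominate unrelated candidates.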

  14. Evaluation of an image-based tracking workflow using a passive marker and resonant micro-coil fiducials for automatic image plane alignment in interventional MRI.

    PubMed

    Neumann, M; Breton, E; Cuvillon, L; Pan, L; Lorenz, C H; de Mathelin, M

    2012-01-01

    In this paper, an original workflow is presented for MR image plane alignment based on tracking in real-time MR images. A test device consisting of two resonant micro-coils and a passive marker is proposed for detection using image-based algorithms. Micro-coils allow for automated initialization of the object detection in dedicated low flip angle projection images; then the passive marker is tracked in clinical real-time MR images, with alternation between two oblique orthogonal image planes along the test device axis; in case the passive marker is lost in real-time images, the workflow is reinitialized. The proposed workflow was designed to minimize dedicated acquisition time to a single dedicated acquisition in the ideal case (no reinitialization required). First experiments have shown promising results for test-device tracking precision, with a mean position error of 0.79 mm and a mean orientation error of 0.24°.

  15. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera, based on FPGA, was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function, the line spread function, the ESF difference operation, the normalized MTF, and the MTFC parameters. The MTFC image filtering and noise suppression module performs the filtering and effectively suppresses noise. System Generator was used to design the image processing algorithms, simplifying the system design structure and the redesign process. Image gray gradient, dot sharpness, edge contrast, and mid-to-high frequencies were enhanced. The SNR of the restored image is reduced by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
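One common way to realize MTF compensation is a regularized (Wiener-style) inverse filter in the frequency domain. The sketch below assumes that form, with an illustrative regularization constant `k` to limit noise amplification; it is not the authors' FPGA implementation.

```python
import numpy as np

def mtf_compensate(image, mtf, k=0.01):
    """Restore frequencies attenuated by the system MTF.

    `mtf` is the 2-D modulation transfer function sampled on the same
    frequency grid as the image FFT. The Wiener-style filter
    MTF / (MTF^2 + k) inverts the blur where the MTF is strong and
    rolls off where it is weak, suppressing noise amplification.
    """
    F = np.fft.fft2(image)
    H = mtf / (mtf ** 2 + k)          # regularized inverse filter
    return np.fft.ifft2(F * H).real
```

Larger `k` trades sharpness for noise robustness, which mirrors the abstract's concern with noise suppression after restoration.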

  16. Ultrasound- and MRI-Guided Prostate Biopsy

    MedlinePlus

    ... which the MR images are fused with the real-time ultrasound images — an approach known as MRI/TRUS ... by a computer, which in turn creates a real-time picture on the monitor. One or more frames ...

  17. Real-time terahertz wave imaging by nonlinear optical frequency up-conversion in a 4-dimethylamino-N'-methyl-4'-stilbazolium tosylate crystal

    NASA Astrophysics Data System (ADS)

    Fan, Shuzhen; Qi, Feng; Notake, Takashi; Nawata, Kouji; Matsukawa, Takeshi; Takida, Yuma; Minamide, Hiroaki

    2014-03-01

    Real-time terahertz (THz) wave imaging has wide applications in areas such as security, industry, biology, medicine, pharmacy, and arts. In this letter, we report on real-time room-temperature THz imaging by nonlinear optical frequency up-conversion in organic 4-dimethylamino-N'-methyl-4'-stilbazolium tosylate crystal. The active projection-imaging system consisted of (1) THz wave generation, (2) THz-near-infrared hybrid optics, (3) THz wave up-conversion, and (4) an InGaAs camera working at 60 frames per second. The pumping laser system consisted of two optical parametric oscillators pumped by a nano-second frequency-doubled Nd:YAG laser. THz-wave images of handmade samples at 19.3 THz were taken, and videos of a sample moving and a ruler stuck with a black polyethylene film moving were supplied online to show real-time ability. Thanks to the high speed and high responsivity of this technology, real-time THz imaging with a higher signal-to-noise ratio than a commercially available THz micro-bolometer camera was proven to be feasible. By changing the phase-matching condition, i.e., by changing the wavelength of the pumping laser, we suggest THz imaging with a narrow THz frequency band of interest in a wide range from approximately 2 to 30 THz is possible.

  18. A GPU-Parallelized Eigen-Based Clutter Filter Framework for Ultrasound Color Flow Imaging.

    PubMed

    Chee, Adrian J Y; Yiu, Billy Y S; Yu, Alfred C H

    2017-01-01

    Eigen-filters with attenuation response adapted to clutter statistics in color flow imaging (CFI) have shown improved flow detection sensitivity in the presence of tissue motion. Nevertheless, their practical adoption in clinical use is not straightforward due to the high computational cost of solving eigendecompositions. Here, we provide a pedagogical description of how a real-time computing framework for eigen-based clutter filtering can be developed through a single-instruction, multiple-data (SIMD) computing approach that can be implemented on a graphics processing unit (GPU). Emphasis is placed on the single-ensemble-based eigen-filtering approach (Hankel singular value decomposition), since it is algorithmically compatible with GPU-based SIMD computing. The key algebraic principles and the corresponding SIMD algorithm are explained, and annotations on how such an algorithm can be rationally implemented on the GPU are presented. Real-time efficacy of our framework was experimentally investigated on a single GPU device (GTX Titan X), and the computing throughput for varying scan depths and slow-time ensemble lengths was studied. Using our eigen-processing framework, real-time video-range throughput (24 frames/s) can be attained for CFI frames with full view in the azimuth direction (128 scanlines), up to a scan depth of 5 cm (λ pixel axial spacing) for a slow-time ensemble length of 16 samples. The corresponding CFI image frames, with respect to the ones derived from non-adaptive polynomial regression clutter filtering, yielded enhanced flow detection sensitivity in vivo, as demonstrated in a carotid imaging case example. These findings indicate that GPU-enabled eigen-based clutter filtering can improve CFI flow detection performance in real time.
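The Hankel-SVD clutter filter named above can be illustrated on a single slow-time ensemble. This NumPy sketch (the paper's GPU/SIMD implementation is not reproduced here) builds the Hankel matrix, removes the dominant singular components, which are assumed to carry tissue clutter, and reconstructs the filtered ensemble by anti-diagonal averaging:

```python
import numpy as np

def hankel_svd_filter(x, n_clutter=1):
    """Suppress clutter in one complex slow-time ensemble.

    Builds a Hankel matrix from the ensemble, zeroes the `n_clutter`
    largest singular values (the clutter subspace), and reconstructs
    a 1-D ensemble by averaging the anti-diagonals.
    """
    n = len(x)
    L = n // 2 + 1
    H = np.array([x[i:i + n - L + 1] for i in range(L)])  # Hankel matrix
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s = s.copy()
    s[:n_clutter] = 0.0                    # discard clutter subspace
    Hf = (U * s) @ Vh
    # Anti-diagonal averaging back to a 1-D ensemble
    y = np.zeros(n, dtype=complex)
    cnt = np.zeros(n)
    for i in range(Hf.shape[0]):
        for j in range(Hf.shape[1]):
            y[i + j] += Hf[i, j]
            cnt[i + j] += 1
    return y / cnt
```

A stationary (DC-like) clutter component is rank-1 in the Hankel matrix, so zeroing the top singular value removes it while largely preserving an oscillating flow signal.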

  19. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    NASA Astrophysics Data System (ADS)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and also in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low for practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), that can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows isolation of the concealed objects from the body and environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the ability of the FPA so that we are able to obtain recognizable 2-dimensional THz images in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from distances allowing standoff detection of suspicious objects and humans from large distances.

  20. Real-Time Nonlinear Optical Information Processing.

    DTIC Science & Technology

    1979-06-01

    operations are presented. One approach realizes the halftone method of nonlinear optical processing in real time by replacing the conventional...photographic recording medium with a real-time image transducer. In the second approach halftoning is eliminated and the real-time device is used directly

  1. Intuitive ultrasonography for autonomous medical care in limited-resource environments

    NASA Astrophysics Data System (ADS)

    Dulchavsky, Scott A.; Sargsyan, Ashot E.; Garcia, Kathleen M.; Melton, Shannon L.; Ebert, Douglas; Hamilton, Douglas R.

    2011-05-01

    Management of health problems in limited-resource environments, including spaceflight, faces challenges in both available equipment and personnel. The medical support for spaceflight outside Low Earth Orbit is still being defined; ultrasound (US) imaging is a candidate, since trials on the International Space Station (ISS) prove that this highly informative modality performs very well in spaceflight. Considering existing estimates, the authors find that US could be useful in most potential medical problems, as a powerful factor to mitigate risks and protect the mission. Using an outcome-oriented approach, an intuitive and adaptive US image catalog is being developed that can couple with just-in-time training methods already in use, to allow non-expert crew to autonomously acquire and interpret US data for research or diagnosis. The first objective of this work is to summarize the experience in providing imaging expertise from a central location in real time, enabling data collection by a minimally trained operator onsite. In previous investigations, just-in-time training was combined with real-time expert guidance to allow non-physician astronauts to perform over 80 h of complex US examinations on ISS, including abdominal, cardiovascular, ocular, musculoskeletal, dental/sinus, and thoracic exams. The analysis of these events shows that non-physician crew members, after minimal training, can perform complex, quality US examinations. These training and guidance methods were also adapted for terrestrial use in professional sporting venues, the Olympic Games, and austere locations including Mt. Everest.
The second objective is to introduce a new imaging support system under development that is based on a digital catalog of existing sample images, complete with image recognition and acquisition logic and technique, and interactive multimedia reference tools, to guide and support autonomous acquisition, and possibly interpretation, of images without a real-time link to a human expert. In other words, we are attempting to replace, to the extent possible, expert guidance with guidance from a digital information resource. This is the next logical phase of the authors' sustained effort to make US imaging available to sites lacking proper expertise. This effort will benefit NASA as the agency plans to develop future human exploration programs requiring increased medical autonomy. The new system will be readily adaptable to terrestrial medicine, including emergency, rural, and military applications.

  2. Emerging fiber optic endomicroscopy technologies towards noninvasive real-time visualization of histology in situ

    NASA Astrophysics Data System (ADS)

    Xi, Jiefeng; Zhang, Yuying; Huo, Li; Chen, Yongping; Jabbour, Toufic; Li, Ming-Jun; Li, Xingde

    2010-09-01

    This paper reviews our recent developments of ultrathin fiber-optic endomicroscopy technologies for transforming high-resolution noninvasive optical imaging techniques to in vivo and clinical applications such as early disease detection and guidance of interventions. Specifically, we describe an all-fiber-optic scanning endomicroscopy technology, which miniaturizes a conventional bench-top scanning laser microscope down to a flexible fiber-optic probe of a small footprint (i.e., ~2-2.5 mm in diameter), capable of performing two-photon fluorescence and second harmonic generation microscopy in real time. This technology aims to enable real-time visualization of histology in situ without the need for tissue removal. We will also present a balloon OCT endoscopy technology which permits high-resolution 3D imaging of the entire esophagus for detection of neoplasia, guidance of biopsy, and assessment of therapeutic outcome. In addition, we will discuss the development of functional polymeric fluorescent nanocapsules, which use only FDA-approved materials and potentially enable fast-track clinical translation of optical molecular imaging and targeted therapy.

  3. Real-time broadband terahertz spectroscopic imaging by using a high-sensitivity terahertz camera

    NASA Astrophysics Data System (ADS)

    Kanda, Natsuki; Konishi, Kuniaki; Nemoto, Natsuki; Midorikawa, Katsumi; Kuwata-Gonokami, Makoto

    2017-02-01

    Terahertz (THz) imaging has a strong potential for applications because many molecules have fingerprint spectra in this frequency region. Spectroscopic imaging in the THz region is a promising technique to fully exploit this characteristic. However, the performance of conventional techniques is restricted by the requirement of multidimensional scanning, which implies an image data acquisition time of several minutes. In this study, we propose and demonstrate a novel broadband THz spectroscopic imaging method that enables real-time image acquisition using a high-sensitivity THz camera. By exploiting the two-dimensionality of the detector, a broadband multi-channel spectrometer near 1 THz was constructed with a reflection type diffraction grating and a high-power THz source. To demonstrate the advantages of the developed technique, we performed molecule-specific imaging and high-speed acquisition of two-dimensional (2D) images. Two different sugar molecules (lactose and D-fructose) were identified with fingerprint spectra, and their distributions in one-dimensional space were obtained at a fast video rate (15 frames per second). Combined with the one-dimensional (1D) mechanical scanning of the sample, two-dimensional molecule-specific images can be obtained only in a few seconds. Our method can be applied in various important fields such as security and biomedicine.

  4. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    PubMed

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of the user, who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. In that case a real-time virtual reality system will update the 3D graphic visualization as the user moves, so that up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be designed. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of the computer in real time requires development of appropriate hardware and software to connect the medical instrumentarium with the computer and to operate the computer through the thus-connected instrumentarium and sophisticated multimedia interfaces.

  5. Real-time imaging of specific genomic loci in eukaryotic cells using the ANCHOR DNA labelling system.

    PubMed

    Germier, Thomas; Audibert, Sylvain; Kocanova, Silvia; Lane, David; Bystricky, Kerstin

    2018-06-01

    Spatio-temporal organization of the cell nucleus adapts to and regulates genomic processes. Microscopy approaches that enable direct monitoring of specific chromatin sites in single cells and in real time are needed to better understand the dynamics involved. In this chapter, we describe the principle and development of ANCHOR, a novel tool for DNA labelling in eukaryotic cells. Protocols for use of ANCHOR to visualize a single genomic locus in eukaryotic cells are presented. We describe an approach for live cell imaging of a DNA locus during the entire cell cycle in human breast cancer cells. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Evaluation of hyperpolarized [1-¹³C]-pyruvate by magnetic resonance to detect ionizing radiation effects in real time.

    PubMed

    Sandulache, Vlad C; Chen, Yunyun; Lee, Jaehyuk; Rubinstein, Ashley; Ramirez, Marc S; Skinner, Heath D; Walker, Christopher M; Williams, Michelle D; Tailor, Ramesh; Court, Laurence E; Bankson, James A; Lai, Stephen Y

    2014-01-01

    Ionizing radiation (IR) cytotoxicity is primarily mediated through reactive oxygen species (ROS). Since tumor cells neutralize ROS by utilizing reducing equivalents, we hypothesized that measurements of reducing potential using real-time hyperpolarized (HP) magnetic resonance spectroscopy (MRS) and spectroscopic imaging (MRSI) can serve as a surrogate marker of IR induced ROS. This hypothesis was tested in a pre-clinical model of anaplastic thyroid carcinoma (ATC), an aggressive head and neck malignancy. Human ATC cell lines were utilized to test IR effects on ROS and reducing potential in vitro and [1-¹³C] pyruvate HP-MRS/MRSI imaging of ATC orthotopic xenografts was used to study in vivo effects of IR. IR increased ATC intra-cellular ROS levels resulting in a corresponding decrease in reducing equivalent levels. Exogenous manipulation of cellular ROS and reducing equivalent levels altered ATC radiosensitivity in a predictable manner. Irradiation of ATC xenografts resulted in an acute drop in reducing potential measured using HP-MRS, reflecting the shunting of reducing equivalents towards ROS neutralization. Residual tumor tissue post irradiation demonstrated heterogeneous viability. We have adapted HP-MRS/MRSI to non-invasively measure IR mediated changes in tumor reducing potential in real time. Continued development of this technology could facilitate the development of an adaptive clinical algorithm based on real-time adjustments in IR dose and dose mapping.

  7. Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor.

    PubMed

    Lee, Hyung-Chul; Jung, Chul-Woo; Kim, Hee Chan

    2017-01-01

    The discrepancy between spatial orientations of an endoscopic image and a physician's working environment can make it difficult to interpret endoscopic images. In this study, we developed and evaluated a device that corrects the endoscopic image orientation using an accelerometer and gyrosensor. The acceleration of gravity and angular velocity were retrieved from the accelerometer and gyrosensor attached to the handle of the endoscope. The rotational angle of the endoscope handle was calculated using a Kalman filter with transmission delay compensation. Technical evaluation of the orientation correction system was performed using a camera by comparing the optical rotational angle from the captured image with the rotational angle calculated from the sensor outputs. For the clinical utility test, fifteen anesthesiology residents performed a video endoscopic examination of an airway model with and without using the orientation correction system. The participants reported numbers written on papers placed at the left main, right main, and right upper bronchi of the airway model. The correctness and the total time it took participants to report the numbers were recorded. During the technical evaluation, errors in the calculated rotational angle were less than 5 degrees. In the clinical utility test, there was a significant time reduction when using the orientation correction system compared with not using the system (median, 52 vs. 76 seconds; P = .012). In this study, we developed a real-time endoscopic image orientation correction system, which significantly improved physician performance during a video endoscopic exam.
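The rotational-angle estimation described above fuses gyrosensor and accelerometer data with a Kalman filter. A minimal one-dimensional sketch is given below; the noise parameters are illustrative, and the paper's transmission-delay compensation is omitted:

```python
import math

class OrientationKalman:
    """1-D Kalman filter for a roll angle (sketch, not the paper's filter).

    The predict step integrates the gyro angular rate; the update step
    uses the absolute roll angle implied by the accelerometer's gravity
    vector. Process noise `q` and measurement noise `r` are assumed values.
    """
    def __init__(self, q=0.01, r=0.5):
        self.angle = 0.0   # estimated roll (rad)
        self.p = 1.0       # estimate variance
        self.q, self.r = q, r

    def step(self, gyro_rate, ax, ay, dt):
        # Predict: integrate angular velocity over the time step
        self.angle += gyro_rate * dt
        self.p += self.q * dt
        # Update: accelerometer gives an absolute angle from gravity
        meas = math.atan2(ax, ay)
        k = self.p / (self.p + self.r)       # Kalman gain
        self.angle += k * (meas - self.angle)
        self.p *= (1.0 - k)
        return self.angle
```

The gyro keeps the estimate responsive between accelerometer updates, while the accelerometer prevents the integrated gyro angle from drifting.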

  8. Real-Time Visualization of Tissue Ischemia

    NASA Technical Reports Server (NTRS)

    Bearman, Gregory H. (Inventor); Chrien, Thomas D. (Inventor); Eastwood, Michael L. (Inventor)

    2000-01-01

    A real-time display of tissue ischemia which comprises three CCD video cameras, each with a narrow-bandwidth filter at the correct wavelength, is discussed. The cameras simultaneously view an area of tissue suspected of having ischemic areas through beamsplitters. The output from each camera is adjusted to give the correct signal intensity for combining with the others into an image for display. If necessary, a digital signal processor (DSP) can implement algorithms for image enhancement prior to display. Current DSP engines are fast enough to give real-time display. Measurement at three wavelengths, combined into a real-time Red-Green-Blue (RGB) video display with a digital signal processing (DSP) board to implement image algorithms, provides direct visualization of ischemic areas.
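Mapping three narrow-band camera outputs to an RGB display frame can be sketched as below. The channel assignment, gains, and per-channel normalization are illustrative assumptions; the patent's wavelengths and weighting algorithm are not specified in the abstract.

```python
import numpy as np

def compose_rgb(im1, im2, im3, gains=(1.0, 1.0, 1.0)):
    """Combine three narrow-band frames into one H x W x 3 RGB frame.

    Each camera frame is gain-adjusted and independently normalized
    to [0, 1] before being stacked as the R, G, and B channels.
    """
    chans = []
    for im, g in zip((im1, im2, im3), gains):
        c = im.astype(float) * g
        rng = c.max() - c.min()
        chans.append((c - c.min()) / rng if rng else np.zeros_like(c))
    return np.stack(chans, axis=-1)
```

In a real system the per-channel gains would be set so that ischemic tissue stands out as a distinct hue against perfused tissue.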

  9. Real time magnetic resonance guided endomyocardial local delivery

    PubMed Central

    Corti, R; Badimon, J; Mizsei, G; Macaluso, F; Lee, M; Licato, P; Viles-Gonzalez, J F; Fuster, V; Sherman, W

    2005-01-01

    Objective: To investigate the feasibility of targeting various areas of left ventricle myocardium under real time magnetic resonance (MR) imaging with a customised injection catheter equipped with a miniaturised coil. Design: A needle injection catheter with a mounted resonant solenoid circuit (coil) at its tip was designed and constructed. A 1.5 T MR scanner with customised real time sequence combined with in-room scan running capabilities was used. With this system, various myocardial areas within the left ventricle were targeted and injected with a gadolinium-diethylenetriaminepentaacetic acid (DTPA) and Indian ink mixture. Results: Real time sequencing at 10 frames/s allowed clear visualisation of the moving catheter and its transit through the aorta into the ventricle, as well as targeting of all ventricle wall segments without further image enhancement techniques. All injections were visualised by real time MR imaging and verified by gross pathology. Conclusion: The tracking device allowed real time in vivo visualisation of catheters in the aorta and left ventricle as well as precise targeting of myocardial areas. The use of this real time catheter tracking may enable precise and adequate delivery of agents for tissue regeneration. PMID:15710717

  10. Real-time millimeter-wave imaging radiometer for avionic synthetic vision

    NASA Astrophysics Data System (ADS)

    Lovberg, John A.; Chou, Ri-Chee; Martin, Christopher A.

    1994-07-01

    ThermoTrex Corporation (TTC) has developed an imaging radiometer, the passive microwave camera (PMC), that uses an array of frequency-scanned antennas coupled to a multi-channel acousto-optic (Bragg cell) spectrum analyzer to form visible images of a scene through acquisition of thermal blackbody radiation in the millimeter-wave spectrum. The output of the Bragg cell is imaged by a standard video camera and passed to a computer for normalization and display at real-time frame rates. One application of this system could be its incorporation into an enhanced vision system to provide pilots with a clear view of the runway during fog and other adverse weather conditions. The unique PMC system architecture will allow compact large-aperture implementations because of its flat antenna sensor. Other potential applications include air traffic control, all-weather area surveillance, fire detection, and security. This paper describes the architecture of the TTC PMC and shows examples of images acquired with the system.

  11. Fast optically sectioned fluorescence HiLo endomicroscopy

    PubMed Central

    Lim, Daryl; Mertz, Jerome

    2012-01-01

    Abstract. We describe a nonscanning, fiber bundle endomicroscope that performs optically sectioned fluorescence imaging with fast frame rates and real-time processing. Our sectioning technique is based on HiLo imaging, wherein two widefield images are acquired under uniform and structured illumination and numerically processed to reject out-of-focus background. This work is an improvement upon an earlier demonstration of widefield optical sectioning through a flexible fiber bundle. The improved device features lateral and axial resolutions of 2.6 and 17 μm, respectively, a net frame rate of 9.5 Hz obtained by real-time image processing with a graphics processing unit (GPU) and significantly reduced motion artifacts obtained by the use of a double-shutter camera. We demonstrate the performance of our system with optically sectioned images and videos of a fluorescently labeled chorioallantoic membrane (CAM) in the developing G. gallus embryo. HiLo endomicroscopy is a candidate technique for low-cost, high-speed clinical optical biopsies. PMID:22463023
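The HiLo combination of a uniform-illumination and a structured-illumination image can be sketched roughly as follows. This simplified version (an FFT Gaussian low-pass, with the local contrast of the difference image as the Lo term) is only an approximation of the published algorithm, and the cutoff `sigma` and weight `eta` are assumed values.

```python
import numpy as np

def lowpass(img, sigma):
    """Gaussian low-pass filter applied via the FFT (sigma in pixels)."""
    f = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    g = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.fft.ifft2(f * g).real

def hilo(uniform, structured, sigma=4.0, eta=1.0):
    """Crude HiLo sectioning sketch.

    Low frequencies (Lo) are taken from the in-focus contrast of the
    structured image; high frequencies (Hi) come from the uniform image,
    which is inherently sectioned at those frequencies.
    """
    diff = structured - uniform               # demodulated difference
    lo = lowpass(np.abs(diff), sigma)         # in-focus low frequencies
    hi = uniform - lowpass(uniform, sigma)    # high-pass of uniform image
    return eta * lo + hi
```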

  12. Real time imaging and infrared background scene analysis using the Naval Postgraduate School infrared search and target designation (NPS-IRSTD) system

    NASA Astrophysics Data System (ADS)

    Bernier, Jean D.

    1991-09-01

    The imaging in real time of infrared background scenes with the Naval Postgraduate School Infrared Search and Target Designation (NPS-IRSTD) System was achieved through extensive software developments in protected mode assembly language on an Intel 80386 33 MHz computer. The new software processes the 512 by 480 pixel images directly in the extended memory area of the computer where the DT-2861 frame grabber memory buffers are mapped. Direct interfacing, through a JDR-PR10 prototype card, between the frame grabber and the host computer AT bus enables each load of the frame grabber memory buffers to be effected under software control. The protected mode assembly language program can refresh the display of a six degree pseudo-color sector in the scanner rotation within the two second period of the scanner. A study of the imaging properties of the NPS-IRSTD is presented with preliminary work on image analysis and contrast enhancement of infrared background scenes.

  13. Compact wearable dual-mode imaging system for real-time fluorescence image-guided surgery

    PubMed Central

    Zhu, Nan; Huang, Chih-Yu; Mondal, Suman; Gao, Shengkui; Huang, Chongyuan; Gruev, Viktor; Achilefu, Samuel; Liang, Rongguang

    2015-01-01

    Abstract. A wearable all-plastic imaging system for real-time fluorescence image-guided surgery is presented. The compact size of the system is especially suitable for applications in the operating room. The system consists of a dual-mode imaging system, see-through goggle, autofocusing, and auto-contrast tuning modules. The paper will discuss the system design and demonstrate the system performance. PMID:26358823

  14. Development of a stereofluoroscopy system

    NASA Technical Reports Server (NTRS)

    Rivers, D. B.

    1979-01-01

    A technique of 3-D video imaging was developed for use on manned missions for observation and control of remote manipulators. An improved medical diagnostic fluoroscope with a stereo, real-time output was also developed. An explanation of how this system works and recommendations for future work in this area are presented.

  15. Results of analysis of archive MSG data in the context of MCS prediction system development for economic decisions assistance - case studies

    NASA Astrophysics Data System (ADS)

    Szafranek, K.; Jakubiak, B.; Lech, R.; Tomczuk, M.

    2012-04-01

    PROZA (Operational decision-making based on atmospheric conditions) is a project co-financed by the European Union through the European Regional Development Fund. One of its tasks is to develop an operational forecast system intended to support different branches of the economy, such as forestry and fruit farming, by reducing the risk of economic decisions that depend on weather conditions. Within this study, a system for predicting sudden convective phenomena (storms and tornadoes) is being built. The authors' main purpose is to predict MCSs (Mesoscale Convective Systems) based on MSG (Meteosat Second Generation) real-time data. Several tests have been performed so far. Meteosat satellite images in selected spectral channels, collected over the Central European region for May and August 2010, were used to detect and track cloud systems related to MCSs. In the proposed tracking method, cloud objects are first defined using a temperature threshold, and the selected cells are then tracked using the principle of overlapping positions in consecutive images. The main benefit of using temperature thresholding to define cells is its simplicity. During the tracking process, the algorithm links each cell of the image at time t to the cell of the following image at time t+dt that corresponds to the same cloud system (the Morel-Senesi algorithm). Automated detection and elimination of some instabilities present in the tracking algorithm was developed. The poster presents analysis of exemplary MCSs in the context of near-real-time prediction system development.
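
    The threshold-and-overlap tracking described above can be sketched in a few lines. The sketch below is an illustrative toy, not the operational PROZA code: the function names and the brightness-temperature threshold value are assumptions.

    ```python
    import numpy as np
    from scipy.ndimage import label

    def detect_cells(tb, threshold=-52.0):
        """Label contiguous cold-cloud cells: pixels whose brightness
        temperature (deg C) falls below a threshold.  The threshold value
        here is illustrative, not the one used operationally."""
        mask = tb < threshold
        labels, n = label(mask)
        return labels, n

    def link_cells(labels_t, labels_t1):
        """Link each cell at time t to the cell at time t+dt with which
        it shares the most pixels (overlap-based tracking, in the spirit
        of the Morel-Senesi approach)."""
        links = {}
        for cell in range(1, int(labels_t.max()) + 1):
            overlap = labels_t1[labels_t == cell]
            overlap = overlap[overlap > 0]       # keep labeled pixels only
            if overlap.size:
                links[cell] = int(np.bincount(overlap).argmax())
        return links
    ```

    Linking by maximum pixel overlap between consecutive frames is what makes the temperature-threshold definition of cells attractive: both steps are cheap enough for near-real-time use.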

  16. Adaptive Wiener filter super-resolution of color filter array images.

    PubMed

    Karch, Barry K; Hardie, Russell C

    2013-08-12

    Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation, or demosaicing, to estimate missing color information and provide full-color images. However, demosaicing does not specifically address the fundamental undersampling and aliasing inherent in typical camera designs. Fast super-resolution (SR) based on non-uniform interpolation is an attractive approach to reduce or eliminate aliasing, and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and compares its performance to other SR techniques on both simulated and real data.
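
    As background for the AWF approach, the classical frequency-domain Wiener filter below illustrates the underlying minimum-MSE restoration principle with a noise-to-signal regularizer. It is only a conceptual sketch: the AWF SR algorithm itself works in the spatial domain with adaptive weights on non-uniformly sampled data, which this fragment does not reproduce.

    ```python
    import numpy as np

    def wiener_deconvolve(blurred, psf, nsr=0.01):
        """Classical frequency-domain Wiener filter (illustrative only).

        blurred : degraded image
        psf     : point-spread function, centered in an array of the
                  same shape as the image
        nsr     : assumed noise-to-signal power ratio (regularizer)
        """
        H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
        G = np.fft.fft2(blurred)
        # Wiener transfer function: conj(H) / (|H|^2 + NSR)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)
        return np.real(np.fft.ifft2(W * G))
    ```

    The NSR term keeps the inverse filter from amplifying frequencies where the PSF response is weak; the AWF generalizes this trade-off spatially and per color channel.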

  17. Object detection and imaging with acoustic time reversal mirrors

    NASA Astrophysics Data System (ADS)

    Fink, Mathias

    1993-11-01

    Focusing an acoustic wave on an object of unknown shape through an inhomogeneous medium of arbitrary geometry is a challenge in underground detection. Optimal detection and imaging of objects requires the development of such focusing techniques. The use of a time reversal mirror (TRM) represents an original solution to this problem. It realizes, in real time, a focusing process matched to the object shape and to the geometries of the acoustic interfaces and of the mirror. It is a self-adaptive technique that compensates for any geometrical distortions of the mirror structure as well as for diffraction and refraction effects through the interfaces. Two real-time 64- and 128-channel prototypes have been built in our laboratory, and TRM experiments demonstrating their performance through inhomogeneous solid and liquid media are presented. Applications to medical therapy (kidney stone detection and destruction) and to nondestructive testing of metallurgical samples of different geometries are described. Extension of this study to underground detection and imaging is discussed.
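
    The time-reversal principle — record, reverse, re-emit — can be illustrated with an idealized 1-D delay model. The sketch below is a conceptual toy, not the 64/128-channel hardware: each signal that acquires a propagation delay on the forward trip acquires the same delay on the return trip, so all time-reversed copies re-align coherently at the source, whatever the individual delays are.

    ```python
    import numpy as np

    def delay(sig, d):
        """Ideal lossless propagation: shift a signal by d samples."""
        out = np.zeros_like(sig)
        if d < len(sig):
            out[d:] = sig[:len(sig) - d]
        return out

    def time_reversal_focus(pulse, delays, n=256):
        """Toy time-reversal mirror.

        1. A source emits `pulse`; mirror element i records it after its
           own propagation delay delays[i].
        2. Each element time-reverses its recording and re-emits it.
        3. On the return trip each signal accumulates the same delay
           again, so all contributions re-align at the source.
        """
        src = np.zeros(n)
        src[:len(pulse)] = pulse
        received = [delay(src, d) for d in delays]      # forward trip
        reemitted = [r[::-1] for r in received]          # time reversal
        refocused = sum(delay(r, d) for r, d in zip(reemitted, delays))
        return refocused
    ```

    Because the channel delays cancel themselves, the refocused field is simply the time-reversed pulse scaled by the number of channels — the delay-compensation property that makes the TRM self-adaptive to the medium.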

  18. Non-interferometric quantitative phase imaging of yeast cells

    NASA Astrophysics Data System (ADS)

    Poola, Praveen K.; Pandiyan, Vimal Prabhu; John, Renu

    2015-12-01

    Real-time imaging of live cells is quite difficult without the addition of external contrast agents. Various methods for quantitative phase imaging of living cells have been proposed, such as digital holographic microscopy and diffraction phase microscopy. In this paper, we report theoretical and experimental results of quantitative phase imaging of live yeast cells with nanometric precision using the transport of intensity equation (TIE). We demonstrate nanometric depth sensitivity in imaging live yeast cells using this technique. Being noninterferometric, the technique does not need any coherent light source, and images can be captured through a regular bright-field microscope. This real-time imaging technique delivers the depth, or 3-D volume, information of cells and is highly promising for real-time digital pathology applications, screening of pathogens, and staging of diseases such as malaria, as it does not need any preprocessing of samples.
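
    Under a uniform-intensity approximation, the TIE, ∇·(I∇φ) = -(2π/λ) ∂I/∂z, reduces to a Poisson equation for the phase, which can be solved with FFTs from two defocused intensity images. The sketch below is a minimal illustration under assumed periodic boundary conditions and uniform intensity; it is not the authors' processing chain, and the function name and interface are assumptions.

    ```python
    import numpy as np

    def tie_phase(i_minus, i_plus, dz, wavelength, pixel):
        """Minimal TIE phase retrieval (uniform-intensity approximation).

        Solves  laplacian(phi) = -(k / I0) * dI/dz  with an FFT-based
        Poisson solver; i_minus and i_plus are intensity images at
        defocus -dz and +dz.
        """
        k = 2 * np.pi / wavelength
        didz = (i_plus - i_minus) / (2 * dz)       # axial derivative
        i0 = 0.5 * (i_plus + i_minus).mean()       # mean intensity
        rhs = -k * didz / i0
        ny, nx = rhs.shape
        fx = np.fft.fftfreq(nx, d=pixel) * 2 * np.pi
        fy = np.fft.fftfreq(ny, d=pixel) * 2 * np.pi
        kx, ky = np.meshgrid(fx, fy)
        k2 = kx**2 + ky**2
        k2[0, 0] = 1.0                              # avoid divide-by-zero at DC
        phi_hat = np.fft.fft2(rhs) / (-k2)
        phi_hat[0, 0] = 0.0                         # phase defined up to a constant
        return np.real(np.fft.ifft2(phi_hat))
    ```

    Only two (or three) bright-field images at different focus positions are needed, which is why the method requires no coherent source or interferometer.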

  19. Single-channel stereoscopic ophthalmology microscope based on TRD

    NASA Astrophysics Data System (ADS)

    Radfar, Edalat; Park, Jihoon; Lee, Sangyeob; Ha, Myungjin; Yu, Sungkon; Jang, Seulki; Jung, Byungjo

    2016-03-01

    A stereoscopic imaging modality was developed for application to ophthalmology surgical microscopes. A previous study introduced a single-channel stereoscopic video imaging modality based on a transparent rotating deflector (SSVIM-TRD), in which two different view angles (image disparity) are generated by imaging through a transparent rotating deflector (TRD) mounted on a stepping motor and placed in a lens system. In this case, the image disparity is a function of the refractive index and the rotation angle of the TRD. The real-time single-channel stereoscopic ophthalmology microscope (SSOM) based on the TRD improves on the earlier modality in real-time control and programming, imaging speed, and illumination method. Image quality assessments were performed to investigate image quality and stability during TRD operation; the results showed little significant difference in image quality in terms of the stability of the structural similarity (SSIM) index. A subjective analysis with 15 blinded observers showed significant improvement in depth perception capability. Together with these evaluation results, preliminary rabbit eye imaging showed that the SSOM could be used as an ophthalmic operating microscope, overcoming some of the limitations of conventional ones.
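
    The disparity produced by imaging through a tilted transparent plate follows the standard lateral-displacement formula for a plane-parallel plate, d = t sinθ [1 − cosθ / √(n² − sin²θ)], which depends on exactly the two quantities the abstract names: the refractive index n and the tilt (rotation) angle θ. The helper below evaluates that textbook formula as an illustration of the TRD geometry; it is an assumption about the device, not the authors' calibration.

    ```python
    import numpy as np

    def plate_shift(t_mm, theta_deg, n):
        """Lateral displacement (mm) of a ray through a tilted
        plane-parallel plate of thickness t_mm and refractive index n,
        tilted by theta_deg from normal incidence.  This is the standard
        textbook formula, used here to illustrate how a rotating plate
        (TRD) converts rotation angle into view-angle disparity."""
        th = np.radians(theta_deg)
        return t_mm * np.sin(th) * (1 - np.cos(th) / np.sqrt(n**2 - np.sin(th)**2))
    ```

    As the TRD rotates, θ sweeps through its range, so the image shift alternates between the two stereo viewpoints at the rotation rate — which is why imaging speed and rotation control matter for real-time operation.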

  20. MR imaging guidance for minimally invasive procedures

    NASA Astrophysics Data System (ADS)

    Wong, Terence Z.; Kettenbach, Joachim; Silverman, Stuart G.; Schwartz, Richard B.; Morrison, Paul R.; Kacher, Daniel F.; Jolesz, Ferenc A.

    1998-04-01

    Image guidance is one of the major challenges common to all minimally invasive procedures, including biopsy, thermal ablation, endoscopy, and laparoscopy. It is essential for (1) identifying the target lesion, (2) planning the minimally invasive approach, and (3) monitoring the therapy as it progresses. MRI is an ideal imaging modality for this purpose, providing high soft-tissue contrast and multiplanar imaging capability with no ionizing radiation. An interventional/surgical MRI suite has been developed at Brigham and Women's Hospital which provides multiplanar imaging guidance during surgery, biopsy, and thermal ablation procedures. The 0.5T MRI system (General Electric Signa SP) features open vertical access, allowing intraoperative imaging to be performed. An integrated navigational system permits near-real-time control of imaging planes and provides interactive guidance for positioning various diagnostic and therapeutic probes. MR imaging can also be used to monitor cryotherapy as well as high-temperature thermal ablation procedures using RF, laser, microwave, or focused ultrasound. Design features of the interventional MRI system are discussed, and techniques are described for interactive image acquisition and tracking of interventional instruments. Applications of interactive and near-real-time imaging are presented, along with examples of specific procedures performed using MRI guidance.
