Flightspeed Integral Image Analysis Toolkit
NASA Technical Reports Server (NTRS)
Thompson, David R.
2009-01-01
The Flightspeed Integral Image Analysis Toolkit (FIIAT) is a C library that provides image analysis functions in a single, portable package. It provides basic low-level filtering, texture analysis, and subwindow descriptors for applications dealing with image interpretation and object recognition. Designed with spaceflight in mind, it addresses ease of integration (minimal external dependencies); fast, real-time operation using integer arithmetic where possible (useful for platforms lacking a dedicated floating-point processor); implementation entirely in C (easily modified); mostly static memory allocation; and 8-bit image data. The basic goal of the FIIAT library is to compute meaningful numerical descriptors for images or rectangular image regions. These n-vectors can then be used directly for novelty detection or pattern recognition, or as a feature space for higher-level pattern recognition tasks. The library provides routines for leveraging training data to derive descriptors that are most useful for a specific data set. Its runtime algorithms exploit a structure known as the "integral image." This is a caching method that permits fast summation of values within rectangular regions of an image, and it underpins a wide range of fast image-processing functions. The toolkit has applicability to a wide range of autonomous image analysis tasks in the space-flight domain, including novelty detection, object and scene classification, target detection for autonomous instrument placement, and science analysis of geomorphology. It makes real-time texture and pattern recognition possible for platforms with severe computational constraints. The software provides an order-of-magnitude speed increase over alternative software libraries currently in use by the research community. Commercially, FIIAT can support intelligent video cameras used in surveillance, and it is also useful for object recognition by robots or other autonomous vehicles.
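As a rough illustration of the integral-image idea FIIAT exploits (a sketch in Python/NumPy rather than the library's C; function and variable names are illustrative, not from the toolkit), the cumulative table makes any rectangular sum a four-lookup operation:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    # A leading row/column of zeros removes edge cases in box_sum.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)
    return ii

def box_sum(ii, top, left, height, width):
    """Sum of pixel values inside a rectangle, in O(1) via four table lookups."""
    b, r = top + height, left + width
    return ii[b, r] - ii[top, r] - ii[b, left] + ii[top, left]

# Example: mean intensity of a 16x16 subwindow of an 8-bit image.
img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
ii = integral_image(img)
mean_val = box_sum(ii, 10, 20, 16, 16) / (16 * 16)
```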
NASA Tech Briefs, November 2009
NASA Technical Reports Server (NTRS)
2009-01-01
Topics covered include: Cryogenic Chamber for Servo-Hydraulic Materials Testing; Apparatus Measures Thermal Conductance Through a Thin Sample from Cryogenic to Room Temperature; Rover Attitude and Pointing System Simulation Testbed; Desktop Application Program to Simulate Cargo-Air-Drop Tests; Multimodal Friction Ignition Tester; Small-Bolt Torque-Tension Tester; Integrated Spacesuit Audio System Enhances Speech Quality and Reduces Noise; Hardware Implementation of a Bilateral Subtraction Filter; Simple Optoelectronic Feedback in Microwave Oscillators; Small X-Band Oscillator Antennas; Free-Space Optical Interconnect Employing VCSEL Diodes; Discrete Fourier Transform Analysis in a Complex Vector Space; Miniature Scroll Pumps Fabricated by LIGA; Self-Assembling, Flexible, Pre-Ceramic Composite Preforms; Flight-speed Integral Image Analysis Toolkit; Work Coordination Engine; Multi-Mission Automated Task Invocation Subsystem; Autonomously Calibrating a Quadrupole Mass Spectrometer; Determining Spacecraft Reaction Wheel Friction Parameters; Composite Silica Aerogels Opacified with Titania; Multiplexed Colorimetric Solid-Phase Extraction; Detecting Airborne Mercury by Use of Polymer/Carbon Films; Lattice-Matched Semiconductor Layers on Single Crystalline Sapphire Substrate; Pressure-Energized Seal Rings to Better Withstand Flows; Rollerjaw Rock Crusher; Microwave Sterilization and Depyrogenation System; Quantifying Therapeutic and Diagnostic Efficacy in 2D Microvascular Images; NiF2/NaF:CaF2/Ca Solid-State High-Temperature Battery Cells; Critical Coupling Between Optical Fibers and WGM Resonators; Microwave Temperature Profiler Mounted in a Standard Airborne Research Canister; Alternative Determination of Density of the Titan Atmosphere; Solar Rejection Filter for Large Telescopes; Automated CFD for Generation of Airfoil Performance Tables; Progressive Classification Using Support Vector Machines; Active Learning with Irrelevant Examples; A Data Matrix Method for Improving the Quantification of Element Percentages of SEM/EDX Analysis; Deployable Shroud for the International X-Ray Observatory; Improved Model of a Mercury Ring Damper; Optoelectronic pH Meter: Further Details; X-38 Advanced Sublimator; and Solar Simulator Represents the Mars Surface Solar Environment.
Misaligned Image Integration With Local Linear Model.
Baba, Tatsuya; Matsuoka, Ryo; Shirai, Keiichiro; Okuda, Masahiro
2016-05-01
We present a new image integration technique for a flash and long-exposure image pair that captures a dark scene without incurring blur or noise artifacts. Most existing methods require well-aligned images for the integration, which is often a burdensome restriction in practical use. We address this issue by locally transferring the colors of the flash image using a small fraction of the corresponding pixels in the long-exposure image. We formulate the image integration as a convex optimization problem with a local linear model. The proposed method makes it possible to integrate the color of the long-exposure image with the detail of the flash image without harming its contrast, and it does not require perfect alignment between the images by virtue of the new integration principle. We show that our method outperforms the state of the art in image integration and reference-based color transfer on challenging misaligned data sets.
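The "local linear model" above can be sketched, in the spirit of the guided-filter formulation (the paper's exact objective may differ; F, L, and the windows \omega_k are notation introduced here), as requiring the output colors J to be an affine function of the flash image F within each small window:

$$ J_p \approx a_k F_p + b_k \;\; (p \in \omega_k), \qquad \min_{\{a_k, b_k\},\, J} \; \sum_k \sum_{p \in \omega_k} \Bigl[ \bigl(L_p - a_k F_p - b_k\bigr)^2 + \epsilon\, a_k^2 \Bigr], $$

where L holds the (possibly sparse) long-exposure color samples and \epsilon regularizes the slope.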
Sun, Yajuan; Yu, Hongjuan; Ma, Jingquan; Lu, Peiou
2016-01-01
The aim of our study was to evaluate the role of 18F-FDG PET/CT integrated imaging in differentiating malignant from benign pleural effusion. A total of 176 patients with pleural effusion who underwent 18F-FDG PET/CT examination to differentiate malignancy from benignancy were retrospectively reviewed. CT images, 18F-FDG PET images and 18F-FDG PET/CT integrated images were visually analyzed. Suspected malignant effusion was characterized by the presence of nodular or irregular pleural thickening on CT imaging, whereas on PET imaging pleural 18F-FDG uptake higher than mediastinal activity was interpreted as malignant effusion. Images from 18F-FDG PET/CT integrated imaging were interpreted by combining the morphologic features of the pleura on CT imaging with the degree and form of pleural 18F-FDG uptake on PET imaging. One hundred and eight patients had malignant effusion, including 86 with pleural metastasis and 22 with pleural mesothelioma, whereas 68 patients had benign effusion. The sensitivities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting malignant effusion were 75.0%, 91.7% and 93.5%, respectively, and 69.8%, 91.9% and 93.0%, respectively, in distinguishing metastatic effusion. The sensitivity of 18F-FDG PET/CT integrated imaging in detecting malignant effusion was higher than that of CT imaging (p = 0.000). For metastatic effusion, 18F-FDG PET imaging had higher sensitivity (p = 0.000) and better diagnostic consistency with 18F-FDG PET/CT integrated imaging compared with CT imaging (Kappa = 0.917 and Kappa = 0.295, respectively). The specificities of CT imaging, 18F-FDG PET imaging and 18F-FDG PET/CT integrated imaging in detecting benign effusion were 94.1%, 63.2% and 92.6%, respectively. The specificities of CT imaging and 18F-FDG PET/CT integrated imaging were higher than that of 18F-FDG PET imaging (p = 0.000 and p = 0.000, respectively), and CT imaging had better diagnostic consistency with 18F-FDG PET/CT integrated imaging than 18F-FDG PET imaging (Kappa = 0.881 and Kappa = 0.240, respectively). 18F-FDG PET/CT integrated imaging is a more reliable modality for distinguishing malignant from benign pleural effusion than 18F-FDG PET imaging or CT imaging alone. For image interpretation of 18F-FDG PET/CT integrated imaging, the PET and CT portions play the major diagnostic role in identifying metastatic effusion and benign effusion, respectively.
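For reference, the sensitivity and specificity values reported above follow the standard definitions (TP, FN, TN, FP denote true positives, false negatives, true negatives, and false positives):

$$ \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP}. $$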
Multi-viewer tracking integral imaging system and its viewing zone analysis.
Park, Gilbae; Jung, Jae-Hyun; Hong, Keehoon; Kim, Yunhee; Kim, Young-Hoon; Min, Sung-Wook; Lee, Byoungho
2009-09-28
We propose a multi-viewer tracking integral imaging system for viewing angle and viewing zone improvement. In the tracking integral imaging system, the pickup angles of each elemental lens in the lens array are determined by the positions of the viewers, which means the elemental images can be generated for each viewer to provide a wider viewing angle and a larger viewing zone. Our tracking integral imaging system is implemented with an infrared camera and infrared light-emitting diodes, which can track the viewers' exact positions robustly. For multiple viewers to watch integrated three-dimensional images in the tracking integral imaging system, the relationship between the multiple viewers' positions and the elemental images must be formulated. We analyzed this relationship and the conditions for multiple viewers, and verified them by implementing a two-viewer tracking integral imaging system.
Twin imaging phenomenon of integral imaging.
Hu, Juanmei; Lou, Yimin; Wu, Fengmin; Chen, Aixi
2018-05-14
The imaging principles and phenomena of the integral imaging technique have been studied in detail using geometrical optics, wave optics, or light field theory. However, most of the conclusions are suitable only for integral imaging systems using diffused illumination. In this work, a twin imaging phenomenon and its mechanism have been observed in a non-diffused-illumination reflective integral imaging system. Interactive twin images, comprising a real and a virtual 3D image of one object, can be activated in the system. The imaging phenomenon is similar to the conjugate imaging effect of a hologram, but it is based on refraction and reflection instead of diffraction. The imaging characteristics and mechanisms, which differ from those of traditional integral imaging, are deduced analytically. Thin-film integral imaging systems 80 μm thick have also been fabricated to verify the imaging phenomenon. Vivid, lighting-interactive twin 3D images have been realized using a light-emitting diode (LED) light source. When the LED moves, the twin 3D images move synchronously. This interesting phenomenon shows good application prospects in interactive 3D display, augmented reality, and security authentication.
Flight Speeds among Bird Species: Allometric and Phylogenetic Effects
Alerstam, Thomas; Rosén, Mikael; Bäckman, Johan; Ericson, Per G. P; Hellgren, Olof
2007-01-01
Flight speed is expected to increase with mass and wing loading among flying animals and aircraft for fundamental aerodynamic reasons. Assuming geometrical and dynamical similarity, cruising flight speed is predicted to vary as (body mass)^(1/6) and (wing loading)^(1/2) among bird species. To test these scaling rules and the general importance of mass and wing loading for bird flight speeds, we used tracking radar to measure flapping flight speeds of individuals or flocks of migrating birds visually identified to species, as well as their altitude and winds at the altitudes where the birds were flying. Equivalent airspeeds (airspeeds corrected to sea-level air density, U_e) of 138 species, ranging 0.01–10 kg in mass, were analysed in relation to biometry and phylogeny. Scaling exponents in relation to mass and wing loading were significantly smaller than predicted (about 0.12 and 0.32, respectively, with similar results for analyses based on species and independent phylogenetic contrasts). These low scaling exponents may be the result of evolutionary restrictions on bird flight-speed range, counteracting too-slow flight speeds among species with low wing loading and too-fast speeds among species with high wing loading. This compression of speed range is partly attained through geometric differences, with aspect ratio showing a positive relationship with body mass and wing loading, but additional factors are required to fully explain the small scaling exponent of U_e in relation to wing loading. Furthermore, mass and wing loading accounted for only a limited proportion of the variation in U_e. Phylogeny was a powerful factor, in combination with wing loading, to account for the variation in U_e. These results demonstrate that functional flight adaptations and constraints associated with different evolutionary lineages have an important influence on cruising flapping flight speed that goes beyond the general aerodynamic scaling effects of mass and wing loading. PMID:17645390
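In the notation of the abstract (m body mass, W/S wing loading, U_e equivalent airspeed), the geometric-similarity predictions and the fitted exponents can be summarized as

$$ U_e \propto m^{1/6} \ \text{and}\ U_e \propto (W/S)^{1/2} \quad \text{(predicted)}, \qquad U_e \propto m^{\,\sim 0.12} \ \text{and}\ U_e \propto (W/S)^{\,\sim 0.32} \quad \text{(observed)}. $$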
Yi, Faliu; Jeoung, Yousun; Moon, Inkyu
2017-05-20
In recent years, many studies have focused on authentication of two-dimensional (2D) images using double random phase encryption techniques. However, there has been little research on three-dimensional (3D) imaging systems, such as integral imaging, for 3D image authentication. We propose a 3D image authentication scheme based on a double random phase integral imaging method. All of the 2D elemental images captured through integral imaging are encrypted with a double random phase encoding algorithm and only partial phase information is reserved. All the amplitude and other miscellaneous phase information in the encrypted elemental images is discarded. Nevertheless, we demonstrate that 3D images from integral imaging can be authenticated at different depths using a nonlinear correlation method. The proposed 3D image authentication algorithm can provide enhanced information security because the decrypted 2D elemental images from the sparse phase cannot be easily observed by the naked eye. Additionally, using sparse phase images without any amplitude information can greatly reduce data storage costs and aid in image compression and data transmission.
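The double random phase encoding applied to each elemental image is commonly written as (a standard DRPE formulation; the paper may use a variant, e.g., in the Fresnel or fractional Fourier domain):

$$ \psi(x, y) = \mathcal{F}^{-1}\!\left\{ \mathcal{F}\!\left\{ f(x, y)\, e^{i 2\pi n_1(x, y)} \right\} e^{i 2\pi n_2(u, v)} \right\}, $$

where f is an elemental image and n_1, n_2 are statistically independent random arrays uniformly distributed on [0, 1]; the proposed scheme keeps only part of the phase of \psi and discards its amplitude.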
Display of travelling 3D scenes from single integral-imaging capture
NASA Astrophysics Data System (ADS)
Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro
2016-06-01
Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. Applying this method improves the quality of 3D display images and videos.
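One common way to carry out the integral-to-plenoptic (multi-view) transformation mentioned above is to gather, from every elemental image, the pixel at the same intra-lenslet offset; a minimal NumPy sketch (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def subaperture_views(integral_img, k):
    """Rearrange an integral image made of k x k-pixel elemental images into
    k*k perspective (sub-aperture) views, one per intra-lenslet pixel offset."""
    h, w = integral_img.shape[:2]
    ny, nx = h // k, w // k                  # number of elemental images per axis
    ei = integral_img[:ny * k, :nx * k].reshape(ny, k, nx, k, -1)
    # views[u, v] collects pixel (u, v) of every elemental image.
    return ei.transpose(1, 3, 0, 2, 4)       # shape: (k, k, ny, nx, channels)

# Example: the central view of a 3-channel integral image with 10x10-pixel lenslets.
views = subaperture_views(np.zeros((500, 700, 3), dtype=np.uint8), k=10)
central = views[5, 5]
```

Refocusing and FOV selection can then be performed by shifting and summing (or cropping) these views, which is one way to realize the "focused plane" and "field of view" choices described in the abstract.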
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the psychological effects associated with stereoscopic viewing. To create compelling three-dimensional television programs, a virtual studio that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its use to further improve precision are proposed and verified.
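A toy version of the computer-generation step described above, with each micro-lens treated as a pinhole projecting scene points onto its elemental image, is sketched below (a simplified illustration: no occlusion handling and no pseudoscopic/orthoscopic conversion; all names and parameters are hypothetical):

```python
import numpy as np

def render_elemental_images(points, colors, n_lens=20, lens_pitch=1.0,
                            gap=3.0, px_per_lens=32):
    """Each lenslet is a pinhole at z = 0; the elemental-image plane lies at
    z = -gap behind the array. `points` is (N, 3) with z > 0 in front of the
    array, `colors` is (N, 3) uint8."""
    ei = np.zeros((n_lens * px_per_lens, n_lens * px_per_lens, 3), dtype=np.uint8)
    px = lens_pitch / px_per_lens                       # pixel pitch on the sensor
    centers = (np.arange(n_lens) - (n_lens - 1) / 2.0) * lens_pitch
    for j, cy in enumerate(centers):                    # lens row
        for i, cx in enumerate(centers):                # lens column
            # Pinhole projection of every point through lens (i, j).
            xs = cx + gap * (cx - points[:, 0]) / points[:, 2]
            ys = cy + gap * (cy - points[:, 1]) / points[:, 2]
            u = np.round((xs - cx) / px).astype(int) + px_per_lens // 2
            v = np.round((ys - cy) / px).astype(int) + px_per_lens // 2
            keep = (u >= 0) & (u < px_per_lens) & (v >= 0) & (v < px_per_lens)
            ei[j * px_per_lens + v[keep], i * px_per_lens + u[keep]] = colors[keep]
    return ei

# Example: two colored points a few tens of lens pitches in front of the array.
pts = np.array([[0.0, 0.0, 30.0], [2.0, -1.0, 40.0]])
cols = np.array([[255, 0, 0], [0, 255, 0]], dtype=np.uint8)
elemental = render_elemental_images(pts, cols)
```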
Time-of-flight depth image enhancement using variable integration time
NASA Astrophysics Data System (ADS)
Kim, Sun Kwon; Choi, Ouk; Kang, Byongmin; Kim, James Dokyoon; Kim, Chang-Yeong
2013-03-01
Time-of-Flight (ToF) cameras are used for a variety of applications because they deliver depth information at a high frame rate. These cameras, however, suffer from challenging problems such as noise and motion artifacts. To increase the signal-to-noise ratio (SNR), the camera should calculate distance from a large amount of infrared light, which must be integrated over a long time. On the other hand, the integration time should be short enough to suppress motion artifacts. We propose a ToF depth imaging method that combines the advantages of short and long integration times by exploiting an image fusion scheme proposed for color imaging. To calibrate depth differences due to the change of integration time, a depth transfer function is estimated by analyzing the joint histogram of depths in the two images of different integration times. The depth images are then transformed into wavelet domains and fused into a depth image with suppressed noise and low motion artifacts. To evaluate the proposed method, we captured the moving bar of a metronome with different integration times. The experiments show that the proposed method effectively removes the motion artifacts while preserving an SNR comparable to that of depth images acquired with the long integration time.
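A simple way to realize the depth transfer function described above is to build the joint histogram of the two depth maps and, for each short-integration depth bin, take the count-weighted mean of the long-integration depths (an illustrative sketch under stated assumptions; the paper may fit a smoother parametric function, and all names here are hypothetical):

```python
import numpy as np

def depth_transfer_lut(d_short, d_long, n_bins=256, d_range=(0.0, 5.0)):
    """Lookup table mapping short-integration depths onto the
    long-integration depth scale, from the joint histogram."""
    hist, xedges, yedges = np.histogram2d(d_short.ravel(), d_long.ravel(),
                                          bins=n_bins, range=[d_range, d_range])
    xcenters = 0.5 * (xedges[:-1] + xedges[1:])
    ycenters = 0.5 * (yedges[:-1] + yedges[1:])
    counts = hist.sum(axis=1)
    # Count-weighted mean of long-integration depths per bin; empty bins fall
    # back to the identity mapping.
    lut = np.where(counts > 0, hist @ ycenters / np.maximum(counts, 1), xcenters)
    return xedges, lut

def apply_lut(d_short, xedges, lut):
    """Map a short-integration depth image onto the long-integration scale."""
    idx = np.clip(np.digitize(d_short, xedges) - 1, 0, len(lut) - 1)
    return lut[idx]
```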
Integrated Spectral Low Noise Image Sensor with Nanowire Polarization Filters for Low Contrast Imaging
Gruev, Viktor
2015-11-05
[AFRL final report AFRL-AFOSR-VA-TR-2015-0359, covering the period 02/15/2011 to 08/15/2015. Only fragments of the abstract are recoverable; they describe investigating alternative spectral imaging architectures and developing nanowire polarization filters for low-contrast imaging.]
Dynamic integral imaging technology for 3D applications (Conference Presentation)
NASA Astrophysics Data System (ADS)
Huang, Yi-Pai; Javidi, Bahram; Martínez-Corral, Manuel; Shieh, Han-Ping D.; Jen, Tai-Hsiang; Hsieh, Po-Yuan; Hassanfiroozi, Amir
2017-05-01
Depth and resolution are always a trade-off in integral imaging technology. With dynamically adjustable devices, the two factors can be fully compensated through time-multiplexed addressing. These dynamic devices can be mechanically or electrically driven. In this presentation, we mainly focus on various liquid crystal devices that can change the focal length, scan and shift the image position, or switch between 2D and 3D modes. Using liquid crystal devices, dynamic integral imaging has been successfully applied to 3D display, capture, and bio-imaging applications.
Floating aerial 3D display based on the freeform-mirror and the improved integral imaging system
NASA Astrophysics Data System (ADS)
Yu, Xunbo; Sang, Xinzhu; Gao, Xin; Yang, Shenwu; Liu, Boyang; Chen, Duo; Yan, Binbin; Yu, Chongxiu
2018-09-01
A floating aerial three-dimensional (3D) display based on a freeform mirror and an improved integral imaging system is demonstrated. In traditional integral imaging (II), the distortion originating from lens aberration warps the elemental images and severely degrades the visual effect. To correct the distortion of the observed pixels and to improve the image quality, a directional diffuser screen (DDS) is introduced. However, the improved integral imaging system can hardly present realistic images with large off-screen depth, which limits the floating aerial visual experience. To display the 3D image in free space, an off-axis reflection system with a freeform mirror is designed. By combining the improved II and the designed freeform optical element, a floating aerial 3D image is presented.
Integral imaging with multiple image planes using a uniaxial crystal plate.
Park, Jae-Hyeung; Jung, Sungyong; Choi, Heejin; Lee, Byoungho
2003-08-11
Integral imaging has recently been attracting much attention for its several advantages, such as full parallax, continuous viewpoints, and real-time full-color operation. However, the thickness of the displayed three-dimensional image is limited to a relatively small value due to the degradation of image resolution. In this paper, we propose a method that provides observers with enhanced depth perception without severe resolution degradation by using the birefringence of a uniaxial crystal plate. The proposed integral imaging system can display images integrated around three central depth planes by dynamically altering the polarization and controlling both the elemental images and a dynamic slit array mask accordingly. We explain the principle of the proposed method and verify it experimentally.
Extended depth of field integral imaging using multi-focus fusion
NASA Astrophysics Data System (ADS)
Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua
2018-03-01
In this paper, we propose a new method for depth-of-field extension in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images while sweeping the focal plane across the scene. Simply applying an image fusion method to the elemental images, which hold rich parallax information, does not work effectively because accurate registration is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion of the multi-focus elemental images.
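The block-based fusion step can be sketched as follows: each block is taken from whichever generalized (aligned) elemental image maximizes a focus measure. Intensity variance is used here as a simple stand-in measure; the paper's exact measure may differ, and all names are illustrative:

```python
import numpy as np

def fuse_multifocus(stack, block=16):
    """Block-based all-in-focus fusion of aligned multi-focus images."""
    stack = np.asarray(stack, dtype=np.float64)     # shape: (n_images, H, W)
    n, h, w = stack.shape
    fused = np.zeros((h, w))
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = stack[:, y:y + block, x:x + block]
            # Focus measure per image: intensity variance within the block.
            focus = tiles.reshape(n, -1).var(axis=1)
            fused[y:y + block, x:x + block] = tiles[int(np.argmax(focus))]
    return fused

# Example: fuse three multi-focus captures of the same (aligned) elemental image.
imgs = np.random.rand(3, 128, 128)
all_in_focus = fuse_multifocus(imgs)
```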
Using image mapping towards biomedical and biological data sharing
2013-01-01
Image-based data integration in eHealth and life sciences is typically concerned with the method used for anatomical space mapping, needed to retrieve, compare and analyse large volumes of biomedical data. In mapping one image onto another image, a mechanism is used to match and find the corresponding spatial regions which have the same meaning between the source and the matching image. Image-based data integration is useful for integrating data of various information structures. Here we discuss a broad range of issues related to data integration of various information structures, review exemplary work on image representation and mapping, and discuss the challenges that these techniques may bring. PMID:24059352
Baur, Heidi; Gatterer, Hannes; Hotter, Barbara; Kopp, Martin
2017-06-01
[Purpose] The aim of this study was to examine the influence of Structural Integration and Fascial Fitness, a new form of physical exercise, on body image and the perception of back pain. [Subjects and Methods] In total, 33 participants with non-specific back pain were split into two groups and performed three sessions of Structural Integration or Fascial Fitness within a 3-week period. Before and after the interventions, perception of back pain and body image were evaluated using standardized questionnaires. [Results] Structural Integration significantly decreased non-specified back pain and improved both "negative body image" and "vital body dynamics". Fascial Fitness led to a significant improvement on the "negative body image" subscale. Benefits of Structural Integration did not significantly vary in magnitude from those for fascial fitness. [Conclusion] Both Structural Integration and Fascial Fitness can lead to a more positive body image after only three sessions. Moreover, the therapeutic technique of Structural Integration can reduce back pain.
Implementation of Enterprise Imaging Strategy at a Chinese Tertiary Hospital.
Li, Shanshan; Liu, Yao; Yuan, Yifang; Li, Jia; Wei, Lan; Wang, Yuelong; Fei, Xiaolu
2018-01-04
Medical images have become increasingly important in clinical practice and medical research, and the need to manage images at the hospital level has become urgent in China. To unify patient identification in examinations from different medical specialties, increase convenient access to medical images under authentication, and make medical images suitable for further artificial intelligence investigations, we implemented an enterprise imaging strategy by adopting an image integration platform as the main tool at Xuanwu Hospital. Workflow re-engineering and business system transformation was also performed to ensure the quality and content of the imaging data. More than 54 million medical images and approximately 1 million medical reports were integrated, and uniform patient identification, images, and report integration were made available to the medical staff and were accessible via a mobile application, which were achieved by implementing the enterprise imaging strategy. However, to integrate all medical images of different specialties at a hospital and ensure that the images and reports are qualified for data mining, some further policy and management measures are still needed.
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
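The recursive equations referred to above are the standard pair used to build the integral image in a single pass (boundary values taken as zero):

$$ s(x, y) = s(x, y - 1) + i(x, y), \qquad ii(x, y) = ii(x - 1, y) + s(x, y), $$

where i is the input image, s a running sum along one axis, and ii the integral image; the proposed hardware algorithms decompose these recurrences so that up to four ii values can be computed in a row-parallel way.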
IHE cross-enterprise document sharing for imaging: design challenges
NASA Astrophysics Data System (ADS)
Noumeir, Rita
2006-03-01
Integrating the Healthcare Enterprise (IHE) has recently published a new integration profile for sharing documents between multiple enterprises. The Cross-Enterprise Document Sharing Integration Profile (XDS) lays the basic framework for deploying regional and national Electronic Health Record (EHR). This profile proposes an architecture based on a central Registry that holds metadata information describing published Documents residing in one or multiple Documents Repositories. As medical images constitute important information of the patient health record, it is logical to extend the XDS Integration Profile to include images. However, including images in the EHR presents many challenges. The complete image set is very large; it is useful for radiologists and other specialists such as surgeons and orthopedists. The imaging report, on the other hand, is widely needed and its broad accessibility is vital for achieving optimal patient care. Moreover, a subset of relevant images may also be of wide interest along with the report. Therefore, IHE recently published a new integration profile for sharing images and imaging reports between multiple enterprises. This new profile, the Cross-Enterprise Document Sharing for Imaging (XDS-I), is based on the XDS architecture. The XDS-I integration solution that is published as part of the IHE Technical Framework is the result of an extensive investigation effort of several design solutions. This paper presents and discusses the design challenges and the rationales behind the design decisions of the IHE XDS-I Integration Profile, for a better understanding and appreciation of the final published solution.
An integrated compact airborne multispectral imaging system using embedded computer
NASA Astrophysics Data System (ADS)
Zhang, Yuedong; Wang, Li; Zhang, Xuguo
2015-08-01
An integrated compact airborne multispectral imaging system using an embedded-computer-based control system was developed for small-aircraft multispectral imaging applications. The system integrates a CMOS camera, a filter wheel with eight filters, a two-axis stabilized platform, a miniature POS (position and orientation system), and an embedded computer. The embedded computer has excellent universality and expansibility, and has advantages in volume and weight for airborne platforms, so it can meet the requirements of the control system of the integrated airborne multispectral imaging system. The embedded computer controls camera parameter setting, filter wheel and stabilized platform operation, and image and POS data acquisition, and it stores the images and data. Peripheral devices can be connected through the ports of the embedded computer, which simplifies system operation and management of the stored image data. This airborne multispectral imaging system has the advantages of small volume, multiple functions, and good expansibility. The imaging experiment results show that this system has potential for multispectral remote sensing in applications such as resource investigation and environmental monitoring.
Kim, Joowhan; Min, Sung-Wook; Lee, Byoungho
2007-10-01
Integral floating display is a recently proposed three-dimensional (3D) display method that provides a dynamic 3D image in the vicinity of an observer. Correct 3D images can be observed only through its viewing window. However, the positional difference between the viewing window and the floating image causes a limited viewing zone in the integral floating system. In this paper, we present the principle and experimental results of adjusting the location of the viewing window of the integral floating display system by modifying the elemental image region used for integral imaging. We explain the characteristics of the viewing window and propose how to move it to maximize the viewing zone.
Advances in three-dimensional integral imaging: sensing, display, and applications [Invited].
Xiao, Xiao; Javidi, Bahram; Martinez-Corral, Manuel; Stern, Adrian
2013-02-01
Three-dimensional (3D) sensing and imaging technologies have been extensively researched for many applications in the fields of entertainment, medicine, robotics, manufacturing, industrial inspection, security, surveillance, and defense due to their diverse and significant benefits. Integral imaging is a passive multiperspective imaging technique, which records multiple two-dimensional images of a scene from different perspectives. Unlike holography, it can capture a scene such as outdoor events with incoherent or ambient light. Integral imaging can display a true 3D color image with full parallax and continuous viewing angles by incoherent light; thus it does not suffer from speckle degradation. Because of its unique properties, integral imaging has been revived over the past decade or so as a promising approach for massive 3D commercialization. A series of key articles on this topic have appeared in the OSA journals, including Applied Optics. Thus, it is fitting that this Commemorative Review presents an overview of literature on physical principles and applications of integral imaging. Several data capture configurations, reconstruction, and display methods are overviewed. In addition, applications including 3D underwater imaging, 3D imaging in photon-starved environments, 3D tracking of occluded objects, 3D optical microscopy, and 3D polarimetric imaging are reviewed.
A 128 x 128 CMOS Active Pixel Image Sensor for Highly Integrated Imaging Systems
NASA Technical Reports Server (NTRS)
Mendis, Sunetra K.; Kemeny, Sabrina E.; Fossum, Eric R.
1993-01-01
A new CMOS-based image sensor that is intrinsically compatible with on-chip CMOS circuitry is reported. The new CMOS active pixel image sensor achieves low noise, high sensitivity, X-Y addressability, and has simple timing requirements. The image sensor was fabricated using a 2 micrometer p-well CMOS process, and consists of a 128 x 128 array of 40 micrometer x 40 micrometer pixels. The CMOS image sensor technology enables highly integrated smart image sensors, and makes the design, incorporation and fabrication of such sensors widely accessible to the integrated circuit community.
Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A
2016-06-01
To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016.
Securing Digital Images Integrity using Artificial Neural Networks
NASA Astrophysics Data System (ADS)
Hajji, Tarik; Itahriouan, Zakaria; Ouazzani Jamil, Mohammed
2018-05-01
Digital image signature is a technique used to protect image integrity. This technique can serve several areas of imaging applied to smart cities. The objective of this work is to propose two methods to protect digital image integrity. We describe two approaches that use artificial neural networks (ANN) to digitally sign an image: the first is “Direct Signature without learning” and the second is “Direct Signature with learning”. This paper presents the theory of the proposed approaches and an experimental study to test their effectiveness.
Hahn, Paul; Migacz, Justin; O'Donnell, Rachelle; Day, Shelley; Lee, Annie; Lin, Phoebe; Vann, Robin; Kuo, Anthony; Fekrat, Sharon; Mruthyunjaya, Prithvi; Postel, Eric A; Izatt, Joseph A; Toth, Cynthia A
2013-01-01
The authors have recently developed a high-resolution microscope-integrated spectral domain optical coherence tomography (MIOCT) device designed to enable OCT acquisition simultaneous with surgical maneuvers. The purpose of this report is to describe translation of this device from preclinical testing into human intraoperative imaging. Before human imaging, surgical conditions were fully simulated for extensive preclinical MIOCT evaluation in a custom model eye system. Microscope-integrated spectral domain OCT images were then acquired in normal human volunteers and during vitreoretinal surgery in patients who consented to participate in a prospective institutional review board-approved study. Microscope-integrated spectral domain OCT images were obtained before and at pauses in surgical maneuvers and were compared based on predetermined diagnostic criteria to images obtained with a high-resolution spectral domain research handheld OCT system (HHOCT; Bioptigen, Inc) at the same time point. Cohorts of five consecutive patients were imaged. Successful end points were predefined, including ≥80% correlation in identification of pathology between MIOCT and HHOCT in ≥80% of the patients. Microscope-integrated spectral domain OCT was favorably evaluated by study surgeons and scrub nurses, all of whom responded that they would consider participating in human intraoperative imaging trials. The preclinical evaluation identified significant improvements that were made before MIOCT use during human surgery. The MIOCT transition into clinical human research was smooth. Microscope-integrated spectral domain OCT imaging in normal human volunteers demonstrated high resolution comparable to tabletop scanners. In the operating room, after an initial learning curve, surgeons successfully acquired human macular MIOCT images before and after surgical maneuvers. Microscope-integrated spectral domain OCT imaging confirmed preoperative diagnoses, such as full-thickness macular hole and vitreomacular traction, and demonstrated postsurgical changes in retinal morphology. Two cohorts of five patients were imaged. In the second cohort, the predefined end points were exceeded with ≥80% correlation between microscope-mounted OCT and HHOCT imaging in 100% of the patients. This report describes high-resolution MIOCT imaging using the prototype device in human eyes during vitreoretinal surgery, with successful achievement of predefined end points for imaging. Further refinements and investigations will be directed toward fully integrating MIOCT with vitreoretinal and other ocular surgery to image surgical maneuvers in real time.
Chondronikola, Maria; Sidossis, Labros S.; Richardson, Lisa M.; Temple, Jeff R.; van den Berg, Patricia A.; Herndon, David N.; Meyer, Walter J.
2012-01-01
Objective Burn injury deformities and obesity have been associated with social integration difficulty and body image dissatisfaction. However, the combined effects of obesity and burn injury in social integration difficulty and body image dissatisfaction are unknown. Methods Adolescent and young adults burn injury survivors were categorized as normal weight (n=47) or overweight and obese (n=21). Burn-related and anthropometric information was obtained from patients' medical records, while validated questionnaires were used to assess the main outcomes and possible confounders. Analysis of covariance and multiple linear regressions were performed to evaluate the objectives of this study. Results Obese and overweight burn injury survivors did not experience increased body image dissatisfaction (12 ± 4.3 vs 13.1 ± 4.4, p = 0.57) or social integration difficulty (17.5 ± 6.9 vs 15.5 ± 5.7, p=0.16) compared to normal weight burn injury survivors. Weight status was not a significant predictor of social integration difficulty or body image dissatisfaction (p=0.19 and p=0.24, respectively). However, mobility limitations predicted greater social integration difficulty (p=0.005) and body image dissatisfaction (p<0.001), while higher weight status at burn was a borderline significant predictor of body image dissatisfaction (p=0.05). Conclusions Obese and overweight adolescents and young adults, who sustained a major burn injury as children, do not experience greater social integration difficulty and body image dissatisfaction compared to normal weight burn injury survivors. Mobility limitations and higher weight status at burn are likely more important factors affecting the long-term social integration difficulty and body image dissatisfaction of these young people. PMID:23292577
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
1999-07-01
The US Department of Veterans Affairs (VA) is integrating imaging into the healthcare enterprise using the Digital Imaging and Communications in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and the clinical database. Radiology images are acquired via DICOM and are stored directly in the HIS database. Images can be displayed on low-cost clinicians' workstations throughout the medical center. High-resolution, diagnostic-quality multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. Two approaches are used to acquire and handle images within the radiology department. Some sites have a commercial Picture Archiving and Communications System (PACS) interfaced to the VistA HIS, while other sites use the direct image acquisition and integrated diagnostic reading capabilities of VistA itself. A small set of DICOM services has been implemented by VistA to allow patient and study text data to be transmitted to image-producing modalities and the commercial PACS, and to enable images and study data to be transferred back. The VistA DICOM capabilities are now used to interface seven different commercial PACS products and over twenty different radiology modalities. The communications capabilities of DICOM and the VA wide area network are being used to support reading of radiology images from remote sites. DICOM has been the cornerstone of the ability to integrate imaging functionality into the Healthcare Enterprise. Because of its openness, it allows system components from commercial and non-commercial sources to work together to provide functional, cost-effective solutions. As DICOM expands to non-radiology devices, integration must occur with the specialty information subsystems that handle orders and reports, their associated DICOM image capture systems, and the computer-based patient record. The model and concepts of the DICOM standard can be extended to these other areas, but some adjustments may be required.
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are used for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the images. 3-D models are reconstructed by estimating depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The models are converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfies the above requirements.
Kim, Yunhee; Choi, Heejin; Kim, Joohwan; Cho, Seong-Woo; Kim, Youngmin; Park, Gilbae; Lee, Byoungho
2007-06-20
A depth-enhanced three-dimensional integral imaging system with electrically variable image planes is proposed. To implement the variable image planes, polymer-dispersed liquid-crystal (PDLC) films and a projector are adopted as a new display system for integral imaging. Since the transparencies of the PDLC films are electrically controllable, each film can be made to diffuse the projected light in turn at a different depth from the lens array. As a result, the proposed method enables electrical control of the location of the image planes and enhances the depth. The principle of the proposed method is described, and experimental results are presented.
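The depth enhancement relies on the fact that the central depth plane of an integral imaging display is the lenslet image of the plane on which the elemental images are displayed (here, whichever PDLC film is currently diffusing). Under the thin-lens approximation, a diffusing film at gap g behind a lenslet of focal length f is imaged to a central depth plane at distance l in front of the array, with

$$ \frac{1}{l} = \frac{1}{f} - \frac{1}{g}, $$

so switching the diffusing film among different gaps yields multiple central depth planes (l < 0 corresponds to a virtual plane behind the array when g < f). This is the standard lens-law relation stated here for context; it is not quoted from the paper.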
Edge Preserved Speckle Noise Reduction Using Integrated Fuzzy Filters
Dewal, M. L.; Rohit, Manoj Kumar
2014-01-01
Echocardiographic images inherently contain speckle noise, which makes visual reading and analysis quite difficult. The multiplicative speckle noise masks finer details necessary for the diagnosis of abnormalities. A novel speckle reduction technique based on the integration of geometric, Wiener, and fuzzy filters is proposed and analyzed in this paper. The denoising applications of fuzzy filters are studied and analyzed along with 26 denoising techniques. It is observed that the geometric filter retains noise; to address this issue, a Wiener filter is embedded into the geometric filter during the iteration process. The performance of the geometric-Wiener filter is further enhanced using fuzzy filters, and the proposed despeckling techniques are called integrated fuzzy filters. Fuzzy filters based on the moving average and the median value are employed in the integrated fuzzy filters. The performance of the integrated fuzzy filters is tested on echocardiographic images and synthetic images in terms of image quality metrics. It is observed that the performance parameters are highest for the integrated fuzzy filters in comparison with the fuzzy and geometric-fuzzy filters. The clinical validation reveals that the output images obtained using the geometric-Wiener, integrated fuzzy, nonlocal means, and detail-preserving anisotropic diffusion filters are acceptable, and the necessary finer details are retained in the denoised echocardiographic images. PMID:27437499
Stereoscopic Integrated Imaging Goggles for Multimodal Intraoperative Image Guidance
Mela, Christopher A.; Patterson, Carrie; Thompson, William K.; Papay, Francis; Liu, Yang
2015-01-01
We have developed novel stereoscopic wearable multimodal intraoperative imaging and display systems entitled Integrated Imaging Goggles for guiding surgeries. The prototype systems offer real time stereoscopic fluorescence imaging and color reflectance imaging capacity, along with in vivo handheld microscopy and ultrasound imaging. With the Integrated Imaging Goggle, both wide-field fluorescence imaging and in vivo microscopy are provided. The real time ultrasound images can also be presented in the goggle display. Furthermore, real time goggle-to-goggle stereoscopic video sharing is demonstrated, which can greatly facilitate telemedicine. In this paper, the prototype systems are described, characterized and tested in surgeries in biological tissues ex vivo. We have found that the system can detect fluorescent targets with as low as 60 nM indocyanine green and can resolve structures down to 0.25 mm with large FOV stereoscopic imaging. The system has successfully guided simulated cancer surgeries in chicken. The Integrated Imaging Goggle is novel in 4 aspects: it is (a) the first wearable stereoscopic wide-field intraoperative fluorescence imaging and display system, (b) the first wearable system offering both large FOV and microscopic imaging simultaneously, (c) the first wearable system that offers both ultrasound imaging and fluorescence imaging capacities, and (d) the first demonstration of goggle-to-goggle communication to share stereoscopic views for medical guidance. PMID:26529249
Jacob, A L; Regazzoni, P; Bilecen, D; Rasmus, M; Huegli, R W; Messmer, P
2007-01-01
Technology integration is an enabling prerequisite for achieving a major breakthrough in sophisticated intra-operative imaging, navigation and robotics in minimally invasive and/or emergency diagnosis and therapy. Without a high degree of integration and reliability, comparable to that achieved in the aircraft industry, image guidance in its different facets will not ultimately succeed. As of today, technology integration in the field of image guidance is close to nonexistent. Technology integration requires inter-departmental integration of human and financial resources and of medical processes in a dialectic way. This expanded techno-socio-economic integration has profound consequences for the administration and working conditions in hospitals. At the university hospital of Basel, Switzerland, a multimodality, multifunction sterile suite was put into operation after a substantial pre-run. We report the lessons learned during our venture into the world of medical technology integration and describe new possibilities for similar integration projects in the future.
3D augmented reality with integral imaging display
NASA Astrophysics Data System (ADS)
Shen, Xin; Hua, Hong; Javidi, Bahram
2016-06-01
In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.
FIMic: design for ultimate 3D-integral microscopy of in-vivo biological samples
Scrofani, G.; Sola-Pikabea, J.; Llavador, A.; Sanchez-Ortiga, E.; Barreiro, J. C.; Saavedra, G.; Garcia-Sucerquia, J.; Martínez-Corral, M.
2017-01-01
In this work, the Fourier integral microscope (FIMic), an ultimate design of 3D integral microscopy, is presented. By placing a multiplexing microlens array at the aperture stop of the microscope objective of the host microscope, FIMic shows extended depth of field and enhanced lateral resolution in comparison with regular integral microscopy. As FIMic directly produces a set of orthographic views of the 3D micrometer-sized sample, it is suitable for real-time imaging. Following regular integral-imaging reconstruction algorithms, a 2.75-fold enhancement in depth of field and a twofold improvement in spatial resolution in comparison with conventional integral microscopy are reported. Our claims are supported by theoretical analysis and by experimental images of a resolution test target, cotton fibers, and in-vivo 3D imaging of biological specimens. PMID:29359107
NASA Astrophysics Data System (ADS)
Kuzmak, Peter M.; Dayhoff, Ruth E.
1998-07-01
The U.S. Department of Veterans Affairs is integrating imaging into the healthcare enterprise using the Digital Imaging and Communications in Medicine (DICOM) standard protocols. Image management is directly integrated into the VistA Hospital Information System (HIS) software and clinical database. Radiology images are acquired via DICOM and are stored directly in the HIS database. Images can be displayed on low-cost clinicians' workstations throughout the medical center. High-resolution, diagnostic-quality multi-monitor VistA workstations with specialized viewing software can be used for reading radiology images. DICOM has played a critical role in the ability to integrate imaging functionality into the Healthcare Enterprise. Because of its openness, it allows the integration of system components from commercial and non-commercial sources to work together to provide functional, cost-effective solutions. Two approaches are used to acquire and handle images within the radiology department. At some VA Medical Centers, DICOM is used to interface a commercial Picture Archiving and Communications System (PACS) to the VistA HIS. At other medical centers, DICOM is used to interface the image-producing modalities directly to the image acquisition and display capabilities of VistA itself. Both of these approaches use a small set of DICOM services that has been implemented by VistA to allow patient and study text data to be transmitted to image-producing modalities and the commercial PACS, and to enable images and study data to be transferred back.
Electronic noise in CT detectors: Impact on image noise and artifacts.
Duan, Xinhui; Wang, Jia; Leng, Shuai; Schmidt, Bernhard; Allmendinger, Thomas; Grant, Katharine; Flohr, Thomas; McCollough, Cynthia H
2013-10-01
The objective of our study was to evaluate in phantoms the differences in CT image noise and artifact level between two types of commercial CT detectors: one with distributed electronics (conventional) and one with integrated electronics intended to decrease system electronic noise. Cylindric water phantoms of 20, 30, and 40 cm in diameter were scanned using two CT scanners, one equipped with integrated detector electronics and one with distributed detector electronics. All other scanning parameters were identical. Scans were acquired at four tube potentials and 10 tube currents. Semianthropomorphic phantoms were scanned to mimic the shoulder and abdominal regions. Images of two patients were also selected to show the clinical values of the integrated detector. Reduction of image noise with the integrated detector depended on phantom size, tube potential, and tube current. Scans that had low detected signal had the greatest reductions in noise, up to 40% for a 30-cm phantom scanned using 80 kV. This noise reduction translated into up to 50% in dose reduction to achieve equivalent image noise. Streak artifacts through regions of high attenuation were reduced by up to 45% on scans obtained using the integrated detector. Patient images also showed superior image quality for the integrated detector. For the same applied radiation level, the use of integrated electronics in a CT detector showed a substantially reduced level of electronic noise, resulting in reductions in image noise and artifacts, compared with detectors having distributed electronics.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
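Below is a minimal sketch of the general idea described above: two separate visual vocabularies are built from SIFT and SURF descriptors, and the resulting bag-of-visual-words histograms are concatenated into one image signature. It assumes OpenCV (with the non-free contrib module for SURF) and scikit-learn; the image paths, vocabulary sizes, and k-means settings are illustrative, not the parameters used by the authors.

```python
# Sketch: bag-of-visual-words integration of SIFT and SURF descriptors.
# Assumes opencv-contrib-python (SURF is non-free and may be absent in some
# builds) and scikit-learn; paths and vocabulary sizes are placeholders.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def collect_descriptors(paths, detector):
    """Stack descriptors extracted from a list of grayscale image paths."""
    feats = []
    for p in paths:
        img = cv2.imread(p, cv2.IMREAD_GRAYSCALE)
        _, desc = detector.detectAndCompute(img, None)
        if desc is not None:
            feats.append(desc)
    return np.vstack(feats)

def bow_histogram(desc, vocab):
    """Hard-assign descriptors to the nearest visual word and normalize."""
    words = vocab.predict(desc.astype(np.float64))
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)

train_paths = ["img1.jpg", "img2.jpg"]          # placeholder training images
sift = cv2.SIFT_create()
surf = cv2.xfeatures2d.SURF_create(400)         # requires the non-free module

# One vocabulary per feature type (sizes are arbitrary here).
sift_vocab = KMeans(n_clusters=200, n_init=4).fit(collect_descriptors(train_paths, sift))
surf_vocab = KMeans(n_clusters=200, n_init=4).fit(collect_descriptors(train_paths, surf))

def integrated_signature(path):
    """Concatenate SIFT and SURF BoVW histograms into one image signature."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, d_sift = sift.detectAndCompute(img, None)
    _, d_surf = surf.detectAndCompute(img, None)
    return np.concatenate([bow_histogram(d_sift, sift_vocab),
                           bow_histogram(d_surf, surf_vocab)])
```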
NASA Astrophysics Data System (ADS)
Dupuy, Pascal; Harter, Jean
1995-09-01
IRIS is a modular infrared thermal imager developed by SAGEM since 1988, based on a 288 by 4 IRCCD detector. The first section of the presentation describes the different modules of the IRIS thermal imager and their evolution in recent years. The second section covers the major evolution, namely the integrated detector cooler assembly (IDCA), using a SOFRADIR 288 by 4 detector and a SAGEM microcooler, now integrated in the IRIS thermal imagers. The third section describes two functions integrated in the IRIS thermal imager: (1) image enhancement, using a digital convolution filter, and (2) automatic hot-point detection and tracking, providing assistance for surveillance and automatic detection. The last section presents several naval, air-force, and land programs for which IRIS has already been selected.
NASA Astrophysics Data System (ADS)
Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun
2017-11-01
An integral-imaging-based light field display method using a holographic diffuser is proposed, achieving enhanced viewing resolution over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolate the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and made independent of the limitation imposed by the Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using the holographic diffuser are demonstrated, verifying the feasibility of the method.
Strict integrity control of biomedical images
NASA Astrophysics Data System (ADS)
Coatrieux, Gouenou; Maitre, Henri; Sankur, Bulent
2001-08-01
The control of the integrity and authentication of medical images is becoming ever more important within Medical Information Systems (MIS). The intra- and inter-hospital exchange of images, such as in PACS (Picture Archiving and Communication Systems), and the ease of copying, manipulating and distributing images have brought security concerns to the fore. In this paper we focus on the role of watermarking for MIS security and address the problem of integrity control of medical images. We discuss alternative schemes to extract verification signatures and compare their tamper detection performance.
Medical image computing for computer-supported diagnostics and therapy. Advances and perspectives.
Handels, H; Ehrhardt, J
2009-01-01
Medical image computing has become one of the most challenging fields in medical informatics. In the image-based diagnostics of the future, software assistance will become increasingly important, and image analysis systems integrating advanced image computing methods are needed to extract quantitative image parameters that characterize the state and changes of image structures of interest (e.g. tumors, organs, vessels, bones etc.) in a reproducible and objective way. Furthermore, in the field of software-assisted and navigated surgery, medical image computing methods play a key role and have opened up new perspectives for patient treatment. However, further developments are needed to increase the degree of automation, accuracy, reproducibility and robustness. Moreover, the systems developed have to be integrated into the clinical workflow. For the development of advanced image computing systems, methods from different scientific fields have to be adapted and used in combination. The principal methodologies in medical image computing are the following: image segmentation, image registration, image analysis for quantification and computer-assisted image interpretation, modeling and simulation, as well as visualization and virtual reality. In particular, model-based image computing techniques open up new perspectives for the prediction of organ changes and patient risk analysis and will gain importance in future diagnostics and therapy. From a methodical point of view, the authors identify the following future trends and perspectives in medical image computing: development of optimized application-specific systems and integration into the clinical workflow, enhanced computational models for image analysis and virtual reality training systems, integration of different image computing methods, further integration of multimodal image data and biosignals, and advanced methods for 4D medical image computing. The development of image analysis systems for diagnostic support or operation planning is a complex interdisciplinary process. Image computing methods enable new insights into the patient's image data and have the future potential to improve medical diagnostics and patient treatment.
NASA Astrophysics Data System (ADS)
Tang, Yunwei; Atkinson, Peter M.; Zhang, Jingxiong
2015-03-01
A cross-scale data integration method was developed and tested based on the theory of geostatistics and multiple-point geostatistics (MPG). The goal was to downscale remotely sensed images while retaining spatial structure by integrating images at different spatial resolutions. During the downscaling process, a rich spatial correlation model in the form of a training image was incorporated to facilitate the reproduction of similar local patterns in the simulated images. Area-to-point cokriging (ATPCK) was used as a locally varying mean (LVM) (i.e., soft data) to deal with the change-of-support problem (COSP) for cross-scale integration, which MPG cannot achieve alone. Several pairs of spectral bands of remotely sensed images were tested for integration in different cross-scale case studies. The experiments show that MPG can restore the spatial structure of the image at a fine spatial resolution given the training image and conditioning data. The super-resolution image can be predicted using the proposed method, which cannot be realised using most data integration methods. The results show that the ATPCK-MPG approach can achieve greater accuracy than methods which do not account for the change-of-support issue.
Integration of medical imaging into a multi-institutional hospital information system structure.
Dayhoff, R E
1995-01-01
The Department of Veterans Affairs (VA) is providing integrated text and image data to its clinical users at its Washington and Baltimore medical centers and, soon, at nine other medical centers. The DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including cardiology, gastroenterology, pathology, dermatology, surgery, radiology, podiatry, dentistry, and emergency medicine. These images, which include color and gray scale images, and electrocardiogram waveforms, are displayed on workstations located throughout the medical centers. Integration of clinical images with the VA's electronic mail system allows transfer of data from one medical center to another. The ability to incorporate transmitted text and image data into on-line patient records at the collaborating sites is an important aspect of professional consultation. In order to achieve the maximum benefits from an integrated patient record system, a critical mass of information must be available for clinicians. When there is also seamless support for administration, it becomes possible to re-engineer the processes involved in providing medical care.
NASA Astrophysics Data System (ADS)
Song, Wei; Zhang, Rui; Zhang, Hao F.; Wei, Qing; Cao, Wenwu
2012-12-01
The physiological and pathological properties of retina are closely associated with various optical contrasts. Hence, integrating different ophthalmic imaging technologies is more beneficial in both fundamental investigation and clinical diagnosis of several blinding diseases. Recently, photoacoustic ophthalmoscopy (PAOM) was developed for in vivo retinal imaging in small animals, which demonstrated the capability of imaging retinal vascular networks and retinal pigment epithelium (RPE) at high sensitivity. We combined PAOM with traditional imaging modalities, such as fluorescein angiography (FA), spectral-domain optical coherence tomography (SD-OCT), and auto-fluorescence scanning laser ophthalmoscopy (AF-SLO), for imaging rats and mice. The multimodal imaging system provided more comprehensive evaluation of the retina based on the complementary imaging contrast mechanisms. The high-quality retinal images show that the integrated ophthalmic imaging system has great potential in the investigation of blinding disorders.
Li, Yang; Ma, Jianguo; Martin, K Heath; Yu, Mingyue; Ma, Teng; Dayton, Paul A; Jiang, Xiaoning; Shung, K Kirk; Zhou, Qifa
2016-09-01
Superharmonic contrast-enhanced ultrasound imaging, also called acoustic angiography, has previously been used for the imaging of microvasculature. This approach excites microbubble contrast agents near their resonance frequency and receives echoes at nonoverlapping superharmonic bandwidths. No integrated system currently exists that could fully support this application. To fulfill this need, an integrated dual-channel transmit/receive system for superharmonic imaging was designed, built, and characterized experimentally. The system was uniquely designed for superharmonic imaging and high-resolution B-mode imaging. A complete ultrasound system including a pulse generator, a data acquisition unit, and a signal processing unit was integrated into a single package. The system was controlled by a field-programmable gate array, on which multiple user-defined modes were implemented. A 6-/35-MHz dual-frequency dual-element intravascular ultrasound transducer was designed and used for imaging. The system successfully obtained high-resolution B-mode images of a coronary artery ex vivo with 45-dB dynamic range. The system was capable of acquiring in vitro superharmonic images of a vasa vasorum mimicking phantom with 30-dB contrast. It could detect a contrast-agent-filled tissue-mimicking tube of 200 μm diameter. For the first time, high-resolution B-mode images and superharmonic images were obtained in an intravascular phantom, made possible by the dedicated integrated system proposed. The system greatly reduces the cost and complexity of superharmonic imaging intended for preclinical study. Significance: The system shows promise for high-contrast intravascular microvascular imaging, which may have significant importance in the assessment of the vasa vasorum associated with atherosclerotic plaques.
Flat dielectric metasurface lens array for three dimensional integral imaging
NASA Astrophysics Data System (ADS)
Zhang, Jianlei; Wang, Xiaorui; Yang, Yi; Yuan, Ying; Wu, Xiongxiong
2018-05-01
In conventional integral imaging, the singlet refractive lens array limits the imaging performance due to its prominent aberrations. Unlike a refractive lens array, which relies on phase modulation accumulated along the optical paths, metasurfaces composed of nano-scatterers can produce abrupt phase changes over the scale of a wavelength. In this letter, we propose a novel lens array consisting of two neighboring flat dielectric metasurfaces for an integral imaging system. The aspherical phase profiles of the metasurfaces are optimized to improve imaging performance. The simulation results show that our designed 5 × 5 metasurface-based lens array exhibits high image quality at the design wavelength of 865 nm.
Integrating digital topology in image-processing libraries.
Lamy, Julien
2007-01-01
This paper describes a method to integrate digital topology information in image-processing libraries. This additional information allows a library user to write algorithms that respect topological constraints, for example, a seed fill or a skeletonization algorithm. As digital topology is absent from most image-processing libraries, such constraints cannot otherwise be fulfilled. We describe and give code samples for all the structures necessary for this integration, and show a use case in the form of a homotopic thinning filter inside ITK. The resulting filter can be up to a hundred times as fast as ITK's thinning filter and works for any image dimension. This paper mainly deals with integration within ITK, but the approach can be adapted with only minor modifications to other image-processing libraries.
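As an illustration of the kind of topological constraint the paper integrates, the sketch below implements a 2D "simple pixel" test and a naive homotopic thinning loop with NumPy/SciPy. It is not the ITK code from the paper; the 8-connected foreground / 4-connected background convention and the border handling are assumptions of this example.

```python
# Sketch of a digital-topology constraint: a 2D "simple pixel" test of the kind
# a homotopic thinning filter relies on (8-connected foreground, 4-connected
# background). Objects are assumed not to touch the image border.
import numpy as np
from scipy import ndimage

EIGHT = np.ones((3, 3), dtype=int)               # 8-connectivity structure
FOUR = ndimage.generate_binary_structure(2, 1)   # 4-connectivity structure

def is_simple(image, y, x):
    """A foreground pixel is simple if deleting it preserves local topology."""
    patch = image[y - 1:y + 2, x - 1:x + 2].astype(bool)
    if not patch[1, 1]:
        return False
    patch[1, 1] = False
    # Exactly one 8-connected foreground component must remain in the
    # punctured neighborhood...
    _, n_fg = ndimage.label(patch, structure=EIGHT)
    # ...and exactly one 4-connected background component must be 4-adjacent
    # to the deleted center pixel (the center itself is excluded).
    bg = ~patch
    bg[1, 1] = False
    bg_labels, _ = ndimage.label(bg, structure=FOUR)
    adjacent = {bg_labels[0, 1], bg_labels[1, 0], bg_labels[1, 2], bg_labels[2, 1]}
    adjacent.discard(0)
    return n_fg == 1 and len(adjacent) == 1

def homotopic_thinning(image):
    """Iteratively delete simple pixels until none remain (topological kernel)."""
    img = image.astype(bool).copy()
    changed = True
    while changed:
        changed = False
        for y in range(1, img.shape[0] - 1):
            for x in range(1, img.shape[1] - 1):
                if img[y, x] and is_simple(img, y, x):
                    img[y, x] = False
                    changed = True
    return img
```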
Broadband image sensor array based on graphene-CMOS integration
NASA Astrophysics Data System (ADS)
Goossens, Stijn; Navickaite, Gabriele; Monasterio, Carles; Gupta, Shuchi; Piqueras, Juan José; Pérez, Raúl; Burwell, Gregory; Nikitskiy, Ivan; Lasanta, Tania; Galán, Teresa; Puma, Eric; Centeno, Alba; Pesquera, Amaia; Zurutuza, Amaia; Konstantatos, Gerasimos; Koppens, Frank
2017-06-01
Integrated circuits based on complementary metal-oxide-semiconductors (CMOS) are at the heart of the technological revolution of the past 40 years, enabling compact and low-cost microelectronic circuits and imaging systems. However, the diversification of this platform into applications other than microcircuits and visible-light cameras has been impeded by the difficulty of combining semiconductors other than silicon with CMOS. Here, we report the monolithic integration of a CMOS integrated circuit with graphene, operating as a high-mobility phototransistor. We demonstrate a high-resolution, broadband image sensor and operate it as a digital camera that is sensitive to ultraviolet, visible and infrared light (300-2,000 nm). The demonstrated graphene-CMOS integration is pivotal for incorporating 2D materials into next-generation microelectronics, sensor arrays, low-power integrated photonics and CMOS imaging systems covering visible, infrared and terahertz frequencies.
Dynamic "inline" images: context-sensitive retrieval and integration of images into Web documents.
Kahn, Charles E
2008-09-01
Integrating relevant images into web-based information resources adds value for research and education. This work sought to evaluate the feasibility of using "Web 2.0" technologies to dynamically retrieve and integrate pertinent images into a radiology web site. An online radiology reference of 1,178 textual web documents was selected as the set of target documents. The ARRS GoldMiner image search engine, which incorporated 176,386 images from 228 peer-reviewed journals, retrieved images on demand and integrated them into the documents. At least one image was retrieved in real time for display as an "inline" image gallery for 87% of the web documents. Each thumbnail image was linked to the full-size image at its original web site. Review of 20 randomly selected Collaborative Hypertext of Radiology documents found that 69 of 72 displayed images (96%) were relevant to the target document. Users could click on the "More" link to search the image collection more comprehensively and, from there, link to the full text of the article. A gallery of relevant radiology images can be inserted easily into web pages on any web server. Indexing by concepts and keywords allows context-aware image retrieval, and searching by document title and subject metadata yields excellent results. These techniques allow web developers to easily incorporate a context-sensitive image gallery into their documents.
Crypto-Watermarking of Transmitted Medical Images.
Al-Haj, Ali; Mohammad, Ahmad; Amer, Alaa'
2017-02-01
Telemedicine is a booming healthcare practice that has facilitated the exchange of medical data and expertise between healthcare entities. However, the widespread use of telemedicine applications requires a secure scheme to guarantee confidentiality and verify the authenticity and integrity of exchanged medical data. In this paper, we describe a region-based, crypto-watermarking algorithm capable of providing confidentiality, authenticity, and integrity for medical images of different modalities. The proposed algorithm provides authenticity by embedding robust watermarks in images' regions of non-interest using SVD in the DWT domain. Integrity is provided at two levels: strict integrity implemented by a cryptographic hash watermark, and content-based integrity implemented by a symmetric encryption-based tamper localization scheme. Confidentiality is achieved as a byproduct of hiding the patient's data in the image. Performance of the algorithm was evaluated with respect to imperceptibility, robustness, capacity, and tamper localization, using different medical images. The results showed the effectiveness of the algorithm in providing security for telemedicine applications.
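The sketch below illustrates only the robust-watermarking step named in the abstract (SVD applied to a DWT subband of a region of non-interest); the hash watermark, tamper localization, and encryption stages are omitted. PyWavelets and NumPy are assumed, and the block, watermark bits, and embedding strength alpha are placeholders rather than the authors' settings.

```python
# Sketch: embedding a robust watermark by modulating singular values of a DWT
# subband of a region of non-interest. Detection here is non-blind and only
# approximate; all parameters are illustrative.
import numpy as np
import pywt

def embed_robust_watermark(roni_block, watermark_bits, alpha=0.02):
    """Embed bits into the LL-subband singular values of a region of non-interest."""
    LL, (LH, HL, HH) = pywt.dwt2(roni_block.astype(float), "haar")
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    w = np.resize(np.asarray(watermark_bits, dtype=float), S.shape)
    S_marked = S * (1.0 + alpha * (2.0 * w - 1.0))      # +/- alpha modulation
    LL_marked = U @ np.diag(S_marked) @ Vt
    return pywt.idwt2((LL_marked, (LH, HL, HH)), "haar")

def detect_watermark(received_block, original_block, alpha=0.02):
    """Non-blind detection: recover the sign of the singular-value modulation."""
    LL_r, _ = pywt.dwt2(received_block.astype(float), "haar")
    LL_o, _ = pywt.dwt2(original_block.astype(float), "haar")
    S_r = np.linalg.svd(LL_r, compute_uv=False)
    S_o = np.linalg.svd(LL_o, compute_uv=False)
    return ((S_r / np.maximum(S_o, 1e-9) - 1.0) / alpha > 0).astype(int)

# Toy usage with a synthetic 64x64 "region of non-interest".
rng = np.random.default_rng(0)
block = rng.integers(0, 255, (64, 64)).astype(float)
bits = rng.integers(0, 2, 32)
marked = embed_robust_watermark(block, bits)
recovered = detect_watermark(marked, block)
```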
Kaseno, Kenichi; Hisazaki, Kaori; Nakamura, Kohki; Ikeda, Etsuko; Hasegawa, Kanae; Aoyama, Daisetsu; Shiomi, Yuichiro; Ikeda, Hiroyuki; Morishita, Tetsuji; Ishida, Kentaro; Amaya, Naoki; Uzui, Hiroyasu; Tada, Hiroshi
2018-04-14
Intracardiac echocardiographic (ICE) imaging might be useful for integrating three-dimensional computed tomographic (CT) images for left atrial (LA) catheter navigation during atrial fibrillation (AF) ablation. However, the optimal CT image integration method using ICE has not been established. This study included 52 AF patients who underwent successful circumferential pulmonary vein isolation (CPVI). In all patients, CT image integration was performed after the CPVI with the following two methods: (1) using ICE images of the LA derived from the right atrium and right ventricular outflow tract (RA-merge) and (2) using ICE images of the LA directly derived from the LA added to the images for the RA-merge (LA-merge). The accuracy of these two methods was assessed by the distances between the integrated CT image and the ICE image (ICE-to-CT distance), and between the CT image and the actual ablated sites of the CPVI (CT-to-ABL distance). The mean ICE-to-CT distance was comparable between the two methods (RA-merge = 1.6 ± 0.5 mm, LA-merge = 1.7 ± 0.4 mm; p = 0.33). However, the mean CT-to-ABL distance was shorter for the LA-merge (2.1 ± 0.6 mm) than the RA-merge (2.5 ± 0.8 mm; p < 0.01). The LA, especially the left-sided PVs and LA roof, was more sharply delineated by direct LA imaging; whereas the greatest CT-to-ABL distance was observed at the roof portion of the left superior PV (3.7 ± 2.8 mm) after the RA-merge, it improved to 2.6 ± 1.9 mm after the LA-merge (p < 0.01). Additional ICE images of the LA directly acquired from the LA might lead to a greater accuracy of the CT image integration for the CPVI.
ERIC Educational Resources Information Center
Peterson, Matthew O.
2016-01-01
Science education researchers have turned their attention to the use of images in textbooks, both because pages are heavily illustrated and because visual literacy is an important aptitude for science students. Text-image integration in the textbook is described here as composition schemes in increasing degrees of integration: prose primary (PP),…
VA's Integrated Imaging System on three platforms.
Dayhoff, R E; Maloney, D L; Majurski, W J
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability.
VA's Integrated Imaging System on three platforms.
Dayhoff, R. E.; Maloney, D. L.; Majurski, W. J.
1992-01-01
The DHCP Integrated Imaging System provides users with integrated patient data including text, image and graphics data. This system has been transferred from its original two screen DOS-based MUMPS platform to an X window workstation and a Microsoft Windows-based workstation. There are differences between these various platforms that impact on software design and on software development strategy. Data structures and conventions were used to isolate hardware, operating system, imaging software, and user-interface differences between platforms in the implementation of functionality for text and image display and interaction. The use of an object-oriented approach greatly increased system portability. PMID:1482983
An integration time adaptive control method for atmospheric composition detection of occultation
NASA Astrophysics Data System (ADS)
Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin
2018-01-01
When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the course of the 180-second occultation, the magnitude of the sunlight intensity transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric attenuation, and the light level can change by a factor of up to 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration-time control method for occultation is presented. In this method, the gray-value distribution of the image is used as the reference variable and, combined with a velocity-form (incremental) PID controller, it addresses the problem of adaptive integration-time control for high-frequency imaging. Automatic control of the integration time over the large dynamic range encountered during the occultation can thus be achieved.
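A minimal sketch of such a controller is shown below: a velocity-form (incremental) PID loop that nudges the integration time according to a gray-level statistic of the latest frame. The gains, setpoint, percentile, and exposure limits are illustrative assumptions, not the values used in the paper.

```python
# Sketch: incremental (velocity-form) PID control of camera integration time,
# driven by a gray-level statistic of the latest frame. Gains, setpoint, and
# exposure limits are illustrative only.
import numpy as np

class IntegrationTimeController:
    def __init__(self, target_gray=120.0, kp=0.8, ki=0.3, kd=0.1,
                 t_min=1e-5, t_max=1e-2):
        self.target = target_gray
        self.kp, self.ki, self.kd = kp, ki, kd
        self.t_min, self.t_max = t_min, t_max
        self.e1 = 0.0   # error at step k-1
        self.e2 = 0.0   # error at step k-2

    def update(self, frame, t_int):
        """Return the next integration time given the current frame and time."""
        # Use a robust statistic of the gray-value distribution as feedback.
        gray = float(np.percentile(frame, 99))
        e = (self.target - gray) / self.target
        # Velocity form: the controller outputs a *change* of integration time,
        # which avoids integral wind-up when the exposure saturates at a limit.
        delta = (self.kp * (e - self.e1)
                 + self.ki * e
                 + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e
        return float(np.clip(t_int * (1.0 + delta), self.t_min, self.t_max))
```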
Goscinski, Wojtek J.; McIntosh, Paul; Felzmann, Ulrich; Maksimenko, Anton; Hall, Christopher J.; Gureyev, Timur; Thompson, Darren; Janke, Andrew; Galloway, Graham; Killeen, Neil E. B.; Raniga, Parnesh; Kaluza, Owen; Ng, Amanda; Poudel, Govinda; Barnes, David G.; Nguyen, Toan; Bonnington, Paul; Egan, Gary F.
2014-01-01
The Multi-modal Australian ScienceS Imaging and Visualization Environment (MASSIVE) is a national imaging and visualization facility established by Monash University, the Australian Synchrotron, the Commonwealth Scientific Industrial Research Organization (CSIRO), and the Victorian Partnership for Advanced Computing (VPAC), with funding from the National Computational Infrastructure and the Victorian Government. The MASSIVE facility provides hardware, software, and expertise to drive research in the biomedical sciences, particularly advanced brain imaging research using synchrotron x-ray and infrared imaging, functional and structural magnetic resonance imaging (MRI), x-ray computed tomography (CT), electron microscopy and optical microscopy. The development of MASSIVE has been based on best practice in system integration methodologies, frameworks, and architectures. The facility has: (i) integrated multiple different neuroimaging analysis software components, (ii) enabled cross-platform and cross-modality integration of neuroinformatics tools, and (iii) brought together neuroimaging databases and analysis workflows. MASSIVE is now operational as a nationally distributed and integrated facility for neuroinformatics and brain imaging research. PMID:24734019
Huang, H; Coatrieux, G; Shu, H Z; Luo, L M; Roux, Ch
2011-01-01
In this paper we present a medical image integrity verification system that not only allows detecting and approximating malevolent local image alterations (e.g. removal or addition of findings) but is also capable of identifying the nature of global image processing applied to the image (e.g. lossy compression, filtering …). For that purpose, we propose an image signature derived from the geometric moments of pixel blocks. Such a signature is computed over regions of interest of the image and then watermarked in regions of non-interest. Image integrity analysis is conducted by comparing the embedded and recomputed signatures. Local modifications, if any, are approximated through the determination of the parameters of the nearest generalized 2D Gaussian. Image moments are taken as image features and serve as inputs to a classifier trained to discriminate the type of global image processing. Experimental results with both local and global modifications illustrate the overall performance of our approach.
NASA Astrophysics Data System (ADS)
Li, Jiawen; Ma, Teng; Mohar, Dilbahar; Correa, Adrian; Minami, Hataka; Jing, Joseph; Zhou, Qifa; Patel, Pranav M.; Chen, Zhongping
2014-03-01
Intravascular ultrasound (IVUS) imaging and optical coherence tomography (OCT), two commonly used intracoronary imaging modalities, play important roles in plaque evaluation. The combined use of IVUS (to visualize the entire plaque volume) and OCT (to quantify the thickness of the plaque cap, if any) is hypothesized to increase plaque diagnostic accuracy. Our group has developed a fully integrated dual-modality IVUS-OCT imaging system and 3.6F catheter for simultaneous IVUS-OCT imaging with high resolution and deep penetration depth. However, the diagnostic accuracy of an integrated IVUS-OCT system has not been investigated. In this study, we imaged 175 coronary artery sites (241 regions of interest) from 20 cadavers using our previously reported integrated IVUS-OCT system. IVUS-OCT images were read by two skilled interventional cardiologists. Each region of interest was classified as calcification, lipid pool, or fibrosis. Comparing the diagnosis by the cardiologists using IVUS-OCT images with the diagnosis by the pathologist, we calculated the sensitivity and specificity for characterization of calcification, lipid pool, and fibrosis with this integrated system. In vitro imaging of cadaver coronary specimens demonstrated the complementary nature of these two modalities for plaque classification. A higher accuracy was shown than with either single modality alone.
Gutman, David A; Cobb, Jake; Somanna, Dhananjaya; Park, Yuna; Wang, Fusheng; Kurc, Tahsin; Saltz, Joel H; Brat, Daniel J; Cooper, Lee A D
2013-01-01
Background: The integration and visualization of multimodal datasets is a common challenge in biomedical informatics. Several recent studies of The Cancer Genome Atlas (TCGA) data have illustrated important relationships between morphology observed in whole-slide images, outcome, and genetic events. The pairing of genomics and rich clinical descriptions with whole-slide imaging provided by TCGA presents a unique opportunity to perform these correlative studies. However, better tools are needed to integrate the vast and disparate data types. Objective: To build an integrated web-based platform supporting whole-slide pathology image visualization and data integration. Materials and methods: All images and genomic data were directly obtained from the TCGA and National Cancer Institute (NCI) websites. Results: The Cancer Digital Slide Archive (CDSA) produced is accessible to the public (http://cancer.digitalslidearchive.net) and currently hosts more than 20 000 whole-slide images from 22 cancer types. Discussion: The capabilities of CDSA are demonstrated using TCGA datasets to integrate pathology imaging with associated clinical, genomic and MRI measurements in glioblastomas and can be extended to other tumor types. CDSA also allows URL-based sharing of whole-slide images, and has preliminary support for directly sharing regions of interest and other annotations. Images can also be selected on the basis of other metadata, such as mutational profile, patient age, and other relevant characteristics. Conclusions: With the increasing availability of whole-slide scanners, analysis of digitized pathology images will become increasingly important in linking morphologic observations with genomic and clinical endpoints. PMID:23893318
CMOS Active-Pixel Image Sensor With Intensity-Driven Readout
NASA Technical Reports Server (NTRS)
Langenbacher, Harry T.; Fossum, Eric R.; Kemeny, Sabrina
1996-01-01
Proposed complementary metal oxide/semiconductor (CMOS) integrated-circuit image sensor automatically provides readouts from pixels in order of decreasing illumination intensity. Sensor operated in integration mode. Particularly useful in number of image-sensing tasks, including diffractive laser range-finding, three-dimensional imaging, event-driven readout of sparse sensor arrays, and star tracking.
Integration of Medical Imaging Including Ultrasound into a New Clinical Anatomy Curriculum
ERIC Educational Resources Information Center
Moscova, Michelle; Bryce, Deborah A.; Sindhusake, Doungkamol; Young, Noel
2015-01-01
In 2008 a new clinical anatomy curriculum with integrated medical imaging component was introduced into the University of Sydney Medical Program. Medical imaging used for teaching the new curriculum included normal radiography, MRI, CT scans, and ultrasound imaging. These techniques were incorporated into teaching over the first two years of the…
Integrated imaging of cardiac anatomy, physiology, and viability.
Arrighi, James A
2009-03-01
Technologic developments in imaging will have a significant impact on cardiac imaging over the next decade. These advances will permit more detailed assessment of cardiac anatomy, complex assessment of cardiac physiology, and integration of anatomic and physiologic data. The distinction between anatomic and physiologic imaging is important. For assessing patients with known or suspected coronary artery disease, physiologic and anatomic imaging data are complementary. The strength of anatomic imaging rests in its ability to detect the presence of disease, whereas physiologic imaging techniques assess the impact of disease, such as whether a coronary atherosclerotic lesion limits myocardial blood flow. Research indicates that physiologic data are more prognostically important than anatomic data, but both may be important in patient management decisions. Integrated cardiac imaging is an evolving field, with many potential indications. These include assessment of coronary stenosis, myocardial viability, anatomic and physiologic characterization of atherosclerotic plaque, and advanced molecular imaging.
Strengthening your ties to referring physicians through RIS/PACS integration.
Worthy, Susan; Rounds, Karla C; Soloway, Connie B
2003-01-01
Many imaging centers are turning to technology solutions to increase referring physician satisfaction, implementing such enhancements as automated report distribution, picture archiving and communications systems (PACS), radiology information systems (RIS), and web-based results access. However, without seamless integration, these technology investments don't address the challenge at its core: convenient and reliable, two-way communication and interaction with referring physicians. In an integrated RIS/PACS solution, patient tracking in the RIS and PACS study status are logged and available to users. The time of the patient's registration at the imaging center, the exam start and completion time, the patient's departure time from the imaging center, and results status are all tracked and logged. An integrated RIS/PACS solution provides additional support to the radiologist, a critical factor that can improve the turnaround time of results to referring physicians. The RIS/PACS enhances the interpretation by providing the patient's history, which gives the radiologist additional insight and decreases the likelihood of missing a diagnostic element. In a tightly integrated RIS/PACS solution, results information is more complete. Physicians can view reports with associated images selected by the radiologist. They will also have full order information and a complete imaging history, including prior reports and images. Referring physicians can access and view images and exam notes at the same time that the radiologist is interpreting the exam. Without the benefit of an integrated RIS/PACS system, the referring physician would have to wait for the signed transcription to be released. In a seamlessly integrated solution, film-tracking modules within the RIS are fused with the digital imaging workflow in the PACS. Users can see at a glance if a historical exam is available on film and benefit when a complete study history--both film-based and digital--is presented with the current case. It is up to the imaging center to market the benefits of reduced errors, reduced turnaround times, and a higher level of service to the referring physician community, and encourage them to take advantage of the convenience it provides. The savvy imaging center will also regard the integrated RIS/PACS as a valuable marketing tool for use in attracting radiologists.
USB video image controller used in CMOS image sensor
NASA Astrophysics Data System (ADS)
Zhang, Wenxuan; Wang, Yuxia; Fan, Hong
2002-09-01
The CMOS process is a mainstream VLSI technique and offers a high level of integration. The SE402 is a multifunction microcontroller that integrates image data I/O ports, clock control, exposure control, and digital signal processing into one chip, reducing the number of chips and the required PCB area. This paper focuses on a USB video image controller used with a CMOS image sensor and presents its application to a digital still camera.
Dictionary-based image reconstruction for superresolution in integrated circuit imaging.
Cilingiroglu, T Berkin; Uyar, Aydan; Tuysuzoglu, Ahmet; Karl, W Clem; Konrad, Janusz; Goldberg, Bennett B; Ünlü, M Selim
2015-06-01
Resolution improvement through signal processing techniques for integrated circuit imaging is becoming more crucial as the rapid decrease in integrated circuit dimensions continues. Although there is a significant effort to push the limits of optical resolution for backside fault analysis through the use of solid immersion lenses, higher order laser beams, and beam apodization, signal processing techniques are required for additional improvement. In this work, we propose a sparse image reconstruction framework which couples overcomplete dictionary-based representation with a physics-based forward model to improve resolution and localization accuracy in high numerical aperture confocal microscopy systems for backside optical integrated circuit analysis. The effectiveness of the framework is demonstrated on experimental data.
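The sketch below shows the generic ingredients of such a framework: a linear physics-based forward model, an overcomplete dictionary, and an l1-regularized inversion solved by ISTA. The 1-D blur-and-downsample operator, the DCT dictionary, and all parameters are stand-ins for illustration only, not the confocal model or dictionary used by the authors.

```python
# Sketch: sparse reconstruction with an overcomplete dictionary and a linear
# forward model, solved by ISTA. The forward model A, the DCT dictionary D and
# all parameters are placeholders, not the authors' physics-based model.
import numpy as np

def ista(y, A, D, lam=0.05, n_iter=200):
    """Minimize 0.5*||y - A D x||^2 + lam*||x||_1 over sparse codes x."""
    M = A @ D
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return D @ x                           # reconstructed (super-resolved) signal

# Toy 1-D example: blur + downsample forward model, overcomplete DCT dictionary.
n, m, k = 128, 64, 256
rng = np.random.default_rng(1)
D = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(k)) / n)  # DCT-like atoms
D /= np.linalg.norm(D, axis=0)
A = np.zeros((m, n))
for i in range(m):                          # 2x downsampling with a small blur
    A[i, 2 * i:2 * i + 2] = 0.5
truth = D @ (rng.standard_normal(k) * (rng.random(k) < 0.05))       # sparse signal
y = A @ truth + 0.01 * rng.standard_normal(m)
recon = ista(y, A, D)
```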
Shen, Xin; Javidi, Bahram
2018-03-01
We have developed a three-dimensional (3D) dynamic integral-imaging (InIm)-system-based optical see-through augmented reality display with an enhanced depth range of the 3D augmented image. A focus-tunable lens is adopted in the 3D display unit to relay the elemental images at various positions to the micro lens array. Based on resolution-priority integral imaging, multiple lenslet image planes are generated to enhance the depth range of the 3D image. The depth range is further increased by utilizing both the real and virtual 3D imaging fields. The 3D reconstructed image and the real-world scene are overlaid using an optical see-through display for augmented reality. The proposed system can significantly enhance the depth range of a 3D reconstructed image with high image quality in the micro InIm unit. This approach provides enhanced functionality for augmented information and mitigates the vergence-accommodation conflict of a traditional augmented reality display.
Imaging of common breast implants and implant-related complications: A pictorial essay
Shah, Amisha T; Jankharia, Bijal B
2016-01-01
The number of women undergoing breast implant procedures is increasing exponentially. It is, therefore, imperative for a radiologist to be familiar with the normal and abnormal imaging appearances of common breast implants. Diagnostic imaging studies such as mammography, ultrasonography, and magnetic resonance imaging are used to evaluate implant integrity, detect abnormalities of the implant and its surrounding capsule, and detect breast conditions unrelated to implants. Magnetic resonance imaging of silicone breast implants, with its high sensitivity and specificity for detecting implant rupture, is the most reliable modality to assess implant integrity. Whichever imaging modality is used, the overall aim of imaging breast implants is to provide pertinent information about implant integrity, to detect implant failures, and to detect breast conditions unrelated to the implants, such as cancer. PMID:27413269
Imaging of common breast implants and implant-related complications: A pictorial essay.
Shah, Amisha T; Jankharia, Bijal B
2016-01-01
The number of women undergoing breast implant procedures is increasing exponentially. It is, therefore, imperative for a radiologist to be familiar with the normal and abnormal imaging appearances of common breast implants. Diagnostic imaging studies such as mammography, ultrasonography, and magnetic resonance imaging are used to evaluate implant integrity, detect abnormalities of the implant and its surrounding capsule, and detect breast conditions unrelated to implants. Magnetic resonance imaging of silicone breast implants, with its high sensitivity and specificity for detecting implant rupture, is the most reliable modality to assess implant integrity. Whichever imaging modality is used, the overall aim of imaging breast implants is to provide pertinent information about implant integrity, to detect implant failures, and to detect breast conditions unrelated to the implants, such as cancer.
Testbed Experiment for SPIDER: A Photonic Integrated Circuit-based Interferometric imaging system
NASA Astrophysics Data System (ADS)
Badham, K.; Duncan, A.; Kendrick, R. L.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Thurman, S. T.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. In this paper we describe the photonic integrated circuit design and the testbed used to create the first images of extended scenes. We summarize the image reconstruction steps and present the final images. We also describe our next generation PIC design for a larger (16x area, 4x field of view) image.
A 3D image sensor with adaptable charge subtraction scheme for background light suppression
NASA Astrophysics Data System (ADS)
Shin, Jungsoon; Kang, Byongmin; Lee, Keechang; Kim, James D. K.
2013-02-01
We present a 3D ToF (time-of-flight) image sensor with an adaptive charge subtraction scheme for background light suppression. The proposed sensor can alternately capture a high-resolution color image and a high-quality depth map in each frame. In depth mode, the sensor requires a sufficiently long integration time for accurate depth acquisition, but saturation will occur under strong background illumination. We propose to divide the integration time into N sub-integration times adaptively. In each sub-integration time, the sensor captures an image without saturation and subtracts the charge to keep the pixel from saturating. The subtraction results are accumulated over the N sub-integrations, yielding a final image free of background illumination at the full integration time. Experimental results with our own ToF sensor show high background suppression performance. We also propose an in-pixel storage and column-level subtraction circuit for chip-level implementation of the proposed method. We believe the proposed scheme will enable 3D sensors to be used in outdoor environments.
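The following numerical sketch mimics the accumulation idea in software: the exposure is split into N sub-integrations, a background estimate is subtracted from each capture before it can saturate, and the differences are summed. The full-well value, photon rates, and the perfect background knowledge are simplifying assumptions; the actual sensor implements this with in-pixel storage and column-level circuits.

```python
# Sketch of the accumulation idea in software: split the exposure into N short
# sub-integrations, subtract the background contribution in each so no single
# capture saturates, and sum the differences. All values are placeholders.
import numpy as np

FULL_WELL = 4095.0   # saturation level of one capture (placeholder)

def capture(signal_rate, background_rate, t, rng):
    """Simulate one sub-integration with shot noise, clipped at full well."""
    expected = (signal_rate + background_rate) * t
    return np.clip(rng.poisson(expected).astype(float), 0, FULL_WELL)

def background_suppressed_frame(signal_rate, background_rate, t_total, n_sub, rng):
    t_sub = t_total / n_sub
    acc = np.zeros_like(signal_rate, dtype=float)
    for _ in range(n_sub):
        frame = capture(signal_rate, background_rate, t_sub, rng)
        acc += frame - background_rate * t_sub   # subtract per-sub-integration charge
    return acc                                   # approximates signal_rate * t_total

rng = np.random.default_rng(2)
signal = np.full((4, 4), 2.0e4)        # modulated (depth) signal, photons/s
background = np.full((4, 4), 3.0e6)    # strong ambient light, photons/s
# One long capture would saturate; N sub-integrations with subtraction do not.
out = background_suppressed_frame(signal, background, t_total=1e-2, n_sub=16, rng=rng)
```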
Towards Silicon-Based Longwave Integrated Optoelectronics (LIO)
2008-01-21
circuitry. The photonics can use, for example, microbolometers and III-V photodetectors as well as III-V interband cascade and quantum cascade lasers...chips using inputs from several sensors. (4) imaging: focal-plane-array imager with integral readout, infrared-to-visible image converter chip, (5... photodetectors, type II interband cascades and QCLs. I would integrate the cascades in LIO using a technique similar to that developed by John Bowers'
Resolution enhancement in integral microscopy by physical interpolation.
Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel
2015-08-01
Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of the spatial resolution. In this contribution we report a technique that permits increasing the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate, which shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.
Resolution enhancement in integral microscopy by physical interpolation
Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel
2015-01-01
Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of the spatial resolution. In this contribution we report a technique that permits increasing the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate, which shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens. PMID:26309749
Integration of LDSE and LTVS logs with HIPAA compliant auditing system (HCAS)
NASA Astrophysics Data System (ADS)
Zhou, Zheng; Liu, Brent J.; Huang, H. K.; Guo, Bing; Documet, Jorge; King, Nelson
2006-03-01
The deadline for the HIPAA (Health Insurance Portability and Accountability Act) Security Rule passed in February 2005; therefore, being HIPAA compliant has become extremely critical for healthcare providers. HIPAA mandates that healthcare providers protect the privacy and integrity of health data and be able to demonstrate examples of mechanisms that can be used to accomplish this task. It is also required that a healthcare institution be able to provide audit trails on image data access on demand for a specific patient. For these reasons, we have developed a HIPAA-compliant auditing system (HCAS) for image data security in a PACS by auditing every image data access. The HCAS was presented at SPIE 2005. This year, two new components, LDSE (Lossless Digital Signature Embedding) and LTVS (Patient Location Tracking and Verification System) logs, have been added to the HCAS. The LDSE can assure medical image integrity in a PACS, while the LTVS can provide access control for a PACS by creating a security zone in the clinical environment. By integrating the LDSE and LTVS logs with the HCAS, the privacy and integrity of image data can be audited as well. Thus, a PACS with the HCAS installed can become HIPAA compliant with respect to image data privacy and integrity, access control, and audit control.
Jiang, Weiping; Wang, Li; Niu, Xiaoji; Zhang, Quan; Zhang, Hui; Tang, Min; Hu, Xiangyun
2014-01-01
A high-precision image-aided inertial navigation system (INS) is proposed as an alternative to carrier-phase-based differential Global Navigation Satellite Systems (CDGNSSs) when satellite-based navigation is unavailable. In this paper, the image/INS integrated algorithm is modeled by a tightly-coupled iterative extended Kalman filter (IEKF). Tightly-coupled integration ensures that the integrated system is reliable even if few known feature points (i.e., fewer than three) are observed in the images. A new global observability analysis of this tightly-coupled integration is presented to guarantee that the system is observable under the necessary conditions. The analysis conclusions were verified by simulations and field tests. The field tests also indicate that high-precision integrated solutions for position (centimeter-level) and attitude (half-degree-level) can be achieved in a global reference frame. PMID:25330046
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, resulting in a failure of efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. In order to optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter is then adopted for the simulated image restoration, and the mean squared error (MSE) is taken as the evaluation criterion. The results show that, although the optical figures of merit of the designed imaging system are not the best, it provides image signals that are more suitable for image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages in simplifying the structure and reducing cost while obtaining high-resolution images, and it has a promising perspective for industrial application.
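A sketch of the digital half of such a joint design is given below: a frequency-domain Wiener restoration of an image blurred by deliberately relaxed optics, scored with MSE. The Gaussian PSF, noise level, and noise-to-signal ratio are placeholder assumptions; the optical optimization in ZEMAX is not modeled.

```python
# Sketch: frequency-domain Wiener restoration of an image blurred by relaxed
# optics, evaluated with MSE. The Gaussian PSF and noise-to-signal ratio are
# placeholders; the optical (ZEMAX) side of the joint design is not modeled.
import numpy as np

def gaussian_psf(shape, sigma):
    y, x = np.indices(shape)
    cy, cx = (shape[0] - 1) / 2.0, (shape[1] - 1) / 2.0
    psf = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()

def wiener_restore(blurred, psf, nsr=1e-2):
    """Wiener filter: H* / (|H|^2 + NSR), applied in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

def mse(a, b):
    return float(np.mean((a - b) ** 2))

rng = np.random.default_rng(3)
scene = rng.random((128, 128))                    # stand-in for the true scene
psf = gaussian_psf(scene.shape, sigma=2.0)        # relaxed-optics blur
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
blurred += 0.01 * rng.standard_normal(scene.shape)
restored = wiener_restore(blurred, psf)
print(mse(blurred, scene), mse(restored, scene))  # restoration should lower MSE
```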
CMOS active pixel sensor type imaging system on a chip
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)
2011-01-01
A single chip camera which includes an integrated image acquisition portion and control portion and which has double sampling/noise reduction capabilities thereon. Part of the integrated structure reduces the noise that is picked up during imaging.
Providing integrity, authenticity, and confidentiality for header and pixel data of DICOM images.
Al-Haj, Ali
2015-04-01
Exchange of medical images over public networks is subject to different types of security threats. This has triggered persistent demands for secure telemedicine implementations that provide confidentiality, authenticity, and integrity for the transmitted images. The medical image exchange standard (DICOM) offers mechanisms to provide confidentiality for the header data of the image but not for the pixel data. On the other hand, it offers mechanisms to achieve authenticity and integrity for the pixel data but not for the header data. In this paper, we propose a crypto-based algorithm that provides confidentiality, authenticity, and integrity for the pixel data as well as for the header data. This is achieved by applying strong cryptographic primitives utilizing internally generated security data, such as encryption keys, hashing codes, and digital signatures. The security data are generated internally from the header and the pixel data, thus establishing a strong bond between the DICOM data and the corresponding security data. The proposed algorithm has been evaluated extensively using DICOM images of different modalities. Simulation experiments show that confidentiality, authenticity, and integrity have been achieved, as reflected by the results we obtained for normalized correlation, entropy, PSNR, histogram analysis, and robustness.
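The sketch below illustrates the general principle of binding header and pixel data to internally derived security data, using a keyed digest computed over selected header fields plus the pixel buffer. It assumes pydicom and the standard-library hashlib/hmac; the file path, field list, and key derivation are simplified placeholders and do not reproduce the paper's encryption and signature scheme.

```python
# Sketch: a keyed digest over selected DICOM header fields plus the raw pixel
# buffer, binding both to the same security data. The field list, file path,
# and key derivation are simplified placeholders for illustration only.
import hashlib
import hmac
import pydicom

HEADER_FIELDS = ("PatientID", "StudyInstanceUID", "SeriesInstanceUID", "Modality")

def security_digest(ds, key):
    """HMAC-SHA256 over selected header fields and the raw pixel data."""
    mac = hmac.new(key, digestmod=hashlib.sha256)
    for field in HEADER_FIELDS:
        mac.update(str(getattr(ds, field, "")).encode("utf-8"))
    mac.update(ds.PixelData)              # raw encoded pixel buffer
    return mac.hexdigest()

def verify(ds, key, stored_digest):
    """Integrity/authenticity check: recompute and compare in constant time."""
    return hmac.compare_digest(security_digest(ds, key), stored_digest)

# Usage (illustrative only): derive a key from internal data, sign at the
# sender, verify at the receiver. Real key management would be more careful.
ds = pydicom.dcmread("example.dcm")       # placeholder path
key = hashlib.sha256(str(ds.SOPInstanceUID).encode("utf-8")).digest()
digest = security_digest(ds, key)
assert verify(ds, key, digest)
```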
Lyubimov, Artem Y; Uervirojnangkoorn, Monarin; Zeldin, Oliver B; Brewster, Aaron S; Murray, Thomas D; Sauter, Nicholas K; Berger, James M; Weis, William I; Brunger, Axel T
2016-06-01
Serial femtosecond crystallography (SFX) uses an X-ray free-electron laser to extract diffraction data from crystals not amenable to conventional X-ray light sources owing to their small size or radiation sensitivity. However, a limitation of SFX is the high variability of the diffraction images that are obtained. As a result, it is often difficult to determine optimal indexing and integration parameters for the individual diffraction images. Presented here is a software package, called IOTA, which uses a grid-search technique to determine optimal spot-finding parameters that can in turn affect the success of indexing and the quality of integration on an image-by-image basis. Integration results can be filtered using a priori information about the Bravais lattice and unit-cell dimensions and analyzed for unit-cell isomorphism, facilitating an improvement in subsequent data-processing steps.
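A generic sketch of the per-image grid-search idea follows: parameter combinations are tried, and the result that indexes the most reflections is kept, optionally filtered by a priori unit-cell knowledge. The process_image callback and its parameter names are hypothetical stand-ins, not IOTA's actual API.

```python
# Generic sketch of a per-image grid search over spot-finding parameters.
# process_image(path, spot_area_min=..., spot_height_min=...) is a hypothetical
# callback returning None on failure or a dict with "n_indexed" and "unit_cell".
import itertools

SPOT_AREA_MIN = range(1, 13)        # candidate minimum spot areas (pixels)
SPOT_HEIGHT_MIN = range(1, 8)       # candidate minimum spot heights (sigmas)

def grid_search(image_path, process_image):
    """Return (best_params, best_result) maximizing indexed reflections."""
    best_params, best_result, best_score = None, None, -1
    for area, height in itertools.product(SPOT_AREA_MIN, SPOT_HEIGHT_MIN):
        result = process_image(image_path, spot_area_min=area, spot_height_min=height)
        if result is None:          # indexing failed for this parameter pair
            continue
        score = result["n_indexed"]
        if score > best_score:
            best_params, best_result, best_score = (area, height), result, score
    return best_params, best_result

def keep_if_isomorphous(result, target_cell, tol=0.05):
    """Optional a-priori filter on unit-cell dimensions (fractional tolerance)."""
    cell = result["unit_cell"]
    return all(abs(c - t) / t <= tol for c, t in zip(cell, target_cell))
```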
3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging
NASA Astrophysics Data System (ADS)
Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak
2017-10-01
Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security uses, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as is conventionally performed (e.g., background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. With regard to previous works that used integral imaging data in such a scenario, the proposed method performs the 3D tracking of objects without prior information about the objects in the scene, and it is found to be effective under severe noise conditions.
Sparse models for correlative and integrative analysis of imaging and genetic data
Lin, Dongdong; Cao, Hongbao; Calhoun, Vince D.
2014-01-01
The development of advanced medical imaging technologies and high-throughput genomic measurements has enhanced our ability to understand their interplay, as well as their relationship with human behavior, by integrating these two types of datasets. However, the high dimensionality and heterogeneity of these datasets present a challenge to conventional statistical methods; there is a high demand for the development of both correlative and integrative analysis approaches. Here, we review our recent work on developing sparse-representation-based approaches to address this challenge. We show how sparse models are applied to the correlation and integration of imaging and genetic data for biomarker identification. We present examples of how these approaches are used for the detection of risk genes and classification of complex diseases such as schizophrenia. Finally, we discuss future directions in the integration of multiple imaging and genomic datasets, including their interactions, such as epistasis. PMID:25218561
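As a toy illustration of the sparse-model idea, the sketch below uses an l1-penalized (Lasso) regression to select a handful of genetic variables predictive of an imaging-derived phenotype. The data are synthetic, scikit-learn is assumed, and this is not the authors' specific sparse-representation formulation.

```python
# Toy sketch of the sparse-model idea: l1-penalized (Lasso) regression selecting
# a few genetic variables (e.g., SNPs) that predict an imaging-derived
# phenotype. Data are synthetic; this is not the authors' formulation.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(4)
n_subjects, n_snps = 200, 1000
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps)).astype(float)  # 0/1/2 coding

true_support = [10, 250, 777]                 # the few "risk" variants
phenotype = genotypes[:, true_support] @ np.array([0.8, -0.6, 0.5])
phenotype += 0.5 * rng.standard_normal(n_subjects)   # imaging measurement noise

model = LassoCV(cv=5).fit(genotypes, phenotype)
selected = np.flatnonzero(np.abs(model.coef_) > 1e-3)
print("selected variants:", selected)         # should recover most of true_support
```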
Multiscale Integration of -Omic, Imaging, and Clinical Data in Biomedical Informatics
Phan, John H.; Quo, Chang F.; Cheng, Chihwen; Wang, May Dongmei
2016-01-01
This paper reviews challenges and opportunities in multiscale data integration for biomedical informatics. Biomedical data can come from different biological origins, data acquisition technologies, and clinical applications. Integrating such data across multiple scales (e.g., molecular, cellular/tissue, and patient) can lead to more informed decisions for personalized, predictive, and preventive medicine. However, data heterogeneity, community standards in data acquisition, and computational complexity are big challenges for such decision making. This review describes genomic and proteomic (i.e., molecular), histopathological imaging (i.e., cellular/tissue), and clinical (i.e., patient) data; it includes case studies for single-scale (e.g., combining genomic or histopathological image data), multiscale (e.g., combining histopathological image and clinical data), and multiscale and multiplatform (e.g., the Human Protein Atlas and The Cancer Genome Atlas) data integration. Numerous opportunities exist in biomedical informatics research focusing on integration of multiscale and multiplatform data. PMID:23231990
Multiscale integration of -omic, imaging, and clinical data in biomedical informatics.
Phan, John H; Quo, Chang F; Cheng, Chihwen; Wang, May Dongmei
2012-01-01
This paper reviews challenges and opportunities in multiscale data integration for biomedical informatics. Biomedical data can come from different biological origins, data acquisition technologies, and clinical applications. Integrating such data across multiple scales (e.g., molecular, cellular/tissue, and patient) can lead to more informed decisions for personalized, predictive, and preventive medicine. However, data heterogeneity, community standards in data acquisition, and computational complexity are big challenges for such decision making. This review describes genomic and proteomic (i.e., molecular), histopathological imaging (i.e., cellular/tissue), and clinical (i.e., patient) data; it includes case studies for single-scale (e.g., combining genomic or histopathological image data), multiscale (e.g., combining histopathological image and clinical data), and multiscale and multiplatform (e.g., the Human Protein Atlas and The Cancer Genome Atlas) data integration. Numerous opportunities exist in biomedical informatics research focusing on integration of multiscale and multiplatform data.
Welter, Petra; Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M
2011-01-01
It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. A widely neglected integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems (PACS) and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the PACS environment as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process.
Riesmeier, Jörg; Fischer, Benedikt; Grouls, Christoph; Kuhl, Christiane; Deserno (né Lehmann), Thomas M
2011-01-01
It is widely accepted that content-based image retrieval (CBIR) can be extremely useful for computer-aided diagnosis (CAD). However, CBIR has not been established in clinical practice yet. A widely neglected integration gap is the lack of a unified data concept for CBIR-based CAD results and reporting. Picture archiving and communication systems (PACS) and the workflow of radiologists must be considered for successful data integration to be achieved. We suggest that CBIR systems applied to CAD should integrate their results into the PACS environment as Digital Imaging and Communications in Medicine (DICOM) structured reporting documents. A sample DICOM structured reporting template adaptable to CBIR and an appropriate integration scheme are presented. The proposed CBIR data concept may foster the promulgation of CBIR systems in clinical environments and, thereby, improve the diagnostic process. PMID:21672913
FRIDA: diffraction-limited imaging and integral-field spectroscopy for the GTC
NASA Astrophysics Data System (ADS)
Watson, Alan M.; Acosta-Pulido, José A.; Álvarez-Núñez, Luis C.; Bringas-Rico, Vicente; Cardiel, Nicolás.; Cardona, Salvador; Chapa, Oscar; Díaz García, José Javier; Eikenberry, Stephen S.; Espejo, Carlos; Flores-Meza, Rubén. A.; Fuentes-Fernández, Jorge; Gallego, Jesús; Garcés Medina, José Leonardo; Garzón López, Francisco; Hammersley, Peter; Keiman, Carolina; Lara, Gerardo; López, José Alberto; López, Pablo L.; Lucero, Diana; Moreno Arce, Heidy; Pascual Ramirez, Sergio; Patrón Recio, Jesús; Prieto, Almudena; Rodríguez, Alberto José; Marco de la Rosa, José; Sánchez, Beatriz; Uribe, Jorge A.; Váldez Berriozabal, Francisco
2016-08-01
FRIDA is a diffraction-limited imager and integral-field spectrometer that is being built for the adaptive-optics focus of the Gran Telescopio Canarias. In imaging mode FRIDA will provide scales of 0.010, 0.020 and 0.040 arcsec/pixel and in IFS mode spectral resolutions of 1500, 4000 and 30,000. FRIDA is starting systems integration and is scheduled to complete fully integrated system tests at the laboratory by the end of 2017 and to be delivered to GTC shortly thereafter. In this contribution we present a summary of its design, fabrication, current status and potential scientific applications.
NASA Astrophysics Data System (ADS)
Paramanandham, Nirmala; Rajendiran, Kishore
2018-01-01
A novel image fusion technique is presented for integrating infrared and visible images. Integration of images from the same or various sensing modalities can deliver information that cannot be obtained by viewing the sensor outputs individually and consecutively. In this paper, a swarm intelligence based image fusion technique in the discrete cosine transform (DCT) domain is proposed for surveillance applications; it integrates the infrared image with the visible image to generate a single informative fused image. Particle swarm optimization (PSO) is used in the fusion process to obtain the optimized weighting factors, which are then used to fuse the DCT coefficients of the visible and infrared images. The inverse DCT is applied to obtain the initial fused image, and an enhanced fused image is obtained through adaptive histogram equalization for better visual understanding and target detection. The proposed framework is evaluated using quantitative metrics such as standard deviation, spatial frequency, entropy, and mean gradient. The experimental results demonstrate that the proposed algorithm outperforms many other state-of-the-art techniques reported in the literature.
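A minimal sketch of the DCT-domain fusion step is given below; for simplicity, a single fixed weight stands in for the PSO-optimized weighting factors described above, so the weight value and function name are illustrative assumptions only.

    import numpy as np
    from scipy.fft import dctn, idctn

    def fuse_dct(visible, infrared, w=0.6):
        """Weighted DCT-domain fusion of two co-registered grayscale images.
        w is a stand-in for the weighting factor that the paper obtains via PSO."""
        V = dctn(visible.astype(float), norm="ortho")
        I = dctn(infrared.astype(float), norm="ortho")
        F = w * V + (1.0 - w) * I           # fuse the DCT coefficients
        fused = idctn(F, norm="ortho")      # initial fused image
        return np.clip(fused, 0, 255)

In the full method the weight is chosen by PSO against a fusion-quality objective, and the result is further enhanced with adaptive histogram equalization.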
NASA Astrophysics Data System (ADS)
Dadkhah, Arash; Zhou, Jun; Yeasmin, Nusrat; Jiao, Shuliang
2018-02-01
Various optical imaging modalities with different optical contrast mechanisms have been developed over the past years. Although most of these imaging techniques are used in many biomedical applications and research studies, integrating them will allow researchers to reach the full potential of these technologies. Nevertheless, combining different imaging techniques is always challenging because of differences in the optical and hardware requirements of the individual systems. Here, we developed a multimodal optical imaging system capable of providing comprehensive structural, functional, and molecular information of living tissue at the micrometer scale. This imaging system integrates photoacoustic microscopy (PAM), optical coherence tomography (OCT), optical Doppler tomography (ODT), and fluorescence microscopy in one platform. Optical-resolution PAM (OR-PAM) provides absorption-based imaging of biological tissues. Spectral-domain OCT provides structural information based on the scattering properties of the biological sample with no need for exogenous contrast agents. In addition, ODT is a functional extension of OCT capable of measuring and visualizing blood flow based on the Doppler effect. Fluorescence microscopy reveals molecular information of biological tissue using autofluorescence or exogenous fluorophores. In-vivo as well as ex-vivo imaging studies demonstrated the capability of our multimodal imaging system to provide comprehensive microscopic information on biological tissues. Integrating all the aforementioned imaging modalities for simultaneous multimodal imaging has promising potential for preclinical research and clinical practice in the near future.
Evaluation of DICOM viewer software for workflow integration in clinical trials
NASA Astrophysics Data System (ADS)
Haak, Daniel; Page, Charles E.; Kabino, Klaus; Deserno, Thomas M.
2015-03-01
The digital imaging and communications in medicine (DICOM) protocol is nowadays the leading standard for capture, exchange, and storage of image data in medical applications. A broad range of commercial, free, and open source software tools supporting a variety of DICOM functionality exists. However, unlike in hospital patient care, DICOM has not yet arrived in electronic data capture systems (EDCS) for clinical trials. Because of this missing integration, even the simple visualization of patients' image data in electronic case report forms (eCRFs) is impossible. Four increasing levels of integration of DICOM components into EDCS are conceivable, with each level raising both the functionality and the demands on interfaces. Hence, in this paper, a comprehensive evaluation of 27 DICOM viewer software projects is performed, investigating viewing functionality as well as interfaces for integration. Concerning general, integration, and viewing requirements, the survey covers the criteria of (i) license, (ii) support, (iii) platform, (iv) interfaces, (v) two-dimensional (2D) and (vi) three-dimensional (3D) image viewing functionality. Optimal viewers are suggested for applications in clinical trials for 3D imaging, hospital communication, and workflow. Focusing on open source solutions, the viewers ImageJ and MicroView are superior for 3D visualization, whereas GingkoCADx is advantageous for hospital integration. Concerning workflow optimization in multi-centered clinical trials, we suggest the open source viewer Weasis. Covering most use cases, an EDCS and PACS interconnection with Weasis is suggested.
Scalable Integrated Region-Based Image Retrieval Using IRM and Statistical Clustering.
ERIC Educational Resources Information Center
Wang, James Z.; Du, Yanping
Statistical clustering is critical in designing scalable image retrieval systems. This paper presents a scalable algorithm for indexing and retrieving images based on region segmentation. The method uses statistical clustering on region features and IRM (Integrated Region Matching), a measure developed to evaluate overall similarity between images…
Integral image rendering procedure for aberration correction and size measurement.
Sommer, Holger; Ihrig, Andreas; Ebenau, Melanie; Flühs, Dirk; Spaan, Bernhard; Eichmann, Marion
2014-05-20
The challenge in rendering integral images is to use as much information preserved by the light field as possible to reconstruct a captured scene in a three-dimensional way. We propose a rendering algorithm based on the projection of rays through a detailed simulation of the optical path, considering all the physical properties and locations of the optical elements. The rendered images contain information about the correct size of imaged objects without the need to calibrate the imaging device. Additionally, aberrations of the optical system may be corrected, depending on the setup of the integral imaging device. We show simulation data that illustrates the aberration correction ability and experimental data from our plenoptic camera, which illustrates the capability of our proposed algorithm to measure size and distance. We believe this rendering procedure will be useful in the future for three-dimensional ophthalmic imaging of the human retina.
Satellite image fusion based on principal component analysis and high-pass filtering.
Metwalli, Mohamed R; Nasr, Ayman H; Allah, Osama S Farag; El-Rabaie, S; Abd El-Samie, Fathi E
2010-06-01
This paper presents an integrated method for the fusion of satellite images. Several commercial earth observation satellites carry dual-resolution sensors, which provide high spatial resolution or simply high-resolution (HR) panchromatic (pan) images and low-resolution (LR) multi-spectral (MS) images. Image fusion methods are therefore required to integrate a high-spectral-resolution MS image with a high-spatial-resolution pan image to produce a pan-sharpened image with high spectral and spatial resolutions. Some image fusion methods such as the intensity, hue, and saturation (IHS) method, the principal component analysis (PCA) method, and the Brovey transform (BT) method provide HR MS images, but with low spectral quality. Another family of image fusion methods, such as the high-pass-filtering (HPF) method, operates on the basis of the injection of high frequency components from the HR pan image into the MS image. This family of methods provides less spectral distortion. In this paper, we propose the integration of the PCA method and the HPF method to provide a pan-sharpened MS image with superior spatial resolution and less spectral distortion. The experimental results show that the proposed fusion method retains the spectral characteristics of the MS image and, at the same time, improves the spatial resolution of the pan-sharpened image.
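A minimal sketch of a combined PCA/high-pass-filtering pan-sharpening step, in the spirit of the integration described above, is shown below; it is a simplified stand-in rather than the authors' exact algorithm, and the blending weight, filter size, and the assumption that the MS image is already resampled to the pan grid are all illustrative.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def pca_hpf_sharpen(ms, pan, w=0.5):
        """Simplified PCA + high-pass-filtering pan-sharpening sketch.
        ms: (H, W, B) multispectral image already resampled to the pan grid;
        pan: (H, W) panchromatic image; w: blending weight for the first PC."""
        H, W, B = ms.shape
        X = ms.reshape(-1, B).astype(float)
        mu = X.mean(axis=0)
        Xc = X - mu
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # PCA basis of the bands
        pcs = Xc @ Vt.T                                     # principal components
        p = pan.reshape(-1).astype(float)
        # Match the pan image's mean/std to the first PC, then blend it in.
        p = (p - p.mean()) / (p.std() + 1e-9) * pcs[:, 0].std() + pcs[:, 0].mean()
        pcs[:, 0] = (1 - w) * pcs[:, 0] + w * p
        sharp = (pcs @ Vt + mu).reshape(H, W, B)
        # HPF injection: add the pan image's high-frequency detail to every band.
        detail = pan.astype(float) - uniform_filter(pan.astype(float), size=5)
        return sharp + detail[..., None]

Substituting into the first principal component raises spatial resolution, while the high-pass injection limits the spectral distortion, which is the tradeoff the paper targets.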
ERIC Educational Resources Information Center
Dalvit, Silvia; Eimer, Martin
2011-01-01
Previous research has shown that the detection of a visual target can be guided not only by the temporal integration of two percepts, but also by integrating a percept and an image held in working memory. Behavioral and event-related brain potential (ERP) measures were obtained in a target detection task that required temporal integration of 2…
Diagnostic report acquisition unit for the Mayo/IBM PACS project
NASA Astrophysics Data System (ADS)
Brooks, Everett G.; Rothman, Melvyn L.
1991-07-01
The Mayo Clinic and IBM Rochester have jointly developed a picture archive and control system (PACS) for use with Mayo's MRI and Neuro-CT imaging modalities. One of the challenges of developing a useful PACS involves integrating the diagnostic reports with the electronic images so they can be displayed simultaneously. By the time a diagnostic report is generated for a particular case, its images have already been captured and archived by the PACS. To integrate the report with the images, the authors have developed an IBM Personal System/2 computer (PS/2) based diagnostic report acquisition unit (RAU). A typed copy of the report is transmitted via facsimile to the RAU, where it is stacked electronically with other reports that have been sent previously but not yet processed. By processing these reports at the RAU, the information they contain is integrated with the image database, and a copy of the report is archived electronically on an IBM Application System/400 computer (AS/400). When a user requests a set of images for viewing, the report is automatically integrated with the image data. By using a hot key, the user can toggle the report on and off the display screen. This report describes the process, hardware, and software employed to integrate the diagnostic report information into the PACS, including how the report images are captured, transmitted, and entered into the AS/400 database. Also described is how the archived reports and their associated medical images are located and merged for retrieval and display. The methods used to detect and process error conditions are also discussed.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at a much higher frame rate than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on the memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to support applications such as real-time optical flow estimation.
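As one concrete illustration of the multiple-capture idea mentioned above, the sketch below combines several short, high-rate captures taken within one standard frame period into a single extended-dynamic-range output frame; the weighting and saturation handling are assumptions for illustration, not the chip's actual on-chip algorithm.

    import numpy as np

    def multi_capture_frame(captures, exposure_ratios, full_scale=255):
        """Fuse high-frame-rate captures into one standard-rate frame with
        extended dynamic range. captures: list of 2-D arrays taken within one
        output frame period; exposure_ratios: relative exposure of each capture."""
        acc = np.zeros(captures[0].shape, dtype=np.float64)
        weight = np.zeros_like(acc)
        for img, ratio in zip(captures, exposure_ratios):
            valid = img < full_scale                   # ignore saturated pixels
            acc += np.where(valid, img / ratio, 0.0)   # normalize to a common exposure
            weight += valid
        return acc / np.maximum(weight, 1.0)

Bright regions keep detail from the short exposures while dark regions accumulate signal from the longer ones, which is the dynamic-range benefit the abstract refers to.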
Information management of a department of diagnostic imaging.
Vincenzoni, M; Campioni, P; Vecchioli Scaldazza, A; Capocasa, G; Marano, P
1998-01-01
It is well known that while the RIS allows the management of all input and output data of a radiology service, the PACS plays a major role in the management of all radiologic images. However, the two systems should be closely integrated: scheduling of a radiologic exam requires direct automated integration with the image management system for retrieval of previous exams and storage of the exam just completed. A modern information system integrating data and radiologic images should be based on automated workflow management in all its components, while remaining flexible and compatible with the ward organization, so as to support and computerize each stage of the working process. Similarly, standard protocols (DICOM 3.0, HL7) defined for interfacing the Diagnostic Imaging (D.I.) department with the other components and modules of a modern HIS should be used. They keep the system expandable and accessible, ensuring the sharing and integration of information with the HIS, emergency service, or wards. Correct RIS/PACS integration allows a marked improvement in the efficiency of a modern D.I. department, with a positive impact on daily activity, prompt availability of previous data and images, and sophisticated handling of diagnostic images to enhance reporting quality. The increasing diffusion of internet and intranet technology points to further developments still to come.
Shao, Xiaozhuo; Zheng, Wei; Huang, Zhiwei
2010-11-08
We evaluate the diagnostic feasibility of the integrated polarized near-infrared (NIR) autofluorescence (AF) and NIR diffuse reflectance (DR) imaging technique developed for colonic cancer detection. A total of 48 paired colonic tissue specimens (normal vs. cancer) were measured using the integrated NIR DR (850-1100 nm) and NIR AF imaging at 785 nm laser excitation. The results showed that the NIR AF intensities of cancer tissues are significantly lower than those of normal tissues (p<0.001, paired 2-sided Student's t-test, n=48). NIR AF imaging under polarization conditions gives a higher diagnostic accuracy (~92-94%) than non-polarized NIR AF imaging or NIR DR imaging. Further, ratio imaging of NIR DR to NIR AF with polarization provides the best diagnostic accuracy (~96%) among the NIR AF and NIR DR imaging techniques. This work suggests that integrated NIR AF/DR imaging under polarization conditions has the potential to improve the early diagnosis and detection of malignant lesions in the colon.
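The ratio-imaging step lends itself to a very short sketch; the function below forms the pixel-wise DR/AF ratio and applies a threshold flag, where the threshold value is purely illustrative and not taken from the study.

    import numpy as np

    def dr_af_ratio_map(nir_dr, nir_af, threshold=1.5):
        """Pixel-wise ratio of NIR diffuse reflectance to NIR autofluorescence,
        plus a binary flag; higher ratios correspond to the lower AF reported
        for cancerous tissue. The threshold is a hypothetical example value."""
        ratio = nir_dr.astype(float) / np.maximum(nir_af.astype(float), 1e-9)
        return ratio, ratio > threshold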
Integration of radiographic images with an electronic medical record.
Overhage, J. M.; Aisen, A.; Barnes, M.; Tucker, M.; McDonald, C. J.
2001-01-01
Radiographic images are important and expensive diagnostic tests. However, the provider caring for the patient often does not review the images directly due to time constraints. Institutions can use picture archiving and communications systems to make images more available to the provider, but this may not be the best solution. We integrated radiographic image review into the Regenstrief Medical Record System in order to address this problem. To achieve adequate performance, we store JPEG compressed images directly in the RMRS. Currently, physicians review about 5% of all radiographic studies using the RMRS image review function. PMID:11825241
Full-parallax 3D display from stereo-hybrid 3D camera system
NASA Astrophysics Data System (ADS)
Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel
2018-04-01
In this paper, we propose an innovative approach for producing the microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system, which captures a pair of 3D data sets and composes a denser point cloud. An intrinsic difficulty is that the hybrid sensors have dissimilarities and therefore must be equalized. The processed data then facilitate generating an integral image by computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and viewing angle.
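A minimal sketch of the computational projection of a colored point cloud through a virtual pinhole array into microimages is given below; the array size, lens pitch, gap, and microimage resolution are arbitrary illustrative parameters, and the function name is hypothetical.

    import numpy as np

    def render_microimages(points, colors, n_lens=(20, 20), pitch=1.0, gap=3.0, mi_px=32):
        """Project a point cloud (N x 3, with z > 0) with per-point colors (N x 3)
        through a virtual pinhole array, producing a grid of microimages."""
        out = np.zeros((n_lens[0] * mi_px, n_lens[1] * mi_px, 3))
        for i in range(n_lens[0]):
            for j in range(n_lens[1]):
                cx = (j - n_lens[1] / 2) * pitch          # pinhole center, x
                cy = (i - n_lens[0] / 2) * pitch          # pinhole center, y
                # Pinhole projection of every point onto this microimage.
                u = (points[:, 0] - cx) * gap / points[:, 2]
                v = (points[:, 1] - cy) * gap / points[:, 2]
                px = np.round(u / pitch * mi_px + mi_px / 2).astype(int)
                py = np.round(v / pitch * mi_px + mi_px / 2).astype(int)
                ok = (px >= 0) & (px < mi_px) & (py >= 0) & (py < mi_px)
                out[i * mi_px + py[ok], j * mi_px + px[ok]] = colors[ok]
        return out

Each pinhole sees the scene from a slightly different direction, which is what gives the displayed image its full parallax.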
Multiresolution image gathering and restoration
NASA Technical Reports Server (NTRS)
Fales, Carl L.; Huck, Friedrich O.; Alter-Gartenberg, Rachel; Rahman, Zia-Ur
1992-01-01
In this paper we integrate multiresolution decomposition with image gathering and restoration. This integration leads to a Wiener-matrix filter that accounts for the aliasing, blurring, and noise in image gathering, together with the digital filtering and decimation in signal decomposition. Moreover, as implemented here, the Wiener-matrix filter completely suppresses the blurring and raster effects of the image-display device. We demonstrate that this filter can significantly improve the fidelity and visual quality produced by conventional image reconstruction. The extent of this improvement, in turn, depends on the design of the image-gathering device.
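As background for the filtering step, a much-simplified scalar Wiener restoration in the frequency domain is sketched below; the Wiener-matrix filter of the paper additionally models aliasing, decimation, and the display response, so this is only an illustration of the underlying idea with an assumed noise-to-signal ratio.

    import numpy as np

    def wiener_restore(observed, psf, nsr=0.01):
        """Scalar frequency-domain Wiener restoration of a blurred, noisy image.
        psf: point-spread function of the image-gathering device;
        nsr: assumed noise-to-signal power ratio."""
        H = np.fft.fft2(psf, s=observed.shape)
        G = np.fft.fft2(observed)
        W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener transfer function
        return np.real(np.fft.ifft2(W * G))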
Device for wavelength-selective imaging
Frangioni, John V.
2010-09-14
An imaging device captures both a visible light image and a diagnostic image, the diagnostic image corresponding to emissions from an imaging medium within the object. The visible light image (which may be color or grayscale) and the diagnostic image may be superimposed to display regions of diagnostic significance within a visible light image. A number of imaging media may be used according to an intended application for the imaging device, and an imaging medium may have wavelengths above, below, or within the visible light spectrum. The devices described herein may be advantageously packaged within a single integrated device or other solid state device, and/or employed in an integrated, single-camera medical imaging system, as well as many non-medical imaging systems that would benefit from simultaneous capture of visible-light wavelength images along with images at other wavelengths.
Integration of retinal image sequences
NASA Astrophysics Data System (ADS)
Ballerini, Lucia
1998-10-01
In this paper a method for noise reduction in ocular fundus image sequences is described. The eye is the only part of the human body where the capillary network, along with the arterial and venous circulation, can be observed using a non-invasive technique. The study of the retinal vessels is very important both for the study of local pathology (retinal disease) and for the large amount of information it offers on systemic haemodynamics, such as hypertension, arteriosclerosis, and diabetes. The proposed image integration procedure can be divided into two steps: registration and fusion. First we describe an automatic alignment algorithm for the registration of ocular fundus images. In order to enhance vessel structures, we used a spatially oriented bank of filters designed to match the properties of the objects of interest. To evaluate interframe misalignment we adopted a fast cross-correlation algorithm. The performance of the alignment method has been estimated by simulating shifts between image pairs and by using a cross-validation approach. We then propose a temporal integration technique for image sequences so as to compute enhanced pictures of the overall capillary network: image registration is combined with image enhancement by fusing subsequent frames of the same region. To evaluate the attainable results, the signal-to-noise ratio was estimated before and after integration. Experimental results on synthetic images of vessel-like structures with different kinds of additive Gaussian noise, as well as on real fundus images, are reported.
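The register-then-fuse idea can be sketched in a few lines; the version below uses plain FFT cross-correlation for integer shifts and simple frame averaging, whereas the paper additionally applies oriented vessel-enhancement filters before estimating the misalignment, so this is only an illustrative simplification.

    import numpy as np

    def register_and_average(frames):
        """Align every frame to the first by FFT cross-correlation (integer shift)
        and average the aligned stack."""
        ref = frames[0].astype(float)
        F_ref = np.fft.fft2(ref)
        acc = np.zeros_like(ref)
        for f in frames:
            f = f.astype(float)
            # The peak of the cross-correlation surface gives the interframe shift.
            xc = np.fft.ifft2(F_ref * np.conj(np.fft.fft2(f)))
            dy, dx = np.unravel_index(np.argmax(np.abs(xc)), xc.shape)
            acc += np.roll(f, (dy, dx), axis=(0, 1))
        return acc / len(frames)   # temporal integration: additive noise drops ~ 1/sqrt(N)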
Integrated clinical workstations for image and text data capture, display, and teleconsultation.
Dayhoff, R; Kuzmak, P M; Kirin, G
1994-01-01
The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway.
Advanced image based methods for structural integrity monitoring: Review and prospects
NASA Astrophysics Data System (ADS)
Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.
2018-02-01
There is a growing trend in engineering to develop methods for structural integrity monitoring and for characterizing the in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics has brought about a paradigm change in how phenomena are sensed. Hence, several widely applicable optical approaches are playing a significant role in support of experiments. The current review describes advanced image-based methods for structural integrity monitoring, focusing on Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI), and Speckle Pattern Shearing Interferometry (Shearography). These non-contact, full-field techniques rely on intensive image processing to measure mechanical behaviour, and they evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.
NASA Astrophysics Data System (ADS)
Dayhoff, Ruth E.; Maloney, Daniel L.
1990-08-01
The effective delivery of health care has become increasingly dependent on a wide range of medical data which includes a variety of images. Manual and computer-based medical records ordinarily do not contain image data, leaving the physician to deal with a fragmented patient record widely scattered throughout the hospital. The Department of Veterans Affairs (VA) is currently installing a prototype hospital information system (HIS) workstation network to demonstrate the feasibility of providing image management and communications (IMAC) functionality as an integral part of an existing hospital information system. The core of this system is a database management system adapted to handle images as a new data type. A general model for this integration is discussed and specifics of the hospital-wide network of image display workstations are given.
Ginat, Daniel Thomas; Anthony, Gregory J; Christoforidis, Gregory; Oto, Aytekin; Dalag, Leonard; Sammet, Steffen
2018-02-01
The purpose of this study is to compare the image quality of magnetic resonance (MR) treatment planning images and proton resonance frequency (PRF) shift thermography images and inform coil selection for MR-guided laser ablation of tumors in the head and neck region. Laser ablation was performed on an agar phantom and monitored via MR PRF shift thermography on a 3-T scanner, following acquisition of T1-weighted (T1W) planning images. PRF shift thermography images and T2-weighted (T2W) planning images were also performed in the neck region of five normal human volunteers. Signal-to-noise ratios (SNR) and temperature uncertainty were calculated and compared between scans acquired with the quadrature mode body integrated coil and a head and neck neurovascular coil. T1W planning images of the agar phantom produced SNRs of 4.0 and 12.2 for the quadrature mode body integrated coil and head and neck neurovascular coil, respectively. The SNR of the phantom MR thermography magnitude images obtained using the quadrature mode body integrated coil was 14.4 versus 59.6 using the head and neck coil. The average temperature uncertainty for MR thermography performed on the phantom with the quadrature mode body integrated coil was 1.1 versus 0.3 °C with the head and neck coil. T2W planning images of the neck in five human volunteers produced SNRs of 28.3 and 91.0 for the quadrature mode body integrated coil and head and neck coil, respectively. MR thermography magnitude images of the neck in the volunteers obtained using the quadrature mode body integrated coil had a signal-to-noise ratio of 8.3, while the SNR using the head and neck coil was 16.1. The average temperature uncertainty for MR thermography performed on the volunteers with the body coil was 2.5 versus 1.6 °C with the head and neck neurovascular coil. The quadrature mode body integrated coil provides inferior image quality for both basic treatment planning sequences and MR PRF shift thermography compared with a neurovascular coil, but may nevertheless be adequate for clinical purposes.
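For context on the PRF-shift thermography monitored in this study, the sketch below converts the phase difference between two gradient-echo acquisitions into a temperature-change map using the standard PRF relation; the field strength, echo time, and PRF coefficient shown are typical textbook values and are not taken from the paper.

    import numpy as np

    GAMMA = 42.576e6     # proton gyromagnetic ratio [Hz/T]
    ALPHA = -0.01e-6     # PRF thermal coefficient, about -0.01 ppm per deg C

    def prf_temperature_change(phase, phase_ref, B0=3.0, TE=0.01):
        """Temperature-change map [deg C] from the phase difference of two
        gradient-echo images; B0 in tesla, TE in seconds (illustrative values)."""
        dphi = np.angle(np.exp(1j * (phase - phase_ref)))   # wrap to [-pi, pi]
        return dphi / (2 * np.pi * GAMMA * ALPHA * B0 * TE)

Because the phase noise, and hence the temperature uncertainty, scales inversely with the magnitude-image SNR, the higher-SNR neurovascular coil yields the lower uncertainty reported above.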
A fast non-local means algorithm based on integral image and reconstructed similar kernel
NASA Astrophysics Data System (ADS)
Lin, Zheng; Song, Enmin
2018-03-01
Image denoising is one of the essential methods in digital image processing. The non-local means (NLM) approach is a remarkable denoising technique, but its computational complexity is high. In this paper, we design a fast NLM algorithm based on the integral image and a reconstructed similarity kernel. First, the integral image is introduced into the traditional NLM algorithm; doing so removes a great deal of repetitive computation in the parallel processing, which greatly improves the running speed of the algorithm. Secondly, in order to amend the error introduced by the integral image, we construct a similarity window resembling a Gaussian kernel in a pyramidal stacking pattern. Finally, in order to eliminate the influence of replacing the Gaussian-weighted Euclidean distance with the plain Euclidean distance, we propose a scheme that constructs a 3 x 3 similarity kernel within the neighborhood window, which reduces the effect of noise on a single pixel. Experimental results demonstrate that the proposed algorithm is about seventeen times faster than the traditional NLM algorithm, yet produces comparable results in terms of Peak Signal-to-Noise Ratio (the PSNR increased by 2.9% on average) and perceptual image quality.
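The core trick, using an integral image so that every patch distance for a fixed translation costs only four lookups, can be sketched as follows; the patch radius and function names are illustrative, not taken from the paper.

    import numpy as np

    def integral_image(a):
        """Summed-area table with a padded zero row/column for easy box sums."""
        s = np.cumsum(np.cumsum(a, axis=0), axis=1)
        return np.pad(s, ((1, 0), (1, 0)))

    def patch_ssd_map(img, shift, half=3):
        """For a fixed translation shift=(dy, dx), return the sum of squared
        differences between the patch around every pixel and the patch at the
        translated position, computed from a single integral image."""
        img = img.astype(np.float64)
        moved = np.roll(img, shift, axis=(0, 1))
        S = integral_image((img - moved) ** 2)
        k = 2 * half + 1
        # Box sum over every k x k window via four lookups in the integral image.
        return S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]

Repeating this for every candidate translation inside the search window yields all the NLM patch distances without recomputing overlapping sums.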
Predicting neuropathic ulceration: analysis of static temperature distributions in thermal images
NASA Astrophysics Data System (ADS)
Kaabouch, Naima; Hu, Wen-Chen; Chen, Yi; Anderson, Julie W.; Ames, Forrest; Paulson, Rolf
2010-11-01
Foot ulcers affect millions of Americans annually. Conventional methods used to assess skin integrity, including inspection and palpation, may be valuable approaches, but they usually do not detect changes in skin integrity until an ulcer has already developed. We analyze the feasibility of thermal imaging as a technique to assess the integrity of the skin and its many layers. Thermal images are analyzed using an asymmetry analysis, combined with a genetic algorithm, to examine the infrared images for early detection of foot ulcers. Preliminary results show that the proposed technique can reliably and efficiently detect inflammation and hence effectively predict potential ulceration.
Integrated NDVI images for Niger 1986-1987. [Normalized Difference Vegetation Index
NASA Technical Reports Server (NTRS)
Harrington, John A., Jr.; Wylie, Bruce K.; Tucker, Compton J.
1988-01-01
Two NOAA AVHRR images are presented which provide a comparison of the geographic distribution of an integrated normalized difference vegetation index (NDVI) for the Sahel zone in Niger for the growing seasons of 1986 and 1987. The production of the images and their application to resource management are discussed. Daily large-area-coverage data with a spatial resolution of 1.1 km at nadir were transformed to the NDVI and geographically registered to produce the images.
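For reference, the NDVI and its seasonal integration can be sketched in a few lines; the compositing shown here (a plain sum of per-date NDVI maps over the season) is a simple stand-in for the operational processing used for the Sahel maps.

    import numpy as np

    def ndvi(nir, red):
        """Normalized Difference Vegetation Index for one AVHRR scene."""
        nir, red = nir.astype(float), red.astype(float)
        return (nir - red) / np.maximum(nir + red, 1e-9)

    def integrated_ndvi(nir_stack, red_stack):
        """Season-integrated NDVI: accumulate per-date NDVI over the growing season."""
        return sum(ndvi(n, r) for n, r in zip(nir_stack, red_stack))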
Geiger-Mode Avalanche Photodiode Arrays Integrated to All-Digital CMOS Circuits.
Aull, Brian
2016-04-08
This article reviews MIT Lincoln Laboratory's work over the past 20 years to develop photon-sensitive image sensors based on arrays of silicon Geiger-mode avalanche photodiodes. Integration of these detectors with all-digital CMOS readout circuits enables exquisitely sensitive solid-state imagers for lidar, wavefront sensing, and passive imaging.
NASA Technical Reports Server (NTRS)
1998-01-01
PixelVision, Inc., has developed a series of integrated imaging engines capable of high-resolution image capture at dynamic speeds. This technology was used originally at Jet Propulsion Laboratory in a series of imaging engines for a NASA mission to Pluto. By producing this integrated package, Charge-Coupled Device (CCD) technology has been made accessible to a wide range of users.
Improved integral images compression based on multi-view extraction
NASA Astrophysics Data System (ADS)
Dricot, Antoine; Jung, Joel; Cagnazzo, Marco; Pesquet, Béatrice; Dufaux, Frédéric
2016-09-01
Integral imaging is a technology based on plenoptic photography that captures and samples the light-field of a scene through a micro-lens array. It provides views of the scene from several angles and therefore is foreseen as a key technology for future immersive video applications. However, integral images have a large resolution and a structure based on micro-images which is challenging to encode. A compression scheme for integral images based on view extraction has previously been proposed, with average BD-rate gains of 15.7% (up to 31.3%) reported over HEVC when using one single extracted view. As the efficiency of the scheme depends on a tradeoff between the bitrate required to encode the view and the quality of the image reconstructed from the view, it is proposed to increase the number of extracted views. Several configurations are tested with different positions and different number of extracted views. Compression efficiency is increased with average BD-rate gains of 22.2% (up to 31.1%) reported over the HEVC anchor, with a realistic runtime increase.
NASA Astrophysics Data System (ADS)
Wei, Liqing; Xiao, Xizhong; Wang, Yueming; Zhuang, Xiaoqiong; Wang, Jianyu
2017-11-01
Space-borne hyperspectral imagery is an important tool for earth sciences and industrial applications. Higher spatial and spectral resolutions have been sought persistently, although they come at the cost of greater power consumption, volume, and weight in a space-borne spectral imager design. To miniaturize the hyperspectral imager and optimize the spectral splitting method, several approaches are compared in this paper, and a spectral time delay integration (TDI) method with a high-transmittance Integrated Stepwise Filter (ISF) is proposed. With this method, an ISF imaging spectrometer with TDI can achieve higher system sensitivity than a traditional prism or grating imaging spectrometer. In addition, the ISF imaging spectrometer performs well in suppressing the infrared background radiation produced by the instrument. A compact shortwave infrared (SWIR) hyperspectral imager prototype based on HgCdTe, covering the spectral range of 2.0-2.5 μm with 6 TDI stages, was designed and integrated. To investigate the performance of the ISF spectrometer, a method to derive the optimal blocking-band curve of the ISF is introduced, along with known error characteristics. To assess the spectral performance of the ISF system, a new spectral calibration based on blackbody radiation with temperature scanning is proposed. The results of the imaging experiment demonstrate the merits of the ISF, which has great application prospects in the field of high-sensitivity, high-resolution space-borne hyperspectral imagery.
Integration of CBIR in radiological routine in accordance with IHE
NASA Astrophysics Data System (ADS)
Welter, Petra; Deserno, Thomas M.; Fischer, Benedikt; Wein, Berthold B.; Ott, Bastian; Günther, Rolf W.
2009-02-01
Increasing use of digital image processing leads to an enormous amount of imaging data. Access to picture archiving and communication systems (PACS), however, is solely textual, leading to sparse retrieval results because of ambiguous or missing image descriptions. Content-based image retrieval (CBIR) systems can improve the clinical diagnostic outcome significantly. However, current CBIR systems are not able to integrate their results with the clinical workflow and PACS. Existing communication standards like DICOM and HL7 leave many options for implementation and do not ensure full interoperability. We present a concept for the standardized integration of a CBIR system into the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. This is based on the IHE integration profile 'Post-Processing Workflow' (PPW), which defines responsibilities as well as standardized communication, and it utilizes the DICOM Structured Report (DICOM SR). Because most PACS and RIS systems are not yet fully IHE-compliant with PPW, we also suggest an intermediate approach using the concepts of the CAD-PACS Toolkit. The integration is independent of the particular PACS and RIS, and it therefore supports the widespread application of CBIR in radiological routine. As a result, the approach is exemplarily applied to the Image Retrieval in Medical Applications (IRMA) framework.
Moutsatsos, Ioannis K; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J; Jenkins, Jeremy L; Holway, Nicholas; Tallarico, John; Parker, Christian N
2017-03-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an "off-the-shelf," open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community.
Moutsatsos, Ioannis K.; Hossain, Imtiaz; Agarinis, Claudia; Harbinski, Fred; Abraham, Yann; Dobler, Luc; Zhang, Xian; Wilson, Christopher J.; Jenkins, Jeremy L.; Holway, Nicholas; Tallarico, John; Parker, Christian N.
2016-01-01
High-throughput screening generates large volumes of heterogeneous data that require a diverse set of computational tools for management, processing, and analysis. Building integrated, scalable, and robust computational workflows for such applications is challenging but highly valuable. Scientific data integration and pipelining facilitate standardized data processing, collaboration, and reuse of best practices. We describe how Jenkins-CI, an “off-the-shelf,” open-source, continuous integration system, is used to build pipelines for processing images and associated data from high-content screening (HCS). Jenkins-CI provides numerous plugins for standard compute tasks, and its design allows the quick integration of external scientific applications. Using Jenkins-CI, we integrated CellProfiler, an open-source image-processing platform, with various HCS utilities and a high-performance Linux cluster. The platform is web-accessible, facilitates access and sharing of high-performance compute resources, and automates previously cumbersome data and image-processing tasks. Imaging pipelines developed using the desktop CellProfiler client can be managed and shared through a centralized Jenkins-CI repository. Pipelines and managed data are annotated to facilitate collaboration and reuse. Limitations with Jenkins-CI (primarily around the user interface) were addressed through the selection of helper plugins from the Jenkins-CI community. PMID:27899692
2017-12-02
Report: Acquisition of an Advanced Thermal Analysis and Imaging System for Integration with Interdisciplinary Research and Education in Low Density Organic-Inorganic Materials
The UIST image slicing integral field unit
NASA Astrophysics Data System (ADS)
Ramsay Howat, S.; Todd, S.; Wells, M.; Hastings, P.
2006-06-01
The UKIRT Imager Spectrometer (UIST) contains a deployable integral field unit which is one of the most popular modes of this common-user instrument. In this paper, we review all aspects of the UIST IFU from the design and production of the aluminium mirrors to the integration with the telescope system during commissioning. Reduction of the integral field data is fully supported by the UKIRT data reduction pipeline, ORAC-DR.
Compressive Sensing Based Bio-Inspired Shape Feature Detection CMOS Imager
NASA Technical Reports Server (NTRS)
Duong, Tuan A. (Inventor)
2015-01-01
A CMOS imager integrated circuit using compressive sensing and bio-inspired detection is presented which integrates novel functions and algorithms within a novel hardware architecture enabling efficient on-chip implementation.
System integration and DICOM image creation for PET-MR fusion.
Hsiao, Chia-Hung; Kao, Tsair; Fang, Yu-Hua; Wang, Jiunn-Kuen; Guo, Wan-Yuo; Chao, Liang-Hsiao; Yen, Sang-Hue
2005-03-01
This article demonstrates a gateway system for converting image fusion results to digital imaging and communications in medicine (DICOM) objects. For the purpose of standardization and integration, we have followed the guidelines of the Integrating the Healthcare Enterprise technical framework and developed a DICOM gateway. The gateway system combines data from the hospital information system, the image fusion results, and the information it generates itself to constitute new DICOM objects. All the mandatory tags defined for standard DICOM objects are generated in the gateway system. The gateway generates two series of SOP (Service Object Pair) instances for each PET-MR fusion result: one for the reconstructed magnetic resonance (MR) images and the other for the positron emission tomography (PET) images. The size, resolution, spatial coordinates, and number of frames are the same in both series of SOP instances, so every newly generated MR image exactly matches one of the reconstructed PET images. These DICOM images are stored to the picture archiving and communication system (PACS) server by means of standard DICOM protocols. When the images are retrieved and viewed with standard DICOM viewing systems, both can be viewed at the same anatomical location. This system is useful for precise diagnosis and therapy.
A proposal of image slicer designed for integral field spectroscopy with NIRSpec/JSWT
NASA Astrophysics Data System (ADS)
Prieto, E.; Vivès, S.
2006-06-01
Integral field spectroscopy (IFS) provides a spectrum simultaneously for each spatial sample of an extended, two-dimensional field. It relies on an integral field unit (IFU), which slices and re-arranges the initial field along the entrance slit of a spectrograph. This article presents a variant of the classical IFU design based on the advanced image slicer concept [Content, R., 1997. A new design for integral field spectroscopy with 8-m telescopes. Proc. SPIE 2871, 1295]. To reduce optical aberrations, the pupil and slit mirrors are disposed in a fan-shaped configuration, meaning that the angles between the incident and reflected beams on each element are minimized. The fan-shaped image slicer is explained in more detail in [Vivès, S., Prieto, E., submitted for publication. An original image slicer designed for Integral Field Spectroscopy with NIRSpec/JSWT. Opt. Eng. Available from: ArXiv Physics e-prints, arXiv:0512002]. As an example, we present the design LAM used for its proposal in response to the NIRSpec/IFU invitation to tender.
A cryptologic based trust center for medical images.
Wong, S T
1996-01-01
To investigate practical solutions that can integrate cryptographic techniques and picture archiving and communication systems (PACS) to improve the security of medical images. The PACS at the University of California San Francisco Medical Center consolidate images and associated data from various scanners into a centralized data archive and transmit them to remote display stations for review and consultation purposes. The purpose of this study is to investigate the model of a digital trust center that integrates cryptographic algorithms and protocols seamlessly into such a digital radiology environment to improve the security of medical images. The timing performance of encryption, decryption, and transmission of the cryptographic protocols over 81 volumetric PACS datasets has been measured. Lossless data compression is also applied before the encryption. The transmission performance is measured against three types of networks of different bandwidths: narrow-band Integrated Services Digital Network, Ethernet, and OC-3c Asynchronous Transfer Mode. The proposed digital trust center provides a cryptosystem solution to protect the confidentiality and to determine the authenticity of digital images in hospitals. The results of this study indicate that diagnostic images such as x-rays and magnetic resonance images could be routinely encrypted in PACS. However, applying encryption in teleradiology and PACS is a tradeoff between communications performance and security measures. Many people are uncertain about how to integrate cryptographic algorithms coherently into existing operations of the clinical enterprise. This paper describes a centralized cryptosystem architecture to ensure image data authenticity in a digital radiology department. The system performance has been evaluated in a hospital-integrated PACS environment.
A cryptologic based trust center for medical images.
Wong, S T
1996-01-01
OBJECTIVE: To investigate practical solutions that can integrate cryptographic techniques and picture archiving and communication systems (PACS) to improve the security of medical images. DESIGN: The PACS at the University of California San Francisco Medical Center consolidate images and associated data from various scanners into a centralized data archive and transmit them to remote display stations for review and consultation purposes. The purpose of this study is to investigate the model of a digital trust center that integrates cryptographic algorithms and protocols seamlessly into such a digital radiology environment to improve the security of medical images. MEASUREMENTS: The timing performance of encryption, decryption, and transmission of the cryptographic protocols over 81 volumetric PACS datasets has been measured. Lossless data compression is also applied before the encryption. The transmission performance is measured against three types of networks of different bandwidths: narrow-band Integrated Services Digital Network, Ethernet, and OC-3c Asynchronous Transfer Mode. RESULTS: The proposed digital trust center provides a cryptosystem solution to protect the confidentiality and to determine the authenticity of digital images in hospitals. The results of this study indicate that diagnostic images such as x-rays and magnetic resonance images could be routinely encrypted in PACS. However, applying encryption in teleradiology and PACS is a tradeoff between communications performance and security measures. CONCLUSION: Many people are uncertain about how to integrate cryptographic algorithms coherently into existing operations of the clinical enterprise. This paper describes a centralized cryptosystem architecture to ensure image data authenticity in a digital radiology department. The system performance has been evaluated in a hospital-integrated PACS environment. PMID:8930857
VHDL Modeling and Simulation of a Digital Image Synthesizer for Countering ISAR
2003-06-01
This thesis discusses VHDL modeling and simulation of a full-custom Application Specific Integrated Circuit (ASIC) for a Digital Image Synthesizer (DIS)... necessary for a given application. With such a digital method, it is possible for a small ship to appear as large as an aircraft carrier or any high... The Digital Image Synthesizer (DIS) is an Application Specific Integrated Circuit
A Mobile Food Record For Integrated Dietary Assessment*
Ahmad, Ziad; Kerr, Deborah A.; Bosch, Marc; Boushey, Carol J.; Delp, Edward J.; Khanna, Nitin; Zhu, Fengqing
2017-01-01
This paper presents an integrated dietary assessment system based on food image analysis that uses mobile devices or smartphones. We describe two components of our integrated system: a mobile application and an image-based food nutrient database connected to the mobile application. An easy-to-use mobile application user interface is described that was designed based on user preferences as well as the requirements of the image analysis methods; the user interface is validated by user feedback collected from several studies. The food nutrient and image databases are also described; they facilitate image-based dietary assessment and enable dietitians and other healthcare professionals to monitor patients' dietary intake in real time. The system has been tested and validated in several user studies involving more than 500 users who took more than 60,000 food images under controlled and community-dwelling conditions. PMID:28691119
Integration of USB and firewire cameras in machine vision applications
NASA Astrophysics Data System (ADS)
Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard
1999-08-01
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. Many issues need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper briefly describes a couple of the consumer digital camera standards and then discusses some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
Integrated infrared and visible image sensors
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2000-01-01
Semiconductor imaging devices integrating an array of visible detectors and another array of infrared detectors into a single module to simultaneously detect both the visible and infrared radiation of an input image. The visible detectors and the infrared detectors may be formed either on two separate substrates or on the same substrate by interleaving visible and infrared detectors.
P.E. Dennison; D.A. Roberts; J. Regelbrugge; S.L. Ustin
2000-01-01
Polarimetric synthetic aperture radar (SAR) and imaging spectrometry exemplify advanced technologies for mapping wildland fuels in chaparral ecosystems. In this study, we explore the potential of integrating polarimetric SAR and imaging spectrometry for mapping wildland fuels. P-band SAR and ratios containing P-band polarizations are sensitive to variations in stand...
Cerebral White Matter Integrity and Cognitive Aging: Contributions from Diffusion Tensor Imaging
Madden, David J.; Bennett, Ilana J.; Song, Allen W.
2009-01-01
The integrity of cerebral white matter is critical for efficient cognitive functioning, but little is known regarding the role of white matter integrity in age-related differences in cognition. Diffusion tensor imaging (DTI) measures the directional displacement of molecular water and as a result can characterize the properties of white matter that combine to restrict diffusivity in a spatially coherent manner. This review considers DTI studies of aging and their implications for understanding adult age differences in cognitive performance. Decline in white matter integrity contributes to a disconnection among distributed neural systems, with a consistent effect on perceptual speed and executive functioning. The relation between white matter integrity and cognition varies across brain regions, with some evidence suggesting that age-related effects exhibit an anterior-posterior gradient. With continued improvements in spatial resolution and integration with functional brain imaging, DTI holds considerable promise, both for theories of cognitive aging and for translational application. PMID:19705281
Dayhoff, R E; Maloney, D L; Kenney, T J; Fletcher, R D
1991-01-01
The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System.
Dayhoff, R. E.; Maloney, D. L.; Kenney, T. J.; Fletcher, R. D.
1991-01-01
The VA's hospital information system, the Decentralized Hospital Computer Program (DHCP), is an integrated system based on a powerful set of software tools with shared data accessible from any of its application modules. It includes many functionally specific application subsystems such as laboratory, pharmacy, radiology, and dietetics. Physicians need applications that cross these application boundaries to provide useful and convenient patient data. One of these multi-specialty applications, the DHCP Imaging System, integrates multimedia data to provide clinicians with comprehensive patient-oriented information. User requirements for cross-disciplinary image access can be studied to define needs for similar text data access. Integration approaches must be evaluated both for their ability to deliver patient-oriented text data rapidly and their ability to integrate multimedia data objects. Several potential integration approaches are described as they relate to the DHCP Imaging System. PMID:1807651
Integrated clinical workstations for image and text data capture, display, and teleconsultation.
Dayhoff, R.; Kuzmak, P. M.; Kirin, G.
1994-01-01
The Department of Veterans Affairs (VA) DHCP Imaging System digitally records clinically significant diagnostic images selected by medical specialists in a variety of hospital departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images, which include true color and gray scale images, scanned documents, and electrocardiogram waveforms, are stored on network file servers and displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system (HIS), allowing integrated displays of text and image data from all medical specialties. Two VA medical centers currently have DHCP Imaging Systems installed, and other installations are underway. PMID:7949899
PISCES: An Integral Field Spectrograph Technology Demonstration for the WFIRST Coronagraph
NASA Technical Reports Server (NTRS)
McElwain, Michael W.; Mandell, Avi M.; Gong, Qian; Llop-Sayson, Jorge; Brandt, Timothy; Chambers, Victor J.; Grammer, Bryan; Greeley, Bradford; Hilton, George; Perrin, Marshall D.;
2016-01-01
We present the design, integration, and test of the Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) integral field spectrograph (IFS). The PISCES design meets the science requirements for the Wide-Field InfraRed Survey Telescope (WFIRST) Coronagraph Instrument (CGI). PISCES was integrated and tested in the integral field spectroscopy laboratory at NASA Goddard. In June 2016, PISCES was delivered to the Jet Propulsion Laboratory (JPL) where it was integrated with the Shaped Pupil Coronagraph (SPC) High Contrast Imaging Testbed (HCIT). The SPC/PISCES configuration will demonstrate high contrast integral field spectroscopy as part of the WFIRST CGI technology development program.
PISCES: an integral field spectrograph technology demonstration for the WFIRST coronagraph
NASA Astrophysics Data System (ADS)
McElwain, Michael W.; Mandell, Avi M.; Gong, Qian; Llop-Sayson, Jorge; Brandt, Timothy; Chambers, Victor J.; Grammer, Bryan; Greeley, Bradford; Hilton, George; Perrin, Marshall D.; Stapelfeldt, Karl R.; Demers, Richard; Tang, Hong; Cady, Eric
2016-07-01
We present the design, integration, and test of the Prototype Imaging Spectrograph for Coronagraphic Exoplanet Studies (PISCES) integral field spectrograph (IFS). The PISCES design meets the science requirements for the Wide-Field InfraRed Survey Telescope (WFIRST) Coronagraph Instrument (CGI). PISCES was integrated and tested in the integral field spectroscopy laboratory at NASA Goddard. In June 2016, PISCES was delivered to the Jet Propulsion Laboratory (JPL) where it was integrated with the Shaped Pupil Coronagraph (SPC) High Contrast Imaging Testbed (HCIT). The SPC/PISCES configuration will demonstrate high contrast integral field spectroscopy as part of the WFIRST CGI technology development program.
Gabr, Hesham; Chen, Xi; Zevallos-Carrasco, Oscar M; Viehland, Christian; Dandrige, Alexandria; Sarin, Neeru; Mahmoud, Tamer H; Vajzovic, Lejla; Izatt, Joseph A; Toth, Cynthia A
2018-01-10
To evaluate the use of live volumetric (4D) intraoperative swept-source microscope-integrated optical coherence tomography in vitrectomy for proliferative diabetic retinopathy complications. In this prospective study, we analyzed a subgroup of patients with proliferative diabetic retinopathy complications who required vitrectomy and who were imaged by the research swept-source microscope-integrated optical coherence tomography system. In near real time, images were displayed in stereo heads-up display facilitating intraoperative surgeon feedback. Postoperative review included scoring image quality, identifying different diabetic retinopathy-associated pathologies and reviewing the intraoperatively documented surgeon feedback. Twenty eyes were included. Indications for vitrectomy were tractional retinal detachment (16 eyes), combined tractional-rhegmatogenous retinal detachment (2 eyes), and vitreous hemorrhage (2 eyes). Useful, good-quality 2D (B-scans) and 4D images were obtained in 16/20 eyes (80%). In these eyes, multiple diabetic retinopathy complications could be imaged. Swept-source microscope-integrated optical coherence tomography provided surgical guidance, e.g., in identifying dissection planes under fibrovascular membranes, and in determining residual membranes and traction that would benefit from additional peeling. In 4/20 eyes (20%), acceptable images were captured, but they were not useful due to high tractional retinal detachment elevation which was challenging for imaging. Swept-source microscope-integrated optical coherence tomography can provide important guidance during surgery for proliferative diabetic retinopathy complications through intraoperative identification of different complications and facilitation of intraoperative decision making.
Mulkey, Sarah B; Yap, Vivien L; Bai, Shasha; Ramakrishnaiah, Raghu H; Glasier, Charles M; Bornemeier, Renee A; Schmitz, Michael L; Bhutta, Adnan T
2015-06-01
The study aims are to evaluate cerebral background patterns using amplitude-integrated electroencephalography in newborns with critical congenital heart disease, determine if amplitude-integrated electroencephalography is predictive of preoperative brain injury, and assess the incidence of preoperative seizures. We hypothesize that amplitude-integrated electroencephalography will show abnormal background patterns in the early preoperative period in infants with congenital heart disease that have preoperative brain injury on magnetic resonance imaging. Twenty-four newborns with congenital heart disease requiring surgery at younger than 30 days of age were prospectively enrolled within the first 3 days of age at a tertiary care pediatric hospital. Infants had amplitude-integrated electroencephalography for 24 hours beginning close to birth and preoperative brain magnetic resonance imaging. The amplitude-integrated electroencephalographies were read to determine if the background pattern was normal, mildly abnormal, or severely abnormal. The presence of seizures and sleep-wake cycling were noted. The preoperative brain magnetic resonance imaging scans were used for brain injury and brain atrophy assessment. Fifteen of 24 infants had abnormal amplitude-integrated electroencephalography at a mean (range) of 0.71 (0-2) days of age. In five infants, the background pattern was severely abnormal (burst suppression and/or continuous low voltage). Of the 15 infants with abnormal amplitude-integrated electroencephalography, 9 (60%) had brain injury. One infant with brain injury had a seizure on amplitude-integrated electroencephalography. A severely abnormal background pattern on amplitude-integrated electroencephalography was associated with brain atrophy (P = 0.03) and absent sleep-wake cycling (P = 0.022). Background cerebral activity is abnormal on amplitude-integrated electroencephalography following birth in newborns with congenital heart disease who have findings of brain injury and/or brain atrophy on preoperative brain magnetic resonance imaging. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Henri, Christopher J.; Pike, Gordon; Collins, D. Louis; Peters, Terence M.
1990-07-01
We present two methods for acquiring and viewing integrated 3-D images of cerebral vasculature and cortical anatomy. The aim of each technique is to provide the neurosurgeon or radiologist with a 3-D image containing information which cannot ordinarily be obtained from a single imaging modality. The first approach employs recent developments in MR which is now capable of imaging flowing blood as well as static tissue. Here, true 3-D data are acquired and displayed using volume or surface rendering techniques. The second approach is based on the integration of x-ray projection angiograms and tomographic image data, allowing a composite image of anatomy and vasculature to be viewed in 3-D. This is accomplished by superimposing an angiographic stereo-pair onto volume rendered images of either CT or MR data created from matched viewing geometries. The two approaches are outlined and compared. Results are presented for each technique and potential clinical applications discussed.
Lu, Dengsheng; Batistella, Mateus; Moran, Emilio
2009-01-01
Traditional change detection approaches have proven difficult for detecting vegetation changes in moist tropical regions with multitemporal images. This paper explores the integration of Landsat Thematic Mapper (TM) and SPOT High Resolution Geometric (HRG) instrument data for vegetation change detection in the Brazilian Amazon. A principal component analysis was used to integrate TM and HRG panchromatic data. Vegetation change/non-change was detected with the image differencing approach based on the TM and HRG fused image and the corresponding TM image. A rule-based approach was used to classify the TM and HRG multispectral images into thematic maps with three coarse land-cover classes: forest, non-forest vegetation, and non-vegetation lands. A hybrid approach combining image differencing and post-classification comparison was used to detect vegetation change trajectories. This research indicates promising vegetation change detection techniques, especially for vegetation gain and loss, even when very limited reference data are available. PMID:19789721
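To make the two core processing steps concrete, here is a minimal Python/NumPy sketch of PCA-based integration of a TM band with HRG panchromatic data followed by image differencing; it is an illustration under assumed inputs (co-registered float arrays and a 2-sigma change threshold), not the authors' code.

    # Minimal sketch (not the authors' code) of PCA-based fusion followed by
    # image differencing. Inputs are assumed to be co-registered float arrays:
    # tm_band (one TM band) and hrg_pan (HRG panchromatic band).
    import numpy as np

    def pca_fuse(tm_band, hrg_pan):
        """Fuse one TM band with the HRG panchromatic band via PCA."""
        x = np.stack([tm_band.ravel(), hrg_pan.ravel()], axis=0)
        x_centered = x - x.mean(axis=1, keepdims=True)
        eigvals, eigvecs = np.linalg.eigh(np.cov(x_centered))
        pc1 = eigvecs[:, np.argmax(eigvals)] @ x_centered   # first principal component
        # Rescale PC1 back to the radiometric range of the TM band.
        pc1 = (pc1 - pc1.min()) / (pc1.ptp() + 1e-12)
        fused = pc1 * tm_band.ptp() + tm_band.min()
        return fused.reshape(tm_band.shape)

    def change_mask(fused, reference_tm, k=2.0):
        """Image differencing: flag pixels whose difference exceeds k standard deviations."""
        diff = fused - reference_tm
        return np.abs(diff - diff.mean()) > k * diff.std()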
The Image Data Resource: A Bioimage Data Integration and Publication Platform.
Williams, Eleanor; Moore, Josh; Li, Simon W; Rustici, Gabriella; Tarkowska, Aleksandra; Chessel, Anatole; Leo, Simone; Antal, Bálint; Ferguson, Richard K; Sarkans, Ugis; Brazma, Alvis; Salas, Rafael E Carazo; Swedlow, Jason R
2017-08-01
Access to primary research data is vital for the advancement of science. To extend the data types supported by community repositories, we built a prototype Image Data Resource (IDR) that collects and integrates imaging data acquired across many different imaging modalities. IDR links data from several imaging modalities, including high-content screening, super-resolution and time-lapse microscopy, digital pathology, public genetic or chemical databases, and cell and tissue phenotypes expressed using controlled ontologies. Using this integration, IDR facilitates the analysis of gene networks and reveals functional interactions that are inaccessible to individual studies. To enable re-analysis, we also established a computational resource based on Jupyter notebooks that allows remote access to the entire IDR. IDR is also an open source platform that others can use to publish their own image data. Thus IDR provides both a novel on-line resource and a software infrastructure that promotes and extends publication and re-analysis of scientific image data.
Jeong, Kyeong-Min; Kim, Hee-Seung; Hong, Sung-In; Lee, Sung-Keun; Jo, Na-Young; Kim, Yong-Soo; Lim, Hong-Gi; Park, Jae-Hyeung
2012-10-08
Speed enhancement of integral-imaging-based incoherent Fourier hologram capture using a graphics processing unit is reported. The integral-imaging-based method enables exact hologram capture of real three-dimensional objects under ordinary incoherent illumination. In our implementation, we apply a parallel computation scheme on the graphics processing unit, accelerating the processing speed. Using the enhanced speed of hologram capture, we also implement a pseudo real-time hologram capture and optical reconstruction system. The overall operation speed is measured to be 1 frame per second.
Flash trajectory imaging of target 3D motion
NASA Astrophysics Data System (ADS)
Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang
2011-03-01
We present a flash trajectory imaging technique which can directly obtain target trajectories and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from complex backgrounds and decrease the complexity of moving-target image processing. Time delay integration increases the information contained in a single image frame so that the motion trajectory can be obtained directly. In this paper, we study the flash trajectory imaging algorithm and perform initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give motion parameters of moving targets.
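As an illustration of the time delay integration step, the following Python sketch accumulates range-gated silhouette frames into a single trajectory image; the thresholding rule and frame format are assumptions, not details from the paper.

    # Minimal sketch (assumed, not from the paper): time delay integration of
    # range-gated frames. Each gated frame is assumed to contain mainly the
    # target silhouette (background suppressed by gating); summing frames
    # acquired at successive delays accumulates the target positions into one
    # trajectory image.
    import numpy as np

    def trajectory_image(gated_frames, threshold=0.1):
        """gated_frames: iterable of 2-D arrays (same shape), one per gate delay."""
        traj = None
        for frame in gated_frames:
            silhouette = (frame > threshold * frame.max()).astype(np.float32)
            traj = silhouette if traj is None else traj + silhouette
        return traj  # brighter pixels = positions occupied in more frames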
Integrated editing system for Japanese text and image information "Linernote"
NASA Astrophysics Data System (ADS)
Tanaka, Kazuto
The integrated Japanese text editing system "Linernote" developed by Toyo Industries Co. is explained. The system has been developed on the concept of electronic publishing. It is composed of an NEC PC-9801 VX personal computer and peripherals. Text, drawing, and image data are input and edited under the system's integrated operating environment, and the final document is printed on a laser printer. The handling efficiency of time-consuming work such as pattern input or page make-up has been improved by a draft-image display method on the CRT. It is the latest DTP system equipped with three major functions, namely, typesetting for high-quality text editing, easy drawing/tracing, and high-speed image processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geng, Yijia; Xu, Shuping; Xu, Weiqing, E-mail: xuwq@jlu.edu.cn
An integrated and portable Raman analyzer featuring an inverted probe mounted on a motor-driven adjustable optical module was designed for combination with a microfluidic system. It possesses a micro-imaging function. The inverted configuration is advantageous for locating and focusing on microfluidic channels. Unlike commercial micro-imaging Raman spectrometers that use a manually switchable light path, this analyzer adopts a dichroic beam splitter for both the imaging and signal collection light paths, which avoids movable parts and improves the integration and stability of the optics. Combined with the surface-enhanced Raman scattering technique, this portable Raman micro-analyzer is promising as a powerful tool for microfluidic analytics.
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Xu, Xiaoyin; Voss, Stephan
2017-03-01
In this paper, we describe an enhanced DICOM Secondary Capture (SC) that integrates Image Quantification (IQ) results, Regions of Interest (ROIs), and Time Activity Curves (TACs) with screen shots by embedding extra medical imaging information into a standard DICOM header. A DICOM IQSC software toolkit has been developed to implement the SC-centered information integration of quantitative analysis for the routine practice of nuclear medicine. Preliminary experiments show that the DICOM IQSC method is simple and easy to implement, seamlessly integrating post-processing workstations with PACS for archiving and retrieving IQ information. Additional DICOM IQSC applications in routine nuclear medicine and clinical research are also discussed.
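A hedged sketch of the general idea with pydicom follows: quantification results, ROIs, and TACs are serialized and written into a private block of a Secondary Capture header. The tag numbers, private-creator string, and JSON layout are illustrative assumptions, not the published toolkit's format.

    # Illustrative sketch (not the published toolkit): storing quantification
    # results, ROIs, and TACs alongside a DICOM Secondary Capture screen shot
    # by writing them into a private block of the header with pydicom.
    import json
    import pydicom

    def embed_iq_results(sc_path, out_path, iq_values, rois, tacs):
        ds = pydicom.dcmread(sc_path)               # existing Secondary Capture image
        ds.add_new((0x0071, 0x0010), 'LO', 'IQSC')  # reserve a private block (assumed creator ID)
        ds.add_new((0x0071, 0x1001), 'LT', json.dumps(iq_values))  # e.g. {"SUVmax": 4.2}
        ds.add_new((0x0071, 0x1002), 'LT', json.dumps(rois))       # ROI vertices in pixel coordinates
        ds.add_new((0x0071, 0x1003), 'LT', json.dumps(tacs))       # time-activity curves
        ds.save_as(out_path)

    # Example call (file names and values are hypothetical):
    # embed_iq_results("screenshot.dcm", "screenshot_iq.dcm",
    #                  {"SUVmax": 4.2},
    #                  {"lesion": [[10, 12], [40, 12], [40, 50]]},
    #                  {"lesion": {"t_s": [0, 60, 120], "activity": [0.0, 1.8, 2.4]}})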
NASA Astrophysics Data System (ADS)
Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L.; Kozorovitskiy, Yevgenia
2018-05-01
Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi flexibility and steric access makes it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.
Kumar, Manish; Kishore, Sandeep; Nasenbeny, Jordan; McLean, David L; Kozorovitskiy, Yevgenia
2018-05-14
Versatile, sterically accessible imaging systems capable of in vivo rapid volumetric functional and structural imaging deep in the brain continue to be a limiting factor in neuroscience research. Towards overcoming this obstacle, we present integrated one- and two-photon scanned oblique plane illumination (SOPi, /sōpī/) microscopy which uses a single front-facing microscope objective to provide light-sheet scanning based rapid volumetric imaging capability at subcellular resolution. Our planar scan-mirror based optimized light-sheet architecture allows for non-distorted scanning of volume samples, simplifying accurate reconstruction of the imaged volume. Integration of both one-photon (1P) and two-photon (2P) light-sheet microscopy in the same system allows for easy selection between rapid volumetric imaging and higher resolution imaging in scattering media. Using SOPi, we demonstrate deep, large volume imaging capability inside scattering mouse brain sections and rapid imaging speeds up to 10 volumes per second in zebrafish larvae expressing genetically encoded fluorescent proteins GFP or GCaMP6s. SOPi's flexibility and steric access makes it adaptable for numerous imaging applications and broadly compatible with orthogonal techniques for actuating or interrogating neuronal structure and activity.
Characteristics of composite images in multiview imaging and integral photography.
Lee, Beom-Ryeol; Hwang, Jae-Jeong; Son, Jung-Young
2012-07-20
The compositions of images projected to a viewer's eyes from the various viewing regions of the viewing zone formed in one-dimensional integral photography (IP) and multiview imaging (MV) are identified. These compositions indicate that the projected images are made up of pieces from different view images. Comparisons of the composite images with images composited at various regions of the imaging space formed by camera arrays for multiview image acquisition reveal that the composite images do not involve any scene folding in the central viewing zone for either MV or IP. In the IP case, compositions from neighboring viewing regions aligned in the horizontal direction have reversed disparities, whereas no reversed disparities are expected in the viewing regions between the central and side viewing zones; MV, however, does exhibit them.
PACS-Based Computer-Aided Detection and Diagnosis
NASA Astrophysics Data System (ADS)
Huang, H. K. (Bernie); Liu, Brent J.; Le, Anh HongTu; Documet, Jorge
The ultimate goal of Picture Archiving and Communication System (PACS)-based Computer-Aided Detection and Diagnosis (CAD) is to integrate CAD results into daily clinical practice so that it becomes a second reader to aid the radiologist's diagnosis. Integration of CAD and Hospital Information System (HIS), Radiology Information System (RIS) or PACS requires certain basic ingredients from Health Level 7 (HL7) standard for textual data, Digital Imaging and Communications in Medicine (DICOM) standard for images, and Integrating the Healthcare Enterprise (IHE) workflow profiles in order to comply with the Health Insurance Portability and Accountability Act (HIPAA) requirements to be a healthcare information system. Among the DICOM standards and IHE workflow profiles, DICOM Structured Reporting (DICOM-SR); and IHE Key Image Note (KIN), Simple Image and Numeric Report (SINR) and Post-processing Work Flow (PWF) are utilized in CAD-HIS/RIS/PACS integration. These topics with examples are presented in this chapter.
Sun, LiJun; Hwang, Hyeon-Shik; Lee, Kyung-Min
2018-03-01
The purpose of this study was to examine changes in registration accuracy after including occlusal surface and incisal edge areas in addition to the buccal surface when integrating laser-scanned and maxillofacial cone-beam computed tomography (CBCT) dental images. CBCT scans and maxillary dental casts were obtained from 30 patients. Three methods were used to integrate the images: R1, only the buccal and labial surfaces were used; R2, the incisal edges of the anterior teeth and the buccal and distal marginal ridges of the second molars were used; and R3, labial surfaces, including incisal edges of anterior teeth, and buccal surfaces, including buccal and distal marginal ridges of the second molars, were used. Differences between the 2 images were evaluated by color-mapping methods and average surface distances by measuring the 3-dimensional Euclidean distances between the surface points on the 2 images. The R1 method showed more discrepancies between the laser-scanned and CBCT images than did the other methods. The R2 method did not show a significant difference in registration accuracy compared with the R3 method. The results of this study indicate that accuracy when integrating laser-scanned dental images into maxillofacial CBCT images can be increased by including occlusal surface and incisal edge areas as registration areas. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
Realization of integral 3-dimensional image using fabricated tunable liquid lens array
NASA Astrophysics Data System (ADS)
Lee, Muyoung; Kim, Junoh; Kim, Cheol Joong; Lee, Jin Su; Won, Yong Hyub
2015-03-01
Electrowetting has been widely studied for various optical applications such as optical switches, sensors, prisms, and displays. In this study, a vari-focal liquid lens array is developed using the electrowetting principle to construct integral 3-dimensional imaging. The electrowetting principle, which changes the surface tension by applying a voltage, has several advantages for realizing active optical devices, such as fast response time, low electrical consumption, and no mechanical moving parts. Two immiscible liquids, water and oil, are used to form each lens. By applying a voltage to the water, the focal length of the lens can be tuned by changing the contact angle of the water. The fabricated electrowetting vari-focal liquid lens array consists of 1 mm diameter spherical lenses with a 1.6 mm pitch between lenses. The number of lenses on the panel is 23x23, and the focal length of the lens array is simultaneously tuned from -125 to 110 diopters depending on the applied voltage. The fabricated lens array is applied to integral 3-dimensional imaging. A 3D object is reconstructed by the fabricated liquid lens array from 23x23 elemental images generated by 3ds Max tools when the lens array is tuned to a convex state. From this vari-focal liquid lens array based integral imaging system, we expect that depth-enhanced integral imaging can be realized in the near future.
NASA Astrophysics Data System (ADS)
Peter, Jörg; Semmler, Wolfhard
2007-10-01
Alongside and in part motivated by recent advances in molecular diagnostics, the development of dual-modality instruments for patient and dedicated small-animal imaging has gained attention from diverse research groups. The desire for such systems is high not only to link molecular or functional information with the anatomical structures, but also to detect multiple molecular events simultaneously at shorter total acquisition times. While PET and SPECT have been integrated successfully with X-ray CT, the advance of optical imaging approaches (OT) and their integration into existing modalities carry a high application potential, particularly for imaging small animals. A multi-modality Monte Carlo (MC) simulation approach has been developed that is able to trace high-energy (keV) as well as optical (eV) photons concurrently within identical phantom representation models. We show that the two approaches involved for ray-tracing keV and eV photons can be integrated into a unique simulation framework which enables both photon classes to be propagated through various geometry models representing both phantoms and scanners. The main advantage of such an integrated framework for our specific application is the investigation of novel tomographic multi-modality instrumentation intended for in vivo small-animal imaging through time-resolved MC simulation upon identical phantom geometries. Design examples are provided for recently proposed SPECT-OT and PET-OT imaging systems.
Opto-mechanical design of an image slicer for the GRIS spectrograph at GREGOR
NASA Astrophysics Data System (ADS)
Vega Reyes, N.; Esteves, M. A.; Sánchez-Capuchino, J.; Salaun, Y.; López, R. L.; Gracia, F.; Estrada Herrera, P.; Grivel, C.; Vaz Cedillo, J. J.; Collados, M.
2016-07-01
An image slicer has been proposed for the Integral Field Spectrograph [1] of the 4-m European Solar Telescope (EST) [2]. The image slicer for EST is called MuSICa (Multi-Slit Image slicer based on Collimator-Camera) [3] and is a telecentric system with diffraction-limited optical quality, offering the possibility of obtaining high-resolution integral field solar spectroscopy or spectro-polarimetry by coupling a polarimeter after the generated slit (or slits). Considering the technical complexity of the proposed Integral Field Unit (IFU), a prototype has been designed for the GRIS spectrograph at the GREGOR telescope at Teide Observatory (Tenerife), composed of the optical elements of the image slicer itself, a scanning system (to cover a larger field of view with sequential adjacent measurements) and an appropriate re-imaging system. All these subsystems are placed on a bench specially designed to facilitate their alignment, integration and verification, and their easy installation in front of the spectrograph. This communication describes the opto-mechanical solution adopted to upgrade GRIS while ensuring repeatability between the observational modes, IFU and long-slit. Results from several tests which have been performed to validate the opto-mechanical prototypes are also presented.
Integrated RFA/OCT catheter for real-time guidance of cardiac RFA therapy (Conference Presentation)
NASA Astrophysics Data System (ADS)
Fu, Xiaoyong; Blumenthal, Colin; Dosluoglu, Deniz; Wang, Yves T.; Jenkins, Michael W.; Souza, Rakesh; Snyder, Christopher; Arruda, Mauricio; Rollins, Andrew M.
2016-03-01
Currently, cardiac radiofrequency ablation is guided by indirect signals. We demonstrate an integrated radiofrequency ablation (RFA) and optical coherence tomography (OCT) probe for direct monitoring of the RFA procedure with OCT images in real time. The integrated RFA/OCT probe is modified from a standard commercial RFA catheter, and a newly designed and fabricated miniature forward-viewing cone-scanning OCT probe is integrated into the modified probe. The OCT system was verified with human finger images, and the results show that the integrated RFA/OCT probe can acquire high-quality OCT images. The radiofrequency energy delivery function of the integrated probe was verified by comparing the RFA lesion sizes with those of a standard commercial RFA probe. For the standard commercial probe, the average width and depth of the 10 lesions were 3.5 mm and 1.8 mm, respectively. For the integrated RFA/OCT probe, the average width and depth of the 10 lesions were 3.6 mm and 1.7 mm, respectively. The lesions created by the two probes are indistinguishable in size, which demonstrates that the glass window in the integrated probe has little effect on the RF energy delivery. The integrated probe was then used to monitor the cardiac RFA procedure in real time. The results show that RFA lesion formation can be confirmed by the loss of birefringence in the heart tissue. The system can potentially image the cardiac wall in vivo to aid RFA therapy for cardiac arrhythmias.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors.
Dutton, Neale A W; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K
2016-07-20
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed.
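The peak separation and width (PSW) analysis can be illustrated with a short Python sketch: in a photon counting histogram, the spacing of single-photon peaks gives the conversion gain and the width of one peak gives the read noise, so their ratio yields read noise in electrons. The peak-finding parameters below are assumptions, not the authors' values.

    # Minimal sketch (assumed implementation, not the authors' code) of the
    # peak-separation-and-width (PSW) idea applied to a photon counting histogram.
    import numpy as np
    from scipy.signal import find_peaks

    def read_noise_from_pch(samples, nbins=512):
        counts, edges = np.histogram(samples, bins=nbins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        peaks, _ = find_peaks(counts, height=counts.max() * 0.05, distance=5)
        if len(peaks) < 2:
            raise ValueError("need at least two resolved photon peaks")
        separation = np.mean(np.diff(centers[peaks]))          # output units per photo-electron
        # Estimate the width of the first peak from its second moment in a local window.
        lo, hi = max(peaks[0] - 5, 0), peaks[0] + 6
        w = counts[lo:hi].astype(float)
        c = centers[lo:hi]
        mu = np.sum(w * c) / w.sum()
        sigma = np.sqrt(np.sum(w * (c - mu) ** 2) / w.sum())   # peak width
        return sigma / separation                              # read noise in electrons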
Weaver, Terri L.; Griffin, Michael G.; Mitchell, Elisha R.
2014-01-01
While body image concerns and interpersonal violence exposure are significant issues for women, their interrelationship has been rarely explored. We examined the associations between severity of acute injuries, symptoms of posttraumatic stress disorder (PTSD), depression and body image distress within a sample of predominantly African-American victims of interpersonal violence (N = 73). Severity of body image distress was significantly associated with each outcome. Moreover, body image distress was a significant, unique predictor of depression but not PTSD severity. We recommend continued exploration of body image concerns to further integrated research on violence against women. PMID:24215653
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, R., E-mail: raspberry@lanl.gov; Danly, C.; Fatherley, V. E.
2015-12-15
The Neutron Imaging System (NIS) is an important diagnostic for understanding implosions of deuterium-tritium capsules at the National Ignition Facility. While the detectors for the existing system must be positioned 28 m from the source to produce sufficient imaging magnification and resolution, recent testing of a new short line of sight neutron imaging system has shown sufficient resolution to allow reconstruction of the source image with quality similar to that of the existing NIS on a 11.6 m line of sight. The new system used the existing pinhole aperture array and a stack of detectors composed of 2 mm thick high-density polyethylene converter material followed by an image plate. In these detectors, neutrons enter the converter material and interact with protons, which recoil and deposit energy within the thin active layer of the image plate through ionization losses. The described system produces time-integrated images for all neutron energies passing through the pinhole. We present details of the measurement scheme for this novel technique to produce energy-integrated neutron images as well as source reconstruction results from recent experiments at NIF.
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.
Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, this remains a challenging task due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information regarding an integration feature between all the overlapping images, by using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method. Also, different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics.
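A minimal Python sketch of a generic PCA+DWT pixel-level fusion of two co-registered echo images follows; it reflects the general scheme named in the abstract (PCA-weighted approximation sub-bands, maximum-absolute-value detail sub-bands) rather than the authors' exact algorithm.

    # Generic PCA+DWT fusion sketch (assumed scheme, not the authors' algorithm).
    import numpy as np
    import pywt

    def fuse_pca_dwt(img1, img2, wavelet='db2'):
        cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
        cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)

        # PCA weights from the covariance of the two approximation sub-bands.
        data = np.stack([cA1.ravel(), cA2.ravel()])
        eigvals, eigvecs = np.linalg.eigh(np.cov(data))
        v = np.abs(eigvecs[:, np.argmax(eigvals)])
        w1, w2 = v / v.sum()
        cA = w1 * cA1 + w2 * cA2

        # Detail sub-bands: keep the coefficient with the larger magnitude.
        pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)
        return pywt.idwt2((cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))), wavelet)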
Combining endoscopic ultrasound with Time-Of-Flight PET: The EndoTOFPET-US Project
NASA Astrophysics Data System (ADS)
Frisch, Benjamin
2013-12-01
The EndoTOFPET-US collaboration develops a multimodal imaging technique for endoscopic exams of the pancreas or the prostate. It combines the benefits of high resolution metabolic imaging with Time-Of-Flight Positron Emission Tomography (TOF PET) and anatomical imaging with ultrasound (US). EndoTOFPET-US consists of a PET head extension for a commercial US endoscope and a PET plate outside the body in coincidence with the head. The high level of miniaturization and integration creates challenges in fields such as scintillating crystals, ultra-fast photo-detection, highly integrated electronics, system integration and image reconstruction. Amongst the developments, fast scintillators as well as fast and compact digital SiPMs with single SPAD readout are used to obtain the best coincidence time resolution (CTR). Highly integrated ASICs and DAQ electronics contribute to the timing performances of EndoTOFPET. In view of the targeted resolution of around 1 mm in the reconstructed image, we present a prototype detector system with a CTR better than 240 ps FWHM. We discuss the challenges in simulating such a system and introduce reconstruction algorithms based on graphics processing units (GPU).
Mining and integration of pathway diagrams from imaging data.
Kozhenkov, Sergey; Baitaluk, Michael
2012-03-01
Pathway diagrams from PubMed and World Wide Web (WWW) contain valuable highly curated information difficult to reach without tools specifically designed and customized for the biological semantics and high-content density of the images. There is currently no search engine or tool that can analyze pathway images, extract their pathway components (molecules, genes, proteins, organelles, cells, organs, etc.) and indicate their relationships. Here, we describe a resource of pathway diagrams retrieved from article and web-page images through optical character recognition, in conjunction with data mining and data integration methods. The recognized pathways are integrated into the BiologicalNetworks research environment linking them to a wealth of data available in the BiologicalNetworks' knowledgebase, which integrates data from >100 public data sources and the biomedical literature. Multiple search and analytical tools are available that allow the recognized cellular pathways, molecular networks and cell/tissue/organ diagrams to be studied in the context of integrated knowledge, experimental data and the literature. BiologicalNetworks software and the pathway repository are freely available at www.biologicalnetworks.org. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Scott, Richard; Khan, Faisal M.; Zeineh, Jack; Donovan, Michael; Fernandez, Gerardo
2015-03-01
Immunofluorescent (IF) image analysis of tissue pathology has proven to be extremely valuable and robust in developing prognostic assessments of disease, particularly in prostate cancer. There have been significant advances in the literature in quantitative biomarker expression as well as characterization of glandular architectures in discrete gland rings. However, while biomarker and glandular morphometric features have been combined as separate predictors in multivariate models, there is a lack of integrative features for biomarkers co-localized within specific morphological sub-types; for example the evaluation of androgen receptor (AR) expression within Gleason 3 glands only. In this work we propose a novel framework employing multiple techniques to generate integrated metrics of morphology and biomarker expression. We demonstrate the utility of the approaches in predicting clinical disease progression in images from 326 prostate biopsies and 373 prostatectomies. Our proposed integrative approaches yield significant improvements over existing IF image feature metrics. This work presents some of the first algorithms for generating innovative characteristics in tissue diagnostics that integrate co-localized morphometry and protein biomarker expression.
Phase contrast STEM for thin samples: Integrated differential phase contrast.
Lazić, Ivan; Bosch, Eric G T; Lazar, Sorin
2016-01-01
It has been known since the 1970s that the movement of the center of mass (COM) of a convergent beam electron diffraction (CBED) pattern is linearly related to the (projected) electrical field in the sample. We re-derive a contrast transfer function (CTF) for a scanning transmission electron microscopy (STEM) imaging technique based on this movement from the point of view of image formation and continue by performing a two-dimensional integration on the two images based on the two components of the COM movement. The resulting integrated COM (iCOM) STEM technique yields a scalar image that is linear in the phase shift caused by the sample and therefore also in the local (projected) electrostatic potential field of a thin sample. We confirm that the differential phase contrast (DPC) STEM technique using a segmented detector with 4 quadrants (4Q) yields a good approximation for the COM movement. Performing a two-dimensional integration, just as for the COM, we obtain an integrated DPC (iDPC) image which is approximately linear in the phase of the sample. Beside deriving the CTFs of iCOM and iDPC, we clearly point out the objects of the two corresponding imaging techniques, and highlight the differences to objects corresponding to COM-, DPC-, and (HA) ADF-STEM. The theory is validated with simulations and we present first experimental results of the iDPC-STEM technique showing its capability for imaging both light and heavy elements with atomic resolution and a good signal to noise ratio (SNR). Copyright © 2015 Elsevier B.V. All rights reserved.
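The integration step can be illustrated with a short Python sketch that computes the COM of the CBED pattern at every probe position from a 4D-STEM dataset and then integrates the resulting vector field in Fourier space; this is a standard Fourier integration under assumed array conventions, not the authors' implementation.

    # Hedged sketch of the integrated-COM idea: compute the per-probe-position
    # centre of mass of the CBED pattern and integrate the 2-D vector field in
    # Fourier space to obtain a scalar image proportional to the phase shift.
    import numpy as np

    def icom_image(data4d):
        """data4d: (scan_y, scan_x, det_y, det_x) intensities."""
        ny, nx, dy, dx = data4d.shape
        yy, xx = np.mgrid[0:dy, 0:dx]
        yy = yy - (dy - 1) / 2.0
        xx = xx - (dx - 1) / 2.0
        total = data4d.sum(axis=(2, 3))
        com_y = (data4d * yy).sum(axis=(2, 3)) / total
        com_x = (data4d * xx).sum(axis=(2, 3)) / total

        # Fourier-space integration of the vector field (com_x, com_y).
        ky = np.fft.fftfreq(ny)[:, None]
        kx = np.fft.fftfreq(nx)[None, :]
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = np.inf                      # drop the undefined DC term
        num = kx * np.fft.fft2(com_x) + ky * np.fft.fft2(com_y)
        return np.real(np.fft.ifft2(num / (2j * np.pi * k2)))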
NASA Astrophysics Data System (ADS)
Haak, Daniel; Doma, Aliaa; Gombert, Alexander; Deserno, Thomas M.
2016-03-01
Today, subjects' medical data in controlled clinical trials are captured digitally in electronic case report forms (eCRFs). However, eCRFs only insufficiently support integration of subjects' image data, although medical imaging is looming large in studies today. For bed-side image integration, we present a mobile application (App) that utilizes the smartphone-integrated camera. To ensure high image quality with this inexpensive consumer hardware, color reference cards are placed in the camera's field of view next to the lesion. The cards are used for automatic calibration of geometry, color, and contrast. In addition, a personalized code is read from the cards that allows subject identification. For data integration, the App is connected to a communication and image analysis server that also holds the code-study-subject relation. In a second system interconnection, web services are used to connect the smartphone with OpenClinica, an open-source, Food and Drug Administration (FDA)-approved electronic data capture (EDC) system in clinical trials. Once the photographs have been securely stored on the server, they are released automatically from the mobile device. The workflow of the system is demonstrated by an ongoing clinical trial, in which photographic documentation is frequently performed to measure the effect of wound incision management systems. All 205 images, which have been collected in the study so far, have been correctly identified and successfully integrated into the corresponding subject's eCRF. Using this system, manual steps for the study personnel are reduced, and, therefore, errors, latency, and costs are decreased. Our approach also increases data security and privacy.
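The color-calibration step can be sketched as a least-squares fit of an affine color matrix that maps the measured reference-card patch colors to their known values; the patch sampling is assumed to have been done already, and the function names below are illustrative, not part of the described App.

    # Illustrative sketch of color calibration from a reference card: fit a
    # 4x3 affine color matrix by least squares and apply it to the whole photo.
    import numpy as np

    def fit_color_matrix(measured_rgb, reference_rgb):
        """measured_rgb, reference_rgb: (n_patches, 3) arrays."""
        ones = np.ones((measured_rgb.shape[0], 1))
        A = np.hstack([measured_rgb, ones])                      # affine term
        M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)    # (4, 3) matrix
        return M

    def apply_color_matrix(image, M):
        h, w, _ = image.shape
        flat = image.reshape(-1, 3).astype(np.float64)
        flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
        return np.clip(flat, 0, 255).reshape(h, w, 3).astype(np.uint8)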
Suenaga, Hideyuki; Hoang Tran, Huy; Liao, Hongen; Masamune, Ken; Dohi, Takeyoshi; Hoshi, Kazuto; Mori, Yoshiyuki; Takato, Tsuyoshi
2013-01-01
To evaluate the feasibility and accuracy of a three-dimensional augmented reality system incorporating integral videography for imaging oral and maxillofacial regions, based on preoperative computed tomography data. Three-dimensional surface models of the jawbones, based on the computed tomography data, were used to create the integral videography images of a subject's maxillofacial area. The three-dimensional augmented reality system (integral videography display, computed tomography, a position tracker and a computer) was used to generate a three-dimensional overlay that was projected on the surgical site via a half-silvered mirror. Thereafter, a feasibility study was performed on a volunteer. The accuracy of this system was verified on a solid model while simulating bone resection. Positional registration was attained by identifying and tracking the patient/surgical instrument's position. Thus, integral videography images of jawbones, teeth and the surgical tool were superimposed in the correct position. Stereoscopic images viewed from various angles were accurately displayed. Change in the viewing angle did not negatively affect the surgeon's ability to simultaneously observe the three-dimensional images and the patient, without special glasses. The difference in three-dimensional position of each measuring point on the solid model and augmented reality navigation was almost negligible (<1 mm); this indicates that the system was highly accurate. This augmented reality system was highly accurate and effective for surgical navigation and for overlaying a three-dimensional computed tomography image on a patient's surgical area, enabling the surgeon to understand the positional relationship between the preoperative image and the actual surgical site, with the naked eye. PMID:23703710
Integrating medical imaging analyses through a high-throughput bundled resource imaging system
NASA Astrophysics Data System (ADS)
Covington, Kelsie; Welch, E. Brian; Jeong, Ha-Kyu; Landman, Bennett A.
2011-03-01
Exploitation of advanced, PACS-centric image analysis and interpretation pipelines provides well-developed storage, retrieval, and archival capabilities along with state-of-the-art data providence, visualization, and clinical collaboration technologies. However, pursuit of integrated medical imaging analysis through a PACS environment can be limiting in terms of the overhead required to validate, evaluate and integrate emerging research technologies. Herein, we address this challenge through presentation of a high-throughput bundled resource imaging system (HUBRIS) as an extension to the Philips Research Imaging Development Environment (PRIDE). HUBRIS enables PACS-connected medical imaging equipment to invoke tools provided by the Java Imaging Science Toolkit (JIST) so that a medical imaging platform (e.g., a magnetic resonance imaging scanner) can pass images and parameters to a server, which communicates with a grid computing facility to invoke the selected algorithms. Generated images are passed back to the server and subsequently to the imaging platform from which the images can be sent to a PACS. JIST makes use of an open application program interface layer so that research technologies can be implemented in any language capable of communicating through a system shell environment (e.g., Matlab, Java, C/C++, Perl, LISP, etc.). As demonstrated in this proof-of-concept approach, HUBRIS enables evaluation and analysis of emerging technologies within well-developed PACS systems with minimal adaptation of research software, which simplifies evaluation of new technologies in clinical research and provides a more convenient use of PACS technology by imaging scientists.
Integrated sensor with frame memory and programmable resolution for light adaptive imaging
NASA Technical Reports Server (NTRS)
Zhou, Zhimin (Inventor); Fossum, Eric R. (Inventor); Pain, Bedabrata (Inventor)
2004-01-01
An image sensor operable to vary the output spatial resolution according to a received light level while maintaining a desired signal-to-noise ratio. Signals from neighboring pixels in a pixel patch with an adjustable size are added to increase both the image brightness and signal-to-noise ratio. One embodiment comprises a sensor array for receiving input signals, a frame memory array for temporarily storing a full frame, and an array of self-calibration column integrators for uniform column-parallel signal summation. The column integrators are capable of substantially canceling fixed pattern noise.
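A software analogue of the light-adaptive binning idea is sketched below: signals are summed over an n x n pixel patch whose size is chosen from the measured light level, trading spatial resolution for signal and signal-to-noise ratio. The patch sizes and target level are placeholders, not values from the patent.

    # Software sketch (not the patented circuit) of light-adaptive pixel binning.
    import numpy as np

    def bin_patches(frame, patch):
        """Sum pixel values over non-overlapping patch x patch blocks."""
        h, w = frame.shape
        h, w = h - h % patch, w - w % patch
        blocks = frame[:h, :w].reshape(h // patch, patch, w // patch, patch)
        return blocks.sum(axis=(1, 3))

    def choose_patch_size(frame, target_signal=2000.0):
        """Pick the smallest patch whose summed mean signal reaches the target."""
        for patch in (1, 2, 4, 8):
            if frame.mean() * patch * patch >= target_signal:
                return patch
        return 8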
Partially-overlapped viewing zone based integral imaging system with super wide viewing angle.
Xiong, Zhao-Long; Wang, Qiong-Hua; Li, Shu-Li; Deng, Huan; Ji, Chao-Chao
2014-09-22
In this paper, we analyze the relationship between the viewer and the viewing zones of an integral imaging (II) system and present a partially-overlapped viewing zone (POVZ) based integral imaging system with a super wide viewing angle. In the proposed system, the viewing angle can be wider than that of the conventional tracking-based II system. In addition, the POVZ can eliminate the flipping and time delay of the 3D scene as well. The proposed II system has a super wide viewing angle of 120° without flipping effect, about twice as wide as that of the conventional one.
NASA Astrophysics Data System (ADS)
Issaei, Ali; Szczygiel, Lukasz; Hossein-Javaheri, Nima; Young, Mei; Molday, L. L.; Molday, R. S.; Sarunic, M. V.
2011-03-01
Scanning Laser Ophthalmoscopy (SLO) and Optical Coherence Tomography (OCT) are complementary retinal imaging modalities. Integration of SLO and OCT allows both fluorescent detection and depth-resolved structural imaging of the retinal cell layers to be performed in vivo. System customization is required to image rodents used in medical research by vision scientists. We are investigating multimodal SLO/OCT imaging of a rodent model of Stargardt's Macular Dystrophy, which is characterized by retinal degeneration and accumulation of toxic autofluorescent lipofuscin deposits. Our new findings demonstrate the ability to track fundus autofluorescence and retinal degeneration concurrently.
Hybrid imaging: a quantum leap in scientific imaging
NASA Astrophysics Data System (ADS)
Atlas, Gene; Wadsworth, Mark V.
2004-01-01
ImagerLabs has advanced its patented next generation imaging technology called the Hybrid Imaging Technology (HIT) that offers scientific quality performance. The key to the HIT is the merging of the CCD and CMOS technologies through hybridization rather than process integration. HIT offers exceptional QE, fill factor, broad spectral response and very low noise properties of the CCD. In addition, it provides the very high-speed readout, low power, high linearity and high integration capability of CMOS sensors. In this work, we present the benefits, and update the latest advances in the performance of this exciting technology.
Multifacet structure of observed reconstructed integral images.
Martínez-Corral, Manuel; Javidi, Bahram; Martínez-Cuenca, Raúl; Saavedra, Genaro
2005-04-01
Three-dimensional images generated by an integral imaging system suffer from degradations in the form of grid of multiple facets. This multifacet structure breaks the continuity of the observed image and therefore reduces its visual quality. We perform an analysis of this effect and present the guidelines in the design of lenslet imaging parameters for optimization of viewing conditions with respect to the multifacet degradation. We consider the optimization of the system in terms of field of view, observer position and pupil function, lenslet parameters, and type of reconstruction. Numerical tests are presented to verify the theoretical analysis.
NASA Astrophysics Data System (ADS)
Balbin, Jessie R.; Dela Cruz, Jennifer C.; Camba, Clarisse O.; Gozo, Angelo D.; Jimenez, Sheena Mariz B.; Tribiana, Aivje C.
2017-06-01
Acne vulgaris, commonly called acne, is a skin problem that occurs when oil and dead skin cells clog up a person's pores. This happens because hormonal changes make the skin oilier. The problem is that people do not have a real assessment of the sensitivity of their skin in terms of the fluid development on their faces that tends to develop into acne vulgaris, thus leading to more complications. This research aims to assess acne vulgaris using a luminescent visualization system through optical imaging and the integration of image processing algorithms. Specifically, this research aims to design a prototype for facial fluid analysis using a luminescent visualization system through optical imaging and the integration of a fluorescent imaging system, and to classify the different facial fluids present in each person. Throughout the process, some structures and layers of the face are excluded, leaving only a mapped facial structure with acne regions. Facial fluid regions are distinguished from the acne regions as they are characterized differently.
Fusion of spectral and panchromatic images using false color mapping and wavelet integrated approach
NASA Astrophysics Data System (ADS)
Zhao, Yongqiang; Pan, Quan; Zhang, Hongcai
2006-01-01
With the development of sensor technology, new image sensors have been introduced that provide a greater range of information to users. But owing to radiation power limitations, there will always be some trade-off between spatial and spectral resolution in the image captured by specific sensors. Images with high spatial resolution can locate objects with high accuracy, whereas images with high spectral resolution can be used to identify materials. Many applications in remote sensing require fusing low-resolution imaging spectral images with panchromatic images to identify materials at high resolution in clutter. A pixel-based fusion algorithm integrating false color mapping and the wavelet transform is presented in this paper; the resulting images have a higher information content than each of the original images and retain sensor-specific image information. The simulation results show that this algorithm can enhance the visibility of certain details and preserve the differences between different materials.
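A hedged Python sketch combining the two ingredients named above, false color mapping and wavelet detail injection, is given below; the band-to-channel assignment and wavelet choice are illustrative assumptions, not the authors' exact mapping.

    # Illustrative sketch: false-color mapping of two spectral bands with
    # panchromatic wavelet detail injected into each mapped channel.
    # Inputs are assumed to be co-registered arrays of the same size.
    import numpy as np
    import pywt

    def inject_pan_detail(band, pan, wavelet='db2'):
        cA_b, _ = pywt.dwt2(band, wavelet)
        _, details_p = pywt.dwt2(pan, wavelet)
        return pywt.idwt2((cA_b, details_p), wavelet)   # band approximation + pan detail

    def false_color_fusion(band_a, band_b, pan):
        r = inject_pan_detail(band_a, pan)
        g = inject_pan_detail(band_b, pan)
        b = inject_pan_detail(0.5 * (band_a + band_b), pan)
        rgb = np.stack([r, g, b], axis=-1)
        rgb -= rgb.min()
        return rgb / (rgb.max() + 1e-12)                # normalized false-color composite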
Portal dosimetry for VMAT using integrated images obtained during treatment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bedford, James L., E-mail: James.Bedford@icr.ac.uk; Hanson, Ian M.; Hansen, Vibeke Nordmark
2014-02-15
Purpose: Portal dosimetry provides an accurate and convenient means of verifying dose delivered to the patient. A simple method for carrying out portal dosimetry for volumetric modulated arc therapy (VMAT) is described, together with phantom measurements demonstrating the validity of the approach. Methods: Portal images were predicted by projecting dose in the isocentric plane through to the portal image plane, with exponential attenuation and convolution with a double-Gaussian scatter function. Appropriate parameters for the projection were selected by fitting the calculation model to portal images measured on an iViewGT portal imager (Elekta AB, Stockholm, Sweden) for a variety of phantom thicknesses and field sizes. This model was then used to predict the portal image resulting from each control point of a VMAT arc. Finally, all these control point images were summed to predict the overall integrated portal image for the whole arc. The calculated and measured integrated portal images were compared for three lung and three esophagus plans delivered to a thorax phantom, and three prostate plans delivered to a homogeneous phantom, using a gamma index for 3% and 3 mm. A 0.6 cm³ ionization chamber was used to verify the planned isocentric dose. The sensitivity of this method to errors in monitor units, field shaping, gantry angle, and phantom position was also evaluated by means of computer simulations. Results: The calculation model for portal dose prediction was able to accurately compute the portal images due to simple square fields delivered to solid water phantoms. The integrated images of VMAT treatments delivered to phantoms were also correctly predicted by the method. The proportion of the images with a gamma index of less than unity was 93.7% ± 3.0% (1SD) and the difference between isocenter dose calculated by the planning system and measured by the ionization chamber was 0.8% ± 1.0%. The method was highly sensitive to errors in monitor units and field shape, but less sensitive to errors in gantry angle or phantom position. Conclusions: This method of predicting integrated portal images provides a convenient means of verifying dose delivered using VMAT, with minimal image acquisition and data processing requirements.
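The forward model in the Methods section can be sketched in a few lines of Python: each control point's isocentric dose is attenuated exponentially by the radiological thickness along the ray and convolved with a double-Gaussian scatter kernel, and the control-point images are summed. The attenuation coefficient and Gaussian parameters below are placeholders, not the fitted values from the paper.

    # Sketch of the portal-dose forward model (placeholder parameters).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def predict_portal(control_point_doses, thickness_maps,
                       mu=0.05, w1=0.8, s1=2.0, w2=0.2, s2=10.0):
        """control_point_doses, thickness_maps: lists of 2-D arrays, one per control point.
        mu [1/cm] and the Gaussian weights/sigmas [pixels] are illustrative fit parameters."""
        portal = np.zeros_like(control_point_doses[0], dtype=np.float64)
        for dose, thickness in zip(control_point_doses, thickness_maps):
            primary = dose * np.exp(-mu * thickness)              # exponential attenuation
            scattered = (w1 * gaussian_filter(primary, s1) +
                         w2 * gaussian_filter(primary, s2))       # double-Gaussian scatter
            portal += scattered
        return portal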
NASA Astrophysics Data System (ADS)
Liu, Hai-Zheng; Shi, Ze-Lin; Feng, Bin; Hui, Bin; Zhao, Yao-Hong
2016-03-01
Integrating microgrid polarimeters on the focal plane array (FPA) of an infrared detector causes non-uniformity of polarization response. In order to reduce the effect of polarization non-uniformity, this paper constructs an experimental setup for capturing raw flat-field images and proposes a procedure for acquiring a non-uniformity calibration (NUC) matrix and calibrating raw polarization images. The proposed procedure treats the incident radiation as a polarization vector and offers a calibration matrix for each pixel. Both our matrix calibration and two-point calibration are applied to our mid-wavelength infrared (MWIR) polarization imaging system with integrated microgrid polarimeters. Compared with two-point calibration, our matrix calibration reduces non-uniformity by 30-40% in flat-field tests with polarized input. The outdoor scene observation experiment indicates that our calibration can effectively reduce polarization non-uniformity and improve the image quality of our MWIR polarization imaging system.
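One way to realize a per-pixel calibration matrix is sketched below under an assumed formulation: each micro-polarizer super-pixel is modelled by a 4x3 matrix mapping the linear Stokes vector to the four measured intensities, fitted from flat-field frames at known polarization states and inverted to calibrate raw data. This is an illustration, not the paper's exact matrix definition.

    # Hedged sketch of per-super-pixel polarization calibration by least squares.
    import numpy as np

    def fit_superpixel_matrix(flat_measurements, stokes_in):
        """flat_measurements: (n_states, 4) measured intensities for one super-pixel.
        stokes_in: (n_states, 3) known incident linear Stokes vectors (S0, S1, S2)."""
        # Solve A (4x3) in  measurements[i] ~= A @ stokes_in[i]  by least squares.
        A_T, *_ = np.linalg.lstsq(stokes_in, flat_measurements, rcond=None)
        return A_T.T

    def calibrate_superpixel(raw_intensities, A):
        """raw_intensities: length-4 vector (0/45/90/135 deg pixels) -> Stokes estimate."""
        return np.linalg.pinv(A) @ raw_intensities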
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-01-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
NASA Astrophysics Data System (ADS)
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei
2016-11-01
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
Integrating Depth and Image Sequences for Planetary Rover Mapping Using Rgb-D Sensor
NASA Astrophysics Data System (ADS)
Peng, M.; Wan, W.; Xing, Y.; Wang, Y.; Liu, Z.; Di, K.; Zhao, Q.; Teng, B.; Mao, X.
2018-04-01
RGB-D cameras allow the capture of depth and color information at high data rates, and this makes it possible and beneficial to integrate depth and image sequences for planetary rover mapping. The proposed mapping method consists of three steps. First, the strict projection relationship among 3D space, depth data, and visual texture data is established based on the imaging principle of the RGB-D camera; then, an extended bundle adjustment (BA) based SLAM method with integrated 2D and 3D measurements is applied to the image network for high-precision pose estimation. Next, as the interior and exterior orientation elements of the RGB image sequence are available, dense matching is completed with the CMPMVS tool. Finally, according to the registration parameters obtained from ICP, the 3D scene from the RGB images can be registered well to the 3D scene from the depth images, and the fused point cloud can be obtained. An experiment was performed in an outdoor field to simulate the lunar surface. The experimental results demonstrated the feasibility of the proposed method.
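The final registration-and-fusion step can be illustrated in a few lines of Python. The 4x4 transform below stands in for the ICP registration parameters mentioned above; the point-array shapes and names are assumptions.

import numpy as np

def fuse_point_clouds(rgb_points, depth_points, T_icp):
    """Apply a 4x4 rigid transform (e.g. an ICP result) to the image-derived
    point cloud so it registers with the depth camera's cloud, then
    concatenate the two. Both point arrays are (N, 3)."""
    homog = np.hstack([rgb_points, np.ones((rgb_points.shape[0], 1))])
    registered = (T_icp @ homog.T).T[:, :3]       # transform into the depth frame
    return np.vstack([registered, depth_points])  # fused cloud

# With the identity transform the clouds are simply concatenated unchanged:
# fused = fuse_point_clouds(pts_rgb, pts_depth, np.eye(4))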
Hybrid fluorescence and electron cryo-microscopy for simultaneous electron and photon imaging.
Iijima, Hirofumi; Fukuda, Yoshiyuki; Arai, Yoshihiro; Terakawa, Susumu; Yamamoto, Naoki; Nagayama, Kuniaki
2014-01-01
Integration of fluorescence light and transmission electron microscopy into the same device would represent an important advance in correlative microscopy, which traditionally involves two separate microscopes for imaging. To achieve such integration, the primary technical challenge that must be solved concerns how to arrange the two objective lenses used for light and electron microscopy in such a manner that they can properly focus on a single specimen. To address this issue, both lateral displacement of the specimen between the two lenses and specimen rotation have been proposed. Such movement of the specimen allows sequential collection of two kinds of microscopic images of a single target, but prevents simultaneous imaging. This shortcoming has been overcome by using a simple optical device, a reflection mirror. Here, we present an approach toward the versatile integration of fluorescence and electron microscopy for simultaneous imaging. The potential of simultaneous hybrid microscopy was demonstrated by fluorescence and electron sequential imaging of a fluorescent protein expressed in cells and cathodoluminescence imaging of fluorescent beads. Copyright © 2013 Elsevier Inc. All rights reserved.
Integrated optical 3D digital imaging based on DSP scheme
NASA Astrophysics Data System (ADS)
Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.
2008-03-01
We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. The scheme is based on a parallel hardware structure that uses the DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and phase measurement profilometry is adopted. To realize pipeline processing of fringe projection, image acquisition, and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system); the RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 frames per second, so it is well suited to real-time fringe-pattern analysis and fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
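Phase measurement profilometry of the kind adopted here typically recovers a wrapped phase map from phase-shifted fringe images. The Python sketch below shows only the standard four-step formula and a simple row-wise unwrap; it is not the fixed-point DSP/FPGA implementation described in the abstract.

import numpy as np

def four_step_phase(I0, I1, I2, I3):
    """Wrapped phase from four fringe images shifted by 90 degrees:
    I_k = A + B*cos(phi + k*pi/2), so phi = atan2(I3 - I1, I0 - I2)."""
    return np.arctan2(I3 - I1, I0 - I2)

def unwrap_rows(wrapped):
    # 1-D unwrapping along each row; real systems use more robust 2-D methods
    return np.unwrap(wrapped, axis=1)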
NASA Astrophysics Data System (ADS)
Goh, Sheng-Yang M.; Irimia, Andrei; Vespa, Paul M.; Van Horn, John D.
2016-03-01
In traumatic brain injury (TBI) and intracerebral hemorrhage (ICH), the heterogeneity of lesion sizes and types necessitates a variety of imaging modalities to acquire a comprehensive perspective on injury extent. Although it is advantageous to combine imaging modalities and to leverage their complementary benefits, there are difficulties in integrating information across imaging types. Thus, it is important that efforts be dedicated to the creation and sustained refinement of resources for multimodal data integration. Here, we propose a novel approach to the integration of neuroimaging data acquired from human patients with TBI/ICH using various modalities; we also demonstrate the integrated use of multimodal magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) data for TBI analysis based on both visual observations and quantitative metrics. 3D models of healthy-appearing tissues and TBI-related pathology are generated, both of which are derived from multimodal imaging data. MRI volumes acquired using FLAIR, SWI, and T2 GRE are used to segment pathology. Healthy tissues are segmented using user-supervised tools, and results are visualized using a novel graphical approach called a `connectogram', where brain connectivity information is depicted within a circle of radially aligned elements. Inter-region connectivity and its strength are represented by links of variable opacities drawn between regions, where opacity reflects the percentage longitudinal change in brain connectivity density. Our method for integrating, analyzing and visualizing structural brain changes due to TBI and ICH can promote knowledge extraction and enhance the understanding of mechanisms underlying recovery.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V
2015-08-24
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.
Gaussian Process Interpolation for Uncertainty Estimation in Image Registration
Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William
2014-01-01
Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127
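A one-dimensional sketch makes the idea concrete: conditioning a Gaussian process on the base-grid intensities gives a posterior whose variance is small at grid samples and largest midway between them, which is exactly the spatially varying interpolation uncertainty the registration model exploits. The kernel and its hyperparameters below are assumptions, and the marginalization into the similarity measure is not shown.

import numpy as np

def gp_posterior(x_grid, y_grid, x_query, length=1.0, sigma_f=1.0, sigma_n=1e-3):
    """Posterior mean and variance of a GP with a squared-exponential kernel,
    conditioned on intensities y_grid sampled at base-grid positions x_grid."""
    def k(a, b):
        d = a[:, None] - b[None, :]
        return sigma_f**2 * np.exp(-0.5 * (d / length) ** 2)
    K = k(x_grid, x_grid) + sigma_n**2 * np.eye(len(x_grid))
    Ks = k(x_query, x_grid)
    mean = Ks @ np.linalg.solve(K, y_grid)
    cov = k(x_query, x_query) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Example: x = np.arange(10.0); gp_posterior(x, np.sin(x), np.array([2.5, 3.0]))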
Dual light field and polarization imaging using CMOS diffractive image sensors.
Jayasuriya, Suren; Sivaramakrishnan, Sriram; Chuang, Ellen; Guruaribam, Debashree; Wang, Albert; Molnar, Alyosha
2015-05-15
In this Letter we present, to the best of our knowledge, the first integrated CMOS image sensor that can simultaneously perform light field and polarization imaging without the use of external filters or additional optical elements. Previous work has shown how photodetectors with two stacks of integrated metal gratings above them (called angle sensitive pixels) diffract light in a Talbot pattern to capture four-dimensional light fields. We show, in addition to diffractive imaging, that these gratings polarize incoming light and characterize the response of these sensors to polarization and incidence angle. Finally, we show two applications of polarization imaging: imaging stress-induced birefringence and identifying specular reflections in scenes to improve light field algorithms for these scenes.
Iterative CT shading correction with no prior information
NASA Astrophysics Data System (ADS)
Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye
2015-11-01
Shading artifacts in CT images are caused by scatter contamination, beam-hardening effects and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge of the relatively uniform CT number distribution in one tissue component. The CT image is first segmented to construct a template image where each structure is filled with the same CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image arising from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical and attractive as a general solution to CT shading correction.
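The correction loop can be sketched compactly in Python. The crude two-class template, the Gaussian filter width, the iteration count, and the use of 2-D radon/iradon from scikit-image as the forward and back projectors (in place of a full cone-beam FDK reconstruction) are all simplifications of the published method.

import numpy as np
from scipy.ndimage import gaussian_filter1d
from skimage.transform import radon, iradon

def shading_correction(image, n_iter=5, sigma=20.0):
    """Iteratively estimate and remove low-frequency shading: build a piecewise
    constant template, low-pass filter the residual in the line-integral
    domain, reconstruct the low-frequency error, and remove it. Assumes a
    square 2-D image with air near zero intensity."""
    theta = np.linspace(0.0, 180.0, 180, endpoint=False)
    corrected = image.astype(float).copy()
    for _ in range(n_iter):
        thr = corrected.mean()
        tissue = corrected[corrected > thr].mean()
        template = np.where(corrected > thr, tissue, 0.0)   # ideal template
        residual = corrected - template
        sino = radon(residual, theta=theta, circle=False)    # forward projection
        sino_lp = gaussian_filter1d(sino, sigma, axis=0)     # keep low frequencies
        error = iradon(sino_lp, theta=theta, circle=False,
                       output_size=image.shape[0])           # low-frequency error map
        corrected = corrected - error
    return corrected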
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage 1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage 2, the resultant image is further improved by a fractional order integration filter. Effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results for artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
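Stage 1 of such a scheme can be sketched as follows. The exponential weighting is a generic stand-in for the paper's fuzzy membership functions, and stage 2 (the fractional-order integration filter) is omitted.

import numpy as np

def fuzzy_weighted_mean(img, spread=20.0):
    """Replace each pixel with a weighted mean of its 3x3 neighbourhood, with
    weights that decay as the intensity difference from the centre pixel
    grows, so strong edges contribute little to the average and are preserved."""
    padded = np.pad(img.astype(float), 1, mode='reflect')
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            win = padded[i:i + 3, j:j + 3]
            centre = padded[i + 1, j + 1]
            w = np.exp(-np.abs(win - centre) / spread)   # fuzzy-style weights
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out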
SU-E-T-171: Missing Dose in Integrated EPID Images.
King, B; Seymour, E; Nitschke, K
2012-06-01
A dosimetric artifact has been observed with Varian EPIDs in the presence of beam interrupts. This work determines the root cause and significance of this artifact. Integrated mode EPID images were acquired both with and without a manual beam interrupt for rectangular, sliding gap IMRT fields. Simultaneously, the individual frames were captured on a separate computer using a frame-grabber system. Synchronization of the individual frames with the integrated images allowed the determination of precisely how the EPID behaved during regular operation as well as when a beam interrupt was triggered. The ability of the EPID to reliably monitor a treatment in the presence of beam interrupts was tested by comparing the difference between the interrupt and non-interrupt images. The interrupted images acquired in integrated acquisition mode displayed unanticipated behaviour in the region of the image where the leaves were located when the beam interrupt was triggered. Differences greater than 5% were observed as a result of the interrupt in some cases, with the discrepancies occurring in a non-uniform manner across the imager. The differences measured were not repeatable from one measurement to another. Examination of the individual frames showed that the EPID was consistently losing a small amount of dose at the termination of every exposure. Inclusion of one additional frame in every image rectified the unexpected behaviour, reducing the differences to 1% or less. Although integrated EPID images nominally capture the entire dose delivered during an exposure, a small amount of dose is consistently being lost at the end of every exposure. The amount of missing dose is random, depending on the exact beam termination time within a frame. Inclusion of an extra frame at the end of each exposure effectively rectifies the problem, making the EPID more suitable for clinical dosimetry applications. The authors received support from Varian Medical Systems in the form of software and equipment loans as well as technical support. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.; Guld, Mark O.; Thies, Christian; Fischer, Benedikt; Keysers, Daniel; Kohnen, Michael; Schubert, Henning; Wein, Berthold B.
2003-05-01
Picture archiving and communication systems (PACS) aim to efficiently provide the radiologists with all images in a suitable quality for diagnosis. Modern standards for digital imaging and communication in medicine (DICOM) comprise alphanumerical descriptions of study, patient, and technical parameters. Currently, this is the only information used to select relevant images within PACS. Since textual descriptions insufficiently describe the great variety of details in medical images, content-based image retrieval (CBIR) is expected to have a strong impact when integrated into PACS. However, existing CBIR approaches usually are limited to a distinct modality, organ, or diagnostic study. In this state-of-the-art report, we present first results implementing a general approach to content-based image retrieval in medical applications (IRMA) and discuss its integration into PACS environments. Usually, a PACS consists of a DICOM image server and several DICOM-compliant workstations, which are used by radiologists for reading the images and reporting the findings. Basic IRMA components are the relational database, the scheduler, and the web server, which all may be installed on the DICOM image server, and the IRMA daemons running on distributed machines, e.g., the radiologists' workstations. These workstations can also host the web-based front-ends of IRMA applications. Integrating CBIR and PACS, a special focus is put on (a) location and access transparency for data, methods, and experiments, (b) replication transparency for methods in development, (c) concurrency transparency for job processing and feature extraction, (d) system transparency at method implementation time, and (e) job distribution transparency when issuing a query. Transparent integration will have a certain impact on diagnostic quality supporting both evidence-based medicine and case-based reasoning.
NASA Astrophysics Data System (ADS)
Li, Jianwei D.; Malone, Joseph D.; El-Haddad, Mohamed T.; Arquitola, Amber M.; Joos, Karen M.; Patel, Shriji N.; Tao, Yuankai K.
2017-02-01
Surgical interventions for ocular diseases involve manipulations of semi-transparent structures in the eye, but limited visualization of these tissue layers remains a critical barrier to developing novel surgical techniques and improving clinical outcomes. We addressed limitations in image-guided ophthalmic microsurgery by using microscope-integrated multimodal intraoperative swept-source spectrally encoded scanning laser ophthalmoscopy and optical coherence tomography (iSS-SESLO-OCT). We previously demonstrated in vivo human ophthalmic imaging using SS-SESLO-OCT, which enabled simultaneous acquisition of en face SESLO images with every OCT cross-section. Here, we integrated our new 400 kHz iSS-SESLO-OCT, which used a buffered Axsun 1060 nm swept-source, with a surgical microscope and TrueVision stereoscopic viewing system to provide image-based feedback. In vivo human imaging performance was demonstrated on a healthy volunteer, and simulated surgical maneuvers were performed in ex vivo porcine eyes. Densely sampled static volumes and volumes subsampled at 10 volumes-per-second were used to visualize tissue deformations and surgical dynamics during corneal sweeps, compressions, and dissections, and retinal sweeps, compressions, and elevations. En face SESLO images enabled orientation and co-registration with the widefield surgical microscope view while OCT imaging enabled depth-resolved visualization of surgical instrument positions relative to anatomic structures-of-interest. TrueVision heads-up display allowed for side-by-side viewing of the surgical field with SESLO and OCT previews for real-time feedback, and we demonstrated novel integrated segmentation overlays for augmented-reality surgical guidance. Integration of these complementary imaging modalities may benefit surgical outcomes by enabling real-time intraoperative visualization of surgical plans, instrument positions, tissue deformations, and image-based surrogate biomarkers correlated with completion of surgical goals.
Single Photon Counting Performance and Noise Analysis of CMOS SPAD-Based Image Sensors
Dutton, Neale A. W.; Gyongy, Istvan; Parmesan, Luca; Henderson, Robert K.
2016-01-01
SPAD-based solid state CMOS image sensors utilising analogue integrators have attained deep sub-electron read noise (DSERN) permitting single photon counting (SPC) imaging. A new method is proposed to determine the read noise in DSERN image sensors by evaluating the peak separation and width (PSW) of single photon peaks in a photon counting histogram (PCH). The technique is used to identify and analyse cumulative noise in analogue integrating SPC SPAD-based pixels. The DSERN of our SPAD image sensor is exploited to confirm recent multi-photon threshold quanta image sensor (QIS) theory. Finally, various single and multiple photon spatio-temporal oversampling techniques are reviewed. PMID:27447643
2016-10-01
Subject terms: Gulf war illness; magnetic resonance imaging; dopamine; diffusion tensor imaging. (Fragmentary report abstract) The work assesses the substantia nigra, basal ganglia, and cortex as markers of integrity of the nigro-striatal dopaminergic pathway using high-resolution diffusion tensor imaging (DTI).
Large-Scale Document Automation: The Systems Integration Issue.
ERIC Educational Resources Information Center
Kalthoff, Robert J.
1985-01-01
Reviews current technologies for electronic imaging and its recording and transmission, including digital recording, optical data disks, automated image-delivery micrographics, high-density-magnetic recording, and new developments in telecommunications and computers. The role of the document automation systems integrator, who will bring these…
Analytical models integrated with satellite images for optimized pest management
USDA-ARS?s Scientific Manuscript database
The global field protection (GFP) system was developed to protect and optimize pest management resources by integrating satellite images for precise field demarcation with physical models of controlled-release pesticide devices to protect large fields. The GFP was implemented using a graphical user interf...
OntoVIP: an ontology for the annotation of object models used for medical image simulation.
Gibaud, Bernard; Forestier, Germain; Benoit-Cattin, Hugues; Cervenansky, Frédéric; Clarysse, Patrick; Friboulet, Denis; Gaignard, Alban; Hugonnard, Patrick; Lartizien, Carole; Liebgott, Hervé; Montagnat, Johan; Tabary, Joachim; Glatard, Tristan
2014-12-01
This paper describes the creation of a comprehensive conceptualization of object models used in medical image simulation, suitable for major imaging modalities and simulators. The goal is to create an application ontology that can be used to annotate the models in a repository integrated in the Virtual Imaging Platform (VIP), to facilitate their sharing and reuse. Annotations make the anatomical, physiological and pathophysiological content of the object models explicit. In such an interdisciplinary context we chose to rely on a common integration framework provided by a foundational ontology, that facilitates the consistent integration of the various modules extracted from several existing ontologies, i.e. FMA, PATO, MPATH, RadLex and ChEBI. Emphasis is put on methodology for achieving this extraction and integration. The most salient aspects of the ontology are presented, especially the organization in model layers, as well as its use to browse and query the model repository. Copyright © 2014 Elsevier Inc. All rights reserved.
Chao, Hui-Mei; Hsu, Chin-Ming; Miaou, Shaou-Gang
2002-03-01
A data-hiding technique called the "bipolar multiple-number base" was developed to provide capabilities of authentication, integration, and confidentiality for an electronic patient record (EPR) transmitted among hospitals through the Internet. The proposed technique is capable of hiding those EPR related data such as diagnostic reports, electrocardiogram, and digital signatures from doctors or a hospital into a mark image. The mark image could be the mark of a hospital used to identify the origin of an EPR. Those digital signatures from doctors and a hospital could be applied for the EPR authentication. Thus, different types of medical data can be integrated into the same mark image. The confidentiality is ultimately achieved by decrypting the EPR related data and digital signatures with an exact copy of the original mark image. The experimental results validate the integrity and the invisibility of the hidden EPR related data. This newly developed technique allows all of the hidden data to be separated and restored perfectly by authorized users.
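The general idea of hiding EPR bytes inside a mark image can be sketched with a plain least-significant-bit scheme. This is not the paper's bipolar multiple-number-base coding, which packs data more densely and ties exact recovery to possession of the original mark image; it only illustrates embedding and extraction.

import numpy as np

def hide_bytes(mark_img, payload):
    """Embed payload bytes into the least significant bits of an 8-bit mark
    image (a simplified stand-in for the bipolar multiple-number-base code)."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = mark_img.astype(np.uint8).ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for the mark image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(mark_img.shape)

def recover_bytes(stego_img, n_bytes):
    # Read back the embedded bits and repack them into bytes
    bits = stego_img.astype(np.uint8).ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()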
Fang, Yu-Hua Dean; Asthana, Pravesh; Salinas, Cristian; Huang, Hsuan-Ming; Muzic, Raymond F
2010-01-01
An integrated software package, Compartment Model Kinetic Analysis Tool (COMKAT), is presented in this report. COMKAT is an open-source software package with many functions for incorporating pharmacokinetic analysis in molecular imaging research and has both command-line and graphical user interfaces. With COMKAT, users may load and display images, draw regions of interest, load input functions, select kinetic models from a predefined list, or create a novel model and perform parameter estimation, all without having to write any computer code. For image analysis, COMKAT image tool supports multiple image file formats, including the Digital Imaging and Communications in Medicine (DICOM) standard. Image contrast, zoom, reslicing, display color table, and frame summation can be adjusted in COMKAT image tool. It also displays and automatically registers images from 2 modalities. Parametric imaging capability is provided and can be combined with the distributed computing support to enhance computation speeds. For users without MATLAB licenses, a compiled, executable version of COMKAT is available, although it currently has only a subset of the full COMKAT capability. Both the compiled and the noncompiled versions of COMKAT are free for academic research use. Extensive documentation, examples, and COMKAT itself are available on its wiki-based Web site, http://comkat.case.edu. Users are encouraged to contribute, sharing their experience, examples, and extensions of COMKAT. With integrated functionality specifically designed for imaging and kinetic modeling analysis, COMKAT can be used as a software environment for molecular imaging and pharmacokinetic analysis.
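The kind of kinetic analysis such a tool performs can be illustrated with a one-tissue compartment model. COMKAT itself is MATLAB-based; the Python below is only a conceptual sketch, and K1, k2 and the toy input function are placeholders.

import numpy as np
from scipy.integrate import odeint

def one_tissue_tac(t, Cp, K1=0.1, k2=0.05):
    """Solve dCt/dt = K1*Cp(t) - k2*Ct(t) for the tissue time-activity curve,
    given a plasma input function Cp sampled at times t."""
    def dCt(Ct, ti):
        return K1 * np.interp(ti, t, Cp) - k2 * Ct
    return odeint(dCt, 0.0, t).ravel()

t = np.linspace(0.0, 60.0, 121)        # minutes
Cp = t * np.exp(-0.1 * t)              # toy plasma input function
tac = one_tissue_tac(t, Cp)            # modeled tissue curve

In practice the rate constants are estimated by fitting such a model to measured region-of-interest or voxel time-activity curves, which is the parameter-estimation step the abstract refers to.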
Liu, Li; Chen, Weiping; Nie, Min; Zhang, Fengjuan; Wang, Yu; He, Ailing; Wang, Xiaonan; Yan, Gen
2016-11-01
To handle the emergence of the regional healthcare ecosystem, physicians and surgeons in various departments and healthcare institutions must process medical images securely, conveniently, and efficiently, and must integrate them with electronic medical records (EMRs). In this manuscript, we propose a software as a service (SaaS) cloud called the iMAGE cloud. A three-layer hybrid cloud was created to provide medical image processing services in the smart city of Wuxi, China, in April 2015. In the first step, medical images and EMR data were received and integrated via the hybrid regional healthcare network. Then, traditional and advanced image processing functions were proposed and computed in a unified manner in the high-performance cloud units. Finally, the image processing results were delivered to regional users using the virtual desktop infrastructure (VDI) technology. Security infrastructure was also taken into consideration. Integrated information query and many advanced medical image processing functions-such as coronary extraction, pulmonary reconstruction, vascular extraction, intelligent detection of pulmonary nodules, image fusion, and 3D printing-were available to local physicians and surgeons in various departments and healthcare institutions. Implementation results indicate that the iMAGE cloud can provide convenient, efficient, compatible, and secure medical image processing services in regional healthcare networks. The iMAGE cloud has been proven to be valuable in applications in the regional healthcare system, and it could have a promising future in the healthcare system worldwide.
Research of an optimization design method of integral imaging three-dimensional display system
NASA Astrophysics Data System (ADS)
Gao, Hui; Yan, Zhiqiang; Wen, Jun; Jiang, Guanwu
2016-03-01
Information warfare requires a highly transparent battlefield environment; it follows that true three-dimensional display technology has obvious advantages over traditional display technology in the current field of military science and technology. This paper focuses on the research progress of lens-array imaging technology and on the factors that restrict the development of integral imaging, mainly low spatial resolution, narrow depth range, and small viewing angle. It summarizes the principle, characteristics, and development history of integral imaging. A variety of methods for improving resolution, extending depth of field, increasing the viewing angle, and eliminating artifacts are compared and analyzed. The experimental results are discussed, comparing the display performance of the different methods.
High-Speed Binary-Output Image Sensor
NASA Technical Reports Server (NTRS)
Fossum, Eric; Panicacci, Roger A.; Kemeny, Sabrina E.; Jones, Peter D.
1996-01-01
Photodetector outputs digitized by circuitry on same integrated-circuit chip. Developmental special-purpose binary-output image sensor designed to capture up to 1,000 images per second, with resolution greater than 10 to the 6th power pixels per image. Lower-resolution but higher-frame-rate prototype of sensor contains 128 x 128 array of photodiodes on complementary metal oxide/semiconductor (CMOS) integrated-circuit chip. In application for which it is being developed, sensor used to examine helicopter oil to determine whether amount of metal and sand in oil sufficient to warrant replacement.
Spatial data software integration - Merging CAD/CAM/mapping with GIS and image processing
NASA Technical Reports Server (NTRS)
Logan, Thomas L.; Bryant, Nevin A.
1987-01-01
The integration of CAD/CAM/mapping with image processing using geographic information systems (GISs) as the interface is examined. Particular emphasis is given to the development of software interfaces between JPL's Video Image Communication and Retrieval (VICAR)/Imaged Based Information System (IBIS) raster-based GIS and the CAD/CAM/mapping system. The design and functions of the VICAR and IBIS are described. Vector data capture and editing are studied. Various software programs for interfacing between the VICAR/IBIS and CAD/CAM/mapping are presented and analyzed.
A threshold selection method based on edge preserving
NASA Astrophysics Data System (ADS)
Lou, Liantang; Dan, Wei; Chen, Jiaqi
2015-12-01
A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected in order to preserve image edges as well as possible during segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is approximated by discretizing the integral. An optimal threshold selection method based on maximizing the edge energy function is given. Several experimental results are also presented to compare with Otsu's method.
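A discrete version of this idea can be sketched directly: for each candidate grey level, score the boundary of the thresholded region by the image gradient magnitude summed along it (a stand-in for the edge-energy line integral) and keep the maximizing threshold. An 8-bit grayscale image is assumed.

import numpy as np
from scipy import ndimage

def edge_preserving_threshold(img):
    """Return the threshold whose region boundary accumulates the largest
    gradient-magnitude 'edge energy'."""
    gy, gx = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    best_t, best_e = None, -np.inf
    for t in range(int(img.min()) + 1, int(img.max()) + 1):
        mask = img >= t
        boundary = mask ^ ndimage.binary_erosion(mask)   # region boundary pixels
        energy = grad[boundary].sum()
        if energy > best_e:
            best_t, best_e = t, energy
    return best_t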
Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT
Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah
2015-01-01
Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiographic features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information across all the overlapping images by using a combination of principal component analysis (PCA) and the discrete wavelet transform (DWT). For evaluation, a comparison has been made between the results of several well-known techniques and the proposed method, and different metrics are used to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best result for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
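A minimal sketch of combining PCA and the DWT for pixel-level fusion is given below. It assumes the two echo views are already registered and equal in size, fuses approximation bands with PCA-derived weights and detail bands by maximum absolute coefficient, and omits the paper's handling of field-of-view expansion and artifact suppression.

import numpy as np
import pywt

def pca_weights(a, b):
    # Weights from the leading eigenvector of the 2x2 covariance of the inputs
    cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
    vals, vecs = np.linalg.eigh(cov)
    v = np.abs(vecs[:, np.argmax(vals)])
    return v / v.sum()

def fuse_pca_dwt(img_a, img_b, wavelet='db2'):
    """Single-level DWT fusion of two registered, same-size images."""
    cA1, d1 = pywt.dwt2(img_a, wavelet)
    cA2, d2 = pywt.dwt2(img_b, wavelet)
    w = pca_weights(cA1, cA2)
    cA = w[0] * cA1 + w[1] * cA2                             # weighted approximation
    details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)   # strongest detail wins
                    for x, y in zip(d1, d2))
    return pywt.idwt2((cA, details), wavelet)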
Gu, Yuanyuan; Mai, Xiaoqin; Luo, Yue-jia
2013-01-01
The decoding of social signals from nonverbal cues plays a vital role in the social interactions of socially gregarious animals such as humans. Because nonverbal emotional signals from the face and body are normally seen together, it is important to investigate the mechanism underlying the integration of emotional signals from these two sources. We conducted a study in which the time course of the integration of facial and bodily expressions was examined via analysis of event-related potentials (ERPs) while the focus of attention was manipulated. Distinctive integrating features were found during multiple stages of processing. In the first stage, threatening information from the body was extracted automatically and rapidly, as evidenced by enhanced P1 amplitudes when the subjects viewed compound face-body images with fearful bodies compared with happy bodies. In the second stage, incongruency between emotional information from the face and the body was detected and captured by N2. Incongruent compound images elicited larger N2s than did congruent compound images. The focus of attention modulated the third stage of integration. When the subjects' attention was focused on the face, images with congruent emotional signals elicited larger P3s than did images with incongruent signals, suggesting more sustained attention and elaboration of congruent emotional information extracted from the face and body. On the other hand, when the subjects' attention was focused on the body, images with fearful bodies elicited larger P3s than did images with happy bodies, indicating more sustained attention and elaboration of threatening information from the body during evaluative processes. PMID:23935825
Automated baseline change detection -- Phases 1 and 2. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byler, E.
1997-10-31
The primary objective of this project is to apply robotic and optical sensor technology to the operational inspection of mixed toxic and radioactive waste stored in barrels, using Automated Baseline Change Detection (ABCD), based on image subtraction. Absolute change detection is based on detecting any visible physical changes, regardless of cause, between a current inspection image of a barrel and an archived baseline image of the same barrel. Thus, in addition to rust, the ABCD system can also detect corrosion, leaks, dents, and bulges. The ABCD approach and method rely on precise camera positioning and repositioning relative to the barrel and on feature recognition in images. The ABCD image processing software was installed on a robotic vehicle developed under a related DOE/FETC contract DE-AC21-92MC29112 Intelligent Mobile Sensor System (IMSS) and integrated with the electronics and software. This vehicle was designed especially to navigate in DOE Waste Storage Facilities. Initial system testing was performed at Fernald in June 1996. After some further development and more extensive integration, the prototype integrated system was installed and tested at the Radioactive Waste Management Facility (RWMC) at INEEL beginning in April 1997 through the present (November 1997). The integrated system, composed of ABCD imaging software and IMSS mobility base, is called MISS EVE (Mobile Intelligent Sensor System--Environmental Validation Expert). Evaluation of the integrated system in RWMC Building 628, containing approximately 10,000 drums, demonstrated an easy-to-use system with the ability to properly navigate through the facility, image all the defined drums, and process the results into a report delivered to the operator on a GUI interface and on hard copy. Further work is needed to make the brassboard system more operationally robust.
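The core image-subtraction step of such a change-detection system is small enough to sketch directly. The thresholds are placeholders, and the sketch assumes the precise camera repositioning described above has already aligned the current and baseline barrel images.

import numpy as np

def detect_change(baseline, current, diff_thresh=25, min_pixels=50):
    """Flag a barrel if enough pixels differ from the archived baseline image
    by more than diff_thresh grey levels; returns the flag and the change mask."""
    diff = np.abs(current.astype(int) - baseline.astype(int))
    changed = diff > diff_thresh
    return changed.sum() >= min_pixels, changed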
Integrated semiconductor optical sensors for chronic, minimally-invasive imaging of brain function.
Lee, Thomas T; Levi, Ofer; Cang, Jianhua; Kaneko, Megumi; Stryker, Michael P; Smith, Stephen J; Shenoy, Krishna V; Harris, James S
2006-01-01
Intrinsic optical signal (IOS) imaging is a widely accepted technique for imaging brain activity. We propose an integrated device consisting of interleaved arrays of gallium arsenide (GaAs) based semiconductor light sources and detectors operating at telecommunications wavelengths in the near-infrared. Such a device will allow for long-term, minimally invasive monitoring of neural activity in freely behaving subjects, and will enable the use of structured illumination patterns to improve system performance. In this work we describe the proposed system and show that near-infrared IOS imaging at wavelengths compatible with semiconductor devices can produce physiologically significant images in mice, even through skull.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schaly, B; Gaede, S; Department of Medical Biophysics, Western University, London, ON
2015-06-15
Purpose: To investigate the clinical utility of on-line verification of respiratory gated VMAT dosimetry during treatment. Methods: Portal dose images were acquired during treatment in integrated mode on a Varian TrueBeam (v. 1.6) linear accelerator for gated lung and liver patients treated with flattening-filtered beams. The source to imager distance (SID) was set to 160 cm to ensure imager clearance in case the isocenter was off midline. Note that acquisition of integrated images resulted in no extra dose to the patient. Fraction 1 was taken as baseline and all portal dose images were compared to the baseline, where the gamma comparison and dose difference were used to measure day-to-day exit dose variation. All images were analyzed in the Portal Dosimetry module of Aria (v. 10). The portal imager on the TrueBeam was calibrated by following the instructions for dosimetry calibration in service mode, where we define 1 calibrated unit (CU) equal to 1 Gy for a 10×10 cm field size at 100 cm SID. This reference condition was measured frequently to verify imager calibration. Results: The gamma value (3%, 3 mm, 5% threshold) ranged between 92% and 100% for the lung and liver cases studied. The exit dose can vary by as much as 10% of the maximum dose for an individual fraction. The integrated images combined with the information given by the corresponding on-line soft tissue matched cone-beam computed tomography (CBCT) images were useful in explaining dose variation. For gated lung treatment, dose variation was mainly due to the diaphragm position. For gated liver treatment, the dose variation was due to both diaphragm position and weight loss. Conclusion: Integrated images can be useful in verifying dose delivery consistency during respiratory gated VMAT, although the CBCT information is needed to explain dose differences due to anatomical changes.
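The gamma comparison quoted above can be sketched with a brute-force implementation. The 3%, 3 mm and 5% low-dose threshold follow the abstract, while the pixel spacing and the global normalization are assumptions; clinical tools use much faster interpolated algorithms.

import numpy as np

def gamma_pass_rate(ref, eval_img, pixel_mm=1.0, dta_mm=3.0,
                    dd_frac=0.03, low_dose_frac=0.05):
    """Percentage of reference points with gamma <= 1, using a global
    dose-difference criterion and a distance-to-agreement search window."""
    norm = dd_frac * ref.max()
    r = int(np.ceil(dta_mm / pixel_mm))
    H, W = ref.shape
    passed = total = 0
    for i in range(H):
        for j in range(W):
            if ref[i, j] < low_dose_frac * ref.max():
                continue                      # skip the low-dose region
            g2 = np.inf
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        dist2 = (di * di + dj * dj) * pixel_mm**2 / dta_mm**2
                        dd2 = ((eval_img[ii, jj] - ref[i, j]) / norm) ** 2
                        g2 = min(g2, dist2 + dd2)
            passed += g2 <= 1.0
            total += 1
    return 100.0 * passed / max(total, 1)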
Image dissemination and archiving.
Robertson, Ian
2007-08-01
Images generated as part of the sonographic examination are an integral part of the medical record and must be retained according to local regulations. The standard medical image format, known as DICOM (Digital Imaging and COmmunications in Medicine), makes it possible for images from many different imaging modalities, including ultrasound, to be distributed via a standard internet network to distant viewing workstations and a central archive in an almost seamless fashion. The DICOM standard is a truly universal standard for the dissemination of medical images. When purchasing an ultrasound unit, the consumer should research the unit's capacity to generate images in a DICOM format, especially if one wishes interconnectivity with viewing workstations and an image archive that stores other medical images. PACS, an acronym for Picture Archive and Communication System, refers to the infrastructure that links modalities, workstations, the image archive, and the medical record information system into an integrated system, allowing for efficient electronic distribution and storage of medical images and access to medical record data.
Berg, W A; Caskey, C I; Hamper, U M; Kuhlman, J E; Anderson, N D; Chang, B W; Sheth, S; Zerhouni, E A
1995-10-01
To evaluate the accuracy of magnetic resonance (MR) and ultrasound (US) criteria for breast implant integrity. One hundred twenty-two single-lumen silicone breast implants and 22 bilumen implants were evaluated with surface coil MR imaging and US and then surgically removed. MR criteria for implant failure were a collapsed implant shell ("linguine sign"), foci of silicone outside the shell ("noose sign"), and extracapsular gel; US criteria were a collapsed shell, low-level echoes within the gel, and "snowstorm" echoes of extracapsular silicone. Among single-lumen implants, MR imaging depicted 39 of 40 ruptures, 14 of 28 with minimal leakage; 49 of 54 intact implants were correctly interpreted. US depicted 26 of 40 ruptured implants, four of 28 with minimal leakage, and 30 of 54 intact implants. Among bilumen implants, MR imaging depicted four of five implants with rupture of both lumina and nine of 10 as intact; US depicted one rupture and helped identify two of 10 as intact. Mammography accurately depicted the status of 29 of 30 bilumen implants with MR imaging correlation. MR imaging depicts implant integrity more accurately than US; neither method reliably depicts minimal leakage with shell collapse. Mammography is useful in screening bilumen implant integrity.
The effect of human image in B2C website design: an eye-tracking study
NASA Astrophysics Data System (ADS)
Wang, Qiuzhen; Yang, Yi; Wang, Qi; Ma, Qingguo
2014-09-01
On B2C shopping websites, effective visual designs can bring about consumers' positive emotional experience. From this perspective, this article developed a research model to explore the impact of human image as a visual element on consumers' online shopping emotions and subsequent attitudes towards websites. This study conducted an eye-tracking experiment to collect both eye movement data and questionnaire data to test the research model. Questionnaire data analysis showed that product pictures combined with human image induced positive emotions among participants, thus promoting their attitudes towards online shopping websites. Specifically, product pictures with human image first produced higher levels of image appeal and perceived social presence, thus stimulating higher levels of enjoyment and subsequent positive attitudes towards the websites. Moreover, a moderating effect of product type was demonstrated on the relationship between the presence of human image and the level of image appeal. Specifically, human image significantly increased the level of image appeal when integrated in entertainment product pictures while this relationship was not significant in terms of utilitarian products. Eye-tracking data analysis further supported these results and provided plausible explanations. The presence of human image significantly increased the pupil size of participants regardless of product types. For entertainment products, participants paid more attention to product pictures integrated with human image whereas for utilitarian products more attention was paid to functional information of products than to product pictures no matter whether or not integrated with human image.
Enhancing Ground Based Telescope Performance with Image Processing
2013-11-13
driven by the need to detect small faint objects with relatively short integration times to avoid streaking of the satellite image across multiple...the time right before the eclipse. The orbital elements of the satellite were entered into the SST's tracking system, so that the SST could be...short integration times, thereby avoiding streaking of the satellite image across multiple CCD pixels so that the objects are suitably modeled as point
Network of fully integrated multispecialty hospital imaging systems
NASA Astrophysics Data System (ADS)
Dayhoff, Ruth E.; Kuzmak, Peter M.
1994-05-01
The Department of Veterans Affairs (VA) DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including radiology, cardiology, gastroenterology, pathology, dermatology, hematology, surgery, podiatry, dental clinic, and emergency room. These images are displayed on workstations located throughout a medical center. All images are managed by the VA's hospital information system, allowing integrated displays of text and image data across medical specialties. Clinicians can view screens of `thumbnail' images for all studies or procedures performed on a selected patient. Two VA medical centers currently have DHCP Imaging Systems installed, and others are planned. All VA medical centers and other VA facilities are connected by a wide area packet-switched network. The VA's electronic mail software has been modified to allow inclusion of binary data such as images in addition to the traditional text data. Testing of this multimedia electronic mail system is underway for medical teleconsultation.
Digital image envelope: method and evaluation
NASA Astrophysics Data System (ADS)
Huang, H. K.; Cao, Fei; Zhou, Michael Z.; Mogel, Greg T.; Liu, Brent J.; Zhou, Xiaoqiang
2003-05-01
Health data security, characterized in terms of data privacy, authenticity, and integrity, is a vital issue when digital images and other patient information are transmitted through public networks in telehealth applications such as teleradiology. Mandates for ensuring health data security have been extensively discussed (for example, the Health Insurance Portability and Accountability Act, HIPAA), and health informatics guidelines (such as the DICOM standard) that focus on issues of data security continue to be published by organizing bodies in healthcare; however, no systematic method has been developed to ensure data security in medical imaging. Because data privacy and authenticity are often managed primarily with firewall and password protection, we have focused our research and development on data integrity. We have developed a systematic method of ensuring medical image data integrity across public networks using the concept of the digital envelope. When a medical image is generated, regardless of the modality, three processes are performed: the image signature is obtained, the DICOM image header is encrypted, and a digital envelope is formed by combining the signature and the encrypted header. The envelope is encrypted and embedded in the original image. This assures the security of both the image and the patient ID. The embedded image is encrypted again and transmitted across the network. The reverse process is performed at the receiving site. The result is two digital signatures, one from the original image before transmission and a second from the image after transmission. If the signatures are identical, there has been no alteration of the image. This paper concentrates on the method and evaluation of the digital image envelope.
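The signature-plus-encrypted-header idea can be sketched as follows. The XOR step is only a placeholder for real encryption, and embedding the envelope back into the image pixels, as the method describes, is omitted.

import hashlib
import numpy as np

def make_envelope(pixels, header_bytes, key=0x5A):
    """Form a digital envelope: a SHA-256 signature of the pixel data plus an
    obscured copy of the header (XOR as a stand-in for real encryption)."""
    signature = hashlib.sha256(np.ascontiguousarray(pixels).tobytes()).digest()
    obscured = bytes(b ^ key for b in header_bytes)
    return signature + obscured

def verify_integrity(pixels, envelope):
    # Recompute the signature after transmission; a mismatch means the image
    # was altered in transit.
    return hashlib.sha256(np.ascontiguousarray(pixels).tobytes()).digest() == envelope[:32]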
A 2D/3D hybrid integral imaging display by using fast switchable hexagonal liquid crystal lens array
NASA Astrophysics Data System (ADS)
Lee, Hsin-Hsueh; Huang, Ping-Ju; Wu, Jui-Yi; Hsieh, Po-Yuan; Huang, Yi-Pai
2017-05-01
The paper proposes a new display, which we call the Hybrid Display, that can switch between 2D and 3D images on a monitor. In 3D display technologies, the reduction of image resolution is still an important issue: the more angular information is offered to the observer, the less spatial resolution remains for the image, because the panel resolution is fixed. For example, in an integral photography system, the part of the image without depth, such as the background, loses resolution in the transformation from a 2D to a 3D image. Therefore, we propose a method that uses a liquid crystal component to switch quickly between the 2D image and the 3D image; meanwhile, the 2D image is set as a background to compensate for the resolution. In the experiment, a hexagonal liquid crystal lens array is used in place of a fixed lens array. Moreover, in order to increase the lens power of the hexagonal LC lens array, we applied a high-resistance (Hi-R) layer structure on the electrode. The Hi-R layer creates a gradient electric field and affects the lens profile. We also use a panel with 801 PPI to display the integral image in our system. Hence, the combination of a full-resolution 2D background with a 3D depth object forms the Hybrid Display.
NASA Astrophysics Data System (ADS)
Nooshabadi, Fatemeh; Yang, Hee-Jeong; Cheng, Yunfeng; Xie, Hexin; Rao, Jianghong; Cirillo, Jeffrey D.; Maitland, Kristen C.
2016-03-01
Tuberculosis (TB), caused by Mycobacterium tuberculosis (Mtb), remains one of the most frequent causes of death worldwide. The slow growth rate of Mtb limits progress toward understanding tuberculosis, including diagnosing infections and evaluating therapeutic efficacy. Development of a near-infrared (NIR) β-lactamase (BlaC)-specific fluorogenic substrate has been a significant breakthrough in whole-animal imaging for detecting Mtb infection. The reporter enzyme fluorescence (REF) system using a BlaC-specific fluorogenic substrate has improved the detection sensitivity of whole-animal optical imaging down to ~10⁴ colony forming units (CFU) of bacteria, about a 100-fold improvement over recombinant strains. However, improved detection sensitivity is strongly needed for clinical diagnosis of early-stage infection at greater tissue depth. To improve detection sensitivity, we have integrated a fiber-based microendoscope into a whole-animal imaging system to transmit the excitation light from the fiber bundle directly to the fluorescent target and to measure the fluorescence level using a BlaC-specific REF substrate in the mouse lung. The REF substrate, CNIR800, was delivered via the aerosol route to mice pulmonarily infected with the M. bovis BCG strain at 24 hours post-infection, and groups of mice were imaged at 1-4 hours post-administration of the substrate using the integrated imaging system. In this study, we evaluated the kinetics of the CNIR800 substrate with REF technology using the integrated imaging system. Integration of these technologies has great promise for improved detection sensitivity, allowing pre-clinical imaging for evaluation of new therapeutic agents.
CTIO Infrared Imager Exposure Time Calculator (ISPI throughput values updated 12 March 2005): a web form that calculates the S/N for a specified total integration time, or the total integration time required to reach a desired S/N.
77 FR 5033 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-01
... personal privacy. Name of Committee: Cardiovascular and Respiratory Sciences Integrated Review Group, Clinical and Integrative Cardiovascular Sciences Study Section. Date: February 23-24, 2012. Time: 8 a.m. to...: Early Phase Clinical Trials in Imaging and Image- Guided Interventions. Date: February 28, 2012. Time: 1...
Familiari, Giuseppe; Relucenti, Michela; Heyn, Rosemarie; Baldini, Rossella; D'Andrea, Giancarlo; Familiari, Pietro; Bozzao, Alessandro; Raco, Antonino
2013-01-01
Neuroanatomy is considered to be one of the most difficult anatomical subjects for students. To provide motivation and improve learning outcomes in this area, clinical cases and neurosurgical images from diffusion tensor imaging (DTI) tractographies produced using an intraoperative magnetic resonance imaging apparatus (MRI/DTI) were presented and discussed during integrated second-year neuroanatomy, neuroradiology, and neurosurgery lectures over the 2008-2011 period. Anonymous questionnaires, evaluated according to the Likert scale, demonstrated that students appreciated this teaching procedure. Academic performance (examination grades for neuroanatomy) of the students who attended all integrated lectures of neuroanatomy was slightly though significantly higher compared to that of students who attended these lectures only occasionally or not at all (P=0.04). Significantly better results were obtained during the national progress test (focusing on morphology) by students who attended the MRI/DTI-assisted lectures, compared to those who did so only in part or not at all, as well as to the average student participating in the national test. These results were obtained by students attending the second, third and, in particular, the fourth year (P≤0.0001) courses during the three academic years mentioned earlier. This integrated neuroanatomy model can positively direct students toward their future professional careers without any extra expense to the university. In conclusion, interactive learning tools, such as lectures integrated with intraoperative MRI/DTI images, motivate students to study and enhance their neuroanatomy education. Copyright © 2013 American Association of Anatomists.
Ehlers, Justis P; Srivastava, Sunil K; Feiler, Daniel; Noonan, Amanda I; Rollins, Andrew M; Tao, Yuankai K
2014-01-01
To demonstrate key integrative advances in microscope-integrated intraoperative optical coherence tomography (iOCT) technology that will facilitate adoption and utilization during ophthalmic surgery. We developed a second-generation prototype microscope-integrated iOCT system that interfaces directly with a standard ophthalmic surgical microscope. Novel features for improved design and functionality included an improved profile and ergonomics, a tunable lens system for optimized image quality, and a heads-up display (HUD) system for surgeon feedback. Materials were tested for their potential suitability for OCT-compatible instrumentation based on light scattering and transmission characteristics. Prototype surgical instruments were developed based on this material testing and evaluated using the microscope-integrated iOCT system. Several surgical maneuvers were performed and imaged, and surgical motion visualization was evaluated with a unique scanning and image-processing protocol. High-resolution images were successfully obtained with the microscope-integrated iOCT system with HUD feedback. Six semi-transparent materials were characterized to determine their attenuation coefficients and scatter density with an 830 nm OCT light source. Based on these optical properties, polycarbonate was selected as the material substrate for prototype instrument construction. A surgical pick, retinal forceps, and corneal needle were constructed with semi-transparent materials. Excellent visualization of both the underlying tissues and the surgical instrument was achieved on OCT cross-section. Using model eyes, various surgical maneuvers were visualized, including membrane peeling, vessel manipulation, cannulation of the subretinal space, subretinal intraocular foreign body removal, and corneal penetration. Significant iterative improvements in integrative technology related to iOCT and ophthalmic surgery are demonstrated.
NASA Astrophysics Data System (ADS)
Brook, A.; Cristofani, E.; Vandewal, M.; Matheis, C.; Jonuscheit, J.; Beigang, R.
2012-05-01
The present study proposes a fully integrated, semi-automatic, near real-time image processing methodology developed for Frequency-Modulated Continuous-Wave (FMCW) THz images with center frequencies around 100 GHz and 300 GHz. The quality control of aeronautics composite multi-layered materials and structures using non-destructive testing is the main focus of this work. Image processing is applied to the 3-D images to extract useful information. The data are processed by extracting areas of interest, and the detected areas are subjected to image analysis for more detailed investigation managed by a spatial model. Finally, the post-processing stage examines and evaluates the spatial accuracy of the extracted information.
Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N.; Zawadzki, Robert J.; Sarunic, Marinko V.
2015-01-01
Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with image-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide-field microscope, using a Shack-Hartmann wavefront sensor for closed-loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images. PMID:26368169
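The image-based, wavefront-sensorless control loop described above can be illustrated with a simple coordinate search that adjusts a few low-order Zernike coefficients to maximize an image-sharpness metric. The sketch below is a toy simulation, not the authors' controller: the lens and optics are replaced by a Gaussian blur whose width grows with the residual aberration, and all names (acquire_image, sharpness, the mode count and step sizes) are illustrative assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

# Ground-truth scene and an unknown aberration over 4 low-order Zernike modes.
scene = rng.random((128, 128))
true_aberration = np.array([0.8, -0.5, 0.3, 0.6])   # arbitrary units

def acquire_image(lens_coeffs):
    """Toy image formation: blur grows with the residual aberration norm."""
    residual = np.linalg.norm(true_aberration - lens_coeffs)
    return gaussian_filter(scene, sigma=0.3 + 2.0 * residual)

def sharpness(img):
    """Image-sharpness metric: sum of squared intensities (largest at best correction)."""
    return float(np.sum(img ** 2))

# Wavefront-sensorless coordinate search: perturb one mode at a time,
# keep the perturbation only if the metric improves.
coeffs = np.zeros(4)
step = 0.25
for it in range(25):
    for mode in range(coeffs.size):
        base = sharpness(acquire_image(coeffs))
        for delta in (+step, -step):
            trial = coeffs.copy()
            trial[mode] += delta
            if sharpness(acquire_image(trial)) > base:
                coeffs = trial
                break
    step *= 0.9   # shrink the search step as the correction converges

print("estimated correction:", np.round(coeffs, 2))
print("true aberration:     ", true_aberration)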
NASA Astrophysics Data System (ADS)
Welter, Petra; Deserno, Thomas M.; Gülpers, Ralph; Wein, Berthold B.; Grouls, Christoph; Günther, Rolf W.
2010-03-01
The large and continuously growing amount of medical image data demands access methods based on content rather than simple text-based queries. The potential benefits of content-based image retrieval (CBIR) systems for computer-aided diagnosis (CAD) are evident and have been demonstrated. Still, CBIR is not a well-established part of the daily routine of radiologists. We have already presented a concept of CBIR integration for the radiology workflow in accordance with the Integrating the Healthcare Enterprise (IHE) framework. The retrieval result is composed as a Digital Imaging and Communications in Medicine (DICOM) Structured Reporting (SR) document. The use of DICOM SR provides interchange with the PACS archive and image viewer. It offers the possibility of further data mining and automatic interpretation of CBIR results. However, existing standard templates do not address the domain of CBIR. We present a design of an SR template customized for CBIR. Our approach is based on the DICOM standard templates and makes use of the mammography and chest CAD SR templates. Reuse of approved SR sub-trees promises a reliable design, which is further adapted to the CBIR domain. We analyze the special CBIR requirements and integrate the new concept of similar images into our template. Our approach also includes the new concept of a set of selected images for defining the processed images for CBIR. A commonly accepted pre-defined template for the presentation and exchange of results in a standardized format promotes the widespread application of CBIR in radiological routine.
New regularization scheme for blind color image deconvolution
NASA Astrophysics Data System (ADS)
Chen, Li; He, Yu; Yap, Kim-Hui
2011-01-01
This paper proposes a new regularization scheme to address blind color image deconvolution. Color images generally have significant correlation among the red, green, and blue channels. Conventional blind monochromatic deconvolution algorithms handle the color channels independently, thereby ignoring the interchannel correlation present in color images. In view of this, a unified regularization scheme for the image is developed to recover edges of color images and reduce color artifacts. In addition, by using the color image properties, a spectral-based regularization operator is adopted to impose constraints on the blurs. Further, this paper proposes a reinforcement regularization framework that integrates a soft parametric learning term in addressing blind color image deconvolution. A blur modeling scheme is developed to evaluate the relevance of various parametric blur structures, and this information is integrated into the deconvolution scheme. An optimization procedure called alternating minimization is then employed to iteratively minimize the image- and blur-domain cost functions. Experimental results show that the method is able to achieve satisfactory restored color images under different blurring conditions.
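For readers who want to see the alternating-minimization idea in its simplest form, the sketch below alternates Tikhonov-regularized Fourier-domain updates of a single-channel image and its blur. It is a minimal grayscale illustration under assumed quadratic regularizers, not the paper's color-correlated scheme or its soft parametric learning term; function names and parameter values are placeholders.

import numpy as np

def wiener_update(numerator_fft, kernel_fft, reg):
    """Tikhonov-regularized division in the Fourier domain."""
    return numerator_fft * np.conj(kernel_fft) / (np.abs(kernel_fft) ** 2 + reg)

def blind_deconv(blurred, n_iter=30, reg_img=1e-2, reg_blur=1e-1):
    """Alternate between image and blur updates (both in the Fourier domain)."""
    B = np.fft.fft2(blurred)
    # Initial guesses: the blurred image itself, and a small uniform blur.
    img = blurred.copy()
    blur = np.zeros_like(blurred)
    blur[:3, :3] = 1.0 / 9.0
    for _ in range(n_iter):
        K = np.fft.fft2(blur)
        img = np.real(np.fft.ifft2(wiener_update(B, K, reg_img)))   # image step
        X = np.fft.fft2(img)
        blur = np.real(np.fft.ifft2(wiener_update(B, X, reg_blur))) # blur step
        blur = np.clip(blur, 0, None)
        blur /= blur.sum()            # keep the blur non-negative and normalized
    return img, blur

# Synthetic test: blur a random image with a known kernel and try to recover both.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:5, :5] = 1.0 / 25.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(sharp) * np.fft.fft2(kernel)))
restored, est_kernel = blind_deconv(blurred)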
SU-F-T-263: Dosimetric Characteristics of the Cine Acquisition Mode of An A-Si EPID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bawazeer, O; Deb, P; Sarasanandarajah, S
2016-06-15
Purpose: To investigate the dosimetric characteristics of the Varian a-Si-500 electronic portal imaging device (EPID) operated in cine mode, particularly its linearity with delivered dose, dose rate, field size, phantom thickness, MLC speed, and common IMRT fields. Methods: The EPID, attached to a Varian Clinac 21iX linear accelerator, was irradiated with 6 and 18 MV beams at 600 MU/min. Image acquisition was controlled by the IAS3 software; the trigger delay was 6 ms, and BeamOnDelay and FrameStartDelay were zero. Different frame rates were utilized. The cine-mode response was calculated using MATLAB as the summation of mean pixel values in a region of interest of the acquired images. The performance of cine mode was compared to integrated mode and to dose measurements in water using a CC13 ionization chamber. Results: Figure 1 illustrates that cine mode has a nonlinear response for small MU; the relative response when delivering 10 MU was about 0.5 and 0.64 for 6 and 18 MV, respectively. This is caused by missing acquired images, estimated at around four missing images per delivery. With increasing MU, the response became linear and comparable with integrated mode and the ionization chamber within 2%. Figure 2 shows that cine mode has a response comparable with integrated mode and the ionization chamber within 2% as the dose rate changes for 10 MU delivered, indicating that dose rate has no effect on the nonlinearity of the cine-mode response. Apart from the nonlinearity, cine mode matched the integrated-mode response within 2% for the field size, phantom thickness, and MLC speed dependences. Conclusion: Cine mode has dosimetric characteristics similar to integrated mode with open and IMRT fields, and its main limitation is missing images. Therefore, the calibration of EPID images in this mode should be performed with large MU, and when an IMRT verification field has low MU, a correction for missing images is required.
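The cine-mode response defined above, the summation of mean pixel values in a region of interest over the acquired frames, amounts to a few lines of array code. The sketch below re-expresses that calculation in NumPy (rather than MATLAB) on synthetic frames; the panel size, frame count, and ROI are illustrative assumptions.

import numpy as np

def cine_response(frames, roi):
    """Sum of the mean pixel value in a region of interest over all cine frames.

    frames : array of shape (n_frames, rows, cols)
    roi    : (row_slice, col_slice) defining the region of interest
    """
    r, c = roi
    return float(sum(frame[r, c].mean() for frame in frames))

# Synthetic example: 20 cine frames of a 384x512 panel with a uniform field.
rng = np.random.default_rng(2)
frames = 100.0 + rng.normal(0, 1, size=(20, 384, 512))
roi = (slice(180, 220), slice(240, 280))           # central 40x40 pixel region
integrated = frames.sum(axis=0)[roi].mean()        # integrated-mode analogue
print(cine_response(frames, roi), integrated)      # the two agree by linearity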
NASA Astrophysics Data System (ADS)
Tobias, B.; Domier, C. W.; Luhmann, N. C.; Luo, C.; Mamidanna, M.; Phan, T.; Pham, A.-V.; Wang, Y.
2016-11-01
The critical component enabling electron cyclotron emission imaging (ECEI) and microwave imaging reflectometry (MIR) to resolve 2D and 3D electron temperature and density perturbations is the heterodyne imaging array that collects and downconverts radiated emission and/or reflected signals (50-150 GHz) to an intermediate frequency (IF) band (e.g. 0.1-18 GHz) that can be transmitted by a shielded coaxial cable for further filtering and detection. New circuitry has been developed for this task, integrating gallium arsenide (GaAs) monolithic microwave integrated circuits (MMICs) mounted on a liquid crystal polymer (LCP) substrate. The improved topology significantly increases electromagnetic shielding from out-of-band interference, leads to 10× improvement in the signal-to-noise ratio, and dramatic cost savings through integration. The current design, optimized for reflectometry and edge radiometry on mid-sized tokamaks, has demonstrated >20 dB conversion gain in upper V-band (60-75 GHz). Implementation of the circuit in a multi-channel electron cyclotron emission imaging (ECEI) array will improve the diagnosis of edge-localized modes and fluctuations of the high-confinement, or H-mode, pedestal.
NASA Astrophysics Data System (ADS)
Cui, Huizhong; Yang, Xinmai
2011-03-01
In this study, we applied an integrated photoacoustic imaging (PAI) and high intensity focused ultrasound (HIFU) system to noninvasively monitor the thermal damage due to HIFU ablation in vivo. A single-element, spherically focused ultrasonic transducer with a central frequency of 5 MHz was used to generate a HIFU area in soft tissue. Photoacoustic signals were detected by the same ultrasonic transducer before and after HIFU treatments using different wavelengths. The feasibility of combined contrast imaging and treatment of a solid tumor in vivo by the integrated PAI and HIFU system was also studied. Gold nanorods were used to enhance PAI during the imaging of a CT26 tumor, which was subcutaneously inoculated on the hip of a BALB/c mouse. Subsequently, the CT26 tumor was ablated by HIFU with the guidance of photoacoustic images. Our results suggested that the tumor was clearly visible on photoacoustic images after the injection of gold nanorods and was ablated by HIFU. In conclusion, PAI may potentially be used for monitoring HIFU thermal lesions with possible diagnosis and treatment of solid tumors.
Chakkarapani, Suresh Kumar; Sun, Yucheng; Lee, Seungah; Fang, Ning; Kang, Seong Ho
2018-05-22
Three-dimensional (3D) orientations of individual anisotropic plasmonic nanoparticles in aggregates were observed in real time by integrated light sheet super-resolution microscopy (iLSRM). Asymmetric light scattering of a gold nanorod (AuNR) was used to trigger signals based on the polarizer angle. Controlled photoswitching was achieved by turning the polarizer and obtaining a series of images at different polarization directions. 3D subdiffraction-limited super-resolution images were obtained by superlocalization of scattering signals as a function of the anisotropic optical properties of AuNRs. Varying the polarizer angle allowed resolution of the orientation of individual AuNRs. 3D images of individual nanoparticles were resolved in aggregated regions, resulting in as low as 64 nm axial resolution and 28 nm spatial resolution. The proposed imaging setup and localization approach demonstrates a convenient method for imaging under a noisy environment where the majority of scattering noise comes from cellular components. This integrated 3D iLSRM and localization technique was shown to be reliable and useful in the field of 3D nonfluorescence super-resolution imaging.
Real-time blood flow visualization using the graphics processing unit
NASA Astrophysics Data System (ADS)
Yang, Owen; Cuccia, David; Choi, Bernard
2011-01-01
Laser speckle imaging (LSI) is a technique in which coherent light incident on a surface produces a reflected speckle pattern that is related to the underlying movement of optical scatterers, such as red blood cells, indicating blood flow. Image-processing algorithms can be applied to produce speckle flow index (SFI) maps of relative blood flow. We present a novel algorithm that employs the NVIDIA Compute Unified Device Architecture (CUDA) platform to perform laser speckle image processing on the graphics processing unit. Software written in C was integrated with CUDA and incorporated into a LabVIEW Virtual Instrument (VI) that is interfaced with a monochrome CCD camera able to acquire high-resolution raw speckle images at nearly 10 fps. With the CUDA code integrated into the LabVIEW VI, the processing and display of SFI images were also performed at ~10 fps. We present three video examples depicting real-time flow imaging during a reactive hyperemia maneuver, with fluid flow through an in vitro phantom, and a demonstration of real-time LSI during laser surgery of a port wine stain birthmark.
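A CPU reference for the SFI computation that the CUDA kernel accelerates can be written with a sliding-window speckle contrast K = std/mean and a commonly used simplified flow index proportional to 1/(2*T*K^2). The window size, exposure time, and constant below are assumptions for illustration; this NumPy/SciPy version is not the authors' GPU code.

import numpy as np
from scipy.ndimage import uniform_filter

def speckle_flow_index(raw, window=7, exposure=0.01):
    """Compute a speckle flow index (SFI) map from one raw speckle image.

    K = local std / local mean over a sliding window; SFI ~ 1 / (2*T*K^2).
    """
    raw = raw.astype(np.float64)
    mean = uniform_filter(raw, size=window)
    mean_sq = uniform_filter(raw ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0, None)
    k = np.sqrt(var) / np.clip(mean, 1e-9, None)      # local speckle contrast
    return 1.0 / (2.0 * exposure * np.clip(k, 1e-6, None) ** 2)

# Example on a synthetic raw speckle frame (fully developed speckle ~ exponential).
rng = np.random.default_rng(3)
raw = rng.exponential(scale=100.0, size=(480, 640))
sfi = speckle_flow_index(raw, window=7, exposure=0.01)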
A New Optical Design for Imaging Spectroscopy
NASA Astrophysics Data System (ADS)
Thompson, K. L.
2002-05-01
We present an optical design concept for imaging spectroscopy, with some advantages over current systems. The system projects monochromatic images onto the 2-D array detector(s). Faint-object and crowded-field spectroscopy can be reduced by first using image-processing techniques and then building the spectrum, unlike integral field units where one must first extract the spectra, build data cubes from these, and then reconstruct the target's integrated spectral flux. Like integral field units, all photons are detected simultaneously, unlike tunable filters, which must be scanned through the wavelength range of interest and therefore pay a sensitivity penalty. Several sample designs are presented, including an instrument optimized for measuring intermediate-redshift galaxy cluster velocity dispersions, one designed for near-infrared ground-based adaptive optics, and one intended for space-based rapid follow-up of transient point sources such as supernovae and gamma-ray bursts.
Standards to support information systems integration in anatomic pathology.
Daniel, Christel; García Rojo, Marcial; Bourquard, Karima; Henin, Dominique; Schrader, Thomas; Della Mea, Vincenzo; Gilbertson, John; Beckwith, Bruce A
2009-11-01
Integrating anatomic pathology information (text and images) into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communications in Medicine (DICOM) and Health Level Seven (HL7). The objective was to define standards-based informatics transactions to integrate anatomic pathology information into the Healthcare Enterprise. We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. The IHE anatomic pathology technical framework describes a first integration profile, "Anatomic Pathology Workflow," dedicated to the diagnostic process, including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. The IHE anatomic pathology working group has defined standards-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow, and to integrate systems used for patient care with those used for research activities (such as tissue bank databases or tissue microarrayers).
Yoshida, Soichiro; Kihara, Kazunori; Takeshita, Hideki; Fujii, Yasuhisa
2014-12-01
The head-mounted display (HMD) is a new image monitoring system. We developed the Personal Integrated-image Monitoring System (PIM System) using the HMD (HMZ-T2, Sony Corporation, Tokyo, Japan) in combination with video splitters and multiplexers as a surgical guide system for transurethral resection of the prostate (TURP). The imaging information obtained from the cystoscope, the transurethral ultrasonography (TRUS), the video camera attached to the HMD, and the patient's vital signs monitor was split and integrated by the PIM System, and a composite image was displayed by the HMD using a four-split screen technique. Wearing the HMD, the lead surgeon and the assistant could simultaneously and continuously monitor the same information displayed by the HMD in an ergonomically efficient posture. Each participant could independently rearrange the images comprising the composite image depending on the step being performed. Two benign prostatic hyperplasia (BPH) patients underwent TURP performed by surgeons guided with this system. In both cases, the TURP procedure was successfully performed, and the postoperative clinical courses had no remarkable unfavorable events. During the procedure, none of the participants experienced any HMD-wear-related adverse effects or reported any discomfort.
Two-dimensional phase unwrapping using robust derivative estimation and adaptive integration.
Strand, Jarle; Taxt, Torfinn
2002-01-01
The adaptive integration (ADI) method for two-dimensional (2-D) phase unwrapping is presented. The method uses an algorithm for noise robust estimation of partial derivatives, followed by a noise robust adaptive integration process. The ADI method can easily unwrap phase images with moderate noise levels, and the resulting images are congruent modulo 2pi with the observed, wrapped, input images. In a quantitative evaluation, both the ADI and the BLS methods (Strand et al.) were better than the least-squares methods of Ghiglia and Romero (GR), and of Marroquin and Rivera (MRM). In a qualitative evaluation, the ADI, the BLS, and a conjugate gradient version of the MRM method (MRMCG), were all compared using a synthetic image with shear, using 115 magnetic resonance images, and using 22 fiber-optic interferometry images. For the synthetic image and the interferometry images, the ADI method gave consistently visually better results than the other methods. For the MR images, the MRMCG method was best, and the ADI method second best. The ADI method was less sensitive to the mask definition and the block size than the BLS method, and successfully unwrapped images with shears that were not marked in the masks. The computational requirements of the ADI method for images of nonrectangular objects were comparable to only two iterations of many least-squares-based methods (e.g., GR). We believe the ADI method provides a powerful addition to the ensemble of tools available for 2-D phase unwrapping.
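To make the derivative-then-integrate structure of such unwrappers concrete, the sketch below takes wrapped phase differences as derivative estimates and integrates them with the classical unweighted least-squares (DCT/Poisson) solver, i.e. a GR-style baseline of the kind the ADI method is compared against, not the ADI method itself.

import numpy as np
from scipy.fft import dctn, idctn

def wrap(p):
    return np.angle(np.exp(1j * p))

def unwrap_ls(psi):
    """Unweighted least-squares 2-D phase unwrapping (DCT Poisson solver)."""
    M, N = psi.shape
    # Wrapped forward differences serve as the derivative estimates here.
    dx = np.zeros_like(psi); dx[:, :-1] = wrap(np.diff(psi, axis=1))
    dy = np.zeros_like(psi); dy[:-1, :] = wrap(np.diff(psi, axis=0))
    # Divergence of the estimated gradient field (right-hand side of the Poisson eq.).
    rho = np.zeros_like(psi)
    rho[:, 0] += dx[:, 0]
    rho[:, 1:] += dx[:, 1:] - dx[:, :-1]
    rho[0, :] += dy[0, :]
    rho[1:, :] += dy[1:, :] - dy[:-1, :]
    # Solve the discrete Poisson equation with Neumann boundaries via the DCT.
    R = dctn(rho, norm='ortho')
    i = np.arange(M)[:, None]
    j = np.arange(N)[None, :]
    denom = 2.0 * (np.cos(np.pi * i / M) + np.cos(np.pi * j / N) - 2.0)
    denom[0, 0] = 1.0                 # avoid division by zero; the mean is arbitrary
    phi = R / denom
    phi[0, 0] = 0.0
    return idctn(phi, norm='ortho')

# Synthetic ramp: the unwrapped result matches the input up to an additive constant.
y, x = np.mgrid[0:128, 0:128]
true_phase = 0.2 * x + 0.1 * y
est = unwrap_ls(wrap(true_phase))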
Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro
2012-09-10
We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
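The two enlargement strategies compared above become clear when the IP image is viewed as a 4-D array (EI-grid rows, EI-grid columns, EI height, EI width): one method interpolates within each elemental image, the other interpolates across the grid of elemental images. The sketch below is a schematic NumPy illustration with assumed toy sizes, not the authors' GPU pipeline.

import numpy as np
from scipy.ndimage import zoom

# Assumed toy geometry: a 30x40 grid of 16x16-pixel elemental images.
rng = np.random.default_rng(4)
ip = rng.random((30, 40, 16, 16))      # (grid_rows, grid_cols, ei_h, ei_w)

# Method 1: increase the number of pixels of each elemental image (2x per EI).
more_pixels_per_ei = zoom(ip, (1, 1, 2, 2), order=1)        # -> (30, 40, 32, 32)

# Method 2: increase the number of elemental images (2x denser EI grid),
# i.e. interpolate new elemental images between existing neighbours.
more_eis = zoom(ip, (2, 2, 1, 1), order=1)                   # -> (60, 80, 16, 16)

def flatten_ip(a):
    """Tile the 4-D (grid, EI) representation back into a single 2-D image."""
    gr, gc, eh, ew = a.shape
    return a.transpose(0, 2, 1, 3).reshape(gr * eh, gc * ew)

# Both strategies double the overall pixel count of the flattened IP image.
print(flatten_ip(more_pixels_per_ei).shape, flatten_ip(more_eis).shape)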
Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...
2016-11-28
Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.
An integrated skin marking tool for use with optical coherence tomography (OCT)
NASA Astrophysics Data System (ADS)
Patalay, Rakesh; Craythorne, Emma; Mallipeddi, Raj; Coleman, Andrew
2017-02-01
Optical coherence tomography (OCT) has been shown to provide clinically valuable images that can aid in the assessment of the pre-surgical margin in basal cell carcinoma (BCC). The accuracy and speed with which these images can be used to help delineate margins in the clinic are currently constrained by the need to suspend imaging whilst a pen is used to mark the skin. This constraint has been circumvented here by the design of a trigger-activated ink-loaded nib integrated with the OCT probe. The adapted OCT probe enables a mark to be placed on the skin precisely where a region of interest can be seen in the OCT images, accurately and reproducibly. The adapted probe is described and a comparison of its performance and early experience of its clinical use are reported here. Initial results indicate that the integrated skin marking probe makes margin delineation under OCT image-guidance faster, more accurate and more clinically acceptable.
The integrated design and archive of space-borne signal processing and compression coding
NASA Astrophysics Data System (ADS)
He, Qiang-min; Su, Hao-hang; Wu, Wen-bo
2017-10-01
With users' increasing demand for the extraction of information from remote sensing images, there is an urgent need to enhance the imaging quality and imaging capability of the whole system through an integrated design that achieves a compact structure, light weight, and higher attitude maneuverability. At present, the remote sensing camera's video signal processing unit and its image compression and coding unit reside in separate devices. The combined volume, weight, and power consumption of these two units are relatively large, which cannot meet the requirements of a high-mobility remote sensing camera. According to the technical requirements of a high-mobility remote sensing camera, this paper designs a space-borne integrated signal processing and compression circuit by drawing on several technologies, such as high-speed, high-density mixed analog-digital PCB design, embedded DSP technology, and image compression based on special-purpose chips. This circuit lays a solid foundation for research on the high-mobility remote sensing camera.
Develop Direct Geo-referencing System Based on Open Source Software and Hardware Platform
NASA Astrophysics Data System (ADS)
Liu, H. S.; Liao, H. M.
2015-08-01
A direct geo-referencing system uses remote sensing technology to quickly capture images, GPS tracks, and camera positions. These data allow the construction of large volumes of images with geographic coordinates, so that users can take measurements directly on the images. In order to properly calculate positioning, all the sensor signals must be synchronized. Traditional aerial photography uses a Position and Orientation System (POS) to integrate images, coordinates, and camera position. However, POS is very expensive, and users cannot use the results immediately because the position information is not embedded in the images. For reasons of economy and efficiency, this study aims to develop a direct geo-referencing system based on an open source software and hardware platform. After using an Arduino microcontroller board to integrate the signals, we can calculate positioning with the open source software OpenCV. Finally, we use the open source panorama browser Panini and integrate everything into the open source GIS software Quantum GIS. In this way, a complete data acquisition and processing system can be constructed.
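One core step of such a direct geo-referencing workflow is synchronizing the camera and GPS signals so that each image timestamp receives an interpolated position. The sketch below illustrates only that step with made-up timestamps and coordinates; it is not the paper's Arduino/OpenCV pipeline.

import numpy as np

# Assumed GPS track: times (s since start), latitude, longitude from the logger.
gps_t   = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
gps_lat = np.array([24.9510, 24.9512, 24.9515, 24.9519, 24.9524])
gps_lon = np.array([121.5390, 121.5393, 121.5397, 121.5402, 121.5408])

# Image capture times recorded by the microcontroller trigger.
img_t = np.array([0.4, 1.7, 3.3])

# Linear interpolation of position at each image time.
img_lat = np.interp(img_t, gps_t, gps_lat)
img_lon = np.interp(img_t, gps_t, gps_lon)

for t, lat, lon in zip(img_t, img_lat, img_lon):
    print(f"image at t={t:.1f}s -> lat {lat:.6f}, lon {lon:.6f}")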
NASA Astrophysics Data System (ADS)
Hodgkin, Van A.
2015-05-01
Most mass-produced, commercially available and fielded military reflective imaging systems operate across broad swaths of the visible, near infrared (NIR), and shortwave infrared (SWIR) wavebands without any spectral selectivity within those wavebands. In applications that employ these systems, it is not uncommon to be imaging a scene in which the image contrasts between the objects of interest, i.e., the targets, and the objects of little or no interest, i.e., the backgrounds, are sufficiently low to make target discrimination difficult or uncertain. This can occur even when the spectral distribution of the target and background reflectivity across the given waveband differ significantly from each other, because the fundamental components of broadband image contrast are the spectral integrals of the target and background signatures. Spectral integration by the detectors tends to smooth out any differences. Hyperspectral imaging is one approach to preserving, and thus highlighting, spectral differences across the scene, even when the waveband integrated signatures would be about the same, but it is an expensive, complex, noncompact, and untimely solution. This paper documents a study of how the capability to selectively customize the spectral width and center wavelength with a hypothetical tunable fore-optic filter would allow a broadband reflective imaging sensor to optimize image contrast as a function of scene content and ambient illumination.
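The point that broadband contrast is governed by the spectral integrals of the target and background signatures can be checked numerically: two reflectances that differ strongly in narrow sub-bands may integrate to nearly equal in-band signals. The spectra, band limits, and flat illumination below are synthetic and purely illustrative.

import numpy as np

wl = np.linspace(0.9, 1.7, 400)                 # SWIR wavelengths, micrometres

# Synthetic spectral reflectances: same mean, opposite slopes across the band.
target     = 0.30 + 0.10 * (wl - 1.3)
background = 0.30 - 0.10 * (wl - 1.3)
illum      = np.ones_like(wl)                   # flat illumination/detector response

def band_contrast(lo, hi):
    """Weber contrast of the band-integrated (detector-level) signals."""
    m = (wl >= lo) & (wl <= hi)
    s_t = np.trapz(target[m] * illum[m], wl[m])
    s_b = np.trapz(background[m] * illum[m], wl[m])
    return (s_t - s_b) / s_b

print("full-band contrast     :", round(band_contrast(0.9, 1.7), 3))    # near zero
print("narrow sub-band at 1.6 :", round(band_contrast(1.55, 1.65), 3))  # much larger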
Suh, Sungho; Itoh, Shinya; Aoyama, Satoshi; Kawahito, Shoji
2010-01-01
For low-noise complementary metal-oxide-semiconductor (CMOS) image sensors, the reduction of pixel source follower noise is becoming very important. Column-parallel high-gain readout circuits are useful for low-noise CMOS image sensors. This paper presents column-parallel high-gain signal readout circuits, correlated multiple sampling (CMS) circuits, and their noise reduction effects. In the CMS, the gain of the noise cancelling is controlled by the number of samplings. It has an effect similar to that of an amplified CDS for thermal noise but is a little more effective for 1/f and RTS noises. Two types of CMS, with simple integration and with folding integration, are proposed. In the folding integration, the output signal swing is suppressed by negative feedback using a comparator and a one-bit D-to-A converter. The CMS circuit using the folding integration technique makes it possible to realize a very low noise level while maintaining a wide dynamic range. The noise reduction effects of these circuits have been investigated with a noise analysis and an implementation in a 1-Mpixel pinned-photodiode CMOS image sensor. Using 16 samplings, a dynamic range of 59.4 dB and a noise level of 1.9 e(-) are obtained for the simple integration CMS, and 75 dB and 2.2 e(-) for the folding integration CMS, respectively.
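The noise-reduction behavior of correlated multiple sampling can be verified with a small Monte Carlo model: averaging M samples of both the reset and signal levels reduces uncorrelated (thermal) read noise by roughly the square root of M. The sketch below treats only white noise in arbitrary units; the 1/f and RTS behavior discussed in the abstract is not modeled, and this is a conceptual model rather than a circuit simulation.

import numpy as np

rng = np.random.default_rng(5)
signal = 50.0          # true pixel signal (arbitrary units)
read_noise = 2.0       # rms white noise per sample
n_trials = 20000

def cms_estimate(n_samples):
    """Correlated multiple sampling: average M reset and M signal samples, subtract."""
    reset = rng.normal(0.0, read_noise, size=(n_trials, n_samples)).mean(axis=1)
    sig   = rng.normal(signal, read_noise, size=(n_trials, n_samples)).mean(axis=1)
    return sig - reset

for m in (1, 4, 16):
    est = cms_estimate(m)
    print(f"M={m:2d}: output noise = {est.std():.3f} "
          f"(white-noise expectation ~{read_noise * np.sqrt(2.0 / m):.3f})")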
Rieckmann, Anna; Hedden, Trey; Younger, Alayna P; Sperling, Reisa A; Johnson, Keith A; Buckner, Randy L
2016-02-01
Aging-related differences in white matter integrity, the presence of amyloid plaques, and density of biomarkers indicative of dopamine functions can be detected and quantified with in vivo human imaging. The primary aim of the present study was to investigate whether these imaging-based measures constitute independent imaging biomarkers in older adults, which would speak to the hypothesis that the aging brain is characterized by multiple independent neurobiological cascades. We assessed MRI-based markers of white matter integrity and PET-based markers of dopamine transporter density and amyloid deposition in the same set of 53 clinically normal individuals (age 65-87). A multiple regression analysis demonstrated that dopamine transporter availability is predicted by white matter integrity, which was detectable even after controlling for chronological age. Further post-hoc exploration revealed that dopamine transporter availability was further associated with systolic blood pressure, mirroring the established association between cardiovascular health and white matter integrity. Dopamine transporter availability was not associated with the presence of amyloid burden. Neurobiological correlates of dopamine transporter measures in aging are therefore likely unrelated to Alzheimer's disease but are aligned with white matter integrity and cardiovascular risk. More generally, these results suggest that two common imaging markers of the aging brain that are typically investigated separately do not reflect independent neurobiological processes. Hum Brain Mapp 37:621-631, 2016. © 2015 Wiley Periodicals, Inc.
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
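The feature-matching and robust-estimation components of such a pipeline can be prototyped with OpenCV, here using ordinary SIFT instead of the affine-invariant A-SIFT variant and a homography in place of the paper's epipolar/collinearity model, so the snippet below is a simplified stand-in rather than the published method (it assumes opencv-python 4.4 or later for SIFT_create).

import cv2
import numpy as np

def coregister(intensity_img, visible_img, ratio=0.75):
    """Match SIFT features between two 8-bit images and fit a homography with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(intensity_img, None)
    k2, d2 = sift.detectAndCompute(visible_img, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]   # Lowe ratio test
    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, (0 if inlier_mask is None else int(inlier_mask.sum()))

# Toy usage: register a shifted copy of a synthetic texture back onto the original.
rng = np.random.default_rng(6)
img = (rng.random((300, 300)) * 255).astype(np.uint8)
img = cv2.GaussianBlur(img, (5, 5), 0)
shifted = np.roll(img, (12, 20), axis=(0, 1))
H, n_inliers = coregister(img, shifted)
print(np.round(H, 2), n_inliers)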
Integrating IR detector imaging systems
NASA Technical Reports Server (NTRS)
Bailey, G. C. (Inventor)
1984-01-01
An integrating IR detector array for imaging is provided in a hybrid circuit with InSb mesa diodes in a linear array, a single J-FET preamplifier for readout, and a silicon integrated circuit multiplexer. Thin film conductors in a fan out pattern deposited on an Al2O3 substrate connect the diodes to the multiplexer, and thick film conductors also connect the reset switch and preamplifier to the multiplexer. Two phase clock pulses are applied with a logic return signal to the multiplexer through triax comprised of three thin film conductors deposited between layers. A lens focuses a scanned image onto the diode array for horizontal read out while a scanning mirror provides vertical scan.
Active pixel image sensor with a winner-take-all mode of operation
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Mead, Carver (Inventor); Fossum, Eric R. (Inventor)
2003-01-01
An integrated CMOS semiconductor imaging device having two modes of operation that can be performed simultaneously to produce an output image and provide information of a brightest or darkest pixel in the image.
Multi-spectral imaging with infrared sensitive organic light emitting diode
Kim, Do Young; Lai, Tzung-Han; Lee, Jae Woong; Manders, Jesse R.; So, Franky
2014-01-01
Commercially available near-infrared (IR) imagers are fabricated by integrating expensive epitaxial grown III-V compound semiconductor sensors with Si-based readout integrated circuits (ROIC) by indium bump bonding which significantly increases the fabrication costs of these image sensors. Furthermore, these typical III-V compound semiconductors are not sensitive to the visible region and thus cannot be used for multi-spectral (visible to near-IR) sensing. Here, a low cost infrared (IR) imaging camera is demonstrated with a commercially available digital single-lens reflex (DSLR) camera and an IR sensitive organic light emitting diode (IR-OLED). With an IR-OLED, IR images at a wavelength of 1.2 µm are directly converted to visible images which are then recorded in a Si-CMOS DSLR camera. This multi-spectral imaging system is capable of capturing images at wavelengths in the near-infrared as well as visible regions. PMID:25091589
NASA Astrophysics Data System (ADS)
Serrels, K. A.; Ramsay, E.; Reid, D. T.
2009-02-01
We present experimental evidence for the resolution-enhancing effect of an annular pupil-plane aperture when performing nonlinear imaging in the vectorial-focusing regime through manipulation of the focal spot geometry. By acquiring two-photon optical beam-induced current images of a silicon integrated-circuit using solid-immersion-lens microscopy at 1550 nm we achieved 70 nm resolution. This result demonstrates a reduction in the minimum effective focal spot diameter of 36%. In addition, the annular-aperture-induced extension of the depth-of-focus causes an observable decrease in the depth contrast of the resulting image and we explain the origins of this using a simulation of the imaging process.
Integrated approach to ischemic heart disease. The one-stop shop.
Kramer, C M
1998-05-01
Magnetic resonance imaging is unique in its variety of applications for imaging the cardiovascular system. A thorough assessment of myocardial structure, function, and perfusion; assessment of coronary artery anatomy and flow; and spectroscopic evaluation of cardiac energetics can be readily performed by magnetic resonance imaging. One key to the advancement of cardiac magnetic resonance imaging as a clinical tool is the ability to perform this evaluation in a single integrated examination, the so-called one-stop shop. Improvements in magnetic resonance hardware, software, and imaging speed now permit this integrated examination. Cardiac magnetic resonance is a powerful technique with the potential to replace or complement other commonly used techniques in the diagnostic armamentarium of physicians caring for patients with ischemic heart disease.
NASA Astrophysics Data System (ADS)
Migiyama, Go; Sugimura, Atsuhiko; Osa, Atsushi; Miike, Hidetoshi
Digital cameras have been advancing rapidly in recent years. However, the shot image differs from the sight image perceived when the same scenery is seen with the naked eye. Blown-out highlights and crushed blacks appear in photographs of scenery with a wide dynamic range, whereas these problems hardly arise in the sight image; they are a contributing cause of the difference between the shot image and the sight image. Blown-out highlights and crushed blacks are caused by the difference in dynamic range between the image sensor installed in a digital camera, such as a CCD or CMOS sensor, and the human visual system: the dynamic range of the shot image is narrower than that of the sight image. To solve this problem, we propose an automatic method that decides an effective exposure range from the superposition of edges, and we integrate multi-step exposure images using this method. In addition, we erase pseudo-edges using a process that blends exposure values. As a result, a pseudo wide dynamic range image is obtained automatically.
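A simplified flavor of edge-based integration of multi-step exposures is a per-pixel blend in which each exposure contributes where its local edge energy is strongest. The sketch below uses an assumed Laplacian edge measure and a synthetic three-step bracket; it is a generic illustration in the spirit of the approach, not the authors' method for choosing the effective exposure range or erasing pseudo-edges.

import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def fuse_exposures(images, eps=1e-6):
    """Blend multi-exposure images weighted by smoothed local edge energy."""
    stack = np.stack([img.astype(np.float64) for img in images])
    weights = np.stack([gaussian_filter(np.abs(laplace(img)), 5) for img in stack])
    weights = weights + eps
    weights /= weights.sum(axis=0, keepdims=True)       # normalize per pixel
    return (weights * stack).sum(axis=0)

# Synthetic three-step exposure bracket of a wide dynamic range ramp scene.
scene = np.linspace(0, 4, 256)[None, :] * np.ones((256, 1))       # radiance 0..4
exposures = [np.clip(scene * g, 0, 1) for g in (0.25, 1.0, 4.0)]   # clipped captures
fused = fuse_exposures(exposures)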
Validating a Geographical Image Retrieval System.
ERIC Educational Resources Information Center
Zhu, Bin; Chen, Hsinchun
2000-01-01
Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…
NASA Astrophysics Data System (ADS)
Fang, Yi-Chin; Wu, Bo-Wen; Lin, Wei-Tang; Jon, Jen-Liung
2007-11-01
Resolution and color are the two main criteria for assessing an optical digital image, but it is difficult to improve the image quality of an optical system comprehensively because optical system design faces many limits, such as size, materials, and environment. It is therefore important to raise the capability of recognizing images that are blurred by aberrations and noise, or by characteristics of human vision such as long distances and small targets, using artificial intelligence techniques such as genetic algorithms and neural networks, while decreasing the chromatic aberration of the optical system and without adding complex calculations to the image processing. This study pursues the goal of improving, in an integrated, economical, and effective way, the recognition and classification of low-quality images produced by the optical system and its environment.
Development of imaging biomarkers and generation of big data.
Alberich-Bayarri, Ángel; Hernández-Navarro, Rafael; Ruiz-Martínez, Enrique; García-Castro, Fabio; García-Juan, David; Martí-Bonmatí, Luis
2017-06-01
Several image processing algorithms have emerged to cover unmet clinical needs, but their application to radiological routine with a clear clinical impact is still not straightforward. Moving from local infrastructures to large ones, such as Medical Imaging Biobanks (millions of studies) or, even larger, Federations of Medical Imaging Biobanks (in some cases totaling hundreds of millions of studies), requires the integration of automated pipelines for fast analysis of pooled data to extract clinically relevant conclusions, not linked solely to medical imaging but combined with other information such as genetic profiling. A general strategy for the development of imaging biomarkers and their integration in the cloud for quantitative management and exploitation in large databases is presented herein. The proposed platform has been successfully launched and is currently being validated among the early-adopter community of radiologists, clinicians, and medical imaging researchers.
NASA Technical Reports Server (NTRS)
Knox, R. M.; Toulios, P. P.; Onoda, G. Y.
1972-01-01
Program results are described in which the use of a high-permittivity rectangular dielectric image waveguide has been investigated for microwave and millimeter wavelength circuits. Launchers from rectangular metal waveguide to image waveguide are described. Theoretical and experimental evaluations of the radiation from curved image waveguides are given. Measurements of attenuation due to conductor and dielectric losses, adhesives, and gaps between the dielectric waveguide and the image plane are included. Various passive components are described and evaluations given. Investigations of various techniques for fabrication of image waveguide circuits using ceramic waveguides are also presented. Program results support the evaluation of the image line approach as an advantageous method for realizing low-loss integrated electronic circuits for X-band and above.
Integrated Dual Imaging Detector
NASA Technical Reports Server (NTRS)
Rust, David M.
1999-01-01
A new type of image detector was designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a lenslet array and a polarizing beamsplitter bonded to a commercial charge-coupled device (CCD). The IDID simplifies the design and operation of solar vector magnetographs and of the imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. When used in a solar telescope, the IDID can map the vector magnetic fields on the solar surface. Other applications include environmental monitoring, robot vision, and medical diagnoses (through the eye). Innovations in the IDID include (1) two interleaved imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 5) electrons per pixel); (3) simultaneous readout and display of both images; and (4) laptop computer signal processing to produce polarization maps in field situations.
A Web simulation of medical image reconstruction and processing as an educational tool.
Papamichail, Dimitrios; Pantelis, Evaggelos; Papagiannis, Panagiotis; Karaiskos, Pantelis; Georgiou, Evangelos
2015-02-01
Web educational resources integrating interactive simulation tools provide students with an in-depth understanding of the medical imaging process. The aim of this work was the development of a purely Web-based, open access, interactive application, as an ancillary learning tool in graduate and postgraduate medical imaging education, including a systematic evaluation of learning effectiveness. The pedagogic content of the educational Web portal was designed to cover the basic concepts of medical imaging reconstruction and processing, through the use of active learning and motivation, including learning simulations that closely resemble actual tomographic imaging systems. The user can implement image reconstruction and processing algorithms under a single user interface and manipulate various factors to understand the impact on image appearance. A questionnaire for pre- and post-training self-assessment was developed and integrated in the online application. The developed Web-based educational application introduces the trainee in the basic concepts of imaging through textual and graphical information and proceeds with a learning-by-doing approach. Trainees are encouraged to participate in a pre- and post-training questionnaire to assess their knowledge gain. An initial feedback from a group of graduate medical students showed that the developed course was considered as effective and well structured. An e-learning application on medical imaging integrating interactive simulation tools was developed and assessed in our institution.
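As an example of the kind of interactive exercise such a portal can host, the snippet below simulates tomographic projection and filtered back-projection on scikit-image's Shepp-Logan phantom, letting the number of projection angles be varied to see its impact on image appearance. It is a generic teaching illustration (assuming a recent scikit-image with the filter_name argument), not the portal's own code.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

phantom = rescale(shepp_logan_phantom(), 0.5)          # 200x200 test object

def simulate_reconstruction(n_angles):
    """Forward-project the phantom and reconstruct it by filtered back-projection."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    sinogram = radon(phantom, theta=theta)
    recon = iradon(sinogram, theta=theta, filter_name='ramp')
    rmse = np.sqrt(np.mean((recon - phantom) ** 2))
    return recon, rmse

for n in (18, 60, 180):                                # fewer angles -> streak artifacts
    _, err = simulate_reconstruction(n)
    print(f"{n:3d} projection angles: RMSE = {err:.4f}")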
Enterprise-scale image distribution with a Web PACS.
Gropper, A; Doyle, S; Dreyer, K
1998-08-01
The integration of images with existing and new health care information systems poses a number of challenges in a multi-facility network: image distribution to clinicians; making DICOM image headers consistent across information systems; and integration of teleradiology into PACS. A novel, Web-based enterprise PACS architecture introduced at Massachusetts General Hospital provides a solution. Four AMICAS Web/Intranet Image Servers were installed as the default DICOM destination of 10 digital modalities. A fifth AMICAS receives teleradiology studies via the Internet. Each AMICAS includes: a Java-based interface to the IDXrad radiology information system (RIS), a DICOM autorouter to tape-library archives and to the Agfa PACS, a wavelet image compressor/decompressor that preserves compatibility with DICOM workstations, a Web server to distribute images throughout the enterprise, and an extensible interface which permits links between other HIS and AMICAS. Using wavelet compression and Internet standards as its native formats, AMICAS creates a bridge to the DICOM networks of remote imaging centers via the Internet. This teleradiology capability is integrated into the DICOM network and the PACS thereby eliminating the need for special teleradiology workstations. AMICAS has been installed at MGH since March of 1997. During that time, it has been a reliable component of the evolving digital image distribution system. As a result, the recently renovated neurosurgical ICU will be filmless and use only AMICAS workstations for mission-critical patient care.
SAVA 3: A testbed for integration and control of visual processes
NASA Technical Reports Server (NTRS)
Crowley, James L.; Christensen, Henrik
1994-01-01
The development of an experimental test-bed to investigate the integration and control of perception in a continuously operating vision system is described. The test-bed integrates a 12-axis robotic stereo camera head mounted on a mobile robot, dedicated computer boards for real-time image acquisition and processing, and a distributed system for image description. The architecture was designed to: (1) operate continuously, (2) integrate software contributions from geographically dispersed laboratories, (3) integrate description of the environment with 2D measurements, 3D models, and recognition of objects, (4) support diverse experiments in gaze control, visual servoing, navigation, and object surveillance, and (5) be dynamically reconfigurable.
Testing for a Signal with Unknown Location and Scale in a Stationary Gaussian Random Field
1994-01-07
Subject classifications: secondary 60D05, 52A22. Key words and phrases: Euler characteristic, integral geometry, image analysis, Gaussian fields, volume of tubes.
Method and apparatus of high dynamic range image sensor with individual pixel reset
NASA Technical Reports Server (NTRS)
Yadid-Pecht, Orly (Inventor); Pain, Bedabrata (Inventor); Fossum, Eric R. (Inventor)
2001-01-01
A wide dynamic range image sensor provides individual pixel reset to vary the integration time of individual pixels. The integration time of each pixel is controlled by column and row reset control signals which activate a logical reset transistor only when both signals coincide for a given pixel.
High-resolution non-destructive three-dimensional imaging of integrated circuits
NASA Astrophysics Data System (ADS)
Holler, Mirko; Guizar-Sicairos, Manuel; Tsai, Esther H. R.; Dinapoli, Roberto; Müller, Elisabeth; Bunk, Oliver; Raabe, Jörg; Aeppli, Gabriel
2017-03-01
Modern nanoelectronics has advanced to a point at which it is impossible to image entire devices and their interconnections non-destructively because of their small feature sizes and the complex three-dimensional structures resulting from their integration on a chip. This metrology gap implies a lack of direct feedback between design and manufacturing processes, and hampers quality control during production, shipment and use. Here we demonstrate that X-ray ptychography—a high-resolution coherent diffractive imaging technique—can create three-dimensional images of integrated circuits of known and unknown designs with a lateral resolution in all directions down to 14.6 nanometres. We obtained detailed device geometries and corresponding elemental maps, and show how the devices are integrated with each other to form the chip. Our experiments represent a major advance in chip inspection and reverse engineering over the traditional destructive electron microscopy and ion milling techniques. Foreseeable developments in X-ray sources, optics and detectors, as well as adoption of an instrument geometry optimized for planar rather than cylindrical samples, could lead to a thousand-fold increase in efficiency, with concomitant reductions in scan times and voxel sizes.
Microscope-Integrated OCT Feasibility and Utility With the EnFocus System in the DISCOVER Study.
Runkle, Anne; Srivastava, Sunil K; Ehlers, Justis P
2017-03-01
To evaluate the feasibility and utility of a novel microscope-integrated intraoperative optical coherence tomography (OCT) system. The DISCOVER study is an investigational device study evaluating microscope-integrated intraoperative OCT systems for ophthalmic surgery. This report focuses on subjects imaged with the EnFocus prototype system (Leica Microsystems/Bioptigen, Morrisville, NC). OCT was performed at surgeon-directed milestones. Surgeons completed a questionnaire after each case to evaluate the impact of OCT on intraoperative management. Fifty eyes underwent imaging with the EnFocus system. Successful imaging was obtained in 46 of 50 eyes (92%). In eight cases (16%), surgical management was changed based on intraoperative OCT findings. In membrane peeling procedures, intraoperative OCT findings were discordant from the surgeon's initial impression in seven of 20 cases (35%). This study demonstrates the feasibility of microscope-integrated intraoperative OCT using the Bioptigen EnFocus system. Intraoperative OCT may provide surgeons with additional information that may influence surgical decision-making. [Ophthalmic Surg Lasers Imaging Retina. 2017;48:216-222.]. Copyright 2017, SLACK Incorporated.
NASA Astrophysics Data System (ADS)
Scaduto, David A.; Lubinsky, Anthony R.; Rowlands, John A.; Kenmotsu, Hidenori; Nishimoto, Norihito; Nishino, Takeshi; Tanioka, Kenkichi; Zhao, Wei
2014-03-01
We have previously proposed SAPHIRE (scintillator avalanche photoconductor with high resolution emitter readout), a novel detector concept with potentially superior spatial resolution and low-dose performance compared with existing flat-panel imagers. The detector comprises a scintillator that is optically coupled to an amorphous selenium photoconductor operated with avalanche gain, known as high-gain avalanche rushing photoconductor (HARP). High resolution electron beam readout is achieved using a field emitter array (FEA). This combination of avalanche gain, allowing for very low-dose imaging, and electron emitter readout, providing high spatial resolution, offers potentially superior image quality compared with existing flat-panel imagers, with specific applications to fluoroscopy and breast imaging. Through the present collaboration, a prototype HARP sensor with integrated electrostatic focusing and nano-Spindt FEA readout technology has been fabricated. The integrated electron-optic focusing approach is more suitable for fabricating large-area detectors. We investigate the dependence of spatial resolution on sensor structure and operating conditions, and compare the performance of electrostatic focusing with previous technologies. Our results show a clear dependence of spatial resolution on electrostatic focusing potential, with performance approaching that of the previous design with external mesh-electrode. Further, temporal performance (lag) of the detector is evaluated and the results show that the integrated electrostatic focusing design exhibits comparable or better performance compared with the mesh-electrode design. This study represents the first technical evaluation and characterization of the SAPHIRE concept with integrated electrostatic focusing.
NASA Astrophysics Data System (ADS)
Dogon-yaro, M. A.; Kumar, P.; Rahman, A. Abdul; Buyuksalih, G.
2016-10-01
Timely and accurate information on the condition and structural changes of urban trees helps decision makers appreciate urban ecosystems and their many values, which is critical to building strategies for sustainable development. Conventional techniques for extracting tree features include ground surveying and interpretation of aerial photography; however, they suffer from constraints such as labour-intensive field work, high cost, and sensitivity to weather conditions and topographic cover, all of which can be overcome by integrating airborne LiDAR with very high resolution digital image datasets. This study presents a semi-automated approach for extracting urban trees from integrated airborne LiDAR and multispectral digital image datasets over the city of Istanbul, Turkey. The scheme detects and extracts shadow-free vegetation features from the spectral properties of the digital images using a shadow index and NDVI, and then automatically extracts 3D information about those vegetation features by jointly processing the shadow-free vegetation image and the LiDAR point cloud. The developed algorithms show promising results as an automated and cost-effective approach to estimating and delineating 3D information on urban trees. The research also shows that the integrated datasets are a suitable technology and a viable source of information for city managers in urban tree management.
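To make the spectral filtering step described above concrete, the following minimal Python sketch builds a shadow-free vegetation mask from NDVI and a simple brightness-based shadow index. The band arrays, the shadow index definition, and the threshold values are illustrative assumptions rather than the authors' actual algorithm.

```python
import numpy as np

def vegetation_mask(nir, red, green, blue, ndvi_thresh=0.3, shadow_thresh=0.2):
    """Shadow-free vegetation mask from multispectral bands (float arrays in [0, 1]).

    NDVI highlights vegetation; a simple brightness-based shadow index
    (an assumption here) suppresses shadowed pixels before LiDAR fusion.
    """
    eps = 1e-6
    ndvi = (nir - red) / (nir + red + eps)
    # Hypothetical shadow index: low overall brightness flags shadow.
    brightness = (red + green + blue) / 3.0
    shadow = brightness < shadow_thresh
    return (ndvi > ndvi_thresh) & ~shadow
```

The resulting mask would then gate the LiDAR point cloud before 3D tree delineation.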
Alam, Md Ashraful; Piao, Mei-Lan; Bang, Le Thanh; Kim, Nam
2013-10-01
Viewing-zone control of integral imaging (II) displays using a directional projection and elemental image (EI) resizing method is proposed. Directional projection of EIs with the same size as the microlens pitch causes an EI mismatch at the EI plane. In this method, EIs are generated computationally using a newly introduced directional elemental image generation and resizing algorithm, which considers the directional projection geometry of each pixel and applies an EI resizing step to prevent the EI mismatch. The generated EIs are projected as a collimated projection beam with a predefined directional angle, either horizontally or vertically. The proposed II display system allows reconstruction of a 3D image within a predefined viewing zone that is determined by the directional projection angle.
Intershot Analysis of Flows in DIII-D
NASA Astrophysics Data System (ADS)
Meyer, W. H.; Allen, S. L.; Samuell, C. M.; Howard, J.
2016-10-01
Analysis of the DIII-D flow diagnostic data requires demodulation of interference images and inversion of the resultant line-integrated emissivity and flow (phase) images. Four response matrices are pre-calculated: the emissivity line integral and the line integrals of the scalar product of the lines-of-sight with the orthogonal unit vectors of parallel flow. Equilibrium data determines the relative weight of the component matrices used in the final flow inversion matrix. Serial processing has been used for the 800x600-pixel image from the lower-divertor-viewing flow camera. The full-cross-section-viewing camera will require parallel processing of its 2160x2560-pixel image. We will discuss using a POSIX thread pool and a Tesla K40c GPU in the processing of these data. Prepared by LLNL under Contract DE-AC52-07NA27344. This material is based upon work supported by the U.S. DOE, Office of Science, Fusion Energy Sciences.
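As an illustration of how pre-calculated response matrices might be combined and inverted, here is a minimal numpy sketch. The matrix names, shapes, equilibrium-derived weights, and the Tikhonov regularization are assumptions, and the POSIX thread pool / GPU parallelization mentioned in the abstract is not shown.

```python
import numpy as np

def invert_flow(phase_image, R_par, R_perp, w_par, w_perp, reg=1e-3):
    """Invert line-integrated phase (flow) data using pre-computed response matrices.

    R_par, R_perp : (n_pixels, n_voxels) geometry matrices for the two flow components
    w_par, w_perp : scalar weights taken from the equilibrium reconstruction (assumed form)
    """
    A = w_par * R_par + w_perp * R_perp          # combined forward model
    b = phase_image.ravel()
    # Tikhonov-regularized least squares: (A^T A + reg*I) x = A^T b
    AtA = A.T @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.T @ b)
```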
Huang, Yongyang; Badar, Mudabbir; Nitkowski, Arthur; Weinroth, Aaron; Tansu, Nelson; Zhou, Chao
2017-01-01
Space-division multiplexing optical coherence tomography (SDM-OCT) is a recently developed parallel OCT imaging method that achieves a multi-fold improvement in imaging speed. However, the assembly of the fiber-optic components used in the first prototype system was labor-intensive and susceptible to errors. Here, we demonstrate a high-speed SDM-OCT system using an integrated photonic chip that can be reliably manufactured with high precision and low per-unit cost. A three-layer cascade of 1 × 2 splitters was integrated in the photonic chip to split the incident light into 8 parallel imaging channels with ~3.7 mm optical delay in air between each channel. High-speed imaging (~1 s/volume) of porcine eyes ex vivo and wide-field imaging (~18.0 × 14.3 mm2) of human fingers in vivo were demonstrated with the chip-based SDM-OCT system. PMID:28856055
James, Joseph; Murukeshan, Vadakke Matham; Woh, Lye Sun
2014-07-01
The structural and molecular heterogeneities of biological tissues demand interrogation of the samples with multiple energy sources, and visualization capabilities at varying spatial resolution and depth scales, to obtain complementary diagnostic information. A novel multi-modal imaging approach that uses optical and acoustic energies to perform photoacoustic, ultrasound, and fluorescence imaging at multiple resolution scales, from the tissue surface and at depth, is proposed in this paper. The system comprises two distinct forms of hardware-level integration so as to form an integrated imaging system under a single instrumentation set-up. The experimental studies show that the system is capable of mapping high-resolution fluorescence signatures from the surface, and optical absorption and acoustic heterogeneities along the depth (>2 cm) of the tissue, at multi-scale resolution (<1 µm to <0.5 mm).
Forman, Bruce H.; Eccles, Randy; Piggins, Judith; Raila, Wayne; Estey, Greg; Barnett, G. Octo
1990-01-01
We have developed a visually oriented, computer-controlled learning environment designed for use by students of gross anatomy. The goals of this module are to reinforce the concepts of organ relationships and topography by using computed axial tomographic (CAT) images accessed from a videodisc integrated with color graphics and to introduce students to cross-sectional radiographic anatomy. We chose to build the program around CAT scan images because they not only provide excellent structural detail but also offer an anatomic orientation (transverse) that complements that used in the dissection laboratory (basically a layer-by-layer, anterior-to-posterior, or coronal approach). Our system, built using a Microsoft Windows-386 based authoring environment which we designed and implemented, integrates text, video images, and graphics into a single screen display. The program allows both user browsing of information, facilitated by hypertext links, and didactic sessions including mini-quizzes for self-assessment.
[A new concept for integration of image databanks into a comprehensive patient documentation].
Schöll, E; Holm, J; Eggli, S
2001-05-01
Image processing and archiving are of increasing importance in the practice of modern medicine. Particularly due to the introduction of computer-based investigation methods, physicians are dealing with a wide variety of analogue and digital picture archives. On the other hand, clinical information is stored in various text-based information systems without integration of image components. The link between such traditional medical databases and picture archives is a prerequisite for efficient data management as well as for continuous quality control and medical education. At the Department of Orthopedic Surgery, University of Berne, a software program was developed to create a complete multimedia electronic patient record. The client-server system contains all patients' data, questionnaire-based quality control, and a digital picture archive. Different interfaces guarantee the integration into the hospital's data network. This article describes our experiences in the development and introduction of a comprehensive image archiving system at a large orthopedic center.
Using digital watermarking to enhance security in wireless medical image transmission.
Giakoumaki, Aggeliki; Perakis, Konstantinos; Banitsas, Konstantinos; Giokas, Konstantinos; Tachakra, Sapal; Koutsouris, Dimitris
2010-04-01
During the last few years, wireless networks have been increasingly used both inside hospitals and in patients' homes to transmit medical information. In general, wireless networks suffer from decreased security. However, digital watermarking can be used to secure medical information. In this study, we focused on combining wireless transmission and digital watermarking technologies to better secure the transmission of medical images within and outside the hospital. We utilized an integrated system comprising the wireless network and the digital watermarking module to conduct a series of tests. The test results were evaluated by medical consultants. They concluded that the images suffered no visible quality degradation and maintained their diagnostic integrity. The proposed integrated system presented reasonable stability, and its performance was comparable to that of a fixed network. This system can enhance security during the transmission of medical images through a wireless channel.
Mamede, Joao I.; Hope, Thomas J.
2016-01-01
Summary Live cell imaging is a valuable technique that allows the characterization of the dynamic processes of the HIV-1 life-cycle. Here, we present a method of production and imaging of dual-labeled HIV viral particles that allows the visualization of two events. Varying release of the intravirion fluid phase marker reveals virion fusion and the loss of the integrity of HIV viral cores with the use of live wide-field fluorescent microscopy. PMID:26714704
ERIC Educational Resources Information Center
Cutler, Kay M.; Moeller, Mary R.
2017-01-01
"In many ways, images are the vehicle of comprehension, thought, and action. We integrate parts of images, we remember images, we manipulate images." This quote from James E. Zull clarifies the rationale for a discussion protocol called Visual Thinking Strategies (VTS), in which teachers focus students' attention on an image and ask…
Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph
2016-05-01
To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), an integrated circuit detector, and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120 kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated circuit detector, and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501), but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1 kV and 6.7±3.9 mSv with protocol 2, and 120±0 kV and 11.5±5.2 mSv with protocol 1 (p<0.0001, respectively). Aortic CTA performed with 3rd generation DSCT, ATVS, an integrated circuit detector, and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison to 120 kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
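For reference, contrast-to-noise ratio in studies of this kind is typically computed along the following lines; the ROI choices and the use of the background standard deviation as the noise estimate are generic assumptions, not necessarily the paper's exact protocol.

```python
import numpy as np

def contrast_to_noise_ratio(roi_vessel, roi_background):
    """CNR between an aortic lumen ROI and an adjacent background ROI (HU arrays)."""
    signal = roi_vessel.mean() - roi_background.mean()
    noise = roi_background.std(ddof=1)   # image noise estimated from the background ROI
    return signal / noise
```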
Multiscale and multi-modality visualization of angiogenesis in a human breast cancer model
Cebulla, Jana; Kim, Eugene; Rhie, Kevin; Zhang, Jiangyang
2017-01-01
Angiogenesis in breast cancer helps fulfill the metabolic demands of the progressing tumor and plays a critical role in tumor metastasis. Therefore, various imaging modalities have been used to characterize tumor angiogenesis. While micro-CT (μCT) is a powerful tool for analyzing the tumor microvascular architecture at micron-scale resolution, magnetic resonance imaging (MRI) with its sub-millimeter resolution is useful for obtaining in vivo vascular data (e.g. tumor blood volume and vessel size index). However, integration of these microscopic and macroscopic angiogenesis data across spatial resolutions remains challenging. Here we demonstrate the feasibility of ‘multiscale’ angiogenesis imaging in a human breast cancer model, wherein we bridge the resolution gap between ex vivo μCT and in vivo MRI using intermediate resolution ex vivo MR microscopy (μMRI). To achieve this integration, we developed suitable vessel segmentation techniques for the ex vivo imaging data and co-registered the vascular data from all three imaging modalities. We showcase two applications of this multiscale, multi-modality imaging approach: (1) creation of co-registered maps of vascular volume from three independent imaging modalities, and (2) visualization of differences in tumor vasculature between viable and necrotic tumor regions by integrating μCT vascular data with tumor cellularity data obtained using diffusion-weighted MRI. Collectively, these results demonstrate the utility of ‘mesoscopic’ resolution μMRI for integrating macroscopic in vivo MRI data and microscopic μCT data. Although focused on the breast tumor xenograft vasculature, our imaging platform could be extended to include additional data types for a detailed characterization of the tumor microenvironment and computational systems biology applications. PMID:24719185
Alleva, Jessica M; Diedrichs, Phillippa C; Halliwell, Emma; Martijn, Carolien; Stuijfzand, Bobby G; Treneman-Evans, Georgia; Rumsey, Nichola
2018-06-01
Focusing on body functionality is a promising technique for improving women's body image. This study replicates prior research in a large novel sample, tests longer-term follow-up effects, and investigates underlying mechanisms of these effects (body complexity and body-self integration). British women (N = 261) aged 18-30 who wanted to improve their body image were randomised to Expand Your Horizon (three online body functionality writing exercises) or an active control. Trait body image was assessed at Pretest, Posttest, 1-week, and 1-month Follow-Up. To explore whether changes in body complexity and body-self integration 'buffer' the impact of negative body-related experiences, participants also completed beauty-ideal media exposure. Relative to the control, intervention participants experienced improved appearance satisfaction, functionality satisfaction, body appreciation, and body complexity at Posttest, and at both Follow-Ups. Neither body complexity nor body-self integration mediated intervention effects. Media exposure decreased state body satisfaction among intervention and control participants, but neither body complexity nor body-self integration moderated these effects. The findings underscore the value of focusing on body functionality for improving body image and show that effects persist one month post-intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.
DICOM image integration into an electronic medical record using thin viewing clients
NASA Astrophysics Data System (ADS)
Stewart, Brent K.; Langer, Steven G.; Taira, Ricky K.
1998-07-01
Purpose -- To integrate radiological DICOM images into our currently existing web-browsable Electronic Medical Record (MINDscape). Over the last five years the University of Washington has created a clinical data repository combining, in a distributed relational database, information from multiple departmental databases (MIND). A text-based view of this data called the Mini Medical Record (MMR) has been available for three years. MINDscape, unlike the text-based MMR, provides a platform-independent, web browser view of the MIND dataset that can easily be linked to other information resources on the network. We have now added the integration of radiological images into MINDscape through a DICOM webserver. Methods/New Work -- We have integrated a commercial webserver that acts as a DICOM Storage Class Provider to our computed radiography (CR), computed tomography (CT), digital fluoroscopy (DF), magnetic resonance (MR) and ultrasound (US) scanning devices. These images can be accessed through CGI queries or by linking the image server database using ODBC or SQL gateways. This allows the use of dynamic HTML links to the images on the DICOM webserver from MINDscape, so that the radiology reports already resident in the MIND repository can be married with the associated images through the unique examination accession number generated by our Radiology Information System (RIS). The web browser plug-in used provides a wavelet decompression engine (up to 16 bits per pixel) and performs the following image manipulation functions: window/level, flip, invert, sort, rotate, zoom, cine-loop and save as JPEG. Results -- Radiological DICOM image sets (CR, CT, MR and US) are displayed with associated exam reports for referring physicians and clinicians anywhere within the widespread academic medical center on PCs, Macs, X-terminals and Unix computers. This system is also being used for home teleradiology applications. Conclusion -- Radiological DICOM images can be made available medical-center-wide to physicians quickly using low-cost and ubiquitous thin-client browsing technology and wavelet compression.
High Resolution Airborne Laser Scanning and Hyperspectral Imaging with a Small Uav Platform
NASA Astrophysics Data System (ADS)
Gallay, Michal; Eck, Christoph; Zgraggen, Carlo; Kaňuk, Ján; Dvorný, Eduard
2016-06-01
The capabilities of unmanned airborne systems (UAS) have become diverse with the recent development of lightweight remote sensing instruments. In this paper, we demonstrate our custom integration of the state-of-the-art technologies within an unmanned aerial platform capable of high-resolution and high-accuracy laser scanning, hyperspectral imaging, and photographic imaging. The technological solution comprises the latest development of a completely autonomous, unmanned helicopter by Aeroscout, the Scout B1-100 UAV helicopter. The helicopter is powered by a gasoline two-stroke engine and it allows for integrating 18 kg of a customized payload unit. The whole system is modular providing flexibility of payload options, which comprises the main advantage of the UAS. The UAS integrates two kinds of payloads which can be altered. Both payloads integrate a GPS/IMU with a dual GPS antenna configuration provided by OXTS for accurate navigation and position measurements during the data acquisition. The first payload comprises a VUX-1 laser scanner by RIEGL and a Sony A6000 E-Mount photo camera. The second payload for hyperspectral scanning integrates a push-broom imager AISA KESTREL 10 by SPECIM. The UAS was designed for research of various aspects of landscape dynamics (landslides, erosion, flooding, or phenology) in high spectral and spatial resolution.
Ehlers, Justis P.; Srivastava, Sunil K.; Feiler, Daniel; Noonan, Amanda I.; Rollins, Andrew M.; Tao, Yuankai K.
2014-01-01
Purpose To demonstrate key integrative advances in microscope-integrated intraoperative optical coherence tomography (iOCT) technology that will facilitate adoption and utilization during ophthalmic surgery. Methods We developed a second-generation prototype microscope-integrated iOCT system that interfaces directly with a standard ophthalmic surgical microscope. Novel features for improved design and functionality included improved profile and ergonomics, as well as a tunable lens system for optimized image quality and heads-up display (HUD) system for surgeon feedback. Novel material testing was performed for potential suitability for OCT-compatible instrumentation based on light scattering and transmission characteristics. Prototype surgical instruments were developed based on material testing and tested using the microscope-integrated iOCT system. Several surgical maneuvers were performed and imaged, and surgical motion visualization was evaluated with a unique scanning and image processing protocol. Results High-resolution images were successfully obtained with the microscope-integrated iOCT system with HUD feedback. Six semi-transparent materials were characterized to determine their attenuation coefficients and scatter density with an 830 nm OCT light source. Based on these optical properties, polycarbonate was selected as a material substrate for prototype instrument construction. A surgical pick, retinal forceps, and corneal needle were constructed with semi-transparent materials. Excellent visualization of both the underlying tissues and surgical instrument were achieved on OCT cross-section. Using model eyes, various surgical maneuvers were visualized, including membrane peeling, vessel manipulation, cannulation of the subretinal space, subretinal intraocular foreign body removal, and corneal penetration. Conclusions Significant iterative improvements in integrative technology related to iOCT and ophthalmic surgery are demonstrated. PMID:25141340
The Route to an Integrative Associative Memory Is Influenced by Emotion
Murray, Brendan D.; Kensinger, Elizabeth A.
2014-01-01
Though the hippocampus typically has been implicated in processes related to associative binding, special types of associations – such as those created by integrative mental imagery – may be supported by processes implemented in other medial temporal-lobe or sensory processing regions. Here, we investigated what neural mechanisms underlie the formation and subsequent retrieval of integrated mental images, and whether those mechanisms differ based on the emotionality of the integration (i.e., whether it contains an emotional item or not). Participants viewed pairs of words while undergoing a functional MRI scan. They were instructed to imagine the two items separately from one another (“non-integrative” study) or as a single, integrated mental image (“integrative” study). They provided ratings of how successful they were at generating vivid images that fit the instructions. They were then given a surprise associative recognition test, also while undergoing an fMRI scan. The cuneus showed parametric correspondence to increasing imagery success selectively during encoding and retrieval of emotional integrations, while the parahippocampal gyri and prefrontal cortices showed parametric correspondence during the encoding and retrieval of non-emotional integrations. Connectivity analysis revealed that selectively during negative integration, left amygdala activity was negatively correlated with frontal and hippocampal activity. These data indicate that individuals utilize two different neural routes for forming and retrieving integrations depending on their emotional content, and they suggest a potentially disruptive role for the amygdala on frontal and medial-temporal regions during negative integration. PMID:24427267
Integration of electro-anatomical and imaging data of the left ventricle: An evaluation framework.
Soto-Iglesias, David; Butakoff, Constantine; Andreu, David; Fernández-Armenta, Juan; Berruezo, Antonio; Camara, Oscar
2016-08-01
Integration of electrical and structural information for scar characterization in the left ventricle (LV) is a crucial step to better guide radio-frequency ablation therapies, which are usually performed in complex ventricular tachycardia (VT) cases. This integration requires finding a common representation where to map the electrical information from the electro-anatomical map (EAM) surfaces and tissue viability information from delay-enhancement magnetic resonance images (DE-MRI). However, the development of a consistent integration method is still an open problem due to the lack of a proper evaluation framework to assess its accuracy. In this paper we present both: (i) an evaluation framework to assess the accuracy of EAM and imaging integration strategies with simulated EAM data and a set of global and local measures; and (ii) a new integration methodology based on a planar disk representation where the LV surface meshes are quasi-conformally mapped (QCM) by flattening, allowing for simultaneous visualization and joint analysis of the multi-modal data. The developed evaluation framework was applied to estimate the accuracy of the QCM-based integration strategy on a benchmark dataset of 128 synthetically generated ground-truth cases presenting different scar configurations and EAM characteristics. The obtained results demonstrate a significant reduction in global overlap errors (50-100%) with respect to state-of-the-art integration techniques, also better preserving the local topology of small structures such as conduction channels in scars. Data from seventeen VT patients were also used to study the feasibility of the QCM technique in a clinical setting, consistently outperforming the alternative integration techniques in the presence of sparse and noisy clinical data. The proposed evaluation framework has allowed a rigorous comparison of different EAM and imaging data integration strategies, providing useful information to better guide clinical practice in complex cardiac interventions. Copyright © 2016 Elsevier B.V. All rights reserved.
Maesawa, Satoshi; Fujii, Masazumi; Nakahara, Norimoto; Watanabe, Tadashi; Saito, Kiyoshi; Kajita, Yasukazu; Nagatani, Tetsuya; Wakabayashi, Toshihiko; Yoshida, Jun
2009-08-01
Initial experiences are reviewed in an integrated operation theater equipped with an intraoperative high-field (1.5 T) magnetic resonance (MR) imager and neuro-navigation (BrainSUITE), to evaluate the indications and limitations. One hundred consecutive cases were treated, consisting of 38 gliomas, 49 other tumors, 11 cerebrovascular diseases, and 2 functional diseases. The feasibility and usefulness of the integrated theater were evaluated for individual diseases, focusing on whether intraoperative images (including diffusion tensor imaging) affected the surgical strategy. The extent of resection and outcomes in each histological category of brain tumors were examined. Intraoperative high-field MR imaging frequently affected or modified the surgical strategy in the glioma group (27/38 cases, 71.1%), but less in the other tumor group (13/49 cases, 26.5%). The surgical strategy was not modified in cerebrovascular or functional diseases, but the success of procedures and the absence of complications could be confirmed. In glioma surgery, subtotal or greater resection was achieved in 22 of the 31 patients (71%) excluding biopsies, and intraoperative images revealed tumor remnants resulting in the extension of resection in 21 of the 22 patients (95.4%), the highest rate of extension among all types of pathologies. The integrated neuro-navigation improved workflow. The best indication for intraoperative high-field MR imaging and integrated neuro-navigation is brain tumors, especially gliomas, and is supplementary in assuring quality in surgery for cerebrovascular or functional diseases. Immediate quality assurance is provided in several types of neurosurgical procedures.
Zheng, Qiang; Warner, Steven; Tasian, Gregory; Fan, Yong
2018-02-12
Automatic segmentation of kidneys in ultrasound (US) images remains a challenging task because of high speckle noise, low contrast, and large appearance variations of kidneys in US images. Because texture features may improve the US image segmentation performance, we propose a novel graph cuts method to segment kidney in US images by integrating image intensity information and texture feature maps. We develop a new graph cuts-based method to segment kidney US images by integrating original image intensity information and texture feature maps extracted using Gabor filters. To handle large appearance variation within kidney images and improve computational efficiency, we build a graph of image pixels close to kidney boundary instead of building a graph of the whole image. To make the kidney segmentation robust to weak boundaries, we adopt localized regional information to measure similarity between image pixels for computing edge weights to build the graph of image pixels. The localized graph is dynamically updated and the graph cuts-based segmentation iteratively progresses until convergence. Our method has been evaluated based on kidney US images of 85 subjects. The imaging data of 20 randomly selected subjects were used as training data to tune parameters of the image segmentation method, and the remaining data were used as testing data for validation. Experiment results demonstrated that the proposed method obtained promising segmentation results for bilateral kidneys (average Dice index = 0.9446, average mean distance = 2.2551, average specificity = 0.9971, average accuracy = 0.9919), better than other methods under comparison (P < .05, paired Wilcoxon rank sum tests). The proposed method achieved promising performance for segmenting kidneys in two-dimensional US images, better than segmentation methods built on any single channel of image information. This method will facilitate extraction of kidney characteristics that may predict important clinical outcomes such as progression of chronic kidney disease. Copyright © 2018 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
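A minimal sketch of the texture-channel construction described above, using a Gabor filter bank to augment image intensity, is shown below. The kernel size and filter parameters are placeholder values, and the graph-cuts segmentation itself is not shown.

```python
import cv2
import numpy as np

def gabor_feature_maps(image, n_orientations=4, ksize=21, sigma=4.0,
                       lambd=10.0, gamma=0.5):
    """Stack of Gabor magnitude responses used as texture channels alongside intensity."""
    image = image.astype(np.float32)
    maps = [image]                                   # channel 0: original intensity
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations
        kernel = cv2.getGaborKernel((ksize, ksize), sigma, theta, lambd, gamma)
        maps.append(np.abs(cv2.filter2D(image, cv2.CV_32F, kernel)))
    return np.stack(maps, axis=-1)
```

The stacked channels would feed the edge-weight computation of the localized graph described in the abstract.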
Wolfs, Esther; Holvoet, Bryan; Ordovas, Laura; Breuls, Natacha; Helsen, Nicky; Schönberger, Matthias; Raitano, Susanna; Struys, Tom; Vanbilloen, Bert; Casteels, Cindy; Sampaolesi, Maurilio; Van Laere, Koen; Lambrichts, Ivo; Verfaillie, Catherine M; Deroose, Christophe M
2017-10-01
Molecular imaging is indispensable for determining the fate and persistence of engrafted stem cells. Standard strategies for transgene induction involve the use of viral vectors prone to silencing and insertional mutagenesis or the use of nonhuman genes. Methods: We used zinc finger nucleases to induce stable expression of human imaging reporter genes into the safe-harbor locus adeno-associated virus integration site 1 in human embryonic stem cells. Plasmids were generated carrying reporter genes for fluorescence, bioluminescence imaging, and human PET reporter genes. Results: In vitro assays confirmed their functionality, and embryonic stem cells retained differentiation capacity. Teratoma formation assays were performed, and tumors were imaged over time with PET and bioluminescence imaging. Conclusion: This study demonstrates the application of genome editing for targeted integration of human imaging reporter genes in human embryonic stem cells for long-term molecular imaging. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.
Nanoparticles in Higher-Order Multimodal Imaging
NASA Astrophysics Data System (ADS)
Rieffel, James Ki
Imaging procedures are a cornerstone of our current medical infrastructure. In everything from screening to diagnostics to treatment, medical imaging is perhaps our greatest tool for evaluating individual health. Recently, there has been a tremendous increase in the development of multimodal systems that combine the strengths of complementary imaging technologies to overcome their individual weaknesses. Clinically, this has manifested in the virtually universal manufacture of combined PET-CT scanners. With this push toward more integrated imaging, new contrast agents with multimodal functionality are needed. Nanoparticle-based systems are ideal candidates based on their unique size, properties, and diversity. In chapter 1, an extensive background on recent multimodal imaging agents capable of enhancing signal or contrast in three or more modalities is presented. Chapter 2 discusses the development and characterization of a nanoparticulate probe with hexamodal imaging functionality. It is my hope that the information contained in this thesis will demonstrate the many benefits of nanoparticles in multimodal imaging, and provide insight into the potential of fully integrated imaging.
Infrared and visible image fusion method based on saliency detection in sparse domain
NASA Astrophysics Data System (ADS)
Liu, C. H.; Qi, Y.; Ding, W. R.
2017-06-01
Infrared and visible image fusion is a key problem in the field of multi-sensor image fusion. To better preserve the significant information of the infrared and visible images in the final fused image, the saliency maps of the source images are introduced into the fusion procedure. Firstly, under the framework of the joint sparse representation (JSR) model, the global and local saliency maps of the source images are obtained from the sparse coefficients. Then, a saliency detection model is proposed, which combines the global and local saliency maps to generate an integrated saliency map. Finally, a weighted fusion algorithm based on the integrated saliency map is developed to achieve the fusion process. The experimental results show that our method is superior to the state-of-the-art methods in terms of several universal quality evaluation indexes, as well as in visual quality.
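The final weighted-fusion step can be illustrated with a short numpy sketch; normalizing the two integrated saliency maps into pixel-wise weights is an assumed implementation detail, not the paper's exact rule.

```python
import numpy as np

def saliency_weighted_fusion(ir, vis, sal_ir, sal_vis):
    """Pixel-wise weighted fusion of infrared and visible images.

    sal_ir, sal_vis : integrated saliency maps of the two sources (any positive scale).
    """
    eps = 1e-6
    w_ir = sal_ir / (sal_ir + sal_vis + eps)   # normalize the two saliency maps
    return w_ir * ir + (1.0 - w_ir) * vis
```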
NASA Astrophysics Data System (ADS)
Tian, Chao; Zhang, Wei; Nguyen, Van Phuc; Huang, Ziyi; Wang, Xueding; Paulus, Yannis M.
2018-02-01
Currently available clinical retinal imaging techniques have limitations, including limited depth of penetration or the requirement for invasive injection of exogenous contrast agents. Here, we developed a novel multimodal imaging system for high-speed, high-resolution retinal imaging of larger animals, such as rabbits. The system integrates three state-of-the-art imaging modalities: photoacoustic microscopy (PAM), optical coherence tomography (OCT), and fluorescence microscopy (FM). In vivo experimental results on rabbit eyes show that the PAM is able to visualize laser-induced retinal burns and distinguish individual eye blood vessels using a laser exposure dose of 80 nJ, which is well below the American National Standards Institute (ANSI) safety limit of 160 nJ. The OCT can discern different retinal layers and visualize laser burns and choroidal detachments. The novel multi-modal imaging platform holds great promise in ophthalmic imaging.
NASA Technical Reports Server (NTRS)
Nichols, D. A.
1982-01-01
The problem of data integration in oceanography is discussed. Recommendations are made for technique development and evaluation, understanding requirements, and packaging techniques for speed, efficiency and ease of use. The primary satellite sensors of interest to oceanography are summarized. It is concluded that imaging type sensors make image processing an important tool for oceanographic studies.
Formulation of coarse integral imaging and its applications
NASA Astrophysics Data System (ADS)
Kakeya, Hideki
2008-02-01
This paper formulates the notion of coarse integral imaging and applies it to practical designs of 3D displays for robot teleoperation and automobile HUDs. 3D display technologies are in demand in applications where real-time and precise depth perception is required, such as teleoperation of robot manipulators and HUDs for automobiles, yet 3D displays for these applications have not been realized so far. In conventional 3D display technologies, the eyes are usually induced to focus on the screen, which is not suitable for the above purposes. To overcome this problem the author adopts the coarse integral imaging system, in which each component lens is large enough to cover dozens of times more pixels than the number of views. The merit of this system is that it can direct the viewer's focus onto planes at various depths by generating a real or virtual image off the screen. This system, however, has major disadvantages in image quality, caused by lens aberration and discontinuity at the joints of component lenses. In this paper the author proposes practical optical designs for 3D monitors for robot teleoperation and 3D HUDs for automobiles that overcome the problems of aberration and image discontinuity.
IHE profiles applied to regional PACS.
Fernandez-Bayó, Josep
2011-05-01
PACS has been widely adopted as an image storage solution that perfectly fits the radiology department workflow and that can be easily extended to other hospital departments. Integrations with other hospital systems, like the Radiology Information System, the Hospital Information System and the Electronic Patient Record are fully achieved but still challenging aims. PACS also creates the perfect environment for teleradiology and teleworking setups. One step further is the regional PACS concept where different hospitals or health care enterprises share the images in an integrated Electronic Patient Record. Among the different solutions available to share images between different hospitals IHE (Integrating the Healthcare Enterprise) organization presents the Cross Enterprise Document Sharing profile (XDS) which allows sharing images from different hospitals even if they have different PACS vendors. Adopting XDS has multiple advantages, images do not need to be duplicated in a central archive to be shared among the different healthcare enterprises, they only need to be indexed and published in a central document registry. In the XDS profile IHE defines the mechanisms to publish and index the images in the central document registry. It also defines the mechanisms that each hospital will use to retrieve those images regardless on the Hospital PACS they are stored. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y; Pelivanov, Ivan M; O'Donnell, Matthew
2015-02-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are usually used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme.
Enhancements in medicine by integrating content based image retrieval in computer-aided diagnosis
NASA Astrophysics Data System (ADS)
Aggarwal, Preeti; Sardana, H. K.
2010-02-01
Computer-aided diagnosis (CAD) has become one of the major research subjects in medical imaging and diagnostic radiology. With CAD, radiologists use the computer output as a "second opinion" and make the final decisions. Image retrieval is a useful tool to help radiologists check medical images and diagnoses. The impact of content-based access to medical images is frequently reported, but existing systems are designed for only a particular diagnostic context. The challenge in medical informatics is to develop tools for analyzing the content of medical images and to represent them in a way that can be efficiently searched and compared by physicians. CAD is a concept established by taking into account equally the roles of physicians and computers. To build a successful computer-aided diagnostic system, all the relevant technologies, especially retrieval, need to be integrated in such a manner that they provide effective and efficient pre-diagnosed cases with proven pathology for the current case at the right time. In this paper, it is suggested that integration of content-based image retrieval (CBIR) into CAD can bring enormous benefits in medicine, especially in diagnosis. This approach is also compared with other approaches by highlighting its advantages over them.
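A content-based retrieval step of the kind advocated here can be sketched as a simple nearest-neighbour search over precomputed image descriptors; the Euclidean metric and the descriptor format are assumptions made for illustration only.

```python
import numpy as np

def retrieve_similar_cases(query_features, database_features, k=5):
    """Return indices of the k most similar pre-diagnosed cases.

    query_features    : (d,) descriptor of the current image
    database_features : (n_cases, d) descriptors of archived cases with proven pathology
    """
    dists = np.linalg.norm(database_features - query_features, axis=1)
    return np.argsort(dists)[:k]
```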
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kirkham, R.; Siddons, D.; Dunn, P.A.
2010-06-23
The Maia detector system is engineered for energy-dispersive x-ray fluorescence spectroscopy and elemental imaging at photon rates exceeding 10^7/s, integrated scanning of samples with pixel transit times as small as 50 µs and high-definition images of 10^8 pixels, and real-time processing of detected events for spectral deconvolution and online display of pure elemental images. The system, developed by CSIRO and BNL, combines a planar silicon 384-element detector array, application-specific integrated circuits for pulse shaping, peak detection and sampling, and optical data transmission to an FPGA-based pipelined, parallel processor. This paper describes the system and the underpinning engineering solutions.
Scandurra, Daniel; Lawford, Catherine E
2014-07-08
This work develops a technique for kilovoltage cone-beam CT (CBCT) dosimetry that incorporates both point dose and integral dose in the form of dose length product, and uses readily available radiotherapy equipment. The dose from imaging protocols for a range of imaging parameters and treatment sites was evaluated. Conventional CT dosimetry using 100 mm long pencil chambers has been shown to be inadequate for the large fields in CBCT and has been replaced in this work by a combination of point dose and integral dose. Absolute dose measurements were made with a small volume ion chamber at the central slice of a radiotherapy phantom. Beam profiles were measured using a linear diode array large enough to capture the entire imaging field. These profiles were normalized to absolute dose to form dose line integrals, which were then weighted with radial depth to form the DLP_CBCT. This metric is analogous to the standard dose length product (DLP), but derived differently to suit the unique properties of CBCT. Imaging protocols for head and neck, chest, and prostate sites delivered absolute doses of 0.9, 2.2, and 2.9 cGy to the center of the phantom, and DLP_CBCT of 28.2, 665.1, and 565.3 mGy.cm, respectively. Results are displayed as dose per 100 mAs and as a function of key imaging parameters such as kVp, mAs, and collimator selection in a summary table. DLP_CBCT was found to correlate closely with the dimension of the imaging region and provided a good indication of integral dose. It is important to assess integral dose when determining radiation doses to patients using CBCT. By incorporating measured beam profiles and DLP, this technique provides CBCT dosimetry in radiotherapy phantoms and allows the prediction of imaging dose for new CBCT protocols.
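A rough sketch of how a DLP_CBCT-style metric could be assembled from a measured diode profile and an absolute point dose is given below. The radial-depth weighting is reduced to a user-supplied weight array because the paper describes it only qualitatively, so this is not the authors' exact formulation.

```python
import numpy as np

def dlp_cbct(profile, z_positions, chamber_dose_cgy, radial_weight=None):
    """Approximate DLP_CBCT (mGy.cm) from a longitudinal beam profile.

    profile          : relative diode readings along the scan axis
    z_positions      : diode positions in cm
    chamber_dose_cgy : absolute ion-chamber dose (cGy) at the profile centre
    radial_weight    : optional per-sample weights standing in for the radial-depth weighting
    """
    # Assumes the ion-chamber measurement point coincides with the profile centre.
    centre = profile[np.argmin(np.abs(z_positions - z_positions.mean()))]
    dose_mgy = profile / centre * chamber_dose_cgy * 10.0      # cGy -> mGy, normalized
    if radial_weight is not None:
        dose_mgy = dose_mgy * radial_weight
    return np.trapz(dose_mgy, z_positions)                     # mGy.cm
```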
Integrated Optics for Planar imaging and Optical Signal Processing
NASA Astrophysics Data System (ADS)
Song, Qi
Silicon photonics is a subject of growing interest with the potential of delivering planar electro-optical devices with chip-scale integration. Silicon-on-insulator (SOI) technology has provided a remarkable platform for the photonics industry because of its integration capability with CMOS circuits and its many nonlinear applications in optical signal processing. This thesis investigates planar imaging techniques on the SOI platform and their potential applications in ultra-fast optical signal processing. The first part provides a general review and background introduction to integrated photonic circuits and planar imaging techniques. In chapter 2, a planar imaging platform is realized with a silicon photodiode on an SOI chip. A silicon photodiode on a waveguide provides a high numerical aperture for an imaging transceiver pixel. An erbium-doped Y2O3 particle is excited by a 1550 nm laser, and the fluorescence image is obtained with the assistance of a scanning system and reconstructed using an image deconvolution technique. Under photovoltaic mode, an on-chip photodiode and an external PIN photodiode achieve a similar resolution of about 5 μm. In chapter 3, a time-stretching technique is extended to the spatial domain to realize a 2D imaging system as an ultrafast imaging tool. The system is evaluated through theoretical calculation, and experimental results verify its capability to image a micron-sized particle or a fingerprint; dynamic information about a moving object is also obtained with a correlation algorithm. In chapter 4, an optical leaky-wave antenna based on an SOI waveguide is utilized for imaging applications, with extensive numerical studies supported by leaky-wave theory. Highly directive broadside radiation is obtained, with 15.7 dB directivity and a 3 dB beam width of approximately 1.65° in a free-space environment when β₋₁ = 2.409 × 10^5 /m and α = 4.576 × 10^3 /m. Finally, the principle of electronic beam steering is studied and a comprehensive model is built to describe carrier behavior in a PIN junction acting as an individual silicon perturbation. Results show that a carrier density of 10^19/cm^3 is obtainable with the electron-injection mechanism. Although radiation modulation based on carrier injection of 10^19/cm^3 gives only 0.5 dB variation, a resonant structure, such as a Fabry-Perot cavity, can be integrated with LOWAs to enhance the modulation effect.
Erdenebat, Munkh-Uchral; Kim, Byeong-Jun; Piao, Yan-Ling; Park, Seo-Yeon; Kwon, Ki-Chul; Piao, Mei-Lan; Yoo, Kwan-Hee; Kim, Nam
2017-10-01
A mobile three-dimensional image acquisition and reconstruction system using a computer-generated integral imaging technique is proposed. A depth camera connected to the mobile device acquires the color and depth data of a real object simultaneously, and an elemental image array is generated based on the original three-dimensional information for the object, with lens array specifications input into the mobile device. The three-dimensional visualization of the real object is reconstructed on the mobile display through optical or digital reconstruction methods. The proposed system is implemented successfully and the experimental results certify that the system is an effective and interesting method of displaying real three-dimensional content on a mobile device.
Bouslimi, D; Coatrieux, G; Roux, Ch
2011-01-01
In this paper, we propose a new joint watermarking/encryption algorithm for the purpose of verifying the reliability of medical images in both the encrypted and the spatial domains. It combines a substitutive watermarking algorithm, quantization index modulation (QIM), with a block cipher algorithm, the Advanced Encryption Standard (AES), in CBC mode of operation. The proposed solution gives access to the outcomes of the image integrity check and of its origins even though the image is stored encrypted. Experimental results achieved on 8-bit encoded ultrasound images illustrate the overall performance of the proposed scheme. By making use of the AES block cipher in CBC mode, the proposed solution is compliant with or transparent to the DICOM standard.
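The QIM embedding component can be sketched as follows; the quantization step and the pixel-wise bit assignment are illustrative choices, and the AES-CBC encryption of the watermarked image is a separate step not shown here.

```python
import numpy as np

def qim_embed(pixels, bits, delta=8):
    """Embed one bit per pixel with two interleaved quantizers of step 2*delta."""
    x = pixels.astype(np.float64)
    step = 2 * delta
    xw = step * np.round((x - bits * delta) / step) + bits * delta
    return np.clip(xw, 0, 255).astype(np.uint8)

def qim_extract(pixels, delta=8):
    """Recover embedded bits by choosing the nearer of the two quantizer lattices."""
    x = pixels.astype(np.float64)
    step = 2 * delta
    d0 = np.abs(x - step * np.round(x / step))                      # distance to "0" lattice
    d1 = np.abs(x - (step * np.round((x - delta) / step) + delta))  # distance to "1" lattice
    return (d1 < d0).astype(np.uint8)
```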
The Research on Lunar Calibration of the GF-4 Satellite
NASA Astrophysics Data System (ADS)
Qi, W.; Tan, W.
2018-04-01
Starting from the lunar observation requirements of the GF-4 satellite, the main indices, such as the resolution, the imaging field, the reflected radiance, and the imaging integration time, are analyzed in combination with the imaging features and parameters of this camera. The analysis results show that the lunar observation of the GF-4 satellite has high resolution and a wide field that can image the whole Moon, that the pupil radiance reflected by the Moon is within the dynamic range of the camera, and that good lunar image quality can be ensured by setting a reasonable integration time. At the same time, the radiation transmission model of the lunar radiometric calibration is traced and the radiometric accuracy is evaluated.
NASA Astrophysics Data System (ADS)
Wang, Yao-yao; Zhang, Juan; Zhao, Xue-wei; Song, Li-pei; Zhang, Bo; Zhao, Xing
2018-03-01
In order to improve depth extraction accuracy, a method using the moving array lenslet technique (MALT) in the pickup stage is proposed, which can decrease the depth interval caused by pixelation. In this method, the lenslet array is moved along the horizontal and vertical directions simultaneously N times within a pitch to get N sets of elemental images. A computational integral imaging reconstruction method for MALT is used to obtain the slice images of the 3D scene, and the sum modulus (SMD) blur metric is applied to these slice images to obtain the depth information of the 3D scene. Simulation and optical experiments are carried out to verify the feasibility of this method.
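A minimal sketch of depth extraction by a focus (blur) metric over reconstructed slices is given below; the exact SMD definition used by the authors may differ, and choosing the sharpest slice is a simplified reading of their depth-extraction step.

```python
import numpy as np

def smd_focus(slice_image):
    """Sum-modulus-difference style focus measure of one reconstructed slice."""
    img = slice_image.astype(np.float64)
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def estimate_depth(slice_stack, depths):
    """Pick the reconstruction depth whose slice maximizes the focus measure."""
    scores = [smd_focus(s) for s in slice_stack]
    return depths[int(np.argmax(scores))]
```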
Enabling outsourcing XDS for imaging on the public cloud.
Ribeiro, Luís S; Rodrigues, Renato P; Costa, Carlos; Oliveira, José Luís
2013-01-01
Picture Archiving and Communication System (PACS) has been the main paradigm in supporting medical imaging workflows during the last decades. Despite its consolidation, the appearance of Cross-Enterprise Document Sharing for imaging (XDS-I), within IHE initiative, constitutes a great opportunity to readapt PACS workflow for inter-institutional data exchange. XDS-I provides a centralized discovery of medical imaging and associated reports. However, the centralized XDS-I actors (document registry and repository) must be deployed in a trustworthy node in order to safeguard patient privacy, data confidentiality and integrity. This paper presents XDS for Protected Imaging (XDS-p), a new approach to XDS-I that is capable of being outsourced (e.g. Cloud Computing) while maintaining privacy, confidentiality, integrity and legal concerns about patients' medical information.
Multiple-viewing-zone integral imaging using a dynamic barrier array for three-dimensional displays.
Choi, Heejin; Min, Sung-Wook; Jung, Sungyong; Park, Jae-Hyeung; Lee, Byoungho
2003-04-21
In spite of many advantages of integral imaging, the viewing zone in which an observer can see three-dimensional images is limited within a narrow range. Here, we propose a novel method to increase the number of viewing zones by using a dynamic barrier array. We prove our idea by fabricating and locating the dynamic barrier array between a lens array and a display panel. By tilting the barrier array, it is possible to distribute images for each viewing zone. Thus, the number of viewing zones can be increased with an increment of the states of the barrier array tilt.
Thermal luminescence spectroscopy chemical imaging sensor.
Carrieri, Arthur H; Buican, Tudor N; Roese, Erik S; Sutter, James; Samuels, Alan C
2012-10-01
The authors present a pseudo-active chemical imaging sensor model embodying irradiative transient heating, temperature nonequilibrium thermal luminescence spectroscopy, differential hyperspectral imaging, and artificial neural network technologies integrated together. We elaborate on various optimizations, simulations, and animations of the integrated sensor design and apply it to the terrestrial chemical contamination problem, where the interstitial contaminant compounds of detection interest (analytes) comprise liquid chemical warfare agents, their various derivative condensed phase compounds, and other material of a life-threatening nature. The sensor must measure and process a dynamic pattern of absorptive-emissive middle infrared molecular signature spectra of subject analytes to perform its chemical imaging and standoff detection functions successfully.
Multispectral high-resolution hologram generation using orthographic projection images
NASA Astrophysics Data System (ADS)
Muniraj, I.; Guo, C.; Sheridan, J. T.
2016-08-01
We present a new method of synthesizing a digital hologram of three-dimensional (3D) real-world objects from multiple orthographic projection images (OPIs). High-resolution multiple perspectives of the 3D objects (i.e., a two-dimensional elemental image array) are captured under incoherent white light using the synthetic aperture integral imaging (SAII) technique, and their OPIs are obtained. The reference beam is then multiplied with the corresponding OPI and integrated to form a Fourier hologram. Finally, a modified phase retrieval algorithm (GS/HIO) is applied to reconstruct the hologram. The principle is validated experimentally and the results support the feasibility of the proposed method.
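For orientation, a plain Gerchberg-Saxton iteration between two amplitude constraints linked by an FFT pair can be sketched as below; the paper's modified GS/HIO variant operating on the Fourier hologram is not reproduced here.

```python
import numpy as np

def gerchberg_saxton(target_amplitude, source_amplitude, n_iter=100):
    """Recover a phase distribution linking two measured amplitudes via an FFT pair."""
    phase = np.exp(1j * 2 * np.pi * np.random.rand(*source_amplitude.shape))
    field = source_amplitude * phase
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amplitude * np.exp(1j * np.angle(far))      # impose far-field amplitude
        field = np.fft.ifft2(far)
        field = source_amplitude * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)
```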
Welge, Weston A.; Barton, Jennifer K.
2015-01-01
Optical coherence tomography (OCT) is a useful imaging modality for detecting and monitoring diseases of the gastrointestinal tract and other tubular structures. The non-destructiveness of OCT enables time-serial studies in animal models. While turnkey commercial research OCT systems are plentiful, researchers often require custom imaging probes. We describe the integration of a custom endoscope with a commercial swept-source OCT system and generalize this description to any imaging probe and OCT system. A numerical dispersion compensation method is also described. Example images demonstrate that OCT can visualize the mouse colon crypt structure and detect adenoma in vivo. PMID:26418811
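Numerical dispersion compensation of the kind mentioned above is commonly implemented as a polynomial phase correction applied to the spectral fringes before the Fourier transform; the sketch below follows that generic approach and is not necessarily the authors' method.

```python
import numpy as np

def dispersion_compensate(spectral_fringes, a2, a3):
    """Apply 2nd/3rd-order phase correction to spectral fringes, then FFT to A-scans.

    spectral_fringes : (n_alines, n_k) interferogram resampled to uniform wavenumber
    a2, a3           : dispersion coefficients, typically found by maximizing image sharpness
    """
    n_k = spectral_fringes.shape[-1]
    k = np.linspace(-1.0, 1.0, n_k)                    # normalized wavenumber axis
    phase = np.exp(-1j * (a2 * k**2 + a3 * k**3))
    ascans = np.fft.fft(spectral_fringes * phase, axis=-1)
    return np.abs(ascans)
```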
Active pixel sensor array with multiresolution readout
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Kemeny, Sabrina E. (Inventor); Pain, Bedabrata (Inventor)
1999-01-01
An imaging device formed as a monolithic complementary metal oxide semiconductor integrated circuit in an industry standard complementary metal oxide semiconductor process, the integrated circuit including a focal plane array of pixel cells, each one of the cells including a photogate overlying the substrate for accumulating photo-generated charge in an underlying portion of the substrate and a charge coupled device section formed on the substrate adjacent the photogate having a sensing node and at least one charge coupled device stage for transferring charge from the underlying portion of the substrate to the sensing node. There is also a readout circuit, part of which can be disposed at the bottom of each column of cells and be common to all the cells in the column. The imaging device can also include an electronic shutter formed on the substrate adjacent the photogate, and/or a storage section to allow for simultaneous integration. In addition, the imaging device can include a multiresolution imaging circuit to provide images of varying resolution. The multiresolution circuit could also be employed in an array where the photosensitive portion of each pixel cell is a photodiode. This latter embodiment could further be modified to facilitate low light imaging.
Echo Decorrelation Imaging of Rabbit Liver and VX2 Tumor during In Vivo Ultrasound Ablation.
Fosnight, Tyler R; Hooi, Fong Ming; Keil, Ryan D; Ross, Alexander P; Subramanian, Swetha; Akinyi, Teckla G; Killin, Jakob K; Barthe, Peter G; Rudich, Steven M; Ahmad, Syed A; Rao, Marepalli B; Mast, T Douglas
2017-01-01
In open surgical procedures, image-ablate ultrasound arrays performed thermal ablation and imaging on rabbit liver lobes with implanted VX2 tumor. Treatments included unfocused (bulk ultrasound ablation, N = 10) and focused (high-intensity focused ultrasound ablation, N = 13) exposure conditions. Echo decorrelation and integrated backscatter images were formed from pulse-echo data recorded during rest periods after each therapy pulse. Echo decorrelation images were corrected for artifacts using decorrelation measured prior to ablation. Ablation prediction performance was assessed using receiver operating characteristic curves. Results revealed significantly increased echo decorrelation and integrated backscatter in both ablated liver and ablated tumor relative to unablated tissue, with larger differences observed in liver than in tumor. For receiver operating characteristic curves computed from all ablation exposures, both echo decorrelation and integrated backscatter predicted liver and tumor ablation with statistically significant success, and echo decorrelation was significantly better as a predictor of liver ablation. These results indicate echo decorrelation imaging is a successful predictor of local thermal ablation in both normal liver and tumor tissue, with potential for real-time therapy monitoring. Copyright © 2016 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
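The echo-decorrelation map described above is, in essence, one minus a locally averaged normalized correlation between consecutive pulse-echo frames. The sketch below (NumPy/SciPy) shows that general form only; the windowing, ensemble averaging, and artifact correction used in the study are more elaborate, and the window size and regularization constant here are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def echo_decorrelation(frame1, frame2, window=9):
    """Per-pixel echo decorrelation between two complex pulse-echo frames.
    frame1, frame2 : complex beamformed echo data from consecutive pulses
    window         : side length (pixels) of the local averaging window
    Values are near 0 where echoes are unchanged and approach 1 where the
    local echo pattern has decorrelated (e.g., due to ablation)."""
    def lavg(a):  # local spatial average, real and imaginary parts separately
        return uniform_filter(a.real, window) + 1j * uniform_filter(a.imag, window)
    cross = lavg(frame1 * np.conj(frame2))
    p1 = uniform_filter(np.abs(frame1) ** 2, window)
    p2 = uniform_filter(np.abs(frame2) ** 2, window)
    corr = np.abs(cross) / np.sqrt(p1 * p2 + 1e-12)
    return 1.0 - corr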
Providing integrity and authenticity in DICOM images: a novel approach.
Kobayashi, Luiz Octavio Massato; Furuie, Sergio Shiguemi; Barreto, Paulo Sergio Licciardi Messeder
2009-07-01
The increasing adoption of information systems in healthcare has led to a scenario where patient information security is more and more being regarded as a critical issue. Allowing patient information to be put in jeopardy may lead to irreparable damage, physically, morally, and socially, to the patient, potentially shaking the credibility of the healthcare institution. Medical images play a crucial role in such a context, given their importance in diagnosis, treatment, and research. Therefore, it is vital to take measures to prevent tampering and to determine their provenance. This demands the adoption of security mechanisms to assure information integrity and authenticity. A number of works have been done in this field, based on two major approaches: use of metadata and use of watermarking. However, there are still limitations in both approaches that must be properly addressed. This paper presents a new method using cryptographic means to improve the trustworthiness of medical images, providing a stronger link between the image and the information on its integrity and authenticity, without compromising image quality for the end user. The use of Digital Imaging and Communications in Medicine structures is also an advantage for ease of development and deployment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobias, B., E-mail: bjtobias@pppl.gov; Domier, C. W.; Luhmann, N. C.
2016-11-15
The critical component enabling electron cyclotron emission imaging (ECEI) and microwave imaging reflectometry (MIR) to resolve 2D and 3D electron temperature and density perturbations is the heterodyne imaging array that collects and downconverts radiated emission and/or reflected signals (50–150 GHz) to an intermediate frequency (IF) band (e.g. 0.1–18 GHz) that can be transmitted by a shielded coaxial cable for further filtering and detection. New circuitry has been developed for this task, integrating gallium arsenide (GaAs) monolithic microwave integrated circuits (MMICs) mounted on a liquid crystal polymer (LCP) substrate. The improved topology significantly increases electromagnetic shielding from out-of-band interference, leads to a 10× improvement in the signal-to-noise ratio, and yields dramatic cost savings through integration. The current design, optimized for reflectometry and edge radiometry on mid-sized tokamaks, has demonstrated >20 dB conversion gain in the upper V-band (60–75 GHz). Implementation of the circuit in a multi-channel ECEI array will improve the diagnosis of edge-localized modes and fluctuations of the high-confinement, or H-mode, pedestal.
Coleman, R Edward; Delbeke, Dominique; Guiberteau, Milton J; Conti, Peter S; Royal, Henry D; Weinreb, Jeffrey C; Siegel, Barry A; Federle, Michael P; Townsend, David W; Berland, Lincoln L
2005-07-01
Rapid advances in imaging technology are a challenge for health care professionals, who must determine how best to use these technologies to optimize patient care and outcomes. Hybrid imaging instrumentation, combining 2 or more new or existing technologies, each with its own separate history of clinical evolution, such as PET and CT, may be especially challenging. CT and PET provide complementary anatomic information and molecular information, respectively, with PET giving specificity to anatomic findings and CT offering precise localization of metabolic activity. Historically, the acquisition and interpretation of the 2 image sets have been performed separately and very often at different times and locales. Recently, integrated PET/CT systems have become available; these systems provide PET and CT images that are acquired nearly simultaneously and are capable of producing superimposed, coregistered images, greatly facilitating interpretation. As the implementation of this integrated technology has become more widespread in the setting of oncologic imaging, questions and concerns regarding equipment specifications, image acquisition protocols, supervision, interpretation, professional qualifications, and safety have arisen. This article summarizes the discussions and observations surrounding these issues by a collaborative working group consisting of representatives from the American College of Radiology, the Society of Nuclear Medicine, and the Society of Computed Body Tomography and Magnetic Resonance.
Comprehensive approach to image-guided surgery
NASA Astrophysics Data System (ADS)
Peters, Terence M.; Comeau, Roch M.; Kasrai, Reza; St. Jean, Philippe; Clonda, Diego; Sinasac, M.; Audette, Michel A.; Fenster, Aaron
1998-06-01
Image-guided surgery has evolved over the past 15 years from stereotactic planning, where the surgeon planned approaches to intracranial targets on the basis of 2D images presented on a simple workstation, to the use of sophisticated multi-modality 3D image integration in the operating room, with guidance being provided by mechanically, optically or electro-magnetically tracked probes or microscopes. In addition, sophisticated procedures such as thalamotomies and pallidotomies to relieve the symptoms of Parkinson's disease are performed with the aid of volumetric atlases integrated with the 3D image data. Operations that are performed stereotactically, that is to say via a small burr-hole in the skull, can assume that the information contained in the pre-operative imaging study accurately represents the brain morphology during the surgical procedure. On the other hand, performing a procedure via an open craniotomy presents a problem. Not only does tissue shift when the operation begins, even the act of opening the skull can cause significant shift of the brain tissue due to the relief of intra-cranial pressure or the effect of drugs. Means of tracking and correcting such shifts form an important part of the work in the field of image-guided surgery today. One approach has been through the development of intra-operative MRI systems. We describe an alternative approach which integrates intra-operative ultrasound with pre-operative MRI to track such changes in tissue morphology.
OC ToGo: bed site image integration into OpenClinica with mobile devices
NASA Astrophysics Data System (ADS)
Haak, Daniel; Gehlen, Johan; Jonas, Stephan; Deserno, Thomas M.
2014-03-01
Imaging and image-based measurements nowadays play an essential role in controlled clinical trials, but electronic data capture (EDC) systems insufficiently support integration of images captured by mobile devices (e.g. smartphones and tablets). The web application OpenClinica has established itself as one of the world's leading EDC systems and is used to collect, manage and store data of clinical trials in electronic case report forms (eCRFs). In this paper, we present a mobile application for instantaneous integration of images into OpenClinica directly during examination at the patient's bedside. The communication between the Android application and OpenClinica is based on simple object access protocol (SOAP) and representational state transfer (REST) web services for metadata, and on the secure file transfer protocol (SFTP) for image transfer. OpenClinica's web services are used to query context information (e.g. existing studies, events and subjects) and to import data into the eCRF, as well as to export eCRF metadata and structural information. A stable image transfer is ensured and progress information (e.g. remaining time) is visualized to the user. The workflow is demonstrated for a European multi-center registry in which patients with calciphylaxis disease are included. Our approach improves the EDC workflow, saves time, and reduces costs. Furthermore, data privacy is enhanced, since storage of private health data on the imaging devices becomes obsolete.
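A minimal sketch of the transfer path described above, assuming a REST-style endpoint for context metadata and an SFTP account for image upload; the URL path, port, credentials, and file paths are placeholders, and the SOAP calls, authentication details, and error handling of the actual application are omitted.

import requests
import paramiko

def fetch_study_subjects(base_url, auth):
    """Query the EDC server for existing subjects (endpoint path is illustrative)."""
    resp = requests.get(f"{base_url}/rest/studysubjects", auth=auth, timeout=10)
    resp.raise_for_status()
    return resp.json()

def upload_image_sftp(host, user, password, local_path, remote_path):
    """Push the captured image to the server over SFTP."""
    transport = paramiko.Transport((host, 22))
    transport.connect(username=user, password=password)
    try:
        sftp = paramiko.SFTPClient.from_transport(transport)
        sftp.put(local_path, remote_path)   # blocking transfer; a progress callback could be added
        sftp.close()
    finally:
        transport.close()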
An Integrated Imaging Detector of Polarization and Spectral Content
NASA Technical Reports Server (NTRS)
Rust, D. M.; Thompson, K. E.
1993-01-01
A new type of image detector has been designed to simultaneously analyze the polarization of light at all picture elements in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a charge-coupled device (CCD), with signal-analysis circuitry and analog-to-digital converters, all integrated on a silicon chip. It should be capable of 1:10(exp 4) polarization discrimination. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Innovations in the IDID include (1) two interleaved 512 x 1024-pixel imaging arrays (one for each polarization plane); (2) large dynamic range (well depth of 10(exp 6) electrons per pixel); (3) simultaneous readout of both images at 10 million pixels per second each; (4) on-chip analog signal processing to produce polarization maps in real time; and (5) on-chip 10-bit A/D conversion. When used with a lithium-niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can collect and analyze simultaneous images at two wavelengths. Precise photometric analysis of molecular or atomic concentrations in the atmosphere is one suggested application. When used in a solar telescope, the IDID will measure the polarization, which can then be converted to maps of the vector magnetic fields on the solar surface.
Deserno, Thomas M; Haak, Daniel; Brandenburg, Vincent; Deserno, Verena; Classen, Christoph; Specht, Paula
2014-12-01
Especially for investigator-initiated research at universities and academic institutions, Internet-based rare disease registries (RDR) are required that integrate electronic data capture (EDC) with automatic image analysis or manual image annotation. We propose a modular framework merging alpha-numerical and binary data capture. In concordance with the Office of Rare Diseases Research recommendations, a requirement analysis was performed based on several RDR databases currently hosted at Uniklinik RWTH Aachen, Germany. With respect to the study management tool that is already successfully operating at the Clinical Trial Center Aachen, the Google Web Toolkit was chosen, with Hibernate and Gilead connecting a MySQL database management system. Image and signal data integration and processing are supported by the Apache Commons FileUpload library and ImageJ-based Java code, respectively. As a proof of concept, the framework is instantiated for the German Calciphylaxis Registry. The framework is composed of five mandatory core modules: (1) Data Core, (2) EDC, (3) Access Control, (4) Audit Trail, and (5) Terminology, as well as six optional modules: (6) Binary Large Object (BLOB), (7) BLOB Analysis, (8) Standard Operation Procedure, (9) Communication, (10) Pseudonymization, and (11) Biorepository. Modules 1-7 are implemented in the German Calciphylaxis Registry. The proposed RDR framework is easily instantiated and directly integrates image management and analysis. As open source software, it may assist improved data collection and analysis of rare diseases in the near future.
Enhancing security of fingerprints through contextual biometric watermarking.
Noore, Afzel; Singh, Richa; Vatsa, Mayank; Houck, Max M
2007-07-04
This paper presents a novel digital watermarking technique using face and demographic text data as multiple watermarks for verifying the chain of custody and protecting the integrity of a fingerprint image. The watermarks are embedded in selected texture regions of a fingerprint image using the discrete wavelet transform. Experimental results show that modifications in these locations are visually imperceptible and maintain the minutiae details. The integrity of the fingerprint image is verified through the high matching scores obtained from an automatic fingerprint identification system. There is also a high degree of visual correlation between the embedded images and the images extracted from the watermarked fingerprint. The degree of similarity is computed using pixel-based metrics and human visual system metrics. The results also show that the proposed watermarked fingerprint and the extracted images are resilient to common attacks such as compression, filtering, and noise.
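For orientation, the sketch below embeds and extracts a small binary mark in one detail subband of a single-level 2-D DWT using PyWavelets. It is a simplified, non-blind additive scheme with an illustrative strength parameter alpha, not the selective texture-region embedding of face and demographic watermarks described in the paper.

import numpy as np
import pywt

def embed_watermark(image, mark, alpha=4.0):
    """Embed a small binary watermark into the diagonal detail subband
    of a one-level 2-D Haar DWT. image: 2-D float array, mark: 2-D {0,1}."""
    cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
    h, w = mark.shape
    cD[:h, :w] += alpha * (2.0 * mark - 1.0)     # additive spread into cD
    return pywt.idwt2((cA, (cH, cV, cD)), 'haar')

def extract_watermark(watermarked, original, shape, alpha=4.0):
    """Non-blind extraction: compare detail subbands of the two images."""
    _, (_, _, cD_w) = pywt.dwt2(watermarked, 'haar')
    _, (_, _, cD_o) = pywt.dwt2(original, 'haar')
    diff = (cD_w - cD_o)[:shape[0], :shape[1]]
    return (diff > 0).astype(np.uint8)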
Medical image security in a HIPAA mandated PACS environment.
Cao, F; Huang, H K; Zhou, X Q
2003-01-01
Medical image security is an important issue when digital images and their pertinent patient information are transmitted across public networks. Mandates for ensuring health data security have been issued by the federal government, such as the Health Insurance Portability and Accountability Act (HIPAA), under which healthcare institutions are obliged to take appropriate measures to ensure that patient information is only provided to people who have a professional need. Guidelines, such as the digital imaging and communication in medicine (DICOM) standards that deal with security issues, continue to be published by organizing bodies in healthcare. However, there are many differences in implementation, especially for an integrated system like a picture archiving and communication system (PACS), and the infrastructure to deploy these security standards is often lacking. Over the past 6 years, members of the Image Processing and Informatics Laboratory, Childrens Hospital, Los Angeles/University of Southern California, have actively researched image security issues related to PACS and teleradiology. The paper summarizes our previous work and presents an approach to further research on the digital envelope (DE) concept, which provides image integrity and security assurance in addition to conventional network security protection. The DE, including the digital signature (DS) of the image as well as encrypted patient information from the DICOM image header, can be embedded in the background area of the image as an invisible permanent watermark. The paper outlines the systematic development, evaluation and deployment of the DE method in a PACS environment. We have also proposed a dedicated PACS security server that will act as an image authority to check and certify the image origin and integrity upon request by a user, and also act as a secure DICOM gateway to outside connections and as a PACS operation monitor for HIPAA-supporting information. Copyright 2002 Elsevier Science Ltd.
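The signing half of a digital-envelope scheme can be sketched as follows: read the DICOM object with pydicom and produce an RSA-PSS signature over the pixel data using the cryptography package. Embedding the signature and encrypted header fields as an invisible watermark, and the PACS-side verification service, are not shown; key generation here is purely illustrative.

import pydicom
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

def sign_dicom_pixels(dicom_path, private_key):
    """Return an RSA-PSS signature over the image pixel data."""
    ds = pydicom.dcmread(dicom_path)
    return private_key.sign(
        bytes(ds.PixelData),
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

# Usage (key generated locally for illustration only):
# key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
# signature = sign_dicom_pixels("image.dcm", key)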
A D-band Active Imager in a SiGe HBT Technology
NASA Astrophysics Data System (ADS)
Yoon, Daekeun; Song, Kiryong; Kim, Jungsoo; Kaynak, Mehmet; Tillack, Bernd; Rieh, Jae-Sung
2015-04-01
In this paper, an amplifier and a detector operating near 140 GHz have been developed and integrated together with an on-chip antenna for an integrated active imager based on a 0.13-μm SiGe HBT technology. The 5-stage differential common-emitter (CE) amplifier shows a peak gain of 14 dB and noise figure (NF) down to 10 dB around 140 GHz with a DC power dissipation of 18 mW. The common-base (CB) differential detector exhibits a peak responsivity of 52.5 kV/W and a noise equivalent power (NEP) of 3.3 pW/Hz1/2. For the integrated imager, a peak responsivity of 1,740 kV/W and a minimum NEP of 80 fW/Hz1/2 were achieved with a DC power dissipation of 18 mW. With the fabricated active imager with on-chip antenna, which occupies an area of 2,200 × 600 μm2 including the antenna and bonding pads, images of various objects were successfully acquired.
IMAGE: A Design Integration Framework Applied to the High Speed Civil Transport
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1993-01-01
Effective design of the High Speed Civil Transport requires the systematic application of design resources throughout a product's life-cycle. Information obtained from the use of these resources is used for the decision-making processes of Concurrent Engineering. Integrated computing environments facilitate the acquisition, organization, and use of required information. State-of-the-art computing technologies provide the basis for the Intelligent Multi-disciplinary Aircraft Generation Environment (IMAGE) described in this paper. IMAGE builds upon existing agent technologies by adding a new component called a model. With the addition of a model, the agent can provide accountable resource utilization in the presence of increasing design fidelity. The development of a zeroth-order agent is used to illustrate agent fundamentals. Using a CATIA(TM)-based agent from previous work, a High Speed Civil Transport visualization system linking CATIA, FLOPS, and ASTROS will be shown. These examples illustrate the important role of the agent technologies used to implement IMAGE, and together they demonstrate that IMAGE can provide an integrated computing environment for the design of the High Speed Civil Transport.
Integrating DICOM structure reporting (SR) into the medical imaging informatics data grid
NASA Astrophysics Data System (ADS)
Lee, Jasper; Le, Anh; Liu, Brent
2008-03-01
The Medical Imaging Informatics (MI2) Data Grid developed at the USC Image Processing and Informatics Laboratory enables medical images to be shared securely between multiple imaging centers. Current applications include an imaging-based clinical trial setting where multiple field sites perform image acquisition and a centralized radiology core performs image analysis, often using computer-aided diagnosis tools (CAD) that generate a DICOM-SR to report their findings and measurements. As more and more CAD tools are being developed in the radiology field, the generated DICOM Structure Reports (SR) holding key radiological findings and measurements that are not part of the DICOM image need to be integrated into the existing Medical Imaging Informatics Data Grid with the corresponding imaging studies. We will discuss the significance and method involved in adapting DICOM-SR into the Medical Imaging Informatics Data Grid. The result is a MI2 Data Grid repository from which users can send and receive DICOM-SR objects based on the imaging-based clinical trial application. The services required to extract and categorize information from the structured reports will be discussed, and the workflow to store and retrieve a DICOM-SR file into the existing MI2 Data Grid will be shown.
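Extracting the coded findings and numeric measurements from a DICOM-SR content tree is the first step before such reports can be indexed in a grid repository. The pydicom sketch below walks the ContentSequence recursively; it handles only TEXT and NUM items and assumes a well-formed report, so it is a starting point rather than the grid service's actual parser. The file name is illustrative.

import pydicom

def walk_sr_content(items, depth=0):
    """Recursively print concept names, text values and numeric measurements
    from a DICOM-SR ContentSequence."""
    for item in items:
        name = ""
        if "ConceptNameCodeSequence" in item:
            name = item.ConceptNameCodeSequence[0].CodeMeaning
        if item.ValueType == "TEXT":
            print("  " * depth, name, "=", item.TextValue)
        elif item.ValueType == "NUM":
            mv = item.MeasuredValueSequence[0]
            units = mv.MeasurementUnitsCodeSequence[0].CodeValue
            print("  " * depth, name, "=", mv.NumericValue, units)
        else:
            print("  " * depth, name, "(", item.ValueType, ")")
        if "ContentSequence" in item:
            walk_sr_content(item.ContentSequence, depth + 1)

ds = pydicom.dcmread("report_sr.dcm")
walk_sr_content(ds.ContentSequence)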
Liu, Baolin; Wang, Zhongning; Jin, Zhixing
2009-09-11
In real life, the human brain usually receives information through visual and auditory channels and processes the multisensory information, but studies on the integration processing of dynamic visual and auditory information are relatively few. In this paper, we designed an experiment in which common-scenario, real-world videos with matched and mismatched actions (images) and sounds were presented as stimuli, with the aim of studying the integration processing of synchronized visual and auditory information from videos of real-world events in the human brain using event-related potential (ERP) methods. Experimental results showed that videos with mismatched actions (images) and sounds elicited a larger P400 than videos with matched actions (images) and sounds. We believe that the P400 waveform might be related to the cognitive integration processing of mismatched multisensory information in the human brain. The results also indicated that synchronized multisensory streams can interfere with each other, which influences the result of the cognitive integration processing.
Lee, Sunki; Lee, Min Woo; Cho, Han Saem; Song, Joon Woo; Nam, Hyeong Soo; Oh, Dong Joo; Park, Kyeongsoon; Oh, Wang-Yuhl; Yoo, Hongki; Kim, Jin Won
2014-08-01
Lipid-rich inflamed coronary plaques are prone to rupture. The purpose of this study was to assess lipid-rich inflamed plaques in vivo using fully integrated high-speed optical coherence tomography (OCT)/near-infrared fluorescence (NIRF) molecular imaging with Food and Drug Administration-approved indocyanine green (ICG). An integrated high-speed intravascular OCT/NIRF imaging catheter and a dual-modal OCT/NIRF system were constructed based on a clinical OCT platform. For imaging lipid-rich inflamed plaques, the Food and Drug Administration-approved NIRF-emitting ICG (2.25 mg/kg) or saline was injected intravenously into rabbit models with experimental atheromata induced by balloon injury and 12- to 14-week high-cholesterol diets. Twenty minutes after injection, in vivo OCT/NIRF imaging of the infrarenal aorta and iliac arteries was acquired under contrast flushing through the catheter (pullback speed up to 20 mm/s). NIRF signals were strongly detected in the OCT-visualized atheromata of the ICG-injected rabbits. The in vivo NIRF target-to-background ratio was significantly larger in the ICG-injected rabbits than in the saline-injected controls (P<0.01). Ex vivo peak plaque target-to-background ratios were significantly higher in ICG-injected rabbits than in controls (P<0.01) on fluorescence reflectance imaging, which correlated well with the in vivo target-to-background ratios (P<0.01; r=0.85) without significant bias (0.41). Cellular ICG uptake, correlative fluorescence microscopy, and histopathology also corroborated the in vivo imaging findings. Integrated OCT/NIRF structural/molecular imaging with Food and Drug Administration-approved ICG accurately identified lipid-rich inflamed atheromata in coronary-sized vessels. This highly translatable dual-modal imaging approach could enhance our capabilities to detect high-risk coronary plaques. © 2014 American Heart Association, Inc.
NASA Astrophysics Data System (ADS)
Wachowicz, K.; Murray, B.; Fallone, B. G.
2018-06-01
The recent interest in the integration of external beam radiotherapy with a magnetic resonance (MR) imaging unit offers the potential for real-time adaptive tumour tracking during radiation treatment. The tracking of large tumours which follow a rapid trajectory may best be served by the generation of a projection image from the perspective of the beam source, or 'beam's eye view' (BEV). This type of image projection represents the path of the radiation beam, thus enabling rapid compensation for target translations, rotations and deformations, as well as time-dependent critical structure avoidance. MR units have been traditionally incapable of this type of imaging except through lengthy 3D acquisitions and ray-tracing procedures. This work investigates some changes to the traditional MR scanner architecture that would permit the direct acquisition of a BEV image suitable for integration with external beam radiotherapy. Based on the theory presented in this work, a phantom was imaged with nonlinear encoding-gradient field patterns to demonstrate the technique. The phantom was constructed with agarose gel tubes spaced 2 cm apart at their base and oriented to converge towards an imaginary beam source 100 cm away. A corresponding virtual phantom was also created and subjected to the same encoding technique as in the physical demonstration, allowing the method to be tested without hardware limitations. The experimentally acquired and simulated images indicate the feasibility of the technique, showing a substantial amount of blur reduction in a diverging phantom compared to the conventional imaging geometry, particularly with the nonlinear gradients ideally implemented. The theory is developed to demonstrate that the method can be adapted in a number of different configurations to accommodate all proposed integration schemes for MR units and radiotherapy sources. Depending on the configuration, the implementation of this technique will require between two and four additional nonlinear encoding coils.
Ziegler, Susanne; Jakoby, Bjoern W; Braun, Harald; Paulus, Daniel H; Quick, Harald H
2015-12-01
In integrated PET/MR hybrid imaging, the evaluation of PET performance characteristics according to the NEMA standard NU 2-2007 is challenging because of incomplete MR-based attenuation correction (AC) for phantom imaging. In this study, a strategy for CT-based AC of the NEMA image quality (IQ) phantom is assessed. The method is systematically evaluated in NEMA IQ phantom measurements on an integrated PET/MR system. NEMA IQ measurements were performed on the integrated 3.0 Tesla PET/MR hybrid system (Biograph mMR, Siemens Healthcare). AC of the NEMA IQ phantom was realized by an MR-based and by a CT-based method. The suggested CT-based AC uses a template μ-map of the NEMA IQ phantom and a phantom holder for exact repositioning of the phantom on the system's patient table. The PET image quality parameters contrast recovery, background variability, and signal-to-noise ratio (SNR) were determined and compared for both phantom AC methods. Reconstruction parameters of an iterative 3D OP-OSEM reconstruction were optimized for the highest lesion SNR in NEMA IQ phantom imaging. Using a CT-based NEMA IQ phantom μ-map on the PET/MR system is straightforward and allowed accurate NEMA IQ measurements to be performed on the hybrid system. MR-based AC was determined to be insufficient for PET quantification in the tested NEMA IQ phantom because only photon attenuation caused by the MR-visible phantom filling, but not the phantom housing, is considered. Using the suggested CT-based AC, the highest SNR in this phantom experiment for small lesions (≤13 mm) was obtained with 3 iterations, 21 subsets and 4 mm Gaussian filtering. This study suggests CT-based AC for the NEMA IQ phantom when performing PET NEMA IQ measurements on an integrated PET/MR hybrid system. The superiority of CT-based AC for this phantom is demonstrated by comparison with measurements using MR-based AC. Furthermore, optimized PET image reconstruction parameters are provided for the highest lesion SNR in NEMA IQ phantom measurements.
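For reference, the NEMA NU 2-2007 figures of merit quoted above reduce to simple ratios once the ROI statistics have been extracted from the reconstructed images. The helper functions below assume mean and standard-deviation values are already available; the lesion-SNR definition shown is one common choice and may differ from the one used in the study.

def contrast_recovery(hot_mean, bkg_mean, activity_ratio):
    """Percent contrast recovery for a hot sphere (NEMA NU 2-2007 style):
    Q = ((C_hot / C_bkg) - 1) / (ratio - 1) * 100"""
    return (hot_mean / bkg_mean - 1.0) / (activity_ratio - 1.0) * 100.0

def background_variability(bkg_std, bkg_mean):
    """Percent background variability: N = SD_bkg / C_bkg * 100"""
    return bkg_std / bkg_mean * 100.0

def lesion_snr(hot_mean, bkg_mean, bkg_std):
    """One common lesion SNR definition: (C_hot - C_bkg) / SD_bkg."""
    return (hot_mean - bkg_mean) / bkg_std

# Example with illustrative ROI statistics (4:1 sphere-to-background activity ratio):
print(contrast_recovery(hot_mean=3.1, bkg_mean=1.0, activity_ratio=4.0))  # ~70%
print(background_variability(bkg_std=0.06, bkg_mean=1.0))                 # 6%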
NASA Astrophysics Data System (ADS)
Viswanath, Satish; Tiwari, Pallavi; Rosen, Mark; Madabhushi, Anant
2008-03-01
Recently, in vivo Magnetic Resonance Imaging (MRI) and Magnetic Resonance Spectroscopy (MRS) have emerged as promising new modalities to aid in prostate cancer (CaP) detection. MRI provides anatomic and structural information of the prostate while MRS provides functional data pertaining to biochemical concentrations of metabolites such as creatine, choline and citrate. We have previously presented a hierarchical clustering scheme for CaP detection on in vivo prostate MRS and have recently developed a computer-aided method for CaP detection on in vivo prostate MRI. In this paper we present a novel scheme to develop a meta-classifier to detect CaP in vivo via quantitative integration of multimodal prostate MRS and MRI by use of non-linear dimensionality reduction (NLDR) methods including spectral clustering and locally linear embedding (LLE). Quantitative integration of multimodal image data (MRI and PET) involves the concatenation of image intensities following image registration. However multimodal data integration is non-trivial when the individual modalities include spectral and image intensity data. We propose a data combination solution wherein we project the feature spaces (image intensities and spectral data) associated with each of the modalities into a lower dimensional embedding space via NLDR. NLDR methods preserve the relationships between the objects in the original high dimensional space when projecting them into the reduced low dimensional space. Since the original spectral and image intensity data are divorced from their original physical meaning in the reduced dimensional space, data at the same spatial location can be integrated by concatenating the respective embedding vectors. Unsupervised consensus clustering is then used to partition objects into different classes in the combined MRS and MRI embedding space. Quantitative results of our multimodal computer-aided diagnosis scheme on 16 sets of patient data obtained from the ACRIN trial, for which corresponding histological ground truth for spatial extent of CaP is known, show a marginally higher sensitivity, specificity, and positive predictive value compared to corresponding CAD results with the individual modalities.
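A compact sketch of the embed-then-concatenate idea described above, using scikit-learn: each modality's feature space is reduced with locally linear embedding, the per-voxel embedding vectors are concatenated, and a clustering pass partitions the joint space. Plain k-means stands in for the consensus clustering step, and the neighbor counts, dimensionalities, and class count are illustrative.

import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

def fuse_and_cluster(mri_features, mrs_spectra, n_dims=3, n_classes=3):
    """mri_features: (n_voxels, n_image_features), mrs_spectra: (n_voxels, n_bins).
    Returns a class label per voxel from the joint low-dimensional space."""
    lle_img = LocallyLinearEmbedding(n_neighbors=10, n_components=n_dims)
    lle_spec = LocallyLinearEmbedding(n_neighbors=10, n_components=n_dims)
    emb_img = lle_img.fit_transform(mri_features)     # image-intensity embedding
    emb_spec = lle_spec.fit_transform(mrs_spectra)    # spectral embedding
    joint = np.hstack([emb_img, emb_spec])            # concatenate per-voxel embedding vectors
    return KMeans(n_clusters=n_classes, n_init=10).fit_predict(joint)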
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
XML-based scripting of multimodality image presentations in multidisciplinary clinical conferences
NASA Astrophysics Data System (ADS)
Ratib, Osman M.; Allada, Vivekanand; Dahlbom, Magdalena; Marcus, Phillip; Fine, Ian; Lapstra, Lorelle
2002-05-01
We developed a multi-modality image presentation software for display and analysis of images and related data from different imaging modalities. The software is part of a cardiac image review and presentation platform that supports integration of digital images and data from digital and analog media such as videotapes, analog x-ray films and 35 mm cine films. The software supports standard DICOM image files as well as AVI and PDF data formats. The system is integrated in a digital conferencing room that includes projections of digital and analog sources, remote videoconferencing capabilities, and an electronic whiteboard. The goal of this pilot project is to: 1) develop a new paradigm for image and data management for presentation in a clinically meaningful sequence adapted to case-specific scenarios, 2) design and implement a multi-modality review and conferencing workstation using component technology and customizable 'plug-in' architecture to support complex review and diagnostic tasks applicable to all cardiac imaging modalities and 3) develop an XML-based scripting model of image and data presentation for clinical review and decision making during routine clinical tasks and multidisciplinary clinical conferences.
Al-Bayati, Mohammad; Grueneisen, Johannes; Lütje, Susanne; Sawicki, Lino M; Suntharalingam, Saravanabavaan; Tschirdewahn, Stephan; Forsting, Michael; Rübben, Herbert; Herrmann, Ken; Umutlu, Lale; Wetter, Axel
2018-01-01
To evaluate diagnostic accuracy of integrated 68Gallium labelled prostate-specific membrane antigen (68Ga-PSMA)-11 positron emission tomography (PET)/MRI in patients with primary prostate cancer (PCa) as compared to multi-parametric MRI. A total of 22 patients with recently diagnosed primary PCa underwent clinically indicated 68Ga-PSMA-11 PET/CT for initial staging followed by integrated 68Ga-PSMA-11 PET/MRI. Images of multi-parametric magnetic resonance imaging (mpMRI), PET and PET/MRI were evaluated separately by applying Prostate Imaging Reporting and Data System (PIRADSv2) for mpMRI and a 5-point Likert scale for PET and PET/MRI. Results were compared with pathology reports of biopsy or resection. Statistical analyses including receiver operating characteristics analysis were performed to compare the diagnostic performance of mpMRI, PET and PET/MRI. PET and integrated PET/MRI demonstrated a higher diagnostic accuracy than mpMRI (area under the curve: mpMRI: 0.679, PET and PET/MRI: 0.951). The proportion of equivocal results (PIRADS 3 and Likert 3) was considerably higher in mpMRI than in PET and PET/MRI. In a notable proportion of equivocal PIRADS results, PET led to a correct shift towards higher suspicion of malignancy and enabled correct lesion classification. Integrated 68Ga-PSMA-11 PET/MRI demonstrates higher diagnostic accuracy than mpMRI and is particularly valuable in tumours with equivocal results from PIRADS classification. © 2018 S. Karger AG, Basel.
Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.
Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A
2010-05-25
Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework that is the most natural fit for the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but it is exceptionally time consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Analysis and Visualization (MIPAV), the Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).
Passive lighting responsive three-dimensional integral imaging
NASA Astrophysics Data System (ADS)
Lou, Yimin; Hu, Juanmei
2017-11-01
A three-dimensional (3D) integral imaging (II) technique with real-time passive lighting-responsive ability and vivid 3D performance has been proposed and demonstrated. Several novel lighting-responsive phenomena, including light-activated 3D imaging and light-controlled 3D image scaling and translation, have been realized optically without updating the displayed images. By switching the on/off state of a point light source illuminating the proposed II system, the 3D images can be shown or hidden independently of the diffuse illumination background. By changing the position or illumination direction of the point light source, the position and magnification of the 3D image can be modulated in real time. The lighting-responsive mechanism of the 3D II system is deduced analytically and verified experimentally. A flexible thin-film lighting-responsive II system with a 0.4 mm thickness was fabricated. This technique gives additional degrees of freedom in designing the II system and enables the virtual 3D image to interact with the real illumination environment in real time.
NASA Technical Reports Server (NTRS)
Thompson, Karl E.; Rust, David M.; Chen, Hua
1995-01-01
A new type of image detector has been designed to analyze the polarization of light simultaneously at all picture elements (pixels) in a scene. The Integrated Dual Imaging Detector (IDID) consists of a polarizing beamsplitter bonded to a custom-designed charge-coupled device with signal-analysis circuitry, all integrated on a silicon chip. The IDID should simplify the design and operation of imaging polarimeters and spectroscopic imagers used, for example, in atmospheric and solar research. Other applications include environmental monitoring and robot vision. Innovations in the IDID include two interleaved 512 x 1024 pixel imaging arrays (one for each polarization plane), large dynamic range (well depth of 10(exp 6) electrons per pixel), simultaneous readout and display of both images at 10(exp 6) pixels per second, and on-chip analog signal processing to produce polarization maps in real time. When used with a lithium niobate Fabry-Perot etalon or other color filter that can encode spectral information as polarization, the IDID can reveal tiny differences between simultaneous images at two wavelengths.
SU-F-J-143: Initial Assessment of Image Quality of An Integrated MR-Linac System with ACR Phantom
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, J; Fuller, C; Yung, J
Purpose/Objective(s): To assess the image quality of an integrated MR-Linac system and compare it with other MRI systems that are primarily used for diagnostic purposes. Materials/Methods: An ACR MRI quality control (QC) phantom was used to evaluate the image quality of a fully integrated 1.5T MRI-Linac system recently installed at our institution. This system has a new split-magnet design which gives a magnetic field strength of 1.5T. All images were acquired with a set of phased-array surface coils which are designed to have minimal attenuation of the radiation beam. The anterior coil rests on a coil holder which keeps the anterior coil's position consistent for QA purposes. The posterior coil is embedded in the patient couch. Multiple sets of T1 and T2/PD images were acquired using the protocols prescribed by the ACR on three different dates, approximately 3 months apart. Results: The geometric distortion is within 0.5 mm in the axial scans and within 1 mm in the sagittal (z-direction) scans. Slice thickness accuracy, image uniformity, ghosting ratio, and high-contrast detectability are comparable to other 1.5T diagnostic MRI scanners. The low-contrast object detectability is comparatively lower, which is a result of using the body array coil. Additionally, the beam's-eye-view images (oblique coronal and sagittal images) have minimal geometric distortion at all linac gantry angles tested. No observable changes or drift in image quality were found across the images acquired 3 months apart. Conclusion: Despite the use of a body array surface coil, the image quality is comparable to that of a 1.5T diagnostic MRI scanner and is of sufficient quality to pass the ACR MRI accreditation program. The geometric distortion of the MRI system of the integrated MR-Linac is within 1 mm for an object of size similar to the ACR phantom, sufficient for radiation therapy treatment purposes. The authors received corporate-sponsored research grants from Elekta, the vendor for the MR-Linac evaluated in this study.
Segmentation via fusion of edge and needle map
NASA Astrophysics Data System (ADS)
Ahn, Hong-Young; Tou, Julius T.
1991-03-01
This paper presents an integrated image segmentation method using an edge map and a needle map, which compensates for the deficiencies of using either an edge-based or a region-based approach alone. Segmentation of an image is the first and most difficult step toward symbolic transformation of a raw image, which is essential in image understanding. In industrial applications, the task is further complicated by the ubiquitous presence of specularity on most industrial parts. Three images taken under three different illumination directions are used to separate the specular and Lambertian components in the images. A needle map is generated from the Lambertian component images using the photometric stereo technique. In one channel, edges are extracted and linked from the averaged Lambertian images, providing one source of segmentation. In the other channel, Gaussian and mean curvature values are estimated at each pixel from a least-squares local surface fit of the needle map. A labeled surface-type image is then generated using the signs of the Gaussian and mean curvatures, where one of ten surface types is assigned to each pixel. Connected regions of identical surface-type pixels provide the first-level grouping, a rough initial segmentation. The edge information and the initial surface-type segmentation are fed to an integration module which interprets the edges and regions in a consistent way. During interpretation, regions are merged or split and edges are discarded or generated depending upon the global surface-fit error and consistency with neighboring regions. The output of the integrated segmentation is an explicit description of the surface type and contours of each region, which facilitates recognition, localization and attitude determination of objects in the image.
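The sign-based surface labeling mentioned above is commonly implemented with a Besl-Jain style table of mean (H) and Gaussian (K) curvature signs. The sketch below assumes H and K have already been estimated from the local surface fit and uses the standard nine sign combinations; the paper's ten-type labeling may differ in detail.

import numpy as np

# Besl-Jain style surface-type names indexed by code = (sign(H)+1)*3 + (sign(K)+1)
SURFACE_NAMES = [
    "saddle ridge", "ridge", "peak",            # H < 0 : K < 0, K = 0, K > 0
    "minimal surface", "flat", "(impossible)",  # H = 0 : K < 0, K = 0, K > 0
    "saddle valley", "valley", "pit",           # H > 0 : K < 0, K = 0, K > 0
]

def label_surface_types(H, K, eps=1e-4):
    """Map per-pixel mean curvature H and Gaussian curvature K to integer
    surface-type codes (indices into SURFACE_NAMES)."""
    sH = np.where(np.abs(H) < eps, 0, np.sign(H)).astype(int)
    sK = np.where(np.abs(K) < eps, 0, np.sign(K)).astype(int)
    return (sH + 1) * 3 + (sK + 1)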
Progress in 3D imaging and display by integral imaging
NASA Astrophysics Data System (ADS)
Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.
2009-05-01
Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. Ideally, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed overcoming some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.
Integrated filter and detector array for spectral imaging
NASA Technical Reports Server (NTRS)
Labaw, Clayton C. (Inventor)
1992-01-01
A spectral imaging system having an integrated filter and photodetector array is disclosed. The filter has narrow transmission bands which vary in frequency along the photodetector array. The frequency variation of the transmission bands is matched to, and aligned with, the frequency variation of a received spectral image. The filter is deposited directly on the photodetector array by a low temperature deposition process. By depositing the filter directly on the photodetector array, permanent alignment is achieved for all temperatures, spectral crosstalk is substantially eliminated, and a high signal to noise ratio is achieved.
Picosecond imaging of signal propagation in integrated circuits
NASA Astrophysics Data System (ADS)
Frohmann, Sven; Dietz, Enrico; Dittrich, Helmar; Hübers, Heinz-Wilhelm
2017-04-01
Optical analysis of integrated circuits (IC) is a powerful tool for analyzing security functions that are implemented in an IC. We present a photon emission microscope for picosecond imaging of hot carrier luminescence in ICs in the near-infrared spectral range from 900 to 1700 nm. It allows for a semi-invasive signal tracking in fully operational ICs on the gate or transistor level with a timing precision of approximately 6 ps. The capabilities of the microscope are demonstrated by imaging the operation of two ICs made by 180 and 60 nm process technology.
NASA Astrophysics Data System (ADS)
Sivasubramanian, Kathyayini; Periyasamy, Vijitha; Wen, Kew Kok; Pramanik, Manojit
2017-03-01
Photoacoustic tomography is a hybrid imaging modality that combines optical and ultrasound imaging. It is rapidly gaining attention in the field of medical imaging. The challenge is to translate it into a clinical setup. In this work, we report the development of a handheld clinical photoacoustic imaging system. A clinical ultrasound imaging system is modified to integrate photoacoustic imaging with the ultrasound imaging. Hence, light delivery has been integrated with the ultrasound probe. The angle of light delivery is optimized in this work with respect to the depth of imaging. Optimization was performed based on Monte Carlo simulation for light transport in tissues. Based on the simulation results, the probe holders were fabricated using 3D printing. Similar results were obtained experimentally using phantoms. Phantoms were developed to mimic sentinel lymph node imaging scenario. Also, in vivo sentinel lymph node imaging was done using the same system with contrast agent methylene blue up to a depth of 1.5 cm. The results validate that one can use Monte Carlo simulation as a tool to optimize the probe holder design depending on the imaging needs. This eliminates a trial and error approach generally used for designing a probe holder.
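The role of the Monte Carlo simulation above is to compare how much light reaches the imaging depth for different source tilt angles. The toy 2-D random walk below captures only that idea: exponential step lengths, weighted absorption, and Henyey-Greenstein scattering in a homogeneous medium. The optical properties, photon count, and geometry are illustrative and far simpler than the tissue model and probe geometry used in the study.

import numpy as np

rng = np.random.default_rng(0)

def fluence_below_depth(angle_deg, depth_cm=1.5, n_photons=2000,
                        mu_a=0.1, mu_s=10.0, g=0.9):
    """Fraction of launched photon weight absorbed deeper than `depth_cm`
    when the source is tilted by `angle_deg` from the surface normal."""
    mu_t = mu_a + mu_s
    absorbed_deep = 0.0
    for _ in range(n_photons):
        x, z = 0.0, 0.0                              # launch at the surface
        theta = np.radians(angle_deg)
        ux, uz = np.sin(theta), np.cos(theta)        # +z points into the tissue
        w = 1.0
        while w > 1e-3 and z >= 0.0:                 # photon escapes when z < 0
            step = -np.log(rng.random()) / mu_t      # exponential free path [cm]
            x, z = x + ux * step, z + uz * step
            dw = w * mu_a / mu_t                     # weight absorbed at this site
            if z > depth_cm:
                absorbed_deep += dw
            w -= dw
            xi = rng.random()                        # Henyey-Greenstein deflection
            cos_psi = (1 + g**2 - ((1 - g**2) / (1 - g + 2*g*xi))**2) / (2*g)
            psi = np.arccos(np.clip(cos_psi, -1.0, 1.0)) * np.sign(rng.random() - 0.5)
            ux, uz = (ux*np.cos(psi) - uz*np.sin(psi),
                      ux*np.sin(psi) + uz*np.cos(psi))
    return absorbed_deep / n_photons

# Compare two illustrative illumination angles (degrees from the surface normal):
# print(fluence_below_depth(0.0), fluence_below_depth(30.0))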
NASA Astrophysics Data System (ADS)
Yang, Huijin; Pan, Bin; Wu, Wenfu; Tai, Jianhao
2018-07-01
Rice is one of the most important cereals in the world. With changes in agricultural land use, it is urgently necessary to update information about rice planting areas. This study aims to map rice planting areas with a field-based approach through the integration of multi-temporal Sentinel-1A and Landsat-8 OLI data in Wuhua County, South China, an area with many basins and mountains. Using multi-temporal SAR and optical images, this paper proposes a methodology for the identification of rice-planting areas. The methodology mainly consists of SSM applied to time-series SAR images to calculate a similarity measure, an image segmentation process applied to the pan-sharpened optical image to search for homogeneous objects, and the integration of SAR and optical data to eliminate speckle. The study compares the per-pixel approach with the per-field approach, and the results show that the highest accuracy of the field-based approach (91.38%) is 1.18% higher than that of the pixel-based approach for VH polarization, a gain attributable to the elimination of speckle noise, as shown by comparing the rice maps of the two approaches. Therefore, the integration of Sentinel-1A and Landsat-8 OLI images with a field-based approach has great potential for mapping rice or other crop areas.
Impact of digital radiography on clinical workflow.
May, G A; Deer, D D; Dackiewicz, D
2000-05-01
It is commonly accepted that digital radiography (DR) improves workflow and patient throughput compared with traditional film radiography or computed radiography (CR). DR eliminates the film development step and the time to acquire the image from a CR reader. In addition, the wide dynamic range of DR is such that the technologist can perform the quality-control (QC) step directly at the modality in a few seconds, rather than having to transport the newly acquired image to a centralized QC station for review. Furthermore, additional workflow efficiencies can be achieved with DR by employing tight radiology information system (RIS) integration. In the DR imaging environment, this provides for patient demographic information to be automatically downloaded from the RIS to populate the DR Digital Imaging and Communications in Medicine (DICOM) image header. To learn more about this workflow efficiency improvement, we performed a comparative study of workflow steps under three different conditions: traditional film/screen x-ray, DR without RIS integration (ie, manual entry of patient demographics), and DR with RIS integration. This study was performed at the Cleveland Clinic Foundation (Cleveland, OH) using a newly acquired amorphous silicon flat-panel DR system from Canon Medical Systems (Irvine, CA). Our data show that DR without RIS results in substantial workflow savings over traditional film/screen practice. There is an additional 30% reduction in total examination time using DR with RIS integration.
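The RIS-integration benefit described above boils down to writing patient demographics into the DICOM header automatically rather than typing them at the modality. A minimal pydicom sketch is shown below; the ris_record dictionary is a stand-in for a real RIS/worklist entry, and the field set is deliberately small.

import pydicom

def populate_from_ris(dicom_path, ris_record, out_path):
    """Copy RIS demographics into the DICOM header of a freshly acquired image.
    ris_record is a plain dict standing in for an HL7/RIS worklist entry."""
    ds = pydicom.dcmread(dicom_path)
    ds.PatientName = ris_record["name"]              # e.g. "DOE^JANE"
    ds.PatientID = ris_record["patient_id"]
    ds.PatientBirthDate = ris_record["birth_date"]   # YYYYMMDD
    ds.AccessionNumber = ris_record["accession"]
    ds.save_as(out_path)

# populate_from_ris("acquired.dcm",
#                   {"name": "DOE^JANE", "patient_id": "12345",
#                    "birth_date": "19700101", "accession": "A0001"},
#                   "acquired_with_demographics.dcm")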
Integration of Irma tactical scene generator into directed-energy weapon system simulation
NASA Astrophysics Data System (ADS)
Owens, Monte A.; Cole, Madison B., III; Laine, Mark R.
2003-08-01
Integrated high-fidelity physics-based simulations that include engagement models, image generation, electro-optical hardware models and control system algorithms have previously been developed by Boeing-SVS for various tracking and pointing systems. These simulations, however, had always used images with featureless or random backgrounds and simple target geometries. With the requirement to engage tactical ground targets in the presence of cluttered backgrounds, a new type of scene generation tool was required to fully evaluate system performance in this challenging environment. To answer this need, Irma was integrated into the existing suite of Boeing-SVS simulation tools, allowing scene generation capabilities with unprecedented realism. Irma is a US Air Force research tool used for high-resolution rendering and prediction of target and background signatures. The MATLAB/Simulink-based simulation achieves closed-loop tracking by running track algorithms on the Irma-generated images, processing the track errors through optical control algorithms, and moving simulated electro-optical elements. The geometry of these elements determines the sensor orientation with respect to the Irma database containing the three-dimensional background and target models. This orientation is dynamically passed to Irma through a Simulink S-function to generate the next image. This integrated simulation provides a test-bed for development and evaluation of tracking and control algorithms against representative images including complex background environments and realistic targets calibrated using field measurements.
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano
2008-01-01
Background: The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of data integration with biomolecular information. Results: In this work we propose a Tissue MicroArray web-oriented system to support researchers in managing bio-samples and, through the use of ontologies, to enable tissue sharing aimed at the design of Tissue MicroArray experiments and results evaluation. Indeed, our system provides ontological descriptions both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working on well-defined terms, it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions: Using this system, users associate an ontology-based description to each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the ontological description provided by the user with a fully compliant Gene Ontology definition, enabling statistical studies of the correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177
Jeong, Y J; Oh, T I; Woo, E J; Kim, K J
2017-07-01
Recently, highly flexible and soft pressure distribution imaging sensors have been in great demand for tactile sensing, gait analysis, ubiquitous life-care based on activity recognition, and therapeutics. In this study, we integrate piezo-capacitive and piezo-electric nanowebs with conductive fabric sheets for detecting static and dynamic pressure distributions over a large sensing area. Electrical impedance tomography (EIT) and electric source imaging are applied for reconstructing pressure distribution images from current-voltage data measured on the boundary of the hybrid fabric sensor. We evaluated the piezo-capacitive nanoweb sensor, the piezo-electric nanoweb sensor, and the hybrid fabric sensor. The results show the feasibility of static and dynamic pressure distribution imaging from the boundary measurements of the fabric sensors.
Beating heart mitral valve repair with integrated ultrasound imaging
NASA Astrophysics Data System (ADS)
McLeod, A. Jonathan; Moore, John T.; Peters, Terry M.
2015-03-01
Beating heart valve therapies rely extensively on image guidance to treat patients who would be considered inoperable with conventional surgery. Mitral valve repair techniques including the MitraClip, NeoChord, and emerging transcatheter mitral valve replacement techniques rely on transesophageal echocardiography for guidance. These images are often difficult to interpret as the tool will cause shadowing artifacts that occlude tissue near the target site. Here, we integrate ultrasound imaging directly into the NeoChord device. This provides an unobstructed imaging plane that can visualize the valve leaflets as they are engaged by the device and can aid in achieving both a proper bite and spacing between the neochordae implants. A proof-of-concept user study in a phantom environment demonstrates the feasibility of this device.
Integrating image quality in 2nu-SVM biometric match score fusion.
Vatsa, Mayank; Singh, Richa; Noore, Afzel
2007-10-01
This paper proposes an intelligent 2nu-support vector machine based match score fusion algorithm to improve the performance of face and iris recognition by integrating the quality of images. The proposed algorithm applies redundant discrete wavelet transform to evaluate the underlying linear and non-linear features present in the image. A composite quality score is computed to determine the extent of smoothness, sharpness, noise, and other pertinent features present in each subband of the image. The match score and the corresponding quality score of an image are fused using 2nu-support vector machine to improve the verification performance. The proposed algorithm is experimentally validated using the FERET face database and the CASIA iris database. The verification performance and statistical evaluation show that the proposed algorithm outperforms existing fusion algorithms.
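As a rough illustration of the fusion step, the sketch below uses scikit-learn's standard NuSVC as a stand-in for the paper's 2nu-SVM and fuses a match score with a composite quality score; the synthetic score distributions are assumptions, not data from the FERET or CASIA experiments.

import numpy as np
from sklearn.svm import NuSVC

rng = np.random.default_rng(0)
n = 500
q_gen = rng.uniform(0.2, 1.0, n)          # composite quality scores, genuine
q_imp = rng.uniform(0.2, 1.0, n)          # composite quality scores, impostor
# Synthetic match scores: class separation grows with image quality.
s_gen = 0.5 + 0.25 * q_gen + rng.normal(0, 0.05, n)
s_imp = 0.5 - 0.25 * q_imp + rng.normal(0, 0.05, n)
X = np.vstack([np.column_stack([s_gen, q_gen]), np.column_stack([s_imp, q_imp])])
y = np.array([1] * n + [0] * n)           # 1 = genuine, 0 = impostor

clf = NuSVC(nu=0.5, kernel="rbf", gamma="scale").fit(X, y)
# Same match score, different quality: the fused decision value differs.
print("fused decision values:", clf.decision_function([[0.60, 0.9], [0.60, 0.3]]))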
Capacitive micromachined ultrasonic transducers for medical imaging and therapy.
Khuri-Yakub, Butrus T; Oralkan, Omer
2011-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) have been subject to extensive research for the last two decades. Although they were initially developed for air-coupled applications, today their main application space is medical imaging and therapy. This paper first presents a brief description of CMUTs, their basic structure, and operating principles. Our progression of developing several generations of fabrication processes is discussed with an emphasis on the advantages and disadvantages of each process. Monolithic and hybrid approaches for integrating CMUTs with supporting integrated circuits are surveyed. Several prototype transducer arrays with integrated frontend electronic circuits we developed and their use for 2-D and 3-D, anatomical and functional imaging, and ablative therapies are described. The presented results prove the CMUT as a MEMS technology for many medical diagnostic and therapeutic applications.
Capacitive micromachined ultrasonic transducers for medical imaging and therapy
Khuri-Yakub, Butrus T.; Oralkan, Ömer
2011-01-01
Capacitive micromachined ultrasonic transducers (CMUTs) have been subject to extensive research for the last two decades. Although they were initially developed for air-coupled applications, today their main application space is medical imaging and therapy. This paper first presents a brief description of CMUTs, their basic structure, and operating principles. Our progression of developing several generations of fabrication processes is discussed with an emphasis on the advantages and disadvantages of each process. Monolithic and hybrid approaches for integrating CMUTs with supporting integrated circuits are surveyed. Several prototype transducer arrays with integrated frontend electronic circuits we developed and their use for 2-D and 3-D, anatomical and functional imaging, and ablative therapies are described. The presented results prove the CMUT as a MEMS technology for many medical diagnostic and therapeutic applications. PMID:21860542
Burger, Karin; Koehler, Thomas; Chabior, Michael; Allner, Sebastian; Marschner, Mathias; Fehringer, Andreas; Willner, Marian; Pfeiffer, Franz; Noël, Peter
2014-12-29
Phase-contrast x-ray computed tomography has a high potential to become clinically implemented because of its complementarity to conventional absorption contrast. In this study, we investigate noise-reducing but resolution-preserving analytical reconstruction methods to improve differential phase-contrast imaging. We apply the non-linear Perona-Malik filter on phase-contrast data prior to or after filtered backprojection reconstruction. Secondly, the Hilbert kernel is replaced by regularized iterative integration followed by ramp-filtered backprojection as used for absorption-contrast imaging. Combining the Perona-Malik filter with this integration algorithm successfully reveals relevant sample features, quantitatively confirmed by significantly increased structural similarity indices and contrast-to-noise ratios. With this concept, phase-contrast imaging can be performed at a considerably lower dose.
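For readers unfamiliar with the filter, here is a minimal Perona-Malik anisotropic diffusion sketch in Python; the parameter values and the toy test image are assumptions, and the study applies such filtering to differential phase projections rather than to this synthetic example.

import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, gamma=0.2):
    """Edge-preserving smoothing: diffusion is suppressed across strong edges."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance (exponential Perona-Malik variant).
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

noisy = np.random.normal(0, 0.05, (128, 128))
noisy[32:96, 32:96] += 1.0          # a sharp feature that should survive
smoothed = perona_malik(noisy)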
3-D surface scan of biological samples with a push-broom imaging spectrometer
USDA-ARS?s Scientific Manuscript database
The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application for food safety and quality inspection has made signifi...
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuit containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels, leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components. It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
Image Segmentation Analysis for NASA Earth Science Applications
NASA Technical Reports Server (NTRS)
Tilton, James C.
2010-01-01
NASA collects large volumes of imagery data from satellite-based Earth remote sensing sensors. Nearly all of the computerized image analysis of this data is performed pixel-by-pixel, in which an algorithm is applied directly to individual image pixels. While this analysis approach is satisfactory in many cases, it is usually not fully effective in extracting the full information content from the high spatial resolution image data that is now becoming increasingly available from these sensors. The field of object-based image analysis (OBIA) has arisen in recent years to address the need to move beyond pixel-based analysis. The Recursive Hierarchical Segmentation (RHSEG) software developed by the author is being used to facilitate moving from pixel-based image analysis to OBIA. The key unique aspect of RHSEG is that it tightly intertwines region growing segmentation, which produces spatially connected region objects, with region object classification, which groups sets of region objects together into region classes. No other practical, operational image segmentation approach has this tight integration of region growing object finding with region classification. This integration is made possible by the recursive, divide-and-conquer implementation utilized by RHSEG, in which the input image data is recursively subdivided until the image data sections are small enough to successfully mitigate the combinatorial explosion caused by the need to compute the dissimilarity between each pair of image pixels. RHSEG's tight integration of region growing object finding and region classification is what enables the high spatial fidelity of the image segmentations produced by RHSEG. This presentation will provide an overview of the RHSEG algorithm and describe how it is currently being used to support OBIA for Earth Science applications such as snow/ice mapping and finding archaeological sites from remotely sensed data.
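A highly simplified sketch of the region-growing half of this idea is given below: starting with one region per pixel, the most similar spatially adjacent pair of regions is merged repeatedly until a target region count is reached. RHSEG itself adds the recursive image subdivision and the grouping of region objects into region classes; this toy version (brute-force, tiny image) only illustrates the region-object merging step.

import numpy as np

def region_grow(img, n_regions):
    h, w = img.shape
    labels = np.arange(h * w).reshape(h, w)          # one region per pixel
    mean = {int(l): float(v) for l, v in zip(labels.ravel(), img.ravel())}
    size = {int(l): 1 for l in labels.ravel()}
    while len(mean) > n_regions:
        best = None
        # Brute-force search for the most similar pair of adjacent regions.
        for dy, dx in ((0, 1), (1, 0)):
            a = labels[: h - dy, : w - dx].ravel()
            b = labels[dy:, dx:].ravel()
            for la, lb in zip(a, b):
                if la == lb:
                    continue
                d = abs(mean[int(la)] - mean[int(lb)])
                if best is None or d < best[0]:
                    best = (d, int(la), int(lb))
        _, la, lb = best
        # Merge region lb into region la, updating the region mean.
        total = mean[la] * size[la] + mean[lb] * size[lb]
        size[la] += size[lb]
        mean[la] = total / size[la]
        labels[labels == lb] = la
        del mean[lb], size[lb]
    return labels

img = np.zeros((8, 8)); img[:, 4:] = 1.0; img += 0.01 * np.random.rand(8, 8)
segmentation = region_grow(img, n_regions=2)   # recovers the left/right split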
Information and image integration: project spectrum
NASA Astrophysics Data System (ADS)
Blaine, G. James; Jost, R. Gilbert; Martin, Lori; Weiss, David A.; Lehmann, Ron; Fritz, Kevin
1998-07-01
The BJC Health System (BJC) and the Washington University School of Medicine (WUSM) formed a technology alliance with industry collaborators to develop and implement an integrated, advanced clinical information system. The industry collaborators include IBM, Kodak, SBC and Motorola. The activity, called Project Spectrum, provides an integrated clinical repository for the multiple hospital facilities of the BJC. The BJC System consists of 12 acute care hospitals serving over one million patients in Missouri and Illinois. An interface engine manages transactions from each of the hospital information systems, lab systems and radiology information systems. Data is normalized to provide a consistent view for the primary care physician. Access to the clinical repository is supported by web-based server/browser technology which delivers patient data to the physician's desktop. An HL7 based messaging system coordinates the acquisition and management of radiological image data and sends image keys to the clinical data repository. Access to the clinical chart browser currently provides radiology reports, laboratory data, vital signs and transcribed medical reports. A chart metaphor provides tabs for the selection of the clinical record for review. Activation of the radiology tab facilitates a standardized view of radiology reports and provides an icon used to initiate retrieval of available radiology images. The selection of the image icon spawns an image browser plug-in and utilizes the image key from the clinical repository to access the image server for the requested image data. The Spectrum system is collecting clinical data from five hospital systems and imaging data from two hospitals. Domain specific radiology imaging systems support the acquisition and primary interpretation of radiology exams. The spectrum clinical workstations are deployed to over 200 sites utilizing local area networks and ISDN connectivity.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
LINKS: Learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H.; Lin, Weili; Shen, Dinggang
2014-01-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8 months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. PMID:25541188
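A toy sketch of the learning-based multi-source idea follows: a random forest is trained on per-voxel multi-modality intensities, and its class-probability outputs are fed back as extra features for a second round. The synthetic "T1, T2, FA" values, the class centers, and the absence of spatial context features are simplifying assumptions, not the authors' pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 3000
labels = rng.integers(0, 3, n)                         # 0=CSF, 1=GM, 2=WM (toy)
centers = np.array([[0.2, 0.8, 0.1], [0.5, 0.5, 0.3], [0.55, 0.45, 0.7]])
X = centers[labels] + rng.normal(0, 0.08, (n, 3))      # synthetic "T1, T2, FA"

# Round 1: forest trained on intensities only.
rf1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
prob_maps = rf1.predict_proba(X)                       # estimated tissue probabilities
# Round 2: intensities plus the estimated probability maps as extra features.
X_aug = np.hstack([X, prob_maps])
rf2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_aug, labels)
print("round-1 acc:", rf1.score(X, labels), "round-2 acc:", rf2.score(X_aug, labels))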
An Integrated Tone Mapping for High Dynamic Range Image Visualization
NASA Astrophysics Data System (ADS)
Liang, Lei; Pan, Jeng-Shyang; Zhuang, Yongjun
2018-01-01
There are two types of tone mapping operators for high dynamic range (HDR) image visualization. HDR images mapped by perceptual operators have a strong sense of reality but lose local details. Empirical operators can maximize the local detail information of an HDR image, but their realism is not strong. A common tone mapping operator suitable for all applications is not available. This paper proposes a novel integrated tone mapping framework which can achieve conversion between empirical operators and perceptual operators. In this framework, the empirical operator is rendered based on an improved saliency map, which simulates the visual attention mechanism of the human eye in natural scenes. The results of objective evaluation prove the effectiveness of the proposed solution.
Farrar, Danielle; Budson, Andrew E
2017-04-01
While the relationship between diffusion tensor imaging (DTI) measurements and training effects is explored by Voelker et al. (this issue), a cursory discussion of functional magnetic resonance imaging (fMRI) measurements categorizes increased activation with findings of greater white matter integrity. Evidence of the relationship between fMRI activation and white matter integrity is conflicting, as is the relationship between fMRI activation and training effects. An examination of the changes in fMRI activation in response to training is helpful, but the relationship between DTI and fMRI activation, particularly in the context of white matter changes, must be examined further before general conclusions can be drawn.
75 FR 51277 - Center for Scientific Review; Notice of Closed Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-19
..., Genomes, and Genetics Integrated Review Group; Molecular Genetics B Study Section. Date: October 3-4, 2010... and Urological Systems Integrated Review Group; Clinical, Integrative and Molecular Gastroenterology... Integrated Review Group; Clinical Molecular Imaging and Probe Development. Date: October 4-5, 2010. Time: 7 p...
ERIC Educational Resources Information Center
Munoz, Karen E.; Hyde, Luke W.; Hariri, Ahmad R.
2009-01-01
Imaging genetics is an experimental strategy that integrates molecular genetics and neuroimaging technology to examine biological mechanisms that mediate differences in behavior and the risks for psychiatric disorder. The basic principles in imaging genetics and the development of the field are discussed.
Lossless compression of VLSI layout image data.
Dai, Vito; Zakhor, Avideh
2006-09-01
We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.
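The combinatorial-coding idea can be sketched on its own: a binary block is represented by its number of ones k plus its lexicographic rank among all C(n, k) blocks with that many ones, which approaches the entropy bound for memoryless binary data. The sketch below shows only this ranking step, not the full C4 codec with its context model and LZ-style copying.

from math import comb, ceil, log2

def rank_block(bits):
    """Return (k, rank) for a binary block, rank in lexicographic order."""
    n, k = len(bits), sum(bits)
    rank, ones_left = 0, k
    for i, b in enumerate(bits):
        if b:
            rank += comb(n - i - 1, ones_left)  # smaller blocks with a 0 here
            ones_left -= 1
    return k, rank

def unrank_block(n, k, rank):
    """Inverse of rank_block: rebuild the block from (n, k, rank)."""
    bits = []
    for i in range(n):
        c = comb(n - i - 1, k)                  # blocks that put a 0 here
        if rank < c:
            bits.append(0)
        else:
            bits.append(1)
            rank -= c
            k -= 1
    return bits

block = [0] * 16
block[3] = block[10] = 1
k, rank = rank_block(block)
cost = ceil(log2(len(block) + 1)) + ceil(log2(comb(len(block), k)))
assert unrank_block(len(block), k, rank) == block
print(f"k={k}, rank={rank}, ~{cost} bits vs {len(block)} raw bits")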
NASA Astrophysics Data System (ADS)
Kim, Moon S.; Cho, Byoung-Kwan; Yang, Chun-Chieh; Chao, Kaunglin; Lefcourt, Alan M.; Chen, Yud-Ren
2006-10-01
We have developed nondestructive opto-electronic imaging techniques for rapid assessment of the safety and wholesomeness of foods. A recently developed fast hyperspectral line-scan imaging system integrated with a commercial apple-sorting machine was evaluated for rapid detection of animal fecal matter on apples. Apples obtained from a local orchard were artificially contaminated with cow feces. For the online trial, hyperspectral images with 60 spectral channels, reflectance in the visible to near infrared regions and fluorescence emissions with UV-A excitation, were acquired from apples moving at a processing sorting-line speed of three apples per second. Reflectance and fluorescence imaging required a passive light source, and each method used independent continuous wave (CW) light sources. In this paper, integration of the hyperspectral imaging system with the commercial apple-sorting machine and preliminary results for detection of fecal contamination on apples, mainly based on the fluorescence method, are presented.
An integrated content and metadata based retrieval system for art.
Lewis, Paul H; Martinez, Kirk; Abas, Fazly Salleh; Fauzi, Mohammad Faizal Ahmad; Chan, Stephen C Y; Addis, Matthew J; Boniface, Mike J; Grimwood, Paul; Stevenson, Alison; Lahanier, Christian; Stevenson, James
2004-03-01
A new approach to image retrieval is presented in the domain of museum and gallery image collections. Specialist algorithms, developed to address specific retrieval tasks, are combined with more conventional content and metadata retrieval approaches, and implemented within a distributed architecture to provide cross-collection searching and navigation in a seamless way. External systems can access the different collections using interoperability protocols and open standards, which were extended to accommodate content based as well as text based retrieval paradigms. After a brief overview of the complete system, we describe the novel design and evaluation of some of the specialist image analysis algorithms including a method for image retrieval based on sub-image queries, retrievals based on very low quality images and retrieval using canvas crack patterns. We show how effective retrieval results can be achieved by real end-users consisting of major museums and galleries, accessing the distributed but integrated digital collections.
Photoacoustic-Based Multimodal Nanoprobes: from Constructing to Biological Applications.
Gao, Duyang; Yuan, Zhen
2017-01-01
Multimodal nanoprobes have attracted intensive attention since they can integrate various imaging modalities to combine the complementary merits of each single modality. Meanwhile, recent interest in laser-induced photoacoustic imaging is rapidly growing due to its unique advantages in visualizing tissue structure and function with high spatial resolution and satisfactory imaging depth. In this review, we summarize multimodal nanoprobes involving photoacoustic imaging. In particular, we focus on the methods used to construct multimodal nanoprobes. We have divided the synthetic methods into two types. The first, which we call the "one for all" concept, involves the intrinsic properties of the elements in a single particle. The second, the "all in one" concept, means integrating different functional blocks in one particle. Then, we briefly introduce the applications of the multifunctional nanoprobes for in vivo imaging and imaging-guided tumor therapy. At last, we discuss the advantages and disadvantages of the present methods to construct multimodal nanoprobes and share our viewpoints in this area.
First Observations from the Multi-Application Solar Telescope (MAST) Narrow-Band Imager
NASA Astrophysics Data System (ADS)
Mathew, Shibu K.; Bayanna, Ankala Raja; Tiwary, Alok Ranjan; Bireddy, Ramya; Venkatakrishnan, Parameswaran
2017-08-01
The Multi-Application Solar Telescope is a 50 cm off-axis Gregorian telescope recently installed at the Udaipur Solar Observatory, India. In order to obtain near-simultaneous observations at photospheric and chromospheric heights, an imager optimized for two or more wavelengths is being integrated with the telescope. Two voltage-tuneable lithium-niobate Fabry-Perot etalons along with a set of interference blocking filters have been used for developing the imager. Both of the etalons are used in tandem for photospheric observations in Fe i 6173 Å and chromospheric observation in Hα 6563 Å spectral lines, whereas only one of the etalons is used for the chromospheric Ca II line at 8542 Å. The imager is also being used for spectropolarimetric observations. We discuss the characterization of the etalons at the above wavelengths, detail the integration of the imager with the telescope, and present a few sets of observations taken with the imager set-up.
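The reason for operating two etalons in tandem can be sketched numerically: the product of two Airy transmission profiles with different spacings suppresses the side channels of the thicker etalon. All optical parameters in the sketch below (refractive index, spacings, reflectivity, wavelength grid) are illustrative assumptions, not measured MAST values.

import numpy as np

def airy_transmission(wavelength_nm, spacing_um, n=2.3, reflectivity=0.93):
    """Ideal Fabry-Perot (Airy) transmission at normal incidence."""
    finesse_coeff = 4 * reflectivity / (1 - reflectivity) ** 2
    delta = 4 * np.pi * n * (spacing_um * 1e3) / wavelength_nm   # round-trip phase
    return 1.0 / (1.0 + finesse_coeff * np.sin(delta / 2) ** 2)

wl = np.linspace(617.0, 617.6, 4000)                 # nm, around Fe I 6173 A
t1 = airy_transmission(wl, spacing_um=220.0)         # thicker etalon
t2 = airy_transmission(wl, spacing_um=160.0)         # thinner etalon
tandem = t1 * t2                                     # combined passband
print("tandem peak transmission:", tandem.max())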
Hong, Xun Jie Jeesmond; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin
2017-01-01
A flexible handheld imaging probe consisting of a 3 mm×3 mm charge-coupled device camera, light-emitting diode light sources, and near-infrared laser source is designed and developed. The imaging probe is designed with specifications to capture the iridocorneal angle images and posterior segment images. Light propagation from the anterior chamber of the eye to the exterior is considered analytically using Snell’s law. Imaging of the iridocorneal angle region and fundus is performed on ex vivo porcine samples and subsequently on small laboratory animals, such as the New Zealand white rabbit and nonhuman primate, in vivo. The integrated flexible handheld probe demonstrates high repeatability in iridocorneal angle and fundus documentation. The proposed concept and methodology are expected to find potential application in the diagnosis, prognosis, and management of glaucoma. PMID:28413809
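The geometric consideration mentioned above can be illustrated with a minimal Snell's-law sketch: a ray leaving the anterior chamber is refracted, or totally internally reflected, at the tissue-air interface, which is why the iridocorneal angle cannot be viewed directly from outside the eye. The refractive indices used are textbook approximations, not values from the paper.

import numpy as np

def refract(theta_incident_deg, n_in, n_out):
    """Snell's law; returns the refraction angle in degrees, or None for TIR."""
    s = n_in / n_out * np.sin(np.radians(theta_incident_deg))
    if abs(s) > 1.0:
        return None                      # total internal reflection
    return np.degrees(np.arcsin(s))

n_aqueous, n_air = 1.336, 1.000          # textbook approximations
critical = np.degrees(np.arcsin(n_air / n_aqueous))
print(f"critical angle ~{critical:.1f} deg")
for theta in (10, 30, 45, 50):
    out = refract(theta, n_aqueous, n_air)
    print(theta, "deg ->", "TIR" if out is None else f"{out:.1f} deg")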
Jaeger, Michael; Bamber, Jeffrey C.; Frenz, Martin
2013-01-01
This paper investigates a novel method which allows clutter elimination in deep optoacoustic imaging. Clutter significantly limits imaging depth in clinical optoacoustic imaging, when irradiation optics and ultrasound detector are integrated in a handheld probe for flexible imaging of the human body. Strong optoacoustic transients generated at the irradiation site obscure weak signals from deep inside the tissue, either directly by propagating towards the probe, or via acoustic scattering. In this study we demonstrate that signals of interest can be distinguished from clutter by tagging them at the place of origin with localised tissue vibration induced by the acoustic radiation force in a focused ultrasonic beam. We show phantom results where this technique allowed almost full clutter elimination and thus strongly improved contrast for deep imaging. Localised vibration tagging by means of acoustic radiation force is especially promising for integration into ultrasound systems that already have implemented radiation force elastography. PMID:25302147
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Zheng; Ukida, H.; Ramuhalli, Pradeep
2010-06-05
Imaging- and vision-based techniques play an important role in industrial inspection. The sophistication of the techniques assures high-quality performance of the manufacturing process through precise positioning, online monitoring, and real-time classification. Advanced systems incorporating multiple imaging and/or vision modalities provide robust solutions to complex situations and problems in industrial applications. A diverse range of industries, including aerospace, automotive, electronics, pharmaceutical, biomedical, semiconductor, and food/beverage, etc., have benefited from recent advances in multi-modal imaging, data fusion, and computer vision technologies. Many of the open problems in this context are in the general area of image analysis methodologies (preferably in an automated fashion). This editorial article introduces a special issue of this journal highlighting recent advances and demonstrating the successful applications of integrated imaging and vision technologies in industrial inspection.
Larkin, Kieran G; Fletcher, Peter A
2014-03-01
X-ray Talbot moiré interferometers can now simultaneously generate two differential phase images of a specimen. The conventional approach to integrating differential phase is unstable and often leads to images with loss of visible detail. We propose a new reconstruction method based on the inverse Riesz transform. The Riesz approach is stable and the final image retains visibility of high resolution detail without directional bias. The outline Riesz theory is developed and an experimentally acquired X-ray differential phase data set is presented for qualitative visual appraisal. The inverse Riesz phase image is compared with two alternatives: the integrated (quantitative) phase and the modulus of the gradient of the phase. The inverse Riesz transform has the computational advantages of a unitary linear operator, and is implemented directly as a complex multiplication in the Fourier domain also known as the spiral phase transform.
Larkin, Kieran G.; Fletcher, Peter A.
2014-01-01
X-ray Talbot moiré interferometers can now simultaneously generate two differential phase images of a specimen. The conventional approach to integrating differential phase is unstable and often leads to images with loss of visible detail. We propose a new reconstruction method based on the inverse Riesz transform. The Riesz approach is stable and the final image retains visibility of high resolution detail without directional bias. The outline Riesz theory is developed and an experimentally acquired X-ray differential phase data set is presented for qualitative visual appraisal. The inverse Riesz phase image is compared with two alternatives: the integrated (quantitative) phase and the modulus of the gradient of the phase. The inverse Riesz transform has the computational advantages of a unitary linear operator, and is implemented directly as a complex multiplication in the Fourier domain also known as the spiral phase transform. PMID:24688823
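A hedged numerical sketch of the Fourier-domain operations described above: the two differential phase images are combined into one complex field, and a single complex multiplication by the conjugate spiral phase (qx - i qy)/|q| gives a Riesz-style reconstruction, while division by i 2 pi (qx + i qy) gives the conventional quantitative integration. The FFT conventions, the synthetic test phase, and the normalization are my assumptions, not the authors' implementation.

import numpy as np

h = w = 256
y, x = np.mgrid[0:h, 0:w]
phi = np.exp(-((x - w / 2) ** 2 + (y - h / 2) ** 2) / (2 * 20.0 ** 2))  # test phase
gy, gx = np.gradient(phi)                        # two differential phase images

nu_x, nu_y = np.meshgrid(np.fft.fftfreq(w), np.fft.fftfreq(h))
q = np.hypot(nu_x, nu_y)
q[0, 0] = 1.0                                    # avoid division by zero at DC

G = np.fft.fft2(gx + 1j * gy)                    # one complex field from both gradients
# Riesz-style reconstruction: conjugate spiral phase multiplication, |q|-weighted.
riesz_image = np.imag(np.fft.ifft2(G * (nu_x - 1j * nu_y) / q))
# Conventional quantitative integration (undefined DC term set to zero).
denom = 2j * np.pi * (nu_x + 1j * nu_y)
denom[0, 0] = 1.0
Phi = G / denom
Phi[0, 0] = 0.0
quantitative = np.real(np.fft.ifft2(Phi))
print("riesz image range:", riesz_image.min(), riesz_image.max())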
GLO-Roots: an imaging platform enabling multidimensional characterization of soil-grown root systems
Rellán-Álvarez, Rubén; Lobet, Guillaume; Lindner, Heike; Pradier, Pierre-Luc; Sebastian, Jose; Yee, Muh-Ching; Geng, Yu; Trontin, Charlotte; LaRue, Therese; Schrager-Lavelle, Amanda; Haney, Cara H; Nieu, Rita; Maloof, Julin; Vogel, John P; Dinneny, José R
2015-01-01
Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes. DOI: http://dx.doi.org/10.7554/eLife.07597.001 PMID:26287479
MuSICa image slicer prototype at 1.5-m GREGOR solar telescope
NASA Astrophysics Data System (ADS)
Calcines, A.; López, R. L.; Collados, M.; Vega Reyes, N.
2014-07-01
Integral Field Spectroscopy is an innovative technique that is being implemented in the state-of-the-art instruments of the largest night-time telescopes; however, it is still a novelty for solar instrumentation. A new concept of image slicer, called MuSICa (Multi-Slit Image slicer based on collimator-Camera), has been designed for the integral field spectrograph of the 4-m European Solar Telescope. This communication presents an image slicer prototype of MuSICa for GRIS, the spectrograph of the 1.5-m GREGOR solar telescope located at the Observatory of El Teide. MuSICa at GRIS reorganizes a 2-D field of view of 24.5 arcsec into a slit of 0.367 arcsec width by 66.76 arcsec length distributed horizontally. It will operate together with the TIP-II polarimeter to offer high resolution integral field spectropolarimetry. It will also have a two-dimensional field-of-view scanning system to cover a field of view of up to 1 by 1 arcmin.
Brama, Elisabeth; Peddie, Christopher J; Wilkes, Gary; Gu, Yan; Collinson, Lucy M; Jones, Martin L
2016-12-13
In-resin fluorescence (IRF) protocols preserve fluorescent proteins in resin-embedded cells and tissues for correlative light and electron microscopy, aiding interpretation of macromolecular function within the complex cellular landscape. Dual-contrast IRF samples can be imaged in separate fluorescence and electron microscopes, or in dual-modality integrated microscopes for high resolution correlation of fluorophore to organelle. IRF samples also offer a unique opportunity to automate correlative imaging workflows. Here we present two new locator tools for finding and following fluorescent cells in IRF blocks, enabling future automation of correlative imaging. The ultraLM is a fluorescence microscope that integrates with an ultramicrotome, which enables 'smart collection' of ultrathin sections containing fluorescent cells or tissues for subsequent transmission electron microscopy or array tomography. The miniLM is a fluorescence microscope that integrates with serial block face scanning electron microscopes, which enables 'smart tracking' of fluorescent structures during automated serial electron image acquisition from large cell and tissue volumes.
GLO-Roots: An imaging platform enabling multidimensional characterization of soil-grown root systems
Rellan-Alvarez, Ruben; Lobet, Guillaume; Lindner, Heike; ...
2015-08-19
Root systems develop different root types that individually sense cues from their local environment and integrate this information with systemic signals. This complex multi-dimensional amalgam of inputs enables continuous adjustment of root growth rates, direction, and metabolic activity that define a dynamic physical network. Current methods for analyzing root biology balance physiological relevance with imaging capability. To bridge this divide, we developed an integrated-imaging system called Growth and Luminescence Observatory for Roots (GLO-Roots) that uses luminescence-based reporters to enable studies of root architecture and gene expression patterns in soil-grown, light-shielded roots. We have developed image analysis algorithms that allow the spatial integration of soil properties, gene expression, and root system architecture traits. We propose GLO-Roots as a system that has great utility in presenting environmental stimuli to roots in ways that evoke natural adaptive responses and in providing tools for studying the multi-dimensional nature of such processes.
Qualitative evaluation of titanium implant integration into bone by diffraction enhanced imaging.
Wagner, A; Sachse, A; Keller, M; Aurich, M; Wetzel, W-D; Hortschansky, P; Schmuck, K; Lohmann, M; Reime, B; Metge, J; Arfelli, F; Menk, R; Rigon, L; Muehleman, C; Bravin, A; Coan, P; Mollenhauer, J
2006-03-07
Diffraction enhanced imaging (DEI) uses refraction of x-rays at edges, which allows pronounced visualization of material borders and rejects scattering which often obscures edges and blurs images. Here, the first evidence is presented that, using DEI, a destruction-free evaluation of the quality of integration of metal implants into bone is possible. Experiments were performed in rabbits and sheep with model implants to investigate the option for DEI as a tool in implant research. The results obtained from DEI were compared to conventional histology obtained from the specimens. DE images allow the identification of the quality of ingrowth of bone into the hydroxyapatite layer of the implant. Incomplete integration of the implant with a remaining gap of less than 0.3 mm caused the presence of a highly refractive edge at the implant/bone border. In contrast, implants with bone fully grown onto the surface did not display a refractive signal. Therefore, the refractive signal could be utilized to diagnose implant healing and/or loosening.
NASA Technical Reports Server (NTRS)
Tschunko, H. F. A.
1983-01-01
Reference is made to a study by Tschunko (1979) which discussed how apodization modifies the modulation transfer function for various central obstruction ratios. It is shown here how apodization, together with the central obstruction ratio, modifies the point spread function, which is the basic element for the comparison of imaging performance and for the derivation of energy integrals and other functions. At high apodization levels and lower central obstructions (less than 0.1), new extended radial zones are formed in the outer part of the central ring groups. These transmutations of the image functions are of more than theoretical interest, especially if the irradiance levels in the outer ring zones are to be compared to the background irradiance levels. Attention is then given to the energy distribution in point images generated by annular apertures apodized by various transmission functions. The total energy functions are derived; partial energy integrals are determined; and background irradiance functions are discussed.
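The kind of computation involved can be sketched directly: build an annular pupil with a central obstruction ratio and an apodizing transmission function, take the squared modulus of its Fourier transform as the point spread function, and accumulate the encircled-energy integral. The grid size, obstruction ratio, and Gaussian apodization below are illustrative assumptions, not the values studied by Tschunko.

import numpy as np

n = 512
y, x = (np.mgrid[0:n, 0:n] - n / 2) / (n / 8)      # pupil coordinates, unit radius
r = np.hypot(x, y)
eps = 0.1                                          # central obstruction ratio
apod = np.exp(-4.0 * r ** 2)                       # Gaussian apodization (illustrative)
pupil = np.where((r <= 1.0) & (r >= eps), apod, 0.0)

# Point spread function: squared modulus of the pupil's Fourier transform.
psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
psf /= psf.sum()

# Encircled-energy integral as a function of image-plane radius (pixels).
ry, rx = np.mgrid[0:n, 0:n] - n / 2
rad = np.hypot(rx, ry)
radii = np.arange(1, 60)
encircled = [psf[rad <= R].sum() for R in radii]
print("energy within the first few radii:", np.round(encircled[:5], 3))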
Compton camera imaging and the cone transform: a brief overview
NASA Astrophysics Data System (ADS)
Terzioglu, Fatma; Kuchment, Peter; Kunyansky, Leonid
2018-05-01
While most Radon transform applications to imaging involve integrations over smooth sub-manifolds of the ambient space, lately important situations have appeared where the integration surfaces are conical. Three such applications are single scatter optical tomography, Compton camera medical imaging, and homeland security. In spite of the similar surfaces of integration, the data and the inverse problems associated with these modalities differ significantly. In this article, we present a brief overview of the mathematics arising in Compton camera imaging. In particular, the emphasis is made on the overdetermined data and flexible geometry of the detectors. For the detailed results, as well as other approaches (e.g. smaller-dimensional data or restricted geometry of detectors) the reader is directed to the relevant publications. Only a brief description and some references are provided for the single scatter optical tomography. This work was supported in part by NSF DMS grants 1211463 (the first two authors), 1211521 and 141877 (the third author), as well as a College of Science of Texas A&M University grant.
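For orientation, one common way to write the cone transform is given below; conventions (weighting and parametrization of the cone) differ between the publications cited above, so this should be read as a generic form rather than the specific operator analyzed in the article.

% The data assign to each vertex u, axis direction beta, and half-opening
% angle psi the surface integral of the source distribution f over that cone.
\[
  \mathcal{C}f(u,\beta,\psi)
  = \int_{\{x \,:\, (x-u)\cdot\beta = |x-u|\cos\psi\}} f(x)\, \mathrm{d}S(x),
  \qquad u \in \mathbb{R}^3,\ \beta \in S^{2},\ \psi \in \left(0,\tfrac{\pi}{2}\right).
\]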
Qualitative evaluation of titanium implant integration into bone by diffraction enhanced imaging
NASA Astrophysics Data System (ADS)
Wagner, A.; Sachse, A.; Keller, M.; Aurich, M.; Wetzel, W.-D.; Hortschansky, P.; Schmuck, K.; Lohmann, M.; Reime, B.; Metge, J.; Arfelli, F.; Menk, R.; Rigon, L.; Muehleman, C.; Bravin, A.; Coan, P.; Mollenhauer, J.
2006-03-01
Diffraction enhanced imaging (DEI) uses refraction of x-rays at edges, which allows pronounced visualization of material borders and rejects scattering which often obscures edges and blurs images. Here, the first evidence is presented that, using DEI, a destruction-free evaluation of the quality of integration of metal implants into bone is possible. Experiments were performed in rabbits and sheep with model implants to investigate the option for DEI as a tool in implant research. The results obtained from DEI were compared to conventional histology obtained from the specimens. DE images allow the identification of the quality of ingrowth of bone into the hydroxyapatite layer of the implant. Incomplete integration of the implant with a remaining gap of less than 0.3 mm caused the presence of a highly refractive edge at the implant/bone border. In contrast, implants with bone fully grown onto the surface did not display a refractive signal. Therefore, the refractive signal could be utilized to diagnose implant healing and/or loosening.
Integrated light and scanning electron microscopy of GFP-expressing cells.
Peddie, Christopher J; Liv, Nalan; Hoogenboom, Jacob P; Collinson, Lucy M
2014-01-01
Integration of light and electron microscopes provides imaging tools in which fluorescent proteins can be localized to cellular structures with a high level of precision. However, until recently, there were few methods that could deliver specimens with sufficient fluorescent signal and electron contrast for dual imaging without intermediate staining steps. Here, we report protocols that preserve green fluorescent protein (GFP) in whole cells and in ultrathin sections of resin-embedded cells, with membrane contrast for integrated imaging. Critically, GFP is maintained in a stable and active state within the vacuum of an integrated light and scanning electron microscope. For light microscopists, additional structural information gives context to fluorescent protein expression in whole cells, illustrated here by analysis of filopodia and focal adhesions in Madin Darby canine kidney cells expressing GFP-Paxillin. For electron microscopists, GFP highlights the proteins of interest within the architectural space of the cell, illustrated here by localization of the conical lipid diacylglycerol to cellular membranes. © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Ando, K. J.
1971-01-01
Description of the performance of the silicon diode array vidicon - an imaging sensor which possesses wide spectral response, high quantum efficiency, and linear response. These characteristics, in addition to its inherent ruggedness, simplicity, and long-term stability and operating life, make this device potentially of great usefulness for ground-based and spaceborne planetary and stellar imaging applications. However, integration and charge storage for periods greater than approximately five seconds are not possible at room temperature because of diode saturation from dark current buildup. Since dark current can be reduced by cooling, measurements were made in the range from -65 to 25 C. Results are presented on the extension of integration, storage, and slow scan capabilities achievable by cooling. Integration times in excess of 20 minutes were achieved at the lowest temperatures. The measured results are compared with results obtained with other types of sensors, and the advantages of the silicon diode array vidicon for imaging applications are discussed.
Images Every American Should Know: Developing the "Cultural Image Literacy Assessment-USA"
ERIC Educational Resources Information Center
Emanuel, Richard; Baker, Kim; Challons-Lipton, Siu
2016-01-01
This paper describes the evolution of the "Cultural Image Literacy Assessment-USA"©. This assessment represents an important first step in measuring image literacy within a culture. Visual literacy is an integral part of all cultures. The framework used in creating an assessment of cultural image literacy in the United States could be…
Electronic photography at NASA Langley Research Center
NASA Technical Reports Server (NTRS)
Holm, Jack M.
1994-01-01
The field of photography began a metamorphosis several years ago which promises to fundamentally change how images are captured, transmitted, and output. At this time the metamorphosis is still in the early stages, but already new processes, hardware, and software are allowing many individuals and organizations to explore the entry of imaging into the information revolution. Exploration at this time is prerequisite to leading expertise in the future, and a number of branches at LaRC have ventured into electronic and digital imaging. Their progress until recently has been limited by two factors: the lack of an integrated approach and the lack of an electronic photographic capability. The purpose of the research conducted was to address these two items. In some respects, the lack of electronic photographs has prevented application of an integrated imaging approach. Since everything could not be electronic, the tendency was to work with hard copy. Over the summer, the Photographics Section has set up an Electronic Photography Laboratory. This laboratory now has the capability to scan film images, process the images, and output the images in a variety of forms. Future plans also include electronic capture capability. The current forms of image processing available include sharpening, noise reduction, dust removal, tone correction, color balancing, image editing, cropping, electronic separations, and halftoning. Output choices include customer specified electronic file formats which can be output on magnetic or optical disks or over the network, 4400 line photographic quality prints and transparencies to 8.5 by 11 inches, and 8000 line film negatives and transparencies to 4 by 5 inches. The problem of integrated imaging involves a number of branches at LaRC including Visual Imaging, Research Printing and Publishing, Data Visualization and Animation, Advanced Computing, and various research groups. These units must work together to develop common approaches to image processing and archiving. The ultimate goal is to be able to search for images using an on-line database and image catalog. These images could then be retrieved over the network as needed, along with information on the acquisition and processing prior to storage. For this goal to be realized, a number of standard processing protocols must be developed to allow the classification of images into categories. Standard series of processing algorithms can then be applied to each category (although many of these may be adaptive between images). Since the archived image files would be standardized, it should also be possible to develop standard output processing protocols for a number of output devices. If LaRC continues the research effort begun this summer, it may be one of the first organizations to develop an integrated approach to imaging. As such, it could serve as a model for other organizations in government and the private sector.
Wei, Chen-Wei; Nguyen, Thu-Mai; Xia, Jinjun; Arnal, Bastien; Wong, Emily Y.; Pelivanov, Ivan M.; O’Donnell, Matthew
2015-01-01
Because of depth-dependent light attenuation, bulky, low-repetition-rate lasers are usually used in most photoacoustic (PA) systems to provide sufficient pulse energies to image at depth within the body. However, integrating these lasers with real-time clinical ultrasound (US) scanners has been problematic because of their size and cost. In this paper, an integrated PA/US (PAUS) imaging system is presented operating at frame rates >30 Hz. By employing a portable, low-cost, low-pulse-energy (~2 mJ/pulse), high-repetition-rate (~1 kHz), 1053-nm laser, and a rotating galvo-mirror system enabling rapid laser beam scanning over the imaging area, the approach is demonstrated for potential applications requiring a few centimeters of penetration. In particular, we demonstrate here real-time (30 Hz frame rate) imaging (by combining multiple single-shot sub-images covering the scan region) of an 18-gauge needle inserted into a piece of chicken breast with subsequent delivery of an absorptive agent at more than 1-cm depth to mimic PAUS guidance of an interventional procedure. A signal-to-noise ratio of more than 35 dB is obtained for the needle in an imaging area 2.8 × 2.8 cm (depth × lateral). Higher frame rate operation is envisioned with an optimized scanning scheme. PMID:25643081
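A back-of-the-envelope check of the quoted numbers, assuming each laser pulse produces one single-shot sub-image that is then combined into a displayed frame:

rep_rate_hz = 1000          # ~1 kHz laser repetition rate
frame_rate_hz = 30          # stated real-time frame rate
pulse_energy_mj = 2         # ~2 mJ per pulse
subimages_per_frame = rep_rate_hz // frame_rate_hz
print(subimages_per_frame, "sub-images per frame,",
      subimages_per_frame * pulse_energy_mj, "mJ delivered per frame")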
Multifunctional Catheters Combining Intracardiac Ultrasound Imaging and Electrophysiology Sensing
Stephens, Douglas N.; Cannata, Jonathan; Liu, Ruibin; Zhao, Jian Zhong; Shung, K. Kirk; Nguyen, Hien; Chia, Raymond; Dentinger, Aaron; Wildes, Douglas; Thomenius, Kai E.; Mahajan, Aman; Shivkumar, Kalyanam; Kim, Kang; O’Donnell, Matthew; Nikoozadeh, Amin; Oralkan, Omer; Khuri-Yakub, Pierre T.; Sahn, David J.
2015-01-01
A family of 3 multifunctional intracardiac imaging and electrophysiology (EP) mapping catheters has been in development to help guide diagnostic and therapeutic intracardiac EP procedures. The catheter tip on the first device includes a 7.5 MHz, 64-element, side-looking phased array for high resolution sector scanning. The second device is a forward-looking catheter with a 24-element 14 MHz phased array. Both of these catheters operate on a commercial imaging system with standard software. Multiple EP mapping sensors were mounted as ring electrodes near the arrays for electrocardiographic synchronization of ultrasound images and used for unique integration with EP mapping technologies. To help establish the catheters’ ability for integration with EP interventional procedures, tests were performed in vivo in a porcine animal model to demonstrate both useful intracardiac echocardiographic (ICE) visualization and simultaneous 3-D positional information using integrated electroanatomical mapping techniques. The catheters also performed well in high frame rate imaging, color flow imaging, and strain rate imaging of atrial and ventricular structures. The companion paper of this work discusses the catheter design of the side-looking catheter with special attention to acoustic lens design. The third device in development is a 10 MHz forward-looking ring array that is to be mounted at the distal tip of a 9F catheter to permit use of the available catheter lumen for adjunctive therapy tools. PMID:18986948
Multifunctional catheters combining intracardiac ultrasound imaging and electrophysiology sensing.
Stephens, D N; Cannata, J; Liu, Ruibin; Zhao, Jian Zhong; Shung, K K; Nguyen, Hien; Chia, R; Dentinger, A; Wildes, D; Thomenius, K E; Mahajan, A; Shivkumar, K; Kim, Kang; O'Donnell, M; Nikoozadeh, A; Oralkan, O; Khuri-Yakub, P T; Sahn, D J
2008-07-01
A family of 3 multifunctional intracardiac imaging and electrophysiology (EP) mapping catheters has been in development to help guide diagnostic and therapeutic intracardiac EP procedures. The catheter tip on the first device includes a 7.5 MHz, 64-element, side-looking phased array for high resolution sector scanning. The second device is a forward-looking catheter with a 24-element 14 MHz phased array. Both of these catheters operate on a commercial imaging system with standard software. Multiple EP mapping sensors were mounted as ring electrodes near the arrays for electrocardiographic synchronization of ultrasound images and used for unique integration with EP mapping technologies. To help establish the catheters' ability for integration with EP interventional procedures, tests were performed in vivo in a porcine animal model to demonstrate both useful intracardiac echocardiographic (ICE) visualization and simultaneous 3-D positional information using integrated electroanatomical mapping techniques. The catheters also performed well in high frame rate imaging, color flow imaging, and strain rate imaging of atrial and ventricular structures. The companion paper of this work discusses the catheter design of the side-looking catheter with special attention to acoustic lens design. The third device in development is a 10 MHz forward-looking ring array that is to be mounted at the distal tip of a 9F catheter to permit use of the available catheter lumen for adjunctive therapy tools.
Integration of DICOM and openEHR standards
NASA Astrophysics Data System (ADS)
Wang, Ying; Yao, Zhihong; Liu, Lei
2011-03-01
The standard format for medical imaging storage and transmission is DICOM. openEHR is an open standard specification in health informatics that describes the management, storage, retrieval and exchange of health data in electronic health records. Considering that the integration of DICOM and openEHR is beneficial to information sharing, on the basis of an XML-based DICOM format, we developed a method of creating a DICOM imaging archetype in openEHR to enable the integration of DICOM and openEHR. Each DICOM file contains abundant imaging information. However, because reading a DICOM file involves looking up the DICOM Data Dictionary, the readability of a DICOM file has been limited. openEHR has innovatively adopted a two-level modeling method, dividing clinical information into a lower level, the information model, and an upper level, archetypes and templates. But one critical challenge posed to the development of openEHR is the information sharing problem, especially in imaging information sharing. For example, some important imaging information cannot be displayed in an openEHR file. In this paper, to enhance the readability of a DICOM file and the semantic interoperability of an openEHR file, we developed a method of mapping a DICOM file to an openEHR file by adopting the form of archetype defined in openEHR. Because an archetype has a tree structure, after mapping a DICOM file to an openEHR file, the converted information is structured in conformance with the openEHR format. This method enables the integration of DICOM and openEHR and data exchange between the two standards without losing imaging information.
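A hedged sketch of the mapping step is shown below: selected DICOM attributes are read with pydicom and rearranged into a nested, archetype-like tree. The file name and the node names are hypothetical, and a real openEHR archetype would be expressed in ADL with its own terminology bindings; the sketch only illustrates the tree-structured reorganization the paper describes.

import json
import pydicom

ds = pydicom.dcmread("example_ct_slice.dcm")     # hypothetical input file

# Hypothetical archetype-like tree; node names are illustrative only.
archetype_like = {
    "imaging_examination": {
        "subject": {"patient_id": str(ds.get("PatientID", ""))},
        "study": {
            "modality": str(ds.get("Modality", "")),
            "study_date": str(ds.get("StudyDate", "")),
        },
        "image": {
            "sop_instance_uid": str(ds.get("SOPInstanceUID", "")),
            "rows": int(ds.get("Rows", 0)),
            "columns": int(ds.get("Columns", 0)),
        },
    }
}
print(json.dumps(archetype_like, indent=2))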
Wang, Jianfeng; Zheng, Wei; Lin, Kan; Huang, Zhiwei
2016-01-01
We report the development and implementation of a unique integrated Mueller-matrix (MM) near-infrared (NIR) imaging and Mueller-matrix point-wise diffuse reflectance (DR) spectroscopy technique for improving colonic cancer detection and diagnosis. Point-wise MM DR spectra can be acquired from any suspicious tissue areas indicated by MM imaging. A total of 30 paired colonic tissue specimens (normal vs. cancer) were measured using the integrated MM imaging and point-wise MM DR spectroscopy system. Polar decomposition algorithms are employed on the acquired images and spectra to derive three polarization metrics including depolarization, diattenuation and retardance for colonic tissue characterization. The decomposition results show that tissue depolarization and retardance are significantly decreased (p<0.001, paired 2-sided Student’s t-test, n = 30), while the tissue diattenuation is significantly increased (p<0.001, paired 2-sided Student’s t-test, n = 30) in association with colonic cancer. Further partial least squares discriminant analysis (PLS-DA) and leave-one-tissue-site-out cross-validation (LOSCV) show that the combination of the three polarization metrics provides the best diagnostic accuracy of 95.0% (sensitivity: 93.3%, and specificity: 96.7%) compared to any single one of the three polarization metrics (sensitivities of 93.3%, 83.3%, and 80.0%; and specificities of 90.0%, 96.7%, and 80.0%, respectively, for the depolarization, diattenuation and retardance metrics) for colonic cancer detection. This work suggests that the integrated MM NIR imaging and point-wise MM NIR diffuse reflectance spectroscopy has the potential to improve the early detection and diagnosis of malignant lesions in the colon. PMID:27446640
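Two standard Mueller-matrix metrics can be sketched as follows. The diattenuation is taken from the first row of the matrix as in the Lu-Chipman framework; the depolarization shown is the Gil-Bernabeu depolarization index, used here as a simpler stand-in for the decomposition-based depolarization and retardance metrics of the paper. The example matrix is synthetic.

import numpy as np

def diattenuation(M):
    """Magnitude of the diattenuation vector from the first row of M."""
    return np.sqrt(M[0, 1] ** 2 + M[0, 2] ** 2 + M[0, 3] ** 2) / M[0, 0]

def depolarization_index(M):
    """Gil-Bernabeu index: 1 for non-depolarizing, 0 for an ideal depolarizer."""
    return np.sqrt((np.sum(M ** 2) - M[0, 0] ** 2) / 3.0) / M[0, 0]

M = np.diag([1.0, 0.6, 0.6, 0.4])        # synthetic partially depolarizing sample
M[0, 1] = 0.05                           # small linear diattenuation
print("diattenuation:", diattenuation(M))
print("depolarization index:", depolarization_index(M))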
Storage and distribution of pathology digital images using integrated web-based viewing systems.
Marchevsky, Alberto M; Dulbandzhyan, Ronda; Seely, Kevin; Carey, Steve; Duncan, Raymond G
2002-05-01
Health care providers have expressed increasing interest in incorporating digital images of gross pathology specimens and photomicrographs in routine pathology reports. This report describes the multiple technical and logistical challenges involved in the integration of the various components needed for the development of a system for integrated Web-based viewing, storage, and distribution of digital images in a large health system. An Oracle version 8.1.6 database was developed to store, index, and deploy pathology digital photographs via our Intranet. The database allows for retrieval of images by patient demographics or by SNOMED code information. The system operates on the Intranet of a large health system, accessible from multiple computers located within the medical center and at distant private physician offices. The images can be viewed using any of the workstations of the health system that have authorized access to our Intranet, using a standard browser or a browser configured with an external viewer or inexpensive plug-in software, such as Prizm 2.0. The images can be printed on paper or transferred to film using a digital film recorder. Digital images can also be displayed at pathology conferences by using wireless local area network (LAN) and secure remote technologies. The standardization of technologies and the adoption of a Web interface for all our computer systems allow us to distribute digital images from a pathology database to a potentially large group of users distributed in multiple locations throughout a large medical center.
Polarimetric Imaging System for Automatic Target Detection and Recognition
2000-03-01
technique shown in Figure 4(b) can also be used to integrate polarizer arrays with other types of imaging sensors, such as LWIR cameras and uncooled...vertical stripe pattern in this φ image is caused by nonuniformities in the particular polarizer array used. 2. CIRCULAR POLARIZATION IMAGING USING
Assessment of visual communication by information theory
NASA Astrophysics Data System (ADS)
Huck, Friedrich O.; Fales, Carl L.
1994-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Transpersonal Psychology: Guiding Image for the Advancement of International Adult Education.
ERIC Educational Resources Information Center
Boucouvalas, Marcie
1984-01-01
The importance of guiding images is examined, along with analyses of the images of humankind and worldviews previously offered by psychology and adopted by society-at-large. The article focuses on the contribution of transpersonal psychology, the discipline's fourth force, which integrates and extends prior guiding images. (CT)
Integrating Digital Images into the Art and Art History Curriculum.
ERIC Educational Resources Information Center
Pitt, Sharon P.; Updike, Christina B.; Guthrie, Miriam E.
2002-01-01
Describes an Internet-based image database system connected to a flexible, in-class teaching and learning tool (the Madison Digital Image Database) developed at James Madison University to bring digital images to the arts and humanities classroom. Discusses content, copyright issues, ensuring system effectiveness, instructional impact, sharing the…
Information theoretic approach for assessing image fidelity in photon-counting arrays.
Narravula, Srikanth R; Hayat, Majeed M; Javidi, Bahram
2010-02-01
The method of photon-counting integral imaging has been introduced recently for three-dimensional object sensing, visualization, recognition and classification of scenes under photon-starved conditions. This paper presents an information-theoretic model for the photon-counting imaging (PCI) method, thereby providing a rigorous foundation for the merits of PCI in terms of image fidelity. This, in turn, can facilitate our understanding of the demonstrated success of photon-counting integral imaging in compressive imaging and classification. The mutual information between the source and photon-counted images is derived in a Markov random field setting and normalized by the source image's entropy, yielding a fidelity metric between zero and unity, corresponding respectively to complete loss of information and full preservation of information. Calculations suggest that the PCI fidelity metric increases with spatial correlation in the source image, from which we infer that the PCI method is particularly effective for source images with high spatial correlation; the metric also increases as photon-number uncertainty is reduced. As an application of the theory, an image-classification problem is considered, showing a congruous relationship between the fidelity metric and classifier performance.
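A toy numerical analogue of that fidelity metric is sketched below: the source image is photon-counted by Poisson sampling, and the mutual information between source and counts, estimated from a joint histogram, is divided by the (binned) source entropy so that the result lies between zero and one. The paper's Markov-random-field derivation is not reproduced, and the image, photon budgets, and bin count are arbitrary; the sketch only illustrates that the metric grows as photon-number uncertainty shrinks.

```python
# Toy sketch of a [0,1] fidelity metric: mutual information between a source
# image and its Poisson photon-counted version, normalized by the source
# entropy. Histogram estimate only; not the paper's MRF-based derivation.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

def pci_fidelity(source: np.ndarray, mean_photons: float, bins: int = 16) -> float:
    p = source / source.sum()                              # normalized irradiance
    counts = rng.poisson(mean_photons * source.size * p)   # photon-counted image
    joint, _, _ = np.histogram2d(source.ravel(), counts.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz]))
    hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    return float(mi / hx)

source = gaussian_filter(rng.random((64, 64)), 2) + 0.1    # arbitrary test image
for n in (0.5, 5.0, 50.0):                                 # mean photons per pixel
    print(f"mean photons/pixel = {n:5.1f}  fidelity = {pci_fidelity(source, n):.3f}")
```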
Kuzmak, P. M.; Dayhoff, R. E.
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner. PMID:1482906
Kuzmak, P M; Dayhoff, R E
1992-01-01
There is a wide range of requirements for digital hospital imaging systems. Radiology needs very high resolution black and white images. Other diagnostic disciplines need high resolution color imaging capabilities. Images need to be displayed in many locations throughout the hospital. Different imaging systems within a hospital need to cooperate in order to show the whole picture. At the Baltimore VA Medical Center, the DHCP Integrated Imaging System and a commercial Picture Archiving and Communication System (PACS) work in concert to provide a wide range of departmental and hospital-wide imaging capabilities. An interface between the DHCP and the Siemens-Loral PACS systems enables patient text and image data to be passed between the two systems. The interface uses ACR-NEMA 2.0 Standard messages extended with shadow groups based on draft ACR-NEMA 3.0 prototypes. A Novell file server, accessible to both systems via Ethernet, is used to communicate all the messages. Patient identification information, orders, ADT, procedure status, changes, patient reports, and images are sent between the two systems across the interface. The systems together provide an extensive set of imaging capabilities for both the specialist and the general practitioner.
From Panoramic Photos to a Low-Cost Photogrammetric Workflow for Cultural Heritage 3d Documentation
NASA Astrophysics Data System (ADS)
D'Annibale, E.; Tassetti, A. N.; Malinverni, E. S.
2013-07-01
The research aims to optimize a workflow for architecture documentation: starting from panoramic photos and drawing on available instruments and technologies, it proposes an integrated, quick and low-cost solution for Virtual Architecture. The broader research background shows how to use spherical panoramic images for architectural metric survey. The input data (oriented panoramic photos), the level of reliability and Image-based Modeling methods constitute an integrated and flexible 3D reconstruction approach: from the professional survey of cultural heritage to its communication in a virtual museum. The proposed work results from the integration and implementation of different techniques (Multi-Image Spherical Photogrammetry, Structure from Motion, Image-based Modeling) with the aim of achieving high metric accuracy and photorealistic performance. Different documentation options are possible within the proposed workflow: from the virtual navigation of spherical panoramas to complex solutions of simulation and virtual reconstruction. VR tools allow the integration of different technologies and the development of new solutions for virtual navigation. Image-based Modeling techniques allow 3D model reconstruction with photorealistic, high-resolution texture. The high resolution of the panoramic photos and the algorithms for panorama orientation and photogrammetric restitution ensure high accuracy and high-resolution texture. Automated techniques and their subsequent integration are the subject of this research. The data, suitably processed and integrated, provide different levels of analysis and virtual reconstruction, combining photogrammetric accuracy with the photorealistic performance of the modeled surfaces. Lastly, a new solution for virtual navigation is tested. Within a single environment, it offers the chance to interact with high-resolution oriented spherical panoramas and the 3D reconstructed model at once.
Are We Correctly Measuring Star-Formation Rates?
NASA Astrophysics Data System (ADS)
McQuinn, Kristen B.; Skillman, Evan D.; Dolphin, Andrew E.; Mitchell, Noah P.
2017-01-01
Integrating our knowledge of star formation (SF) traced by observations at different wavelengths is essential for correctly interpreting and comparing SF activity in a variety of systems and environments. This study compares extinction-corrected, integrated ultraviolet (UV) emission from resolved galaxies with color-magnitude diagram (CMD) based star-formation rates (SFRs) derived from resolved stellar populations and CMD fitting techniques in 19 nearby starburst and post-starburst dwarf galaxies. The data sets are from the panchromatic Starburst Irregular Dwarf Survey (STARBIRDS) and include deep legacy GALEX UV imaging, Hubble Space Telescope optical imaging, and Spitzer MIPS imaging. For the majority of the sample, the integrated near-UV fluxes predicted from the CMD-based SFRs (using four different models) agree with the measured, extinction-corrected, integrated near-UV fluxes from GALEX images, but the far-UV (FUV) predicted fluxes do not. Furthermore, we find a systematic deviation between the SFRs based on integrated FUV luminosities and existing scaling relations, and the SFRs based on the resolved stellar populations. This offset is not driven by different SF timescales, variations in SFRs, UV attenuation, or stochastic effects. This first comparison between CMD-based SFRs and an integrated FUV emission SFR indicator suggests that the most likely cause of the discrepancy is the theoretical FUV-SFR calibration from stellar evolutionary libraries and/or stellar atmospheric models. We present an empirical calibration of the FUV-based SFR relation for dwarf galaxies, with uncertainties, which is ~53% larger than previous relations. These results have significant implications for measuring FUV-based SFRs of high-redshift galaxies.
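To make the size of the recalibration concrete, the sketch below converts a hypothetical FUV luminosity to an SFR with the commonly used baseline calibration of Kennicutt & Evans (2012), log C_FUV = 43.35, and then scales it by the ~53% quoted above; the baseline constant and the luminosity are assumptions for illustration, not values taken from this study.

```python
# Illustration only: effect of a ~53% larger FUV calibration on an inferred SFR.
# The baseline constant (log C_FUV = 43.35, Kennicutt & Evans 2012) and the
# luminosity below are assumed placeholders, not values from this study.
def sfr_fuv(l_fuv_erg_s: float, log_c: float = 43.35) -> float:
    """SFR in Msun/yr from an FUV luminosity in erg/s: SFR = L_FUV / 10**log_c."""
    return l_fuv_erg_s / 10.0 ** log_c

l_fuv = 1.0e41                       # hypothetical dwarf-galaxy FUV luminosity
baseline = sfr_fuv(l_fuv)
recalibrated = 1.53 * baseline       # "~53% larger" empirical calibration
print(f"baseline SFR = {baseline:.4f} Msun/yr, recalibrated SFR = {recalibrated:.4f} Msun/yr")
```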
NASA Astrophysics Data System (ADS)
Song, Wei; Xu, Qiang; Zhang, Yang; Zhan, Yang; Zheng, Wei; Song, Liang
2016-08-01
The ability to obtain comprehensive structural and functional information from intact biological tissue in vivo is highly desirable for many important biomedical applications, including cancer and brain studies. Here, we developed a fully integrated multimodal microscopy that can provide photoacoustic (optical absorption), two-photon (fluorescence), and second harmonic generation (SHG) information from tissue in vivo, with intrinsically co-registered images. Moreover, using a carefully designed optical-acoustic coupling configuration, a high-frequency miniature ultrasonic transducer was integrated into a water-immersion optical objective, thus allowing all three imaging modalities to provide a high lateral resolution of ~290 nm with reflection-mode imaging capability, which is essential for studying intricate anatomy, such as that of the brain. Taking advantage of the complementary and comprehensive contrasts of the system, we demonstrated high-resolution imaging of various tissues in living mice, including microvasculature (by photoacoustics), epidermis cells, cortical neurons (by two-photon fluorescence), and extracellular collagen fibers (by SHG). The intrinsic image co-registration of the three modalities conveniently provided improved visualization and understanding of the tissue microarchitecture. The reported results suggest that, by revealing complementary tissue microstructures in vivo, this multimodal microscopy can potentially facilitate a broad range of biomedical studies, such as imaging of the tumor microenvironment and neurovascular coupling.
Sun, Peng; Zhou, Haoyin; Ha, Seongmin; Hartaigh, Bríain ó; Truong, Quynh A.; Min, James K.
2016-01-01
In clinical cardiology, both anatomy and physiology are needed to diagnose cardiac pathologies. CT imaging and computer simulations provide valuable and complementary data for this purpose. However, it remains challenging to gain useful information from the large amount of high-dimensional diverse data. The current tools are not adequately integrated to visualize anatomic and physiologic data from a complete yet focused perspective. We introduce a new computer-aided diagnosis framework, which allows for comprehensive modeling and visualization of cardiac anatomy and physiology from CT imaging data and computer simulations, with a primary focus on ischemic heart disease. The following visual information is presented: (1) Anatomy from CT imaging: geometric modeling and visualization of cardiac anatomy, including four heart chambers, left and right ventricular outflow tracts, and coronary arteries; (2) Function from CT imaging: motion modeling, strain calculation, and visualization of four heart chambers; (3) Physiology from CT imaging: quantification and visualization of myocardial perfusion and contextual integration with coronary artery anatomy; (4) Physiology from computer simulation: computation and visualization of hemodynamics (e.g., coronary blood velocity, pressure, shear stress, and fluid forces on the vessel wall). Importantly, feedback from cardiologists has confirmed the practical utility of integrating these features for the purpose of computer-aided diagnosis of ischemic heart disease. PMID:26863663
Design and fabrication of vertically-integrated CMOS image sensors.
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors.
Airborne multidimensional integrated remote sensing system
NASA Astrophysics Data System (ADS)
Xu, Weiming; Wang, Jianyu; Shu, Rong; He, Zhiping; Ma, Yanhua
2006-12-01
In this paper, we present an airborne multidimensional integrated remote sensing system that consists of an imaging spectrometer, a three-line scanner, a laser ranger, a position & orientation subsystem and a PAV30 stabilizer. The imaging spectrometer is composed of two identical push-broom hyperspectral imagers, each with a field of view of 22°, providing a combined field of view of 42°. The spectral range of the imaging spectrometer is from 420 nm to 900 nm, and its spectral resolution is 5 nm. The three-line scanner is composed of two panchromatic CCDs and an RGB CCD, with a 20° stereo angle and a 10 cm GSD (Ground Sample Distance) at a 1000 m flying height. The laser ranger provides height data for three points every four scanning lines of the spectral imager, and those three points are calibrated to match the corresponding pixels of the spectral imager. The POS/AV 510, the airborne exterior-orientation measuring product of the Canadian Applanix Corporation, is used as the position & orientation subsystem; its post-processed attitude accuracy is 0.005° when combined with base-station data. The airborne multidimensional integrated remote sensing system was implemented successfully, performed its first flight experiment in April 2005, and obtained satisfactory data.
Design and Fabrication of Vertically-Integrated CMOS Image Sensors
Skorka, Orit; Joseph, Dileepan
2011-01-01
Technologies to fabricate integrated circuits (IC) with 3D structures are an emerging trend in IC design. They are based on vertical stacking of active components to form heterogeneous microsystems. Electronic image sensors will benefit from these technologies because they allow increased pixel-level data processing and device optimization. This paper covers general principles in the design of vertically-integrated (VI) CMOS image sensors that are fabricated by flip-chip bonding. These sensors are composed of a CMOS die and a photodetector die. As a specific example, the paper presents a VI-CMOS image sensor that was designed at the University of Alberta, and fabricated with the help of CMC Microsystems and Micralyne Inc. To realize prototypes, CMOS dies with logarithmic active pixels were prepared in a commercial process, and photodetector dies with metal-semiconductor-metal devices were prepared in a custom process using hydrogenated amorphous silicon. The paper also describes a digital camera that was developed to test the prototype. In this camera, scenes captured by the image sensor are read using an FPGA board, and sent in real time to a PC over USB for data processing and display. Experimental results show that the VI-CMOS prototype has a higher dynamic range and a lower dark limit than conventional electronic image sensors. PMID:22163860
Disconnected aging: cerebral white matter integrity and age-related differences in cognition.
Bennett, I J; Madden, D J
2014-09-12
Cognition arises as a result of coordinated processing among distributed brain regions and disruptions to communication within these neural networks can result in cognitive dysfunction. Cortical disconnection may thus contribute to the declines in some aspects of cognitive functioning observed in healthy aging. Diffusion tensor imaging (DTI) is ideally suited for the study of cortical disconnection as it provides indices of structural integrity within interconnected neural networks. The current review summarizes results of previous DTI aging research with the aim of identifying consistent patterns of age-related differences in white matter integrity, and of relationships between measures of white matter integrity and behavioral performance as a function of adult age. We outline a number of future directions that will broaden our current understanding of these brain-behavior relationships in aging. Specifically, future research should aim to (1) investigate multiple models of age-brain-behavior relationships; (2) determine the tract-specificity versus global effect of aging on white matter integrity; (3) assess the relative contribution of normal variation in white matter integrity versus white matter lesions to age-related differences in cognition; (4) improve the definition of specific aspects of cognitive functioning related to age-related differences in white matter integrity using information processing tasks; and (5) combine multiple imaging modalities (e.g., resting-state and task-related functional magnetic resonance imaging; fMRI) with DTI to clarify the role of cerebral white matter integrity in cognitive aging. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Disconnected Aging: Cerebral White Matter Integrity and Age-Related Differences in Cognition
Bennett, Ilana J.; Madden, David J.
2013-01-01
Cognition arises as a result of coordinated processing among distributed brain regions and disruptions to communication within these neural networks can result in cognitive dysfunction. Cortical disconnection may thus contribute to the declines in some aspects of cognitive functioning observed in healthy aging. Diffusion tensor imaging (DTI) is ideally suited for the study of cortical disconnection as it provides indices of structural integrity within interconnected neural networks. The current review summarizes results of previous DTI aging research with the aim of identifying consistent patterns of age-related differences in white matter integrity, and of relationships between measures of white matter integrity and behavioral performance as a function of adult age. We outline a number of future directions that will broaden our current understanding of these brain-behavior relationships in aging. Specifically, future research should aim to (1) investigate multiple models of age-brain-behavior relationships; (2) determine the tract-specificity versus global effect of aging on white matter integrity; (3) assess the relative contribution of normal variation in white matter integrity versus white matter lesions to age-related differences in cognition; (4) improve the definition of specific aspects of cognitive functioning related to age-related differences in white matter integrity using information processing tasks; and (5) combine multiple imaging modalities (e.g., resting-state and task-related functional magnetic resonance imaging; fMRI) with DTI to clarify the role of cerebral white matter integrity in cognitive aging. PMID:24280637
Buckler, Andrew J; Liu, Tiffany Ting; Savig, Erica; Suzek, Baris E; Ouellette, M; Danagoulian, J; Wernsing, G; Rubin, Daniel L; Paik, David
2013-08-01
A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.
ERIC Educational Resources Information Center
Liou, Wei-Kai; Bhagat, Kaushal Kumar; Chang, Chun-Yen
2018-01-01
The aim of this study is to design and implement a digital interactive globe system (DIGS), by integrating low-cost equipment to make DIGS cost-effective. DIGS includes a data processing unit, a wireless control unit, an image-capturing unit, a laser emission unit, and a three-dimensional hemispheric body-imaging screen. A quasi-experimental study…
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk
2010-01-01
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than −40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of −20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe ‘ripples’ when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to changes in the target or the medium resulting from motion or different acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 °C to 71 °C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging. PMID:20224162
Jeong, Jong Seob; Cannata, Jonathan Matthew; Shung, K Kirk
2010-04-07
It was previously demonstrated that it is feasible to simultaneously perform ultrasound therapy and imaging of a coagulated lesion during treatment with an integrated transducer that is capable of high intensity focused ultrasound (HIFU) and B-mode ultrasound imaging. It was found that coded excitation and fixed notch filtering upon reception could significantly reduce interference caused by the therapeutic transducer. During HIFU sonication, the imaging signal generated with coded excitation and fixed notch filtering had a range side-lobe level of less than -40 dB, while traditional short-pulse excitation and fixed notch filtering produced a range side-lobe level of -20 dB. The shortcoming is, however, that relatively complicated electronics may be needed to utilize coded excitation in an array imaging system. It is for this reason that in this paper an adaptive noise canceling technique is proposed to improve image quality by minimizing not only the therapeutic interference, but also the remnant side-lobe 'ripples' when using the traditional short-pulse excitation. The performance of this technique was verified through simulation and experiments using a prototype integrated HIFU/imaging transducer. Although it is known that the remnant ripples are related to the notch attenuation value of the fixed notch filter, in reality, it is difficult to find the optimal notch attenuation value due to changes in the target or the medium resulting from motion or different acoustic properties, even during a single sonication pulse. In contrast, the proposed adaptive noise canceling technique is capable of optimally minimizing both the therapeutic interference and residual ripples without such constraints. The prototype integrated HIFU/imaging transducer is composed of three rectangular elements. The 6 MHz center element is used for imaging and the outer two identical 4 MHz elements work together to transmit the HIFU beam. Two HIFU elements of 14.4 mm × 20.0 mm dimensions could increase the temperature of the soft biological tissue from 55 °C to 71 °C within 60 s. Two types of experiments for simultaneous therapy and imaging were conducted to acquire a single scan-line and B-mode image with an aluminum plate and a slice of porcine muscle, respectively. The B-mode image was obtained using the single element imaging system during HIFU beam transmission. The experimental results proved that the combination of the traditional short-pulse excitation and the adaptive noise canceling method could significantly reduce therapeutic interference and remnant ripples and thus may be a better way to implement real-time simultaneous therapy and imaging.
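The abstracts above do not spell out which adaptive algorithm is used, so the following sketch shows a generic least-mean-squares (LMS) adaptive noise canceller as a stand-in: a reference input correlated with the HIFU interference is filtered and subtracted from the received imaging signal, and the error drives the weight update. All signal parameters (sampling rate, frequencies, amplitudes) are hypothetical.

```python
# Generic LMS adaptive noise canceller (a textbook stand-in; the papers above do
# not specify the adaptive algorithm). primary = echo + HIFU interference,
# reference = a signal correlated with the interference. Parameters hypothetical.
import numpy as np

def lms_cancel(primary, reference, taps=32, mu=0.01):
    w = np.zeros(taps)
    cleaned = np.zeros_like(primary)
    for n in range(taps - 1, len(primary)):
        x = reference[n - taps + 1:n + 1][::-1]   # most recent reference samples
        cleaned[n] = primary[n] - w @ x           # error = interference-suppressed signal
        w += 2 * mu * cleaned[n] * x              # LMS weight update
    return cleaned

fs = 40e6                                         # hypothetical sampling rate (Hz)
t = np.arange(0, 50e-6, 1 / fs)
echo = np.exp(-((t - 25e-6) / 0.5e-6) ** 2) * np.sin(2 * np.pi * 6e6 * t)   # 6 MHz imaging echo
hifu = 0.8 * np.sin(2 * np.pi * 4e6 * t)          # 4 MHz therapeutic interference
cleaned = lms_cancel(echo + hifu, hifu)
print(np.std(hifu), np.std(cleaned[200:] - echo[200:]))   # residual interference drops
```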
Integrated circuits for volumetric ultrasound imaging with 2-D CMUT arrays.
Bhuyan, Anshuman; Choe, Jung Woo; Lee, Byung Chul; Wygant, Ira O; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T
2013-12-01
Real-time volumetric ultrasound imaging systems require transmit and receive circuitry to generate ultrasound beams and process received echo signals. The complexity of building such a system is high due to the requirement that the front-end electronics be very close to the transducer. A large number of elements also need to be interfaced to the back-end system, and image processing of a large dataset could affect the imaging volume rate. In this work, we present a 3-D imaging system using capacitive micromachined ultrasonic transducer (CMUT) technology that addresses many of the challenges in building such a system. We demonstrate two approaches to integrating the transducer and the front-end electronics. The transducer is a 5-MHz CMUT array with an 8 mm × 8 mm aperture size. The aperture consists of 1024 elements (32 × 32) with an element pitch of 250 μm. An integrated circuit (IC) consists of a transmit beamformer and receive circuitry to improve the noise performance of the overall system. The assembly was interfaced with an FPGA and a back-end system (comprising a data acquisition system and a PC). The FPGA provided the digital I/O signals for the IC, and the back-end system was used to process the received RF echo data (from the IC) and reconstruct the volume image using a phased array imaging approach. Imaging experiments were performed using wire and spring targets, a ventricle model and a human prostate. Real-time volumetric images were captured at 5 volumes per second and are presented in this paper.
Liu, Yu; Leng, Shuai; Michalak, Gregory J; Vrieze, Thomas J; Duan, Xinhui; Qu, Mingliang; Shiung, Maria M; McCollough, Cynthia H; Fletcher, Joel G
2014-01-01
The aim of this study was to investigate whether the integrated circuit (IC) detector results in reduced noise in computed tomography (CT) colonography (CTC). Three hundred sixty-six consecutive patients underwent clinically indicated CTC using the same CT scanner system, except for a difference in CT detectors (IC or conventional). Image noise, patient size, and scanner radiation output (volume CT dose index) were quantitatively compared between patient cohorts using each detector system, with separate comparisons for the abdomen and pelvis. For the abdomen and pelvis, despite significantly larger patient sizes in the IC detector cohort (both P < 0.001), image noise was significantly lower (both P < 0.001), whereas volume CT dose index was unchanged (both P > 0.18). Based on the observed image noise reduction, radiation dose could alternatively be reduced by approximately 20% to result in similar levels of image noise. Computed tomography colonography images acquired using the IC detector had significantly lower noise than images acquired using the conventional detector. This noise reduction can permit further radiation dose reduction in CTC.
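The roughly 20% figure follows from the usual quantum-noise assumption that image noise scales as the inverse square root of dose, as in the back-of-the-envelope check below; the 10% noise reduction used here is a hypothetical round number, not the exact measured difference.

```python
# Back-of-the-envelope check of the "~20% dose reduction" statement, assuming
# noise ~ 1/sqrt(dose). The 10% noise reduction is a hypothetical round number.
noise_ratio = 0.90                    # IC-detector noise relative to conventional
dose_ratio = noise_ratio ** 2         # dose that would restore the original noise
print(f"possible dose reduction ≈ {(1 - dose_ratio) * 100:.0f}%")   # ≈ 19%
```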
Multi-technique hybrid imaging in PET/CT and PET/MR: what does the future hold?
de Galiza Barbosa, F; Delso, G; Ter Voert, E E G W; Huellner, M W; Herrmann, K; Veit-Haibach, P
2016-07-01
Integrated positron-emission tomography and computed tomography (PET/CT) is one of the most important imaging techniques to have emerged in oncological practice in the last decade. Hybrid imaging, in general, remains a rapidly growing field, not only in developing countries, but also in western industrialised healthcare systems. A great deal of technological development and research is focused on improving hybrid imaging technology further and introducing new techniques, e.g., integrated PET and magnetic resonance imaging (PET/MRI). Additionally, there are several new PET tracers on the horizon, which have the potential to broaden clinical applications in hybrid imaging for diagnosis as well as therapy. This article aims to highlight some of the major technical and clinical advances that are currently taking place in PET/CT and PET/MRI that will potentially maintain the position of hybrid techniques at the forefront of medical imaging technologies. Copyright © 2016 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.
Microwave-excited ultrasound and thermoacoustic dual imaging
NASA Astrophysics Data System (ADS)
Ding, Wenzheng; Ji, Zhong; Xing, Da
2017-05-01
We designed a microwave-excited ultrasound (MUI) and thermoacoustic dual imaging system. Under pulsed microwave excitation, the piezoelectric transducer used for thermoacoustic signal detection also emits a highly directional ultrasonic beam based on the inverse piezoelectric effect. With this beam, the ultrasonic transmitter circuitry of a traditional ultrasound imaging (TUI) system can be replaced by a microwave source. In other words, TUI can be fully integrated into the thermoacoustic imaging system by sharing the microwave excitation source and the transducer. Moreover, the signals of the two imaging modalities do not interfere with each other owing to the difference in their sound paths, so that MUI can be performed simultaneously with microwave-induced thermoacoustic imaging. In this study, the performance characteristics and imaging capabilities of this hybrid system are demonstrated. The results indicate that our design provides an easy method for low-cost platform integration and has the potential to offer a clinically useful dual-modality tool for accurate disease detection.
A CMOS One-chip Wireless Camera with Digital Image Transmission Function for Capsule Endoscopes
NASA Astrophysics Data System (ADS)
Itoh, Shinya; Kawahito, Shoji; Terakawa, Susumu
This paper presents the design and implementation of a one-chip camera device for capsule endoscopes. The experimental chip integrates the functional circuits required for capsule endoscopes together with a digital image transmission function. The integrated functional blocks include an image array, a timing generator, a clock generator, a voltage regulator, a 10-bit cyclic A/D converter, and a BPSK modulator. It can be operated autonomously with three pins (VDD, GND, and DATAOUT). A prototype image sensor chip with 320 × 240 effective pixels was fabricated in a 0.25 μm CMOS image sensor process, and autonomous imaging was demonstrated. The chip size is 4.84 mm × 4.34 mm. With a 2.0 V power supply, the analog part consumes 950 μW and the total power consumption at 2 frames per second (fps) is 2.6 mW. Error-free image transmission over a distance of 48 cm at 2.5 Mbps, corresponding to 2 fps, was achieved using inductive coupling.
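A quick consistency check of the quoted rates (raw payload only; frame synchronization and protocol overhead are not included, which is presumably where the margin up to 2.5 Mbps goes):

```python
# Raw-payload estimate for the reported frame size, bit depth, and frame rate.
pixels = 320 * 240                    # effective pixels
bits_per_pixel = 10                   # 10-bit cyclic A/D converter
fps = 2
raw_rate_mbps = pixels * bits_per_pixel * fps / 1e6
print(raw_rate_mbps, "Mbps raw payload vs. 2.5 Mbps reported link rate")   # 1.536
```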
Smans, Kristien; Zoetelief, Johannes; Verbrugge, Beatrijs; Haeck, Wim; Struelens, Lara; Vanhavere, Filip; Bosmans, Hilde
2010-05-01
The purpose of this study was to compare and validate three methods to simulate radiographic image detectors with the Monte Carlo software MCNP/MCNPX in a time-efficient way. The first detector model was the standard semideterministic radiography tally, which has been used in previous image simulation studies. In addition to the radiography tally, two alternative stochastic detector models were developed: a perfect energy-integrating detector and a detector based on the energy absorbed in the detector material. Validation of the three image detector models was performed by comparing calculated scatter-to-primary ratios (SPRs) with the published and experimentally acquired SPR values. For mammographic applications, SPRs computed with the radiography tally were up to 44% larger than the published results, while the SPRs computed with the perfect energy-integrating detectors and the blur-free absorbed-energy detector model were, on average, 0.3% (ranging from -3% to 3%) and 0.4% (ranging from -5% to 5%) lower, respectively. For general radiography applications, the radiography tally overestimated the measured SPR by as much as 46%. The SPRs calculated with the perfect energy-integrating detectors were, on average, 4.7% (ranging from -5.3% to -4%) lower than the measured SPRs, whereas for the blur-free absorbed-energy detector model, the calculated SPRs were, on average, 1.3% (ranging from -0.1% to 2.4%) larger than the measured SPRs. For mammographic applications, both the perfect energy-integrating detector model and the blur-free energy-absorbing detector model can be used to simulate image detectors, whereas for conventional x-ray imaging using higher energies, the blur-free energy-absorbing detector model is the most appropriate image detector model. The radiography tally overestimates the scattered component and should therefore not be used to simulate radiographic image detectors.
Fan, Yingwei; Zhang, Boyu; Chang, Wei; Zhang, Xinran; Liao, Hongen
2018-03-01
Complete resection of diseased lesions reduces the recurrence of cancer, making it critical for surgical treatment. However, precisely resecting residual tumors during surgery is a challenge. A novel integrated spectral-domain optical-coherence-tomography (SD-OCT) and laser-ablation therapy system for soft-biological-tissue resection is proposed; it is a prototype optical integrated diagnosis and therapy system, i.e., an optical theranostics system. We developed an optical theranostics system that integrates SD-OCT, a laser-ablation unit, and an automatic scanning platform. The SD-OCT image of biological tissue provides an intuitive and clear view for intraoperative diagnosis and monitoring in real time. The effect of laser ablation is analyzed using a quantitative mathematical model. The automatic endoscopic scanning platform combines an endoscopic probe and an SD-OCT sample arm to provide the optical theranostic scanning motion. An optical fiber and a charge-coupled device camera are integrated into the endoscopic probe, allowing detection and coupling of the OCT aiming beam and laser spots. The integrated diagnostic and therapeutic system combines the SD-OCT imaging and laser-ablation modules with the automatic scanning platform. OCT imaging, laser-ablation treatment, and the integration and control of the diagnostic and therapeutic procedures were evaluated in phantom experiments. Furthermore, SD-OCT-guided laser ablation provided precise ablation and resection of malignant lesions in soft-biological-tissue surgery. The results demonstrated that the appropriate laser-radiation power and duration were 10 W and 10 s, respectively. In the laser-ablation evaluation experiment, the error was approximately 0.1 mm. Another validation experiment was performed to obtain OCT images of the pre- and post-ablation craters in ex vivo porcine brainstem. The preliminary experimental results show the high efficiency and feasibility of our theranostics system, which is promising for accurate resection of tumors in vivo and in situ in the future.
Navigation integrity monitoring and obstacle detection for enhanced-vision systems
NASA Astrophysics Data System (ADS)
Korn, Bernd; Doehler, Hans-Ullrich; Hecker, Peter
2001-08-01
Typically, Enhanced Vision (EV) systems consist of two main parts, sensor vision and synthetic vision. Synthetic vision usually generates a virtual out-the-window view using databases and accurate navigation data, e.g. provided by differential GPS (DGPS). The reliability of the synthetic vision depends highly on both the accuracy of the database used and the integrity of the navigation data. But especially in GPS-based systems, the integrity of the navigation cannot be guaranteed. Furthermore, only objects that are stored in the database can be displayed to the pilot. Consequently, unexpected obstacles are invisible, and this might cause severe problems. Therefore, additional information has to be extracted from sensor data to overcome these problems. In particular, the sensor data analysis has to identify obstacles and has to monitor the integrity of databases and navigation. Furthermore, if a lack of integrity arises, navigation data, e.g. the relative position of runway and aircraft, has to be extracted directly from the sensor data. The main contribution of this paper is the realization of these three sensor data analysis tasks within our EV system, which uses the HiVision 35 GHz MMW radar of EADS, Ulm, as the primary EV sensor. For integrity monitoring, objects extracted from radar images are registered with both database objects and objects (e.g. other aircraft) transmitted via data link. This results in a classification into known and unknown radar image objects and, consequently, in a validation of the integrity of the database and navigation. Furthermore, special runway structures are searched for in the radar image where they should appear. The outcome of this runway check contributes to the integrity analysis, too. Concurrently with this investigation, a radar-image-based navigation is performed, using neither precision navigation nor detailed database information, to determine the aircraft's position relative to the runway. The performance of our approach is demonstrated with real data acquired during extensive flight tests to several airports in Northern Germany.
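The known/unknown classification step can be pictured as a simple nearest-neighbour gating between object positions extracted from the radar image and positions from the database and data link, as in the sketch below; the coordinates, gate size, and function names are hypothetical and stand in for the geometry actually used by the EV sensor processing.

```python
# Minimal sketch of the known/unknown object classification: gate each
# radar-extracted object position against database and data-link objects.
# Coordinates and the 50 m gate are hypothetical.
import numpy as np

def classify_radar_objects(radar_xy, known_xy, gate_m=50.0):
    flags = []
    for p in radar_xy:
        d = np.linalg.norm(known_xy - p, axis=1).min() if len(known_xy) else np.inf
        flags.append(d <= gate_m)             # within gate -> object is "known"
    return np.array(flags)

radar_xy = np.array([[120.0, 40.0], [800.0, -30.0]])   # from radar image analysis
known_xy = np.array([[118.0, 43.0], [500.0, 10.0]])    # database + data-link objects
print(classify_radar_objects(radar_xy, known_xy))      # [ True False ] -> possible obstacle
```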
The x-ray light valve: a low-cost, digital radiographic imaging system-spatial resolution
NASA Astrophysics Data System (ADS)
MacDougall, Robert D.; Koprinarov, Ivaylo; Webster, Christie A.; Rowlands, J. A.
2007-03-01
In recent years, new x-ray radiographic systems based on large-area flat panel technology have revolutionized our capability to produce digital x-ray radiographic images. However, these active matrix flat panel imagers (AMFPIs) are extraordinarily expensive compared to the systems they are replacing. Thus there is a need for a low-cost digital imaging system for general applications in radiology. Different approaches have been considered to make lower-cost, integrated x-ray imaging devices for digital radiography, including scanned projection x-ray, an integrated approach based on computed radiography technology, and optically demagnified x-ray screen/CCD systems. These approaches suffer from either high cost or high mechanical complexity and do not have the image quality of AMFPIs. We have identified a new approach - the X-ray Light Valve (XLV). The XLV has the potential to achieve immediate readout in an integrated system with image quality comparable to AMFPIs. The XLV concept combines three well-established and hence low-cost technologies: an amorphous selenium (a-Se) layer to convert x-rays to image charge, a liquid crystal (LC) cell as an analog display, and an optical scanner for image digitization. Here we investigate the spatial resolution possible with XLV systems. a-Se layers and LC cells have each been shown separately to have inherently very high spatial resolution. Due to the close electrostatic coupling in the XLV, it can be expected that the spatial resolution of this system will also be very high. A prototype XLV was made and a typical office scanner was used for image digitization. The Modulation Transfer Function was measured and the limiting factor was seen to be the optical scanner. However, even with this limitation the XLV system is able to meet or exceed the resolution requirements for chest radiography.
NASA Astrophysics Data System (ADS)
Willis, Kyle V.; Srogi, LeeAnn; Lutz, Tim; Monson, Frederick C.; Pollock, Meagen
2017-12-01
Textures and compositions are critical information for interpreting rock formation. Existing methods to integrate both types of information favor high-resolution images of mineral compositions over small areas or low-resolution images of larger areas for phase identification. The method in this paper produces images of individual phases in which textural and compositional details are resolved over three orders of magnitude, from tens of micrometers to tens of millimeters. To construct these images, called Phase Composition Maps (PCMs), we make use of the resolution in backscattered electron (BSE) images and calibrate the gray scale values with mineral analyses by energy-dispersive X-ray spectrometry (EDS). The resulting images show the area of a standard thin section (roughly 40 mm × 20 mm) with spatial resolution as good as 3.5 μm/pixel, or more than 81 000 pixels/mm², comparable to the resolution of X-ray element maps produced by wavelength-dispersive spectrometry (WDS). Procedures to create PCMs for mafic igneous rocks with multivariate linear regression models for minerals with solid solution (olivine, plagioclase feldspar, and pyroxenes) are presented and are applicable to other rock types. PCMs are processed using threshold functions based on the regression models to image specific composition ranges of minerals. PCMs are constructed using widely-available instrumentation: a scanning-electron microscope (SEM) with BSE and EDS X-ray detectors and standard image processing software such as ImageJ and Adobe Photoshop. Three brief applications illustrate the use of PCMs as petrologic tools: to reveal mineral composition patterns at multiple scales; to generate crystal size distributions for intracrystalline compositional zones and compare growth over time; and to image spatial distributions of minerals at different stages of magma crystallization by integrating textures and compositions with thermodynamic modeling.
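The calibration and threshold steps can be sketched as follows: fit a regression of composition on BSE gray value at the EDS spot locations, predict composition for every pixel of the BSE image, and keep only the pixels falling in a chosen compositional window. The numbers below are synthetic placeholders, and a single predictor is used instead of the paper's multivariate models.

```python
# Simplified Phase Composition Map sketch: calibrate BSE gray values against
# spot EDS analyses, predict composition per pixel, then threshold one
# compositional window. Synthetic data; a single predictor stands in for the
# paper's multivariate regressions.
import numpy as np

gray_at_spots = np.array([ 90., 105., 118., 131., 145., 160.])   # BSE gray at EDS spots
an_content    = np.array([ 35.,  45.,  52.,  60.,  68.,  78.])   # measured mol% An

A = np.vstack([gray_at_spots, np.ones_like(gray_at_spots)]).T
(a, b), *_ = np.linalg.lstsq(A, an_content, rcond=None)          # An = a*gray + b

bse_image = np.random.default_rng(0).integers(80, 170, size=(512, 512)).astype(float)
an_map = a * bse_image + b                                       # predicted composition

# threshold function: image only the An60-An80 range of plagioclase
pcm = np.where((an_map >= 60) & (an_map <= 80), an_map, np.nan)
print(f"calibration: An = {a:.2f}*gray + {b:.1f}; pixels kept: {np.isfinite(pcm).mean():.2%}")
```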
Image fusion via nonlocal sparse K-SVD dictionary learning.
Li, Ying; Li, Fangyi; Bai, Bendu; Shen, Qiang
2016-03-01
Image fusion aims to merge two or more images of the same scene, captured by various sensors, to construct a more informative image by integrating their details. Generally, such integration is achieved through the manipulation of the representations of the images concerned. Sparse representation plays an important role in the effective description of images, offering great potential in a variety of image processing tasks, including image fusion. Supported by sparse representation, in this paper, an approach for image fusion using a novel dictionary learning scheme is proposed. The nonlocal self-similarity property of the images is exploited, not only at the stage of learning the underlying description dictionary but also during the process of image fusion. In particular, the property of nonlocal self-similarity is combined with the traditional sparse dictionary. This results in an improved learned dictionary, hereafter referred to as the nonlocal sparse K-SVD dictionary (where K-SVD stands for the K-times singular value decomposition commonly used in the literature), abbreviated to NL_SK_SVD. The NL_SK_SVD dictionary is applied to image fusion using simultaneous orthogonal matching pursuit. The proposed approach is evaluated with different types of images and compared with a number of alternative image fusion techniques. The superior fused images obtained with the present approach demonstrate the efficacy of the NL_SK_SVD dictionary in sparse image representation.
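As a rough, simplified analogue of sparse-representation fusion (using scikit-learn's generic mini-batch dictionary learning in place of the nonlocal K-SVD dictionary described above, and a per-patch max-activity rule in place of simultaneous orthogonal matching pursuit across both images), one might proceed as in the sketch below; the images are synthetic placeholders.

```python
# Simplified sparse-representation fusion sketch. scikit-learn's dictionary
# learning stands in for the paper's NL_SK_SVD dictionary; a max-L1 activity
# rule stands in for simultaneous OMP. Images are synthetic placeholders.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
img_a, img_b = rng.random((64, 64)), rng.random((64, 64))    # two source images
patch = (8, 8)

pa = extract_patches_2d(img_a, patch).reshape(-1, 64)
pb = extract_patches_2d(img_b, patch).reshape(-1, 64)
mean_a, mean_b = pa.mean(axis=1, keepdims=True), pb.mean(axis=1, keepdims=True)

dico = MiniBatchDictionaryLearning(n_components=64, transform_algorithm="omp",
                                   transform_n_nonzero_coefs=4, random_state=0)
dico.fit(np.vstack([pa - mean_a, pb - mean_b]))              # one joint dictionary
ca, cb = dico.transform(pa - mean_a), dico.transform(pb - mean_b)

use_a = np.abs(ca).sum(axis=1) >= np.abs(cb).sum(axis=1)     # per-patch activity rule
codes = np.where(use_a[:, None], ca, cb)
means = np.where(use_a[:, None], mean_a, mean_b)

fused_patches = (codes @ dico.components_ + means).reshape(-1, *patch)
fused = reconstruct_from_patches_2d(fused_patches, img_a.shape)
print(fused.shape)
```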
Wan, Thomas T.H.; Ma, Allen; Lin, Blossom Y.J.
2001-01-01
Purpose: This study examines the integration effects on efficiency and financial viability of the top 100 integrated healthcare networks (IHNs) in the United States. Theory: A contingency-strategic theory is used to identify the relationship of IHNs' performance to their structural and operational characteristics and integration strategies. Methods: The lists of the top 100 IHNs ranked in two years, 1998 and 1999, by the SMG Marketing Group were merged to create a database for the study. Multiple indicators were used to examine the relationship between IHNs' characteristics and their performance in efficiency and financial viability. A path analytical model was developed and validated by the Mplus statistical program. Factors influencing the top 100 IHNs' images, represented by attaining a ranking among the top 100 in two consecutive years, were analysed. Results and conclusion: No positive associations were found between integration and network performance in efficiency or profits. Longitudinal data are needed to investigate the effect of integration on healthcare networks' financial performance. PMID:16896405
Yi, Faliu; Lee, Jieun; Moon, Inkyu
2014-05-01
The reconstruction of multiple depth images with a ray back-propagation algorithm in three-dimensional (3D) computational integral imaging is computationally burdensome. Further, a reconstructed depth image consists of a focus and an off-focus area. Focus areas are 3D points on the surface of an object that are located at the reconstructed depth, while off-focus areas include 3D points in free-space that do not belong to any object surface in 3D space. Generally, without being removed, the presence of an off-focus area would adversely affect the high-level analysis of a 3D object, including its classification, recognition, and tracking. Here, we use a graphics processing unit (GPU) that supports parallel processing with multiple processors to simultaneously reconstruct multiple depth images using a lookup table containing the shifted values along the x and y directions for each elemental image in a given depth range. Moreover, each 3D point on a depth image can be measured by analyzing its statistical variance with its corresponding samples, which are captured by the two-dimensional (2D) elemental images. These statistical variances can be used to classify depth image pixels as either focus or off-focus points. At this stage, the measurement of focus and off-focus points in multiple depth images is also implemented in parallel on a GPU. Our proposed method is conducted based on the assumption that there is no occlusion of the 3D object during the capture stage of the integral imaging process. Experimental results have demonstrated that this method is capable of removing off-focus points in the reconstructed depth image. The results also showed that using a GPU to remove the off-focus points could greatly improve the overall computational speed compared with using a CPU.
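A CPU-only NumPy sketch of the underlying idea is given below: elemental images are back-shifted according to a candidate depth, their average forms the reconstructed depth image, and the per-pixel variance of the contributing samples is thresholded to flag off-focus points. The GPU parallelization, the shift lookup table, and the actual pickup geometry of the paper are not reproduced; shifts, noise level, and the variance threshold are toy values.

```python
# CPU/NumPy sketch of variance-based focus classification in computational
# integral imaging (the paper uses a GPU and a shift lookup table). Geometry,
# noise, and the variance threshold are toy values.
import numpy as np

def reconstruct_depth(elementals, shifts):
    """elementals: (K, H, W); shifts: K integer (dy, dx) back-shifts for one depth."""
    stack = np.empty_like(elementals, dtype=float)
    for k, (dy, dx) in enumerate(shifts):
        stack[k] = np.roll(elementals[k], shift=(dy, dx), axis=(0, 1))
    return stack.mean(axis=0), stack.var(axis=0)   # depth image, per-pixel sample variance

rng = np.random.default_rng(0)
scene = rng.random((64, 64))
true_shifts = [(k % 3 - 1, k // 3 - 1) for k in range(9)]        # toy parallax
elementals = np.stack([np.roll(scene, s, axis=(0, 1)) + 0.01 * rng.standard_normal((64, 64))
                       for s in true_shifts])

# reconstructing at the correct depth undoes the parallax, so variance stays low
depth_img, var = reconstruct_depth(elementals, [(-dy, -dx) for dy, dx in true_shifts])
focus_mask = var < 10 * np.median(var)             # crude focus / off-focus split
print(f"pixels kept as in-focus: {focus_mask.mean():.2%}")
```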
Lee, Jasper; Zhang, Jianguo; Park, Ryan; Dagliyan, Grant; Liu, Brent; Huang, H K
2012-07-01
A Molecular Imaging Data Grid (MIDG) was developed to address current informatics challenges in archival, sharing, search, and distribution of preclinical imaging studies between animal imaging facilities and investigator sites. This manuscript presents a 2nd generation MIDG replacing the Globus Toolkit with a new system architecture that implements the IHE XDS-i integration profile. Implementation and evaluation were conducted using a 3-site interdisciplinary test-bed at the University of Southern California. The 2nd generation MIDG design architecture replaces the initial design's Globus Toolkit with dedicated web services and XML-based messaging for dedicated management and delivery of multi-modality DICOM imaging datasets. The Cross-enterprise Document Sharing for Imaging (XDS-i) integration profile from the field of enterprise radiology informatics was adopted into the MIDG design because streamlined image registration, management, and distribution dataflow are likewise needed in preclinical imaging informatics systems as in enterprise PACS application. Implementation of the MIDG is demonstrated at the University of Southern California Molecular Imaging Center (MIC) and two other sites with specified hardware, software, and network bandwidth. Evaluation of the MIDG involves data upload, download, and fault-tolerance testing scenarios using multi-modality animal imaging datasets collected at the USC Molecular Imaging Center. The upload, download, and fault-tolerance tests of the MIDG were performed multiple times using 12 collected animal study datasets. Upload and download times demonstrated reproducibility and improved real-world performance. Fault-tolerance tests showed that automated failover between Grid Node Servers has minimal impact on normal download times. Building upon the 1st generation concepts and experiences, the 2nd generation MIDG system improves accessibility of disparate animal-model molecular imaging datasets to users outside a molecular imaging facility's LAN using a new architecture, dataflow, and dedicated DICOM-based management web services. Productivity and efficiency of preclinical research for translational sciences investigators has been further streamlined for multi-center study data registration, management, and distribution.
A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging
NASA Astrophysics Data System (ADS)
Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc
2015-06-01
High-speed X-ray imaging applications play a crucial role in non-destructive investigations of dynamics in materials science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated into a new custom experiment control system called Concert, which provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events at increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
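As a software analogue of such an image-based trigger (the real implementation runs inside the camera's FPGA), one can monitor a region of interest and assert a trigger whenever the mean inter-frame change exceeds a threshold; the frame source, ROI, and threshold below are hypothetical.

```python
# Software analogue of an image-based trigger: fire when the mean absolute
# inter-frame difference in a region of interest exceeds a threshold.
# Frames, ROI, and threshold are hypothetical.
import numpy as np

def image_trigger(frames, roi, threshold):
    previous = None
    for i, frame in enumerate(frames):
        patch = frame[roi].astype(float)
        fired = previous is not None and np.mean(np.abs(patch - previous)) > threshold
        yield i, fired
        previous = patch

rng = np.random.default_rng(0)
frames = [rng.normal(100, 2, (256, 256)) for _ in range(10)]
frames[6] += 50 * np.exp(-((np.arange(256) - 128) ** 2) / 500)[None, :]   # a transient "event"
roi = (slice(96, 160), slice(96, 160))
print([i for i, fired in image_trigger(frames, roi, threshold=5.0) if fired])   # [6, 7]
```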
On removing interpolation and resampling artifacts in rigid image registration.
Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce
2013-02-01
We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.
The study of integration about measurable image and 4D production
NASA Astrophysics Data System (ADS)
Zhang, Chunsen; Hu, Pingbo; Niu, Weiyun
2008-12-01
In this paper, we create geospatial data for three-dimensional (3D) modeling by combining digital photogrammetry and digital close-range photogrammetry. For the large-scale geographical background, a 3D landscape model is established from DEM and DOM data derived through digital photogrammetry, which uses aerial imagery to produce the "4D" products (DOM: Digital Orthophoto Map, DEM: Digital Elevation Model, DLG: Digital Line Graphic, and DRG: Digital Raster Graphic). For buildings and other man-made features of interest to users, 3D reconstruction of the real features is achieved with digital close-range photogrammetry through the following steps: data collection with non-metric cameras, camera calibration, feature extraction, image matching, and subsequent processing. Finally, the 3D background and the locally measured real images are combined within these large geographic datasets to realize the integration of measurable real imagery and the 4D products. The article discusses the overall workflow and technology, achieving 3D reconstruction and the integration of the large-scale 3D landscape with metric building models.
Computerized image analysis for quantitative neuronal phenotyping in zebrafish.
Liu, Tianming; Lu, Jianfeng; Wang, Ye; Campbell, William A; Huang, Ling; Zhu, Jinmin; Xia, Weiming; Wong, Stephen T C
2006-06-15
An integrated microscope image analysis pipeline is developed for automatic analysis and quantification of phenotypes in zebrafish with altered expression of Alzheimer's disease (AD)-linked genes. We hypothesize that a slight impairment of neuronal integrity in a large number of zebrafish carrying the mutant genotype can be detected through the computerized image analysis method. Key functionalities of our zebrafish image processing pipeline include quantification of neuron loss in zebrafish embryos due to knockdown of AD-linked genes, automatic detection of defective somites, and quantitative measurement of gene expression levels in zebrafish with altered expression of AD-linked genes or treatment with a chemical compound. These quantitative measurements enable the archival of analyzed results and relevant meta-data. The structured database is organized for statistical analysis and data modeling to better understand neuronal integrity and phenotypic changes of zebrafish under different perturbations. Our results show that the computerized analysis is comparable to manual counting with equivalent accuracy and improved efficacy and consistency. Development of such an automated data analysis pipeline represents a significant step forward to achieve accurate and reproducible quantification of neuronal phenotypes in large scale or high-throughput zebrafish imaging studies.
An integrated approach to piezoactuator positioning in high-speed atomic force microscope imaging
NASA Astrophysics Data System (ADS)
Yan, Yan; Wu, Ying; Zou, Qingze; Su, Chanmin
2008-07-01
In this paper, an integrated approach to achieve high-speed atomic force microscope (AFM) imaging of large-size samples is proposed, which combines the enhanced inversion-based iterative control technique to drive the piezotube actuator for lateral x-y axis positioning with the use of a dual-stage piezoactuator for vertical z-axis positioning. High-speed, large-size AFM imaging is challenging because, in high-speed lateral scanning of large samples, a large positioning error of the AFM probe relative to the sample can be generated due to adverse effects, namely the nonlinear hysteresis and the vibrational dynamics of the piezotube actuator. In addition, vertical precision positioning of the AFM probe is even more challenging (than the lateral scanning) because the desired trajectory (i.e., the sample topography profile) is unknown in general, and the probe positioning is also affected by and sensitive to the probe-sample interaction. The main contribution of this article is the development of an integrated approach that combines an advanced control algorithm with an advanced hardware platform. The proposed approach is demonstrated in experiments by imaging a large-size (50 μm) calibration sample at high speed (50 Hz scan rate).
Trache, Andreea; Meininger, Gerald A
2005-01-01
A novel hybrid imaging system is constructed integrating atomic force microscopy (AFM) with a combination of optical imaging techniques that offer high spatial resolution. The main application of this instrument (the NanoFluor microscope) is the study of mechanotransduction with an emphasis on extracellular matrix-integrin-cytoskeletal interactions and their role in the cellular responses to changes in external chemical and mechanical factors. The AFM allows the quantitative assessment of cytoskeletal changes, binding probability, adhesion forces, and micromechanical properties of the cells, while the optical imaging applications allow thin sectioning of the cell body at the coverslip-cell interface, permitting the study of focal adhesions using total internal reflection fluorescence (TIRF) and internal reflection microscopy (IRM). Combined AFM-optical imaging experiments show that mechanical stimulation at the apical surface of cells induces a force-generating cytoskeletal response, resulting in focal contact reorganization on the basal surface that can be monitored in real time. The NanoFluor system is also equipped with a novel mechanically aligned dual camera acquisition system for synthesized Forster resonance energy transfer (FRET). The integrated NanoFluor microscope system is described, including its characteristics, applications, and limitations.
Miga, Michael I
2016-01-01
With the recent advances in computing, the opportunities to translate computational models to more integrated roles in patient treatment are expanding at an exciting rate. One area of considerable development has been directed towards correcting soft tissue deformation within image guided neurosurgery applications. This review captures the efforts that have been undertaken towards enhancing neuronavigation by the integration of soft tissue biomechanical models, imaging and sensing technologies, and algorithmic developments. In addition, the review speaks to the evolving role of modeling frameworks within surgery and concludes with some future directions beyond neurosurgical applications.
Integrated transrectal probe for translational ultrasound-photoacoustic imaging
NASA Astrophysics Data System (ADS)
Bell, Kevan L.; Harrison, Tyler; Usmani, Nawaid; Zemp, Roger J.
2016-03-01
A compact photoacoustic transrectal probe is constructed for improved imaging in brachytherapy treatment. A 192 element 5 MHz linear transducer array is mounted inside a small 3D printed casing along with an array of optical fibers. The device is fed by a pump laser and tunable NIR-optical parametric oscillator with data collected by a Verasonics ultrasound platform. This assembly demonstrates improved imaging of brachytherapy seeds in phantoms with depths up to 5 cm. The tuneable excitation in combination with standard US integration provides adjustable contrast between the brachytherapy seeds, blood filled tubes and background tissue.
Server-based Approach to Web Visualization of Integrated Three-dimensional Brain Imaging Data
Poliakov, Andrew V.; Albright, Evan; Hinshaw, Kevin P.; Corina, David P.; Ojemann, George; Martin, Richard F.; Brinkley, James F.
2005-01-01
The authors describe a client-server approach to three-dimensional (3-D) visualization of neuroimaging data, which enables researchers to visualize, manipulate, and analyze large brain imaging datasets over the Internet. All computationally intensive tasks are done by a graphics server that loads and processes image volumes and 3-D models, renders 3-D scenes, and sends the renderings back to the client. The authors discuss the system architecture and implementation and give several examples of client applications that allow visualization and analysis of integrated language map data from single and multiple patients. PMID:15561787
Electrocardiogram and Imaging: An Integrated Approach to Arrhythmogenic Cardiomyopathies.
Savino, Ketty; Bagliani, Giuseppe; Crusco, Federico; Padeletti, Margherita; Lombardi, Massimo
2018-06-01
Cardiovascular imaging has radically changed the management of patients with arrhythmogenic cardiomyopathies. This article focuses on the role of echocardiography and MRI in the diagnosis of these structural diseases. Cardiomyopathies with hypertrophic pattern (hypertrophic cardiomyopathy, restrictive cardiomyopathies, amyloidosis, Anderson-Fabry disease, and sarcoidosis), cardiomyopathies with dilated pattern, inflammatory cardiac diseases, and right ventricular arrhythmogenic cardiomyopathy are analyzed. Finally, anatomic predictors of arrhythmias and sudden cardiac death are discussed. Each section is accompanied by clinical cases that are discussed starting from the electrocardiogram and then integrated with the anatomic, functional, and hemodynamic changes shown by cardiovascular imaging. Copyright © 2018 Elsevier Inc. All rights reserved.
[An integrated segmentation method for 3D ultrasound carotid artery].
Yang, Xin; Wu, Huihui; Liu, Yang; Xu, Hongwei; Liang, Huageng; Cai, Wenjuan; Fang, Mengjie; Wang, Yujie
2013-07-01
An integrated segmentation method for the 3D ultrasound carotid artery was proposed. The 3D ultrasound image was sliced into transverse, coronal, and sagittal 2D images at the carotid bifurcation point. Then, the three images were processed respectively, and the carotid artery contours and thickness were finally obtained. This paper tries to overcome the disadvantages of current computer-aided diagnosis methods, such as high computational complexity and easily introduced subjective errors. The proposed method can obtain the overall carotid artery information rapidly, accurately, and completely. It could be translated into clinical use for atherosclerosis diagnosis and prevention.
NASA Astrophysics Data System (ADS)
Smith, Edward M.; Wandtke, John; Robinson, Arvin E.
1999-07-01
The Medical Information, Communication and Archive System (MICAS) is a multi-modality integrated image management system that is seamlessly integrated with the Radiology Information System (RIS). This project was initiated in the summer of 1995, with the first phase installed during the first half of 1997 and the second phase installed during the summer of 1998. Phase II enhancements include a permanent archive, automated workflow including a modality worklist, study caches, and NT diagnostic workstations, with all components adhering to Digital Imaging and Communications in Medicine (DICOM) standards. This multi-vendor phased approach to PACS implementation is designed as an enterprise-wide PACS to provide images and reports throughout our healthcare network. MICAS demonstrates that a multi-vendor, open-system, phased approach to PACS is feasible, cost-effective, and has significant advantages over a single-vendor implementation.
Nanometric holograms based on a topological insulator material.
Yue, Zengji; Xue, Gaolei; Liu, Juan; Wang, Yongtian; Gu, Min
2017-05-18
Holography has extremely extensive applications in conventional optical instruments spanning optical microscopy and imaging, three-dimensional displays and metrology. To integrate holography with modern low-dimensional electronic devices, holograms need to be thinned to a nanometric scale. However, to keep a pronounced phase shift modulation, the thickness of holograms has been generally limited to the optical wavelength scale, which hinders their integration with ultrathin electronic devices. Here, we break this limit and achieve 60 nm holograms using a topological insulator material. We discover that nanometric topological insulator thin films act as an intrinsic optical resonant cavity due to the unequal refractive indices in their metallic surfaces and bulk. The resonant cavity leads to enhancement of phase shifts and thus the holographic imaging. Our work paves a way towards integrating holography with flat electronic devices for optical imaging, data storage and information security.
O’Sullivan, Thomas D.; Heitz, Roxana T.; Parashurama, Natesh; Barkin, David B.; Wooley, Bruce A.; Gambhir, Sanjiv S.; Harris, James S.; Levi, Ofer
2013-01-01
Performance improvements in instrumentation for optical imaging have contributed greatly to molecular imaging in living subjects. In order to advance molecular imaging in freely moving, untethered subjects, we designed a miniature vertical-cavity surface-emitting laser (VCSEL)-based biosensor measuring 1cm3 and weighing 0.7g that accurately detects both fluorophore and tumor-targeted molecular probes in small animals. We integrated a critical enabling component, a complementary metal-oxide semiconductor (CMOS) read-out integrated circuit, which digitized the fluorescence signal to achieve autofluorescence-limited sensitivity. After surgical implantation of the lightweight sensor for two weeks, we obtained continuous and dynamic fluorophore measurements while the subject was un-anesthetized and mobile. The technology demonstrated here represents a critical step in the path toward untethered optical sensing using an integrated optoelectronic implant. PMID:24009996
NASA Astrophysics Data System (ADS)
Li, Jiawen; Quirk, Bryden C.; Noble, Peter B.; Kirk, Rodney W.; Sampson, David D.; McLaughlin, Robert A.
2017-10-01
Transbronchial needle aspiration (TBNA) of small lesions or lymph nodes in the lung may result in nondiagnostic tissue samples. We demonstrate the integration of an optical coherence tomography (OCT) probe into a 19-gauge flexible needle for lung tissue aspiration. This probe allows simultaneous visualization and aspiration of the tissue. By eliminating the need for insertion and withdrawal of a separate imaging probe, this integrated design minimizes the risk of dislodging the needle from the lesion prior to aspiration and may facilitate more accurate placement of the needle. Results from in situ imaging in a sheep lung show clear distinction between solid tissue and two typical constituents of nondiagnostic samples (adipose and lung parenchyma). Clinical translation of this OCT-guided aspiration needle holds promise for improving the diagnostic yield of TBNA.
ERIC Educational Resources Information Center
Familiari, Giuseppe; Relucenti, Michela; Heyn, Rosemarie; Baldini, Rossella; D'Andrea, Giancarlo; Familiari, Pietro; Bozzao, Alessandro; Raco, Antonino
2013-01-01
Neuroanatomy is considered to be one of the most difficult anatomical subjects for students. To provide motivation and improve learning outcomes in this area, clinical cases and neurosurgical images from diffusion tensor imaging (DTI) tractographies produced using an intraoperative magnetic resonance imaging apparatus (MRI/DTI) were presented and…
Fast integrated intravascular photoacoustic/ultrasound catheter
NASA Astrophysics Data System (ADS)
Choi, Changhoon; Cho, Seunghee; Kim, Taehoon; Park, Sungjo; Park, Hyoeun; Kim, Jinmoo; Lee, Seunghoon; Kang, Yeonsu; Jang, Kiyuk; Kim, Chulhong
2016-03-01
In cardiology, a vulnerable plaque is considered to be a key subject because it is strongly related to atherosclerosis and acute myocardial infarction. Because conventional intravascular imaging devices exhibit several limitations with regard to vulnerable plaque detection, the need for an effective lipid imaging modality has been continuously suggested. Photoacoustic (PA) imaging is a medical imaging technique with a high level of ultrasound (US) resolution and strong optical contrast. In this study, we successfully developed an integrated intravascular photoacoustic/ultrasound (IV-PAUS) imaging system with a catheter diameter of 1.2 mm for lipid-rich atherosclerosis imaging. An Nd:YAG pulsed laser with an excitation wavelength of 1064 nm was utilized. IV-PAUS offers 5-mm depth penetration and axial and lateral PA imaging resolutions of 94 μm and 203 μm, respectively, as determined by imaging a 6-μm carbon fiber. We initially obtained 3-dimensional (3D) co-registered PA/US images of metal stents. Subsequently, we successfully obtained 3D coregistered PA/US ex vivo images using an iliac artery from a rabbit atherosclerosis model. Accordingly, lipid-rich plaques were sufficiently differentiated from normal tissue in the ex vivo experiment. We validated these findings histologically to confirm the lipid content.
Xin, Zhaowei; Wei, Dong; Xie, Xingwang; Chen, Mingce; Zhang, Xinyu; Liao, Jing; Wang, Haiwei; Xie, Changsheng
2018-02-19
Light-field imaging is a crucial and straightforward way of measuring and analyzing the surrounding light field. In this paper, a dual-polarized light-field imaging micro-system based on a twisted nematic liquid-crystal microlens array (TN-LCMLA) for direct three-dimensional (3D) observation is fabricated and demonstrated. The prototype camera was constructed by integrating a TN-LCMLA with a common CMOS sensor array. By switching the working state of the TN-LCMLA, two orthogonally polarized light-field images can be remapped through the imaging sensor. The imaging micro-system, in conjunction with the electric-optical microstructure, can be used to perform polarization and light-field imaging simultaneously. Compared with conventional plenoptic cameras using liquid-crystal microlens arrays, polarization-independent light-field images with high image quality can be obtained in an arbitrarily selected polarization state. We experimentally demonstrate characteristics including a relatively wide operational range in the manipulation of incident beams and multiple imaging modes, such as conventional two-dimensional imaging, light-field imaging, and polarization imaging. Considering the obvious features of the TN-LCMLA, such as very low power consumption, provision of the multiple imaging modes mentioned, and simple, low-cost manufacturing, the imaging micro-system integrated with this kind of electrically driven liquid-crystal microstructure shows potential for directly observing a 3D object in typical scattering media.
Singh, Anushikha; Dutta, Malay Kishore
2017-12-01
The authentication and integrity verification of medical images is a critical and growing issue for patients in e-health services. Accurate identification of medical images and patient verification is an essential requirement to prevent errors in medical diagnosis. The proposed work presents an imperceptible watermarking system to address the security of medical fundus images for tele-ophthalmology applications and computer-aided automated diagnosis of retinal diseases. In the proposed work, the patient identity is embedded in the fundus image in the singular value decomposition domain with an adaptive quantization parameter to maintain perceptual transparency for a variety of fundus images, whether healthy or disease-affected. In the proposed method, insertion of the watermark into the fundus image does not affect the automatic image-processing-based diagnosis of retinal objects and pathologies, which ensures uncompromised computer-based diagnosis associated with the fundus image. The patient ID is correctly recovered from the watermarked fundus image for integrity verification at the diagnosis centre. The proposed watermarking system was tested on a comprehensive database of fundus images and the results are convincing. Results indicate that the proposed watermarking method is imperceptible and does not affect computer-vision-based automated diagnosis of retinal diseases. Correct recovery of the patient ID from the watermarked fundus image makes the proposed watermarking system applicable to the authentication of fundus images for computer-aided diagnosis and tele-ophthalmology applications. Copyright © 2017 Elsevier B.V. All rights reserved.
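As a loose illustration of singular-value-domain watermarking (a generic quantization-index-modulation sketch, not the authors' adaptive scheme; the block size, quantization step, and function names are assumptions), a patient-ID bit string can be embedded by quantizing the leading singular value of image blocks:

import numpy as np

def embed_bits_svd(image, bits, block=8, delta=12.0):
    """Embed a bit string by quantizing the largest singular value of each
    non-overlapping block (generic QIM-style illustration; output stays float)."""
    img = image.astype(np.float64).copy()
    h, w = img.shape
    idx = 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if idx >= len(bits):
                return img
            U, S, Vt = np.linalg.svd(img[r:r+block, c:c+block])
            q = np.floor(S[0] / delta)
            if int(q) % 2 != bits[idx]:      # force the bin parity to encode the bit
                q += 1
            S[0] = q * delta + delta / 2.0   # place the value at the bin centre
            img[r:r+block, c:c+block] = U @ np.diag(S) @ Vt
            idx += 1
    return img

def extract_bits_svd(image, n_bits, block=8, delta=12.0):
    """Recover the embedded bits from the bin parity of each block's leading singular value."""
    bits = []
    h, w = image.shape
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            if len(bits) >= n_bits:
                return bits
            S = np.linalg.svd(image[r:r+block, c:c+block].astype(np.float64), compute_uv=False)
            bits.append(int(np.floor(S[0] / delta)) % 2)
    return bits

rng = np.random.default_rng(1)
fundus = rng.integers(0, 256, (64, 64)).astype(np.float64)   # stand-in for a fundus image
patient_id = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits_svd(fundus, patient_id)
print(extract_bits_svd(marked, len(patient_id)) == patient_id)  # True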
USDA-ARS?s Scientific Manuscript database
Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...
On the assessment of visual communication by information theory
NASA Technical Reports Server (NTRS)
Huck, Friedrich O.; Fales, Carl L.
1993-01-01
This assessment of visual communication integrates the optical design of the image-gathering device with the digital processing for image coding and restoration. Results show that informationally optimized image gathering ordinarily can be relied upon to maximize the information efficiency of decorrelated data and the visual quality of optimally restored images.
Integrated circuit authentication using photon-limited x-ray microscopy.
Markman, Adam; Javidi, Bahram
2016-07-15
A counterfeit integrated circuit (IC) may contain subtle changes to its circuit configuration. These changes may be observed when imaged using an x-ray; however, the energy from the x-ray can potentially damage the IC. We have investigated a technique to authenticate ICs under photon-limited x-ray imaging. We modeled an x-ray image with lower energy by generating a photon-limited image from a real x-ray image using a weighted photon-counting method. We performed feature extraction on the image using the speeded-up robust features (SURF) algorithm. We then authenticated the IC by comparing the SURF features to a database of SURF features from authentic and counterfeit ICs. Our experimental results with real and counterfeit ICs using an x-ray microscope demonstrate that we can correctly authenticate an IC image captured using orders of magnitude lower energy x-rays. To the best of our knowledge, this Letter is the first one on using a photon-counting x-ray imaging model and relevant algorithms to authenticate ICs to prevent potential damage.
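A minimal sketch of the photon-limited imaging idea, assuming the normalized x-ray intensity acts as a probability map and each pixel receives a Poisson-distributed share of a fixed photon budget (function and parameter names are illustrative; keypoint extraction and database matching would follow separately):

import numpy as np

def photon_limited(image, n_photons, rng=None):
    """Simulate a photon-limited version of an x-ray image: normalize the
    intensities to a probability map and draw Poisson counts whose means
    are each pixel's share of the total photon budget `n_photons`."""
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float64)
    p = img / img.sum()
    return rng.poisson(lam=n_photons * p)

rng = np.random.default_rng(0)
xray = rng.uniform(50, 200, (128, 128))            # stand-in for a real x-ray image
low_dose = photon_limited(xray, n_photons=5e4, rng=rng)
print(low_dose.sum())                              # on the order of 5e4 detected photons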
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database.
Li, Yan; Hu, Qingwu; Wu, Meng; Gao, Yang
2016-01-28
In determining position and attitude, vision navigation via real-time image processing of data collected from imaging sensors is advanced without a high-performance global positioning system (GPS) and an inertial measurement unit (IMU). Vision navigation is widely used in indoor navigation, far space navigation, and multiple sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach aided by imaging sensors and that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multiple sensor platforms in environments with poor GPS. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established based on the linear index of a road segment for fast image searches and retrieval. Third, a robust image matching algorithm is presented to search and match a real-time image with the GRID. Subsequently, the image matched with the real-time scene is considered to calculate the 3D navigation parameter of multiple sensor platforms. Experimental results show that the proposed approach retrieves images efficiently and has navigation accuracies of 1.2 m in a plane and 1.8 m in height under GPS loss in 5 min and within 1500 m.
Novel instrumentation of multispectral imaging technology for detecting tissue abnormity
NASA Astrophysics Data System (ADS)
Yi, Dingrong; Kong, Linghua
2012-10-01
Multispectral imaging is becoming a powerful tool in a wide range of biological and clinical studies by adding spectral, spatial, and temporal dimensions to visualize tissue abnormality and the underlying biological processes. A conventional spectral imaging system includes two physically separated major components, a band-pass selection device (such as a liquid crystal tunable filter or a diffraction grating) and a scientific-grade monochromatic camera, and is expensive and bulky. Recently, a micro-arrayed narrow-band optical mosaic filter was invented and successfully fabricated to reduce the size and cost of multispectral imaging devices in order to meet the clinical requirements of medical diagnostic imaging applications. However, the challenging issue of how to integrate and place the micro-filter mosaic chip onto the target focal plane, i.e., the imaging sensor, of an off-the-shelf CMOS/CCD camera has not been reported. This paper presents the methods and results of integrating such a miniaturized filter with off-the-shelf CMOS imaging sensors to produce handheld real-time multispectral imaging devices for early-stage pressure ulcer (ESPU) detection. Unlike conventional multispectral imaging devices, which are bulky and expensive, the resulting handheld real-time multispectral ESPU detector can produce multiple images at different center wavelengths with a single shot, thereby eliminating the image registration procedure required by traditional multispectral imaging technologies.
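A minimal sketch of how per-band images might be pulled out of a single mosaic-filtered frame, assuming a simple periodic tile layout (e.g., a 2x2 mosaic of four bands); the actual filter geometry of the device described above is not specified here:

import numpy as np

def split_mosaic(raw, pattern_shape=(2, 2)):
    """Split a raw frame from a mosaic-filtered sensor into per-band images.

    Assumes a periodic pattern of `pattern_shape` filter tiles; each band
    image is the strided sub-sampling of the raw frame at that tile position."""
    pr, pc = pattern_shape
    return {(i, j): raw[i::pr, j::pc] for i in range(pr) for j in range(pc)}

raw = np.arange(36).reshape(6, 6)      # toy 6x6 raw frame
bands = split_mosaic(raw)
print(bands[(0, 0)].shape)             # (3, 3): one quarter-resolution image per band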
Enhanced Line Integral Convolution with Flow Feature Detection
NASA Technical Reports Server (NTRS)
Lane, David; Okada, Arthur
1996-01-01
The Line Integral Convolution (LIC) method, which blurs white noise textures along a vector field, is an effective way to visualize overall flow patterns in a 2D domain. The method produces a flow texture image based on the input velocity field defined in the domain. Because of the nature of the algorithm, the texture image tends to be blurry. This sometimes makes it difficult to identify boundaries where flow separation and reattachments occur. We present techniques to enhance LIC texture images and use colored texture images to highlight flow separation and reattachment boundaries. Our techniques have been applied to several flow fields defined in 3D curvilinear multi-block grids and scientists have found the results to be very useful.
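A minimal LIC sketch in Python, using fixed-step Euler streamline tracing and nearest-neighbour sampling for brevity (real implementations typically use higher-order integration, bilinear sampling, and normalization):

import numpy as np

def lic(vx, vy, noise, length=15, step=0.5):
    """Minimal line integral convolution: average a white-noise texture along
    streamlines of the vector field (vx, vy). Nearest-neighbour sampling and
    fixed-step Euler integration keep the sketch short; the seed pixel is
    sampled in both trace directions, which is acceptable here."""
    h, w = noise.shape
    mag = np.hypot(vx, vy) + 1e-12
    ux, uy = vx / mag, vy / mag          # unit direction field
    out = np.zeros_like(noise, dtype=np.float64)
    for r in range(h):
        for c in range(w):
            acc, n = 0.0, 0
            for sign in (1.0, -1.0):     # trace forward and backward
                x, y = float(c), float(r)
                for _ in range(length):
                    i, j = int(round(y)), int(round(x))
                    if not (0 <= i < h and 0 <= j < w):
                        break
                    acc += noise[i, j]
                    n += 1
                    x += sign * step * ux[i, j]
                    y += sign * step * uy[i, j]
            out[r, c] = acc / max(n, 1)
    return out

# Example: circular flow around the image centre.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
vx, vy = -(yy - h / 2.0), (xx - w / 2.0)
rng = np.random.default_rng(0)
texture = lic(vx, vy, rng.random((h, w)))
print(texture.shape)  # (64, 64)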
NASA Astrophysics Data System (ADS)
Lee, Soohyun; Lee, Changho; Cheon, Gyeongwoo; Kim, Jongmin; Jo, Dongki; Lee, Jihoon; Kang, Jin U.
2018-02-01
A commercial ophthalmic laser system (R;GEN, Lutronic Corp) was integrated with a swept-source optical coherence tomography (OCT) imaging system for real-time tissue temperature monitoring. M-scan OCT images were acquired during laser-pulse radiation, and speckle variance OCT (svOCT) images were analyzed to deduce temporal signal variations related to tissue temperature changes from laser-pulse radiation. A phantom study shows that the svOCT magnitude increases abruptly after laser-pulse radiation and recovers exponentially, and that the peak intensity of the svOCT image depends linearly on the pulse laser energy until it saturates. A study using bovine iris also showed a signal variation dependent on the laser-pulse radiation, and the variation was more distinctive at higher energy levels.
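Speckle variance is commonly computed as the per-pixel intensity variance across N repeated frames; the following sketch (with synthetic data, not the hardware described above) illustrates the idea:

import numpy as np

def speckle_variance(frames):
    """Speckle-variance image from a stack of N repeated OCT frames
    (shape: N x depth x lateral): per-pixel variance across the stack."""
    stack = np.asarray(frames, dtype=np.float64)
    return stack.var(axis=0)

# Synthetic M-scan: higher temporal fluctuation (e.g. during a laser pulse)
# shows up as elevated speckle variance in the affected depth range.
rng = np.random.default_rng(0)
quiet = rng.normal(100, 1, (8, 64, 64))
active = quiet.copy()
active[:, 20:30, :] += rng.normal(0, 10, (8, 10, 64))
print(speckle_variance(quiet)[25, 32] < speckle_variance(active)[25, 32])  # True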
Integration of Optical Coherence Tomography Scan Patterns to Augment Clinical Data Suite
NASA Technical Reports Server (NTRS)
Mason, S.; Patel, N.; Van Baalen, M.; Tarver, W.; Otto, C.; Samuels, B.; Koslovsky, M.; Schaefer, C.; Taiym, W.; Wear, M.;
2018-01-01
Vision changes identified in long-duration spaceflight astronauts have led Space Medicine at NASA to adopt a more comprehensive clinical monitoring protocol. Optical Coherence Tomography (OCT) was recently implemented at NASA, including on board the International Space Station in 2013. NASA is collaborating with Heidelberg Engineering to increase the fidelity of the current OCT data set by integrating the traditional circumpapillary OCT image with radial and horizontal block images at the optic nerve head. The retinal nerve fiber layer was segmented by two experienced individuals. Intra-rater (N=4 subjects and 70 images) and inter-rater (N=4 subjects and 221 images) agreement was assessed. The results of this analysis and the potential benefits will be presented.
Geldermann, Ina; Grouls, Christoph; Kuhl, Christiane; Deserno, Thomas M; Spreckelsen, Cord
2013-08-01
Usability aspects of different integration concepts for picture archiving and communication systems (PACS) and computer-aided diagnosis (CAD) were inquired on the example of BoneXpert, a program determining the skeletal age from a left hand's radiograph. CAD-PACS integration was assessed according to its levels: data, function, presentation, and context integration focusing on usability aspects. A user-based study design was selected. Statements of seven experienced radiologists using two alternative types of integration provided by BoneXpert were acquired and analyzed using a mixed-methods approach based on think-aloud records and a questionnaire. In both variants, the CAD module (BoneXpert) was easily integrated in the workflow, found comprehensible and fitting in the conceptual framework of the radiologists. Weak points of the software integration referred to data and context integration. Surprisingly, visualization of intermediate image processing states (presentation integration) was found less important as compared to efficient handling and fast computation. Seamlessly integrating CAD into the PACS without additional work steps or unnecessary interrupts and without visualizing intermediate images may considerably improve software performance and user acceptance with efforts in time.
NASA Astrophysics Data System (ADS)
Guldner, Ian H.; Yang, Lin; Cowdrick, Kyle R.; Wang, Qingfei; Alvarez Barrios, Wendy V.; Zellmer, Victoria R.; Zhang, Yizhe; Host, Misha; Liu, Fang; Chen, Danny Z.; Zhang, Siyuan
2016-04-01
Metastatic microenvironments are spatially and compositionally heterogeneous. This seemingly stochastic heterogeneity provides researchers great challenges in elucidating factors that determine metastatic outgrowth. Herein, we develop and implement an integrative platform that will enable researchers to obtain novel insights from intricate metastatic landscapes. Our two-segment platform begins with whole tissue clearing, staining, and imaging to globally delineate metastatic landscape heterogeneity with spatial and molecular resolution. The second segment of our platform applies our custom-developed SMART 3D (Spatial filtering-based background removal and Multi-chAnnel forest classifiers-based 3D ReconsTruction), a multi-faceted image analysis pipeline, permitting quantitative interrogation of functional implications of heterogeneous metastatic landscape constituents, from subcellular features to multicellular structures, within our large three-dimensional (3D) image datasets. Coupling whole tissue imaging of brain metastasis animal models with SMART 3D, we demonstrate the capability of our integrative pipeline to reveal and quantify volumetric and spatial aspects of brain metastasis landscapes, including diverse tumor morphology, heterogeneous proliferative indices, metastasis-associated astrogliosis, and vasculature spatial distribution. Collectively, our study demonstrates the utility of our novel integrative platform to reveal and quantify the global spatial and volumetric characteristics of the 3D metastatic landscape with unparalleled accuracy, opening new opportunities for unbiased investigation of novel biological phenomena in situ.
White matter changes and word finding failures with increasing age.
Stamatakis, Emmanuel A; Shafto, Meredith A; Williams, Guy; Tam, Phyllis; Tyler, Lorraine K
2011-01-07
Increasing life expectancy necessitates the better understanding of the neurophysiological underpinnings of age-related cognitive changes. The majority of research examining structural-cognitive relationships in aging focuses on the role of age-related changes to grey matter integrity. In the current study, we examined the relationship between age-related changes in white matter and language production. More specifically, we concentrated on word-finding failures, which increase with age. We used Diffusion tensor MRI (a technique used to image, in vivo, the diffusion of water molecules in brain tissue) to relate white matter integrity to measures of successful and unsuccessful picture naming. Diffusion tensor images were used to calculate Fractional Anisotropy (FA) images. FA is considered to be a measure of white matter organization/integrity. FA images were related to measures of successful picture naming and to word finding failures using voxel-based linear regression analyses. Successful naming rates correlated positively with white matter integrity across a broad range of regions implicated in language production. However, word finding failure rates correlated negatively with a more restricted region in the posterior aspect of superior longitudinal fasciculus. The use of DTI-MRI provides evidence for the relationship between age-related white matter changes in specific language regions and word finding failures in old age.
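For reference, fractional anisotropy is conventionally defined from the eigenvalues \lambda_1, \lambda_2, \lambda_3 of the diffusion tensor (a standard definition, not specific to this study):

\mathrm{FA} \;=\; \sqrt{\tfrac{1}{2}}\;
\frac{\sqrt{(\lambda_1-\lambda_2)^2 + (\lambda_2-\lambda_3)^2 + (\lambda_3-\lambda_1)^2}}
     {\sqrt{\lambda_1^{2} + \lambda_2^{2} + \lambda_3^{2}}}

FA ranges from 0 in isotropic tissue toward 1 in highly anisotropic, coherently organized white matter.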
CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API
NASA Astrophysics Data System (ADS)
Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor
2004-05-01
Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities such as CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, such as Stradx or In-Vivo, exist today. Although these systems have been found useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system with other functionalities (e.g., dual-view visualization, registration, real-time tracking, segmentation) to rapidly create their own medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.
NASA Astrophysics Data System (ADS)
Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric
2018-05-01
Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.
802 GHz integrated horn antennas imaging array
NASA Technical Reports Server (NTRS)
Ali-Ahmad, Walid Y.; Rebeiz, Gabriel M.; Dave, Hemant; Chin, Gordon
1991-01-01
Pattern measurements at 802 GHz of a single element in a 256-element integrated horn imaging array are presented. The integrated horn antenna consists of a dipole antenna suspended on a 1-micron dielectric membrane inside a pyramidal cavity etched in silicon. The theoretical far-field patterns, calculated using reciprocity and a Floquet-mode representation of the free-space field, agree well with the measured far-field patterns at 802 GHz. The associated directivity for a 1.40-lambda horn aperture, calculated from the measured E- and H-plane patterns, is 12.3 dB ± 0.2 dB. This work demonstrates that high-efficiency integrated horn antennas are easily scalable to terahertz frequencies and could be used for radio-astronomical and plasma-diagnostic applications.
An approach to integrate the human vision psychology and perception knowledge into image enhancement
NASA Astrophysics Data System (ADS)
Wang, Hui; Huang, Xifeng; Ping, Jiang
2009-07-01
Image enhancement is a very important image preprocessing technology, especially when an image is captured under poor imaging conditions or when dealing with high-bit-depth images. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objectives of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore natural to integrate empirical or statistical knowledge of human visual psychology and perception into image enhancement. This knowledge holds that humans' perception of and response to an intensity fluctuation δu of a visual signal are weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws into a popular image enhancement algorithm named Adaptive Plateau Equalization (APE). Experiments were performed on high-bit-depth star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To account for random noise, the pixel-value mapping does not depend on the current pixel alone but on the pixels in a window surrounding it, usually of size 3×3. The results of the improved algorithms are evaluated by entropy analysis and visual perception analysis. The experimental results show that the improved APE algorithms improve image quality: the target and the surrounding assistant targets can be identified easily, and noise is not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of images and video streams, while for high-quality images they do not degrade image quality.
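In standard form (generic symbols, not the paper's notation), the three laws read:

\text{Weber:}\quad \frac{\Delta u}{u} = k,
\qquad
\text{Weber--Fechner:}\quad p = k \ln\frac{u}{u_0},
\qquad
\text{Stevens:}\quad \psi(u) = k\,u^{a},

where u is the background stimulus, \Delta u the just-noticeable intensity change, p the perceived sensation, u_0 a threshold stimulus, and a an exponent that depends on the stimulus modality.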
ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.
Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi
2017-08-01
With the growing interest in advanced image-guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in the research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as robot operating system (ROS) in robotics and 3D Slicer in medical image computing could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was <1.2 ms for transforms, strings and points, and 25.2 ms for color VGA images. A separate test also demonstrated that the bridge could achieve 900 fps for transforms. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D slicer, and Lego Mindstorms with ROS as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by the medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon-counting detector can help enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, which can lead to high noise in the separate images from each energy bin, in the projection-based weighted image, and in the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon-counting detector, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of CESBCT images compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
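For orientation, a common one-step-late form of the MAP-EM update (generic notation; the specific prior, weighting, and implementation used by the authors are not reproduced here) extends the MLEM update as follows:

x_j^{(k+1)} \;=\;
\frac{x_j^{(k)}}{\displaystyle \sum_i a_{ij} \;+\; \beta\,\frac{\partial U(x)}{\partial x_j}\bigg|_{x = x^{(k)}}}\;
\sum_i a_{ij}\,\frac{y_i}{\sum_{j'} a_{ij'}\, x_{j'}^{(k)}},

where y_i are the measured (energy-bin or weighted) projection counts, a_{ij} is the system matrix, U is a smoothness prior with weight \beta, and setting \beta = 0 recovers the standard MLEM update.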
Spatial-scanning hyperspectral imaging probe for bio-imaging applications
NASA Astrophysics Data System (ADS)
Lim, Hoong-Ta; Murukeshan, Vadakke Matham
2016-03-01
The three common methods of performing hyperspectral imaging are the spatial-scanning, spectral-scanning, and snapshot methods. However, only the spectral-scanning and snapshot methods have been configured as hyperspectral imaging probes to date. This paper presents a spatial-scanning (pushbroom) hyperspectral imaging probe, which is realized by integrating a pushbroom hyperspectral imager with an imaging probe. The proposed hyperspectral imaging probe can also function as an endoscopic probe when integrated with a custom-fabricated image fiber bundle unit. The imaging probe is configured by incorporating a gradient-index lens at the end face of an image fiber bundle that consists of about 50 000 individual fiberlets. The necessary simulations, methodology, and detailed instrumentation aspects are explained, followed by an assessment of the developed probe's performance. Resolution test targets, such as a United States Air Force chart, as well as bio-samples, such as chicken breast tissue with a blood clot, are used for resolution analysis and performance validation. The system is built on a pushbroom hyperspectral imaging system with a video camera and has the advantage of acquiring information from a large number of spectral bands with a selectable region of interest. The advantages of this spatial-scanning hyperspectral imaging probe can be extended to samples or tissues residing in regions that are difficult to access, with potential diagnostic bio-imaging applications.
Medical Image Tamper Detection Based on Passive Image Authentication.
Ulutas, Guzin; Ustubioglu, Arda; Ustubioglu, Beste; V Nabiyev, Vasif; Ulutas, Mustafa
2017-12-01
Telemedicine has gained popularity in recent years. Medical images can be transferred over the Internet to enable telediagnosis between medical staff and to make a patient's history accessible to medical staff from anywhere. Therefore, integrity protection of the medical image is a serious concern due to the broadcast nature of the Internet. Some watermarking techniques have been proposed to control the integrity of medical images. However, they require embedding extra information (a watermark) into the image before transmission, which decreases the visual quality of the medical image and can cause false diagnosis. The proposed method uses a passive image authentication mechanism to detect tampered regions in medical images. Structural texture information is obtained from the medical image by using rotation-invariant local binary patterns (LBPROT) to make keypoint extraction techniques more successful. Keypoints on the texture image are obtained with the scale invariant feature transform (SIFT). Tampered regions are detected by matching the keypoints. The method improves keypoint-based passive image authentication mechanisms (which fail to detect tampering when a smooth region is used to cover an object) by applying LBPROT before keypoint extraction, because smooth regions also carry texture information. Experimental results show that the method detects tampered regions in medical images even if the forged image has undergone attacks (Gaussian blurring/additive white Gaussian noise) or the forged regions are scaled/rotated before pasting.
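A rough sketch of the pipeline's flavour, assuming scikit-image and OpenCV are available (the LBP parameters, ratio threshold, and matching strategy here are illustrative guesses, not the paper's settings):

import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def duplicated_regions(image, ratio=0.6, min_dist=10):
    """Find pairs of keypoints with near-identical descriptors within one image
    (copy-move style duplication), after mapping the grayscale image to a
    rotation-invariant LBP texture map."""
    lbp = local_binary_pattern(image, P=8, R=1, method='ror')
    lbp8 = cv2.normalize(lbp, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    kps, desc = sift.detectAndCompute(lbp8, None)
    if desc is None or len(kps) < 3:
        return []
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, desc, k=3)
    pairs = []
    for ms in matches:
        if len(ms) < 3:
            continue
        _self, best, second = ms               # the first hit is the keypoint itself
        if best.distance < ratio * second.distance:
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_dist:   # ignore near-neighbour matches
                pairs.append((tuple(p1), tuple(p2)))
    return pairs

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (256, 256)).astype(np.uint8)
img[150:190, 150:190] = img[40:80, 40:80]      # simulate a copy-move forgery
print(len(duplicated_regions(img)))            # number of candidate matched keypoint pairs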
NASA PDS IMG: Accessing Your Planetary Image Data
NASA Astrophysics Data System (ADS)
Padams, J.; Grimes, K.; Hollins, G.; Lavoie, S.; Stanboli, A.; Wagstaff, K.
2018-04-01
The Planetary Data System Cartography and Imaging Sciences Node provides a number of tools and services to integrate the 700+ TB of image data so information can be correlated across missions, instruments, and data sets and easily accessed by the science community.
NASA Astrophysics Data System (ADS)
Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind
2016-02-01
Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~ 90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.
SPIDER: Next Generation Chip Scale Imaging Sensor Update
NASA Astrophysics Data System (ADS)
Duncan, A.; Kendrick, R.; Ogden, C.; Wuchenich, D.; Thurman, S.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
2016-09-01
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. This paper provides an overview of performance data on the second-generation PIC for SPIDER developed under the Defense Advanced Research Projects Agency (DARPA)'s SPIDER Zoom research funding. We also update the design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low resolution versions).
NASA Astrophysics Data System (ADS)
Bates, Lisa M.; Hanson, Dennis P.; Kall, Bruce A.; Meyer, Frederic B.; Robb, Richard A.
1998-06-01
An important clinical application of biomedical imaging and visualization techniques is provision of image guided neurosurgical planning and navigation techniques using interactive computer display systems in the operating room. Current systems provide interactive display of orthogonal images and 3D surface or volume renderings integrated with and guided by the location of a surgical probe. However, structures in the 'line-of-sight' path which lead to the surgical target cannot be directly visualized, presenting difficulty in obtaining full understanding of the 3D volumetric anatomic relationships necessary for effective neurosurgical navigation below the cortical surface. Complex vascular relationships and histologic boundaries like those found in artereovenous malformations (AVM's) also contribute to the difficulty in determining optimal approaches prior to actual surgical intervention. These difficulties demonstrate the need for interactive oblique imaging methods to provide 'line-of-sight' visualization. Capabilities for 'line-of- sight' interactive oblique sectioning are present in several current neurosurgical navigation systems. However, our implementation is novel, in that it utilizes a completely independent software toolkit, AVW (A Visualization Workshop) developed at the Mayo Biomedical Imaging Resource, integrated with a current neurosurgical navigation system, the COMPASS stereotactic system at Mayo Foundation. The toolkit is a comprehensive, C-callable imaging toolkit containing over 500 optimized imaging functions and structures. The powerful functionality and versatility of the AVW imaging toolkit provided facile integration and implementation of desired interactive oblique sectioning using a finite set of functions. The implementation of the AVW-based code resulted in higher-level functions for complete 'line-of-sight' visualization.
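A generic sketch of oblique ('line-of-sight') reformatting of a volume, sampling a plane through a point along two in-plane directions with trilinear interpolation; this illustrates the idea only and is not the AVW toolkit API:

import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, center, u, v, size=(128, 128), spacing=1.0):
    """Sample an oblique plane from a 3-D volume.

    The plane passes through `center` (z, y, x in voxel coordinates) and is
    spanned by the in-plane directions `u` and `v` (normalized here);
    trilinear interpolation (order=1) is used at each sample point."""
    u = np.asarray(u, float); u /= np.linalg.norm(u)
    v = np.asarray(v, float); v /= np.linalg.norm(v)
    rows, cols = size
    a = (np.arange(rows) - rows / 2.0) * spacing
    b = (np.arange(cols) - cols / 2.0) * spacing
    A, B = np.meshgrid(a, b, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + u[:, None, None] * A + v[:, None, None] * B)   # shape: 3 x rows x cols
    return map_coordinates(volume, pts, order=1, mode="nearest")

# Example: a 45-degree 'line-of-sight' slice through a synthetic volume.
vol = np.zeros((64, 64, 64))
vol[30:34, :, :] = 1.0
sl = oblique_slice(vol, center=(32, 32, 32), u=(1, 1, 0), v=(0, 0, 1), size=(64, 64))
print(sl.shape)  # (64, 64)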
Optimal integration of daylighting and electric lighting systems using non-imaging optics
NASA Astrophysics Data System (ADS)
Scartezzini, J.-L.; Linhart, F.; Kaegi-Kolisnychenko, E.
2007-09-01
Electric lighting is responsible for a significant fraction of electricity consumption in non-residential buildings. Making daylight more available in office and commercial buildings can consequently lead to substantial electricity savings, as well as to improvements in occupants' visual performance and wellbeing. Over the last decades, daylighting technologies have been developed for that purpose, some of which have proven to be highly efficient, such as anidolic daylighting systems. Based on non-imaging optics, these optical devices were designed to achieve efficient collection and redistribution of daylight within deep office rooms. However, in order to benefit from the substantial daylight provision obtained through these systems and convert it into effective electricity savings, novel electric lighting strategies are required. An optimal integration of high-efficacy light sources and efficient luminaires based on non-imaging optics with anidolic daylighting systems can lead to such novel strategies. Starting from the experience gained through the development of an Anidolic Integrated Ceiling (AIC), this paper presents an optimal integrated daylighting and electric lighting system. Computer simulations based on ray-tracing techniques were used to achieve the integration of 36 W fluorescent tubes and non-imaging reflectors with an advanced daylighting system. Lighting power densities lower than 4 W/m² can be achieved in this way within the corresponding office room. On-site monitoring of an integrated daylighting and electric lighting system carried out in a solar experimental building confirmed the energy and visual performance of such a system: it showed that low lighting power densities can be achieved by combining an anidolic daylighting system with very efficient electric light sources and luminaires.
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet, substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads the scans or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies; these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines across an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
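As one concrete illustration of the kind of modular check a system such as VIPER could host, the sketch below flags slices whose signal statistics deviate strongly from the rest of a series. pydicom is a real library, but the directory layout, threshold, and choice of metric are hypothetical and are not VIPER's actual routines.

```python
import glob
import numpy as np
import pydicom  # reads DICOM files as exported from a PACS

def flag_outlier_slices(series_dir, z_thresh=4.0):
    """Return filenames whose mean intensity is a statistical outlier.

    A deliberately simple stand-in for a per-dataset quality-control
    routine; real checks might target motion, ghosting, or SNR instead.
    """
    files = sorted(glob.glob(f"{series_dir}/*.dcm"))
    means = np.array([pydicom.dcmread(f).pixel_array.mean() for f in files])
    z = (means - means.mean()) / (means.std() + 1e-9)
    return [f for f, zi in zip(files, z) if abs(zi) > z_thresh]

# Usage (paths hypothetical):
# suspicious = flag_outlier_slices("/data/incoming/series_0012")
# if suspicious:
#     print("slices needing review:", suspicious)
```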
An acoustic charge transport imager for high definition television applications
NASA Technical Reports Server (NTRS)
Hunt, W. D.; Brennan, Kevin F.
1994-01-01
The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels per frame. This imager offers an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their integration. The imager chip can be divided into three distinct components: (1) image capture via an array of avalanche photodiodes (APDs), (2) charge collection, storage and overflow control via a charge transfer transistor device (CTD), and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APDs allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing towards the development of manufacturable designs for each of these component devices. In addition to the development of each of the three distinct components, work towards their integration is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail in Sections 2-4.
Iodine 125 Imaging in Mice Using NaI(Tl)/Flat Panel PMT Integral Assembly
NASA Astrophysics Data System (ADS)
Cinti, M. N.; Majewski, S.; Williams, M. B.; Bachmann, C.; Cominelli, F.; Kundu, B. K.; Stolin, A.; Popov, V.; Welch, B. L.; De Vincentis, G.; Bennati, P.; Betti, M.; Ridolfi, S.; Pani, R.
2007-06-01
Radiolabeled agents that bind to specific receptors have shown great promise in diagnosing and characterizing tumor cell biology. In vivo imaging of gene transcription and protein expression represents another area of interest. The radioisotope 125I is commercially available as a label for molecular probes and is utilized by researchers in small animal studies. We propose an advanced imaging detector based on a planar NaI(Tl) integral assembly with a Hamamatsu Flat Panel Photomultiplier (MA-PMT), representing one of the best trade-offs between spatial resolution and detection efficiency. We characterized the imaging performance of this planar detector in comparison with a gamma camera based on a pixellated scintillator. We also tested the in-vivo imaging capability by acquiring images of mice as part of a study of inflammatory bowel disease (IBD). In this study, four 25 g mice with an IBD-like phenotype (SAMP1/YitFc) were injected with 375, 125, 60 and 30 μCi of 125I-labelled antibody against mucosal vascular addressin cell adhesion molecule (MAdCAM-1), which is up-regulated in the presence of inflammation. Two mice without bowel inflammation were injected with 150 and 60 μCi of the labeled anti-MAdCAM-1 antibody as controls. To better evaluate the performance of the integral assembly detector, we also acquired mouse images with a dual-modality (X- and gamma-ray) camera dedicated to small animal imaging. The results obtained with this new detector are notable: images of SAMP1/YitFc mice injected with 30 μCi activity show inflammation throughout the intestinal tract, with the disease very well defined at two hours post-injection.
Vokes, David E.; Jackson, Ryan; Guo, Shuguang; Perez, Jorge A.; Su, Jianping; Ridgway, James M.; Armstrong, William B.; Chen, Zhongping; Wong, Brian J. F.
2014-01-01
Objectives Optical coherence tomography (OCT) is a new imaging modality that uses near-infrared light to produce cross-sectional images of tissue with a resolution approaching that of light microscopy. We have previously reported use of OCT imaging of the vocal folds (VFs) during direct laryngoscopy with a probe held in contact or near-contact with the VFs. The aim of this study was to develop and evaluate a novel OCT system integrated with a surgical microscope to allow hands-free OCT imaging of the VFs, which could be performed simultaneously with microscopic visualization. Methods We performed a prospective evaluation of a new method of acquiring OCT images of the VFs. Results An OCT system was successfully integrated with a surgical microscope to permit noncontact OCT imaging of the VFs of 10 patients. With this novel device we were able to identify VF epithelium and lamina propria; however, the resolution was reduced compared to that achieved with standard contact or near-contact OCT. Conclusions Optical coherence tomography is able to produce high-resolution images of vocal fold mucosa to a maximum depth of 1.6 mm. It may be used in the diagnosis of VF lesions, particularly early squamous cell carcinoma, in which OCT can show disruption of the basement membrane. Mounting the OCT device directly onto the operating microscope allows hands-free noncontact OCT imaging and simultaneous conventional microscopic visualization of the VFs. However, the lateral resolution of the OCT microscope system is 50 µm, in contrast to that of the conventional handheld probe system (10 µm). Although images at this resolution are still clinically useful, improved resolution would enhance the system's performance, potentially enabling real-time OCT-guided microsurgery of the larynx. PMID:18700431
Imaging Total Stations - Modular and Integrated Concepts
NASA Astrophysics Data System (ADS)
Hauth, Stefan; Schlüter, Martin
2010-05-01
Keywords: 3D metrology, engineering geodesy, digital image processing. Initialized in 2009, the Institute for Spatial Information and Surveying Technology i3mainz, Mainz University of Applied Sciences, has been pushing research towards modular concepts for imaging total stations. On the one hand, this research is driven by the successful setup of high-precision imaging motor theodolites in the recent past; on the other hand, it is pushed by the current introduction of integrated imaging total stations to the positioning market by the manufacturers Topcon and Trimble. Modular concepts for imaging total stations are largely manufacturer independent and consist of a particular combination of accessory hardware, software, and algorithmic procedures. The hardware part consists mainly of an interchangeable eyepiece adapter offering opportunities for digital imaging and motorized focus control. Easy assembly and disassembly in the field are possible, allowing the user to switch between the classical and the imaging use of a robotic total station. The software part primarily has to ensure hardware control, but several levels of algorithmic support may be added and have to be distinguished. Algorithmic procedures allow several levels of calibration to be reached concerning the geometry of the external digital camera and the total station. We give insight into our recent developments and quality characteristics. Both the modular and the integrated approach seem to have their individual strengths and weaknesses; therefore, we expect that the two approaches may point at different target applications. Our aim is a better understanding of appropriate applications for robotic imaging total stations. First results are presented.
PET/CT scanners: a hardware approach to image fusion.
Townsend, David W; Beyer, Thomas; Blodgett, Todd M
2003-07-01
New technology that combines positron emission tomography with x-ray computed tomography (PET/CT) is available from all major vendors of PET imaging equipment: CTI, Siemens, GE, and Philips. Although not all vendors have made the same design choices as those described in this review, all have in common that their high-performance design places a commercial CT scanner in tandem with a commercial PET scanner. The level of physical integration is actually less than that of the original prototype design, where the CT and PET components were mounted on the same rotating support. There will undoubtedly be a demand for PET/CT technology with a greater level of integration, and at a reduced cost. This may be achieved through the design of a scanner specifically for combined anatomical and functional imaging, rather than a design combining separate CT and PET scanners, as in the current approaches. By avoiding the duplication of data acquisition and image reconstruction functions, for example, a more integrated design should also allow cost savings over current commercial PET/CT scanners. The goal is then to design and build a device specifically for imaging the function and anatomy of cancer in the most optimal and effective way, without conceptualizing it as combined PET and CT. The development of devices specifically for imaging a particular disease (e.g., cancer) differs from the conventional approach of, for example, an all-purpose anatomical imaging device such as a CT scanner. This new concept targets more of a disease management approach rather than the usual division into the medical specialties of radiology (anatomical imaging) and nuclear medicine (functional imaging). Copyright 2003 Elsevier Inc. All rights reserved.
Visualizing 3D Objects from 2D Cross Sectional Images Displayed "In-Situ" versus "Ex-Situ"
ERIC Educational Resources Information Center
Wu, Bing; Klatzky, Roberta L.; Stetten, George
2010-01-01
The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to…
[Development of a Text-Data Based Learning Tool That Integrates Image Processing and Displaying].
Shinohara, Hiroyuki; Hashimoto, Takeyuki
2015-01-01
We developed a text-data based learning tool that integrates image processing and display using Excel. The knowledge required for programming this tool is limited to using absolute, relative, and composite cell references and learning approximately 20 mathematical functions available in Excel. The new tool is capable of resolution translation, geometric transformation, spatial-filter processing, Radon transform, Fourier transform, convolutions, correlations, deconvolutions, wavelet transform, mutual information, and simulation of proton density-, T1-, and T2-weighted MR images. The processed images of 128 × 128 or 256 × 256 pixels are observed directly within Excel worksheets without using any particular image display software. The results of image processing using this tool were compared with those obtained using the C language, and the new tool was judged to have sufficient accuracy to be practically useful. The images displayed on Excel worksheets were compared with images shown using binary-data display software; this comparison indicated that the image quality of the Excel worksheets was nearly equal to that of the latter in visual impression. Since image processing is performed on text data, the process is transparent and can readily be compared with the mathematical equations used within the program. We concluded that the newly developed tool is adequate as a computer-assisted learning tool for use in medical image processing.
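For readers who want to check the worksheet arithmetic against conventional code, the snippet below reproduces the kind of 3×3 spatial (mean) filter that such a tool implements with relative cell references. It is a generic NumPy/SciPy sketch for comparison purposes, not the Excel tool itself.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# A 3x3 mean filter: each output pixel is the average of its 3x3
# neighbourhood, which is what a worksheet formula such as
# =AVERAGE(B2:D4), copied across a grid of pixel values, computes.
image = np.random.randint(0, 256, size=(128, 128)).astype(float)
smoothed = uniform_filter(image, size=3, mode="nearest")
```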
Fan, Jianping; Gao, Yuli; Luo, Hangzai
2008-03-01
In this paper, we have developed a new scheme for achieving multilevel annotations of large-scale images automatically. To achieve a more complete representation of the various visual properties of the images, both global visual features and local visual features are extracted for image content representation. To tackle the problem of huge intra-concept visual diversity, multiple types of kernels are integrated to characterize the diverse visual similarity relationships between the images more precisely, and a multiple kernel learning algorithm is developed for SVM image classifier training. To address the problem of huge inter-concept visual similarity, a novel multitask learning algorithm is developed to learn the correlated classifiers for the sibling image concepts under the same parent concept and enhance their discrimination and adaptation power significantly. To tackle the problem of huge intra-concept visual diversity for the image concepts at the higher levels of the concept ontology, a novel hierarchical boosting algorithm is developed to learn their ensemble classifiers hierarchically. In order to assist users in selecting more effective hypotheses for image classifier training, we have developed a novel hyperbolic framework for large-scale image visualization and interactive hypothesis assessment. Our experiments on large-scale image collections have obtained very positive results.
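The multiple-kernel idea above boils down to classifying with a weighted combination of per-feature kernel matrices. The sketch below shows that combination step with scikit-learn's precomputed-kernel SVM; the fixed weights stand in for learned ones and the features are synthetic, so this illustrates the principle rather than the authors' algorithm.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

# Two feature types (e.g. global colour vs. local texture descriptors)
rng = np.random.default_rng(0)
X_global = np.abs(rng.normal(size=(200, 32)))
X_local = np.abs(rng.normal(size=(200, 64)))   # chi2 kernel needs non-negative data
y = rng.integers(0, 2, size=200)

# One kernel per feature type, combined with (here fixed) weights;
# multiple kernel learning would optimise these weights instead.
K = 0.6 * rbf_kernel(X_global, gamma=0.1) + 0.4 * chi2_kernel(X_local)

clf = SVC(kernel="precomputed").fit(K, y)
train_acc = clf.score(K, y)
```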
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital.
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Choi, Young Hwan; Cho, Yong Kyun
2013-12-01
The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE.
XDS-I Gateway Development for HIE Connectivity with Legacy PACS at Gil Hospital
Simalango, Mikael Fernandus; Kim, Youngchul; Seo, Young Tae; Cho, Yong Kyun
2013-01-01
Objectives The ability to support healthcare document sharing is imperative in a health information exchange (HIE). Sharing imaging documents or images, however, can be challenging, especially when they are stored in a picture archiving and communication system (PACS) archive that does not support document sharing via standard HIE protocols. This research proposes a standard-compliant imaging gateway that enables connectivity between a legacy PACS and the entire HIE. Methods Investigation of the PACS solutions used at Gil Hospital was conducted. An imaging gateway application was then developed using a Java technology stack. Imaging document sharing capability enabled by the gateway was tested by integrating it into Gil Hospital's order communication system and its HIE infrastructure. Results The gateway can acquire radiology images from a PACS storage system, provide and register the images to Gil Hospital's HIE for document sharing purposes, and make the images retrievable by a cross-enterprise document sharing document viewer. Conclusions Development of an imaging gateway that mediates communication between a PACS and an HIE can be considered a viable option when the PACS does not support the standard protocol for cross-enterprise document sharing for imaging. Furthermore, the availability of common HIE standards expedites the development and integration of the imaging gateway with an HIE. PMID:24523994
Handheld ultrasound array imaging device
NASA Astrophysics Data System (ADS)
Hwang, Juin-Jet; Quistgaard, Jens
1999-06-01
A handheld ultrasound imaging device, one that weighs less than five pounds, has been developed for diagnosing trauma on the combat battlefield as well as for a variety of commercial mobile diagnostic applications. This handheld device consists of four component ASICs, each designed using state-of-the-art microelectronics technologies. These ASICs are integrated with a convex array transducer to allow high-quality imaging of soft tissues and blood flow in real time. The device is designed to be battery driven or AC powered, with built-in image storage and cine-loop playback capability. Design methodologies for a handheld device are fundamentally different from those for a cart-based system. Since the system architecture, the signal- and image-processing algorithms, and the image control circuitry and software in this device are designed for large-scale integration, the imaging performance of the device is designed to be adequate for the intended applications. To extend battery life, low-power design rules and power management circuits are incorporated in the design of each component ASIC. The performance of the prototype device is currently being evaluated for various applications such as primary image screening, fetal imaging in obstetrics, and foreign object detection and wound assessment for emergency care.
Full-field high-speed laser Doppler imaging system for blood-flow measurements
NASA Astrophysics Data System (ADS)
Serov, Alexandre; Lasser, Theo
2006-02-01
We describe the design and performance of a new full-field high-speed laser Doppler imaging system developed for mapping and monitoring of blood flow in biological tissue. The total imaging time for a 256 × 256 pixel region of interest is 1.2 seconds. An integrating CMOS image sensor is utilized to detect the Doppler signal at a plurality of points simultaneously on the sample, which is illuminated by a divergent laser beam with a uniform intensity profile. The integrating property of the detector improves the signal-to-noise ratio of the measurement, which results in the high-quality flow images provided by the system. The new technique is real-time and non-invasive, and the instrument is easy to use. The wide range of potential applications is one of the major challenges for future use of the imager. High-resolution, high-speed laser Doppler perfusion imaging is a promising optical technique for diagnosing and assessing treatment of diseases such as atherosclerosis, psoriasis, diabetes, skin cancer, allergies, peripheral vascular disease, skin irritancy and wound healing. We present some biological applications of the new imager and discuss perspectives for future implementations of the imager for clinical and physiological applications.
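Full-field laser Doppler imagers of this kind typically convert each pixel's intensity time series into a perfusion estimate via the first moment of its power spectrum. The sketch below shows that per-pixel computation in NumPy under simplified assumptions (no detector noise correction, arbitrary units, hypothetical bandwidth); it is illustrative and not the authors' processing chain.

```python
import numpy as np

def perfusion_map(frames, fs, f_lo=25.0, f_hi=12500.0):
    """Estimate relative perfusion from a stack of frames.

    frames : array (T, H, W), intensity time series per pixel
    fs     : frame (sampling) rate in Hz
    Perfusion is taken as the first spectral moment (sum of f * P(f)
    over a bandwidth), normalised by the squared mean intensity.
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fs)
    spec = np.abs(np.fft.rfft(frames - frames.mean(axis=0), axis=0)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    first_moment = np.tensordot(freqs[band], spec[band], axes=(0, 0))
    dc = frames.mean(axis=0) ** 2 + 1e-12   # intensity normalisation
    return first_moment / dc

# Example with synthetic data: 64 frames of a 256x256 region at 10 kHz
frames = np.random.rand(64, 256, 256)
flow = perfusion_map(frames, fs=10_000.0)
```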
A Framework for Integration of Heterogeneous Medical Imaging Networks
Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos
2014-01-01
Medical imaging is becoming increasingly important in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability of PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS. PMID:25279021
A framework for integration of heterogeneous medical imaging networks.
Viana-Ferreira, Carlos; Ribeiro, Luís S; Costa, Carlos
2014-01-01
Medical imaging is becoming increasingly important in medical diagnosis and treatment support. Much of this is due to computers, which have revolutionized medical imaging not only in the acquisition process but also in the way images are visualized, stored, exchanged and managed. Picture Archiving and Communication Systems (PACS) are an example of how medical imaging takes advantage of computers. To solve problems of interoperability of PACS and medical imaging equipment, the Digital Imaging and Communications in Medicine (DICOM) standard was defined and widely implemented in current solutions. More recently, the need to exchange medical data between distinct institutions resulted in the Integrating the Healthcare Enterprise (IHE) initiative, which contains a content profile especially conceived for medical imaging exchange: Cross-Enterprise Document Sharing for Imaging (XDS-I). Moreover, due to application requirements, many solutions developed private networks to support their services. For instance, some applications support enhanced query and retrieve over DICOM object metadata. This paper proposes an integration framework for medical imaging networks that provides protocol interoperability and data federation services. It is an extensible plugin system that supports standard approaches (DICOM and XDS-I), but is also capable of supporting private protocols. The framework is being used in the Dicoogle Open Source PACS.
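The extensible plugin system described above can be pictured as a small adapter interface that every protocol backend (DICOM, XDS-I, or a private one) implements, with a federation layer querying all registered plugins. The sketch below is a generic Python rendering of that idea; the class and method names are hypothetical and are not the Dicoogle API.

```python
from abc import ABC, abstractmethod
from typing import Dict, Iterable, List


class ProtocolPlugin(ABC):
    """Adapter hiding one imaging network/protocol behind a common API."""

    @abstractmethod
    def query(self, criteria: Dict[str, str]) -> List[Dict[str, str]]:
        """Return study-level metadata records matching `criteria`."""

    @abstractmethod
    def retrieve(self, study_uid: str) -> bytes:
        """Fetch the study payload (e.g. a DICOM archive) by its UID."""


class Federation:
    """Data-federation service dispatching to every registered plugin."""

    def __init__(self) -> None:
        self._plugins: List[ProtocolPlugin] = []

    def register(self, plugin: ProtocolPlugin) -> None:
        self._plugins.append(plugin)

    def query_all(self, criteria: Dict[str, str]) -> Iterable[Dict[str, str]]:
        for plugin in self._plugins:
            yield from plugin.query(criteria)

# A concrete DicomPlugin or XdsIPlugin would subclass ProtocolPlugin and
# translate `criteria` into C-FIND or XDS-I registry queries respectively.
```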
Analysis of autostereoscopic three-dimensional images using multiview wavelets.
Saveljev, Vladimir; Palchikova, Irina
2016-08-10
We propose that multiview wavelets can be used in processing multiview images. The reference functions for the synthesis/analysis of multiview images are described. The synthesized binary images were observed experimentally as three-dimensional visual images. The symmetric multiview B-spline wavelets are proposed. The locations recognized in the continuous wavelet transform correspond to the layout of the test objects. The proposed wavelets can be applied to the multiview, integral, and plenoptic images.
NASA Astrophysics Data System (ADS)
Forsberg, Fredrik; Roxhed, Niclas; Fischer, Andreas C.; Samel, Björn; Ericsson, Per; Hoivik, Nils; Lapadatu, Adriana; Bring, Martin; Kittilsland, Gjermund; Stemme, Göran; Niklaus, Frank
2013-09-01
Imaging in the long wavelength infrared (LWIR) range from 8 to 14 μm is an extremely useful tool for non-contact measurement and imaging of temperature in many industrial, automotive and security applications. However, the cost of the infrared (IR) imaging components has to be significantly reduced to make IR imaging a viable technology for many cost-sensitive applications. This paper demonstrates new and improved fabrication and packaging technologies for next-generation IR imaging detectors based on uncooled IR bolometer focal plane arrays. The proposed technologies include very-large-scale heterogeneous integration for combining high-performance SiGe quantum-well bolometers with electronic integrated read-out circuits, and CMOS-compatible wafer-level vacuum packaging. The fabrication and characterization of bolometers with a pitch of 25 μm × 25 μm that are arranged on read-out wafers in arrays with 320 × 240 pixels are presented. The bolometers contain a multi-layer quantum-well SiGe thermistor with a temperature coefficient of resistance of -3.0%/K. The proposed CMOS-compatible wafer-level vacuum packaging technology uses Cu-Sn solid-liquid interdiffusion (SLID) bonding. The presented technologies are suitable for implementation in cost-efficient fabless business models, with the potential to bring about the cost reduction needed to enable low-cost IR imaging products for industrial, security and automotive applications.
Integrated OCT-US catheter for detection of cancer in the gastrointestinal tract
NASA Astrophysics Data System (ADS)
Li, Jiawen; Ma, Teng; Cummins, Thomas; Shung, K. Kirk; Van Dam, Jacques; Zhou, Qifa; Chen, Zhongping
2015-03-01
Gastrointestinal tract cancer, the most common type of cancer, has a very low survival rate, especially for pancreatic cancer (five-year survival rate of 5%) and bile duct cancer (five-year survival rate of 12%). Here, we propose to use an integrated OCT-US catheter for cancer detection. OCT is targeted at acquiring detailed information, such as dysplasia and neoplasia, for early detection of tumors. US is used for staging cancers according to the size of the primary tumor and whether or not it has invaded lymph nodes and other parts of the body. Considering the lumen size of the GI tract, an OCT system with a long imaging range (> 10 mm) and a US imaging system with a center frequency of 40 MHz (penetration depth > 5 mm) were used. The OCT probe was also designed for long-range imaging. The side-view OCT and US probes were sealed inside one probe cap piece and one torque coil, forming an integrated probe. This probe was then inserted into a catheter sheath which fits in the channel of a duodenoscope and can be navigated smoothly into the bile duct by the elevator of the duodenoscope. We have imaged 5 healthy and 2 diseased bile ducts. In the OCT images, disorganized layer structures and heterogeneous regions demonstrated the existence of tumors. Micro-calcifications can be observed in the corresponding US images.
Jia, Xun; Tian, Zhen; Xi, Yan; Jiang, Steve B; Wang, Ge
2017-01-01
Image guidance plays a critical role in radiotherapy. Currently, cone-beam computed tomography (CBCT) is routinely used in clinics for this purpose. While this modality can provide an attenuation image for therapeutic planning, low soft-tissue contrast affects the delineation of anatomical and pathological features. Efforts have recently been devoted to several MRI linear accelerator (LINAC) projects that have led to the successful combination of a full diagnostic MRI scanner with a radiotherapy machine. We present a new concept for the development of the MRI-LINAC system. Instead of combining a full MRI scanner with the LINAC platform, we propose using an interior MRI (iMRI) approach to image a specific region of interest (RoI) containing the radiation treatment target. While the conventional CBCT component still delivers a global image of the patient's anatomy, the iMRI offers local imaging of high soft-tissue contrast for tumor delineation. We describe a top-level system design for the integration of an iMRI component into an existing LINAC platform. We performed numerical analyses of the magnetic field for the iMRI to show potentially acceptable field properties in a spherical RoI with a diameter of 15 cm. This field could be shielded to a sufficiently low level around the LINAC region to avoid electromagnetic interference. Furthermore, we investigate the dosimetric impact of this integration on the radiotherapy beam.
An acoustic charge transport imager for high definition television applications
NASA Technical Reports Server (NTRS)
Hunt, W. D.; Brennan, K. F.; Summers, C. J.
1994-01-01
The primary goal of this research is to develop a solid-state high definition television (HDTV) imager chip operating at a frame rate of about 170 frames/sec at 2 Megapixels/frame. This imager will offer an order of magnitude improvement in speed over CCD designs and will allow for monolithic imagers operating from the IR to the UV. The technical approach of the project focuses on the development of the three basic components of the imager and their subsequent integration. The camera chip can be divided into three distinct functions: (1) image capture via an array of avalanche photodiodes (APDs); (2) charge collection, storage, and overflow control via a charge transfer transistor device (CTD); and (3) charge readout via an array of acoustic charge transport (ACT) channels. The use of APDs allows for front-end gain at low noise and low operating voltages, while the ACT readout enables concomitant high speed and high charge transfer efficiency. Currently, work is progressing towards the optimization of each of these component devices. In addition to the development of each of the three distinct components, work towards their integration and manufacturability is also progressing. The component designs are considered not only to meet individual specifications but to provide overall system-level performance suitable for HDTV operation upon integration. The ultimate manufacturability and reliability of the chip constrain the design as well. The progress made during this period is described in detail.
Kim, Dae-Seung; Woo, Sang-Yoon; Yang, Hoon Joo; Huh, Kyung-Hoe; Lee, Sam-Sun; Heo, Min-Suk; Choi, Soon-Chul; Hwang, Soon Jung; Yi, Won-Jin
2014-12-01
Accurate surgical planning and transfer of the planning in orthognathic surgery are very important in achieving a successful surgical outcome with appropriate improvement. Conventionally, paper surgery is performed based on a 2D cephalometric radiograph, and the results are expressed using cast models and an articulator. We developed an integrated orthognathic surgery system with 3D virtual planning and image-guided transfer. The maxillary surgery of orthognathic patients was planned virtually, and the planning results were transferred to the cast model by image guidance. During virtual planning, the displacement of the reference points was checked against the displacement from conventional paper surgery at each procedure. The results of virtual surgery were transferred to the physical cast models directly through image guidance. The root mean square (RMS) difference between virtual surgery and conventional model surgery was 0.75 ± 0.51 mm for 12 patients. The RMS difference between virtual surgery and the image-guidance results was 0.78 ± 0.52 mm, which was not significantly different from that of conventional model surgery. The image-guided orthognathic surgery system integrated with virtual planning will replace physical model surgical planning and enable transfer of the virtual planning directly, without the need for an intermediate splint. Copyright © 2014 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
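The RMS figures quoted above are simply the root-mean-square of the 3D displacement between corresponding reference points in the virtual plan and in the transferred (model or image-guided) result. A minimal computation is sketched below with hypothetical point arrays.

```python
import numpy as np

def rms_difference(planned, achieved):
    """RMS of Euclidean distances between corresponding 3D points (N x 3)."""
    d = np.linalg.norm(np.asarray(planned) - np.asarray(achieved), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

# Hypothetical reference points (mm) on a maxillary cast model
planned = [[0.0, 0.0, 0.0], [10.0, 2.0, 1.0], [20.0, -1.0, 3.0]]
achieved = [[0.4, -0.3, 0.2], [10.6, 2.5, 0.6], [19.5, -0.2, 3.4]]
print(rms_difference(planned, achieved))
```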
Relative Pose Estimation Using Image Feature Triplets
NASA Astrophysics Data System (ADS)
Chuang, T. Y.; Rottensteiner, F.; Heipke, C.
2015-03-01
A fully automated reconstruction of the trajectory of image sequences using point correspondences has become routine practice. However, there are cases in which point features are hardly detectable or cannot be localized in a stable distribution, and consequently lead to insufficient pose estimation. This paper presents a triplet-wise scheme for calibrated relative pose estimation from image point and line triplets, and investigates the effectiveness of the feature integration for relative pose estimation. To this end, we employ an existing point matching technique and propose a method for line triplet matching in which the relative poses are resolved during the matching procedure. The line matching method aims at establishing hypotheses about potential minimal line matches that can be used for determining the parameters of relative orientation (pose estimation) of two images with respect to the reference one, and then quantifying the agreement using the estimated orientation parameters. Rather than randomly choosing the line candidates in the matching process, we generate an associated lookup table to guide the selection of potential line matches. In addition, we integrate the homologous point and line triplets into a common adjustment procedure. In order to also work with image sequences, the adjustment is formulated in an incremental manner. The proposed scheme is evaluated with both synthetic and real datasets, demonstrating its satisfactory performance and revealing the effectiveness of image feature integration.
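For context, the point-based baseline that such a triplet scheme extends is the classical two-view relative pose recovery from matched points via the essential matrix. The sketch below uses OpenCV for that baseline with synthetic correspondences and a hypothetical camera matrix; the line-triplet matching and incremental adjustment of the paper are not reproduced here.

```python
import numpy as np
import cv2

# Hypothetical calibrated camera (fx, fy, cx, cy in pixels)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Matched point coordinates in image 1 and image 2 (N x 2); in practice
# these come from a feature matcher, here they are random placeholders.
pts1 = np.random.rand(100, 2).astype(np.float64) * [640, 480]
pts2 = pts1 + np.random.rand(100, 2) * 2.0   # small synthetic disparity

# Essential matrix with RANSAC, then decomposition into rotation R and
# (unit-norm) translation t of camera 2 relative to camera 1.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
```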
Hot topics in biomedical ultrasound: ultrasound therapy and its integration with ultrasonic imaging
NASA Astrophysics Data System (ADS)
Everbach, E. Carr
2005-09-01
Since the development of biomedical ultrasound imaging from sonar after WWII, there has been a clear divide between ultrasonic imaging and ultrasound therapy. While imaging techniques are designed to cause as little change as possible in the tissues through which ultrasound propagates, ultrasound therapy typically relies upon heating or acoustic cavitation to produce a desirable therapeutic effect. Concerns over the increasingly high acoustic outputs of diagnostic ultrasound scanners prompted the adoption of the Mechanical Index (MI) and Thermal Index (TI) in the early 1990s. Therapeutic applications of ultrasound, meanwhile, have evolved from deep tissue heating in sports medicine to include targeted drug delivery, tumor and plaque ablation, cauterization via high intensity focused ultrasound (HIFU), and accelerated dissolution of blood clots. The integration of ultrasonic imaging and therapy in one device is just beginning, but the promise of improved patient outcomes is balanced by regulatory and practical impediments.
An integrated single- and two-photon non-diffracting light-sheet microscope
NASA Astrophysics Data System (ADS)
Lau, Sze Cheung; Chiu, Hoi Chun; Zhao, Luwei; Zhao, Teng; Loy, M. M. T.; Du, Shengwang
2018-04-01
We describe a fluorescence optical microscope with both single-photon and two-photon non-diffracting light-sheet excitation for large-volume imaging. With a special design to accommodate two different wavelength ranges (visible: 400-700 nm and near infrared: 800-1200 nm), we combine the line-Bessel sheet (LBS, for single-photon excitation) and the scanning Bessel beam (SBB, for two-photon excitation) light sheet in a single microscope setup. For a transparent thin sample where scattering can be ignored, LBS single-photon excitation is the optimal imaging solution. When light scattering becomes significant, as in deep-cell or deep-tissue imaging, we use SBB light-sheet two-photon excitation at a longer wavelength. We achieved nearly identical lateral/axial resolution of about 350/270 nm for both imaging modes. This integrated light-sheet microscope may find wide application in live-cell and live-tissue three-dimensional high-speed imaging.
Initial Investigation of preclinical integrated SPECT and MR imaging.
Hamamura, Mark J; Ha, Seunghoon; Roeck, Werner W; Wagenaar, Douglas J; Meier, Dirk; Patt, Bradley E; Nalcioglu, Orhan
2010-02-01
Single-photon emission computed tomography (SPECT) can provide specific functional information, while magnetic resonance imaging (MRI) can provide high-spatial-resolution anatomical information as well as complementary functional information. In this study, we utilized a dual-modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source.
Initial Investigation of Preclinical Integrated SPECT and MR Imaging
Hamamura, Mark J.; Ha, Seunghoon; Roeck, Werner W.; Wagenaar, Douglas J.; Meier, Dirk; Patt, Bradley E.; Nalcioglu, Orhan
2014-01-01
Single-photon emission computed tomography (SPECT) can provide specific functional information, while magnetic resonance imaging (MRI) can provide high-spatial-resolution anatomical information as well as complementary functional information. In this study, we utilized a dual-modality SPECT/MRI (MRSPECT) system to investigate the integration of SPECT and MRI for improved image accuracy. The MRSPECT system consisted of a cadmium-zinc-telluride (CZT) nuclear radiation detector interfaced with a specialized radiofrequency (RF) coil that was placed within a whole-body 4 T MRI system. The importance of proper corrections for non-uniform detector sensitivity and Lorentz force effects was demonstrated. MRI data were utilized for attenuation correction (AC) of the nuclear projection data and optimized Wiener filtering of the SPECT reconstruction for improved image accuracy. Finally, simultaneous dual-imaging of a nude mouse was performed to demonstrate the utility of co-registration for accurate localization of a radioactive source. PMID:20082527
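The optimized Wiener filtering of the SPECT reconstruction referred to above is, at its simplest, an adaptive local noise-suppression step applied to the reconstructed slices. The snippet below shows a generic version with SciPy's Wiener filter on a synthetic slice; the window size and noise estimate that the authors optimized from the MRI data are replaced by defaults here.

```python
import numpy as np
from scipy.signal import wiener

# Synthetic noisy reconstructed slice (stand-in for a SPECT image)
rng = np.random.default_rng(1)
slice_recon = rng.poisson(lam=20.0, size=(128, 128)).astype(float)

# Local adaptive Wiener filter; `mysize` is the local window, and the noise
# power can be supplied explicitly when it has been estimated separately.
filtered = wiener(slice_recon, mysize=5)
```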
ATAC-see reveals the accessible genome by transposase-mediated imaging and sequencing.
Chen, Xingqi; Shen, Ying; Draper, Will; Buenrostro, Jason D; Litzenburger, Ulrike; Cho, Seung Woo; Satpathy, Ansuman T; Carter, Ava C; Ghosh, Rajarshi P; East-Seletsky, Alexandra; Doudna, Jennifer A; Greenleaf, William J; Liphardt, Jan T; Chang, Howard Y
2016-12-01
Spatial organization of the genome plays a central role in gene expression, DNA replication, and repair. But current epigenomic approaches largely map DNA regulatory elements outside of the native context of the nucleus. Here we report assay of transposase-accessible chromatin with visualization (ATAC-see), a transposase-mediated imaging technology that employs direct imaging of the accessible genome in situ, cell sorting, and deep sequencing to reveal the identity of the imaged elements. ATAC-see revealed the cell-type-specific spatial organization of the accessible genome and the coordinated process of neutrophil chromatin extrusion, termed NETosis. Integration of ATAC-see with flow cytometry enables automated quantitation and prospective cell isolation as a function of chromatin accessibility, and it reveals a cell-cycle dependence of chromatin accessibility that is especially dynamic in G1 phase. The integration of imaging and epigenomics provides a general and scalable approach for deciphering the spatiotemporal architecture of gene control.
Character feature integration of Chinese calligraphy and font
NASA Astrophysics Data System (ADS)
Shi, Cao; Xiao, Jianguo; Jia, Wenhua; Xu, Canhui
2013-01-01
A framework is proposed in this paper to effectively generate a new hybrid character type by integrating the local contour features of Chinese calligraphy with the structural features of a font in a computer system. To explore the traditional artistic manifestation of calligraphy, a multi-directional spatial filter is applied for local contour feature extraction. Then the contour of the character image is divided into sub-images. The sub-images at the identical position across various characters are modeled by a Gaussian distribution. According to this probability distribution, dilation and erosion operators are designed to adjust the boundary of the font image. New Chinese character images are then generated which possess both the contour features of artistic calligraphy and the elaborate structural features of the font. Experimental results demonstrate that the new characters are visually acceptable, and the proposed framework is an effective and efficient strategy to automatically generate the new hybrid characters of calligraphy and font.
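The boundary-adjustment step described above can be pictured as applying dilation where the calligraphy contour model makes a thicker stroke likely and erosion where it makes a thinner one. The OpenCV sketch below illustrates that per-pixel choice with a hypothetical probability map and threshold; it is not the authors' estimator.

```python
import numpy as np
import cv2

def adjust_boundary(font_img, stroke_prob, thresh=0.5, ksize=3):
    """Thicken or thin a binary font image guided by a probability map.

    font_img    : uint8 binary character image (strokes = 255)
    stroke_prob : float map in [0, 1]; high values favour thicker strokes
    """
    kernel = np.ones((ksize, ksize), np.uint8)
    dilated = cv2.dilate(font_img, kernel)   # thicker strokes
    eroded = cv2.erode(font_img, kernel)     # thinner strokes
    return np.where(stroke_prob > thresh, dilated, eroded).astype(np.uint8)

# Hypothetical inputs: a random binary glyph and a smooth probability map
glyph = (np.random.rand(64, 64) > 0.7).astype(np.uint8) * 255
prob = cv2.GaussianBlur(np.random.rand(64, 64).astype(np.float32), (11, 11), 0)
adjusted = adjust_boundary(glyph, prob)
```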
eHXI: A permanently installed, hard x-ray imager for the National Ignition Facility
Doppner, T.; Bachmann, B.; Albert, F.; ...
2016-06-14
We have designed and built a multi-pinhole imaging system for high energy x-rays (≥ 50 keV) that is permanently installed in the equatorial plane outside of the target chamber at the National Ignition Facility (NIF). It records absolutely calibrated, time-integrated x-ray images with the same line-of-sight as the multi-channel, spatially integrating hard x-ray detector FFLEX [McDonald et al., Rev. Sci. Instrum. 75 (2004) 3753], having a side view of indirect-drive inertial confinement fusion (ICF) implosion targets. The equatorial hard x-ray imager (eHXI) has recorded images on the majority of ICF implosion experiments since May 2011. Lastly, eHXI provides valuable information on hot electron distribution in hohlraum experiments, target alignment, and potential hohlraum drive asymmetries, and serves as a long-term reference for the FFLEX diagnostics.