Automated synthesis of image processing procedures using AI planning techniques
NASA Technical Reports Server (NTRS)
Chien, Steve; Mortensen, Helen
1994-01-01
This paper describes the Multimission VICAR (Video Image Communication and Retrieval) Planner (MVP) system (Chien 1994), which uses artificial intelligence planning techniques (Iwasaki & Friedland 1985; Penberthy & Weld 1992; Stefik 1981) to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing subprograms) in response to image processing requests made to the JPL Multimission Image Processing Laboratory (MIPL). MVP allows the user to specify image processing requirements in terms of the types of correction required. Given this information, MVP derives the unspecified required processing steps and determines appropriate image processing programs and parameters to achieve the specified image processing goals. This information is output as an executable image processing program which can then be run to fill the processing request.
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
Simplified and powerful image processing procedures to separate the paddy of KHAW DOK MALI 105 (Thai jasmine rice) from the paddy of the sticky rice variety RD6 are proposed. The procedures consist of image thresholding, image chain coding, and curve fitting with a polynomial function. From the fitting, three parameters of each variety (perimeter, area, and eccentricity) were calculated. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can reliably separate the two varieties.
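The pipeline above reduces each grain to a few shape parameters. A minimal sketch of that idea follows (not the authors' code: the threshold value, the nested-list image representation, and the moment-based eccentricity are illustrative assumptions; the chain-coding and PCA stages are omitted):

```python
import math

def shape_features(img, thresh):
    """Binarize a grayscale image (list of rows) and return
    (area, perimeter, eccentricity) of the foreground pixels."""
    h, w = len(img), len(img[0])
    mask = [[1 if img[y][x] > thresh else 0 for x in range(w)] for y in range(h)]
    pts = [(x, y) for y in range(h) for x in range(w) if mask[y][x]]
    if not pts:
        return 0, 0, 0.0
    area = len(pts)

    # perimeter: foreground pixels with at least one 4-connected background neighbour
    def bg(x, y):
        return not (0 <= x < w and 0 <= y < h and mask[y][x])
    perimeter = sum(1 for x, y in pts
                    if bg(x + 1, y) or bg(x - 1, y) or bg(x, y + 1) or bg(x, y - 1))

    # eccentricity from central second moments (eigenvalues of the 2x2 covariance)
    cx = sum(x for x, _ in pts) / area
    cy = sum(y for _, y in pts) / area
    mxx = sum((x - cx) ** 2 for x, _ in pts) / area
    myy = sum((y - cy) ** 2 for _, y in pts) / area
    mxy = sum((x - cx) * (y - cy) for x, y in pts) / area
    tr, det = mxx + myy, mxx * myy - mxy * mxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc  # major/minor axis variances
    ecc = math.sqrt(1 - lam2 / lam1) if lam1 > 0 else 0.0
    return area, perimeter, ecc
```

An elongated grain yields eccentricity near 1, a round one near 0, which is what makes the feature useful for separating varieties.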
ERIC Educational Resources Information Center
Ricco, Robert B.; Overton, Willis F.
2011-01-01
Many current psychological models of reasoning minimize the role of deductive processes in human thought. In the present paper, we argue that deduction is an important part of ordinary cognition and we propose that a dual systems Competence [image omitted] Procedural processing model conceptualized within relational developmental systems theory…
Acquisition and Post-Processing of Immunohistochemical Images.
Sedgewick, Jerry
2017-01-01
Augmentation of digital images is almost always a necessity in order to obtain a reproduction that matches the appearance of the original. However, that augmentation can mislead if it is done incorrectly and not within reasonable limits. When procedures are in place for insuring that originals are archived, and image manipulation steps reported, scientists not only follow good laboratory practices, but avoid ethical issues associated with post processing, and protect their labs from any future allegations of scientific misconduct. Also, when procedures are in place for correct acquisition of images, the extent of post processing is minimized or eliminated. These procedures include white balancing (for brightfield images), keeping tonal values within the dynamic range of the detector, frame averaging to eliminate noise (typically in fluorescence imaging), use of the highest bit depth when a choice is available, flatfield correction, and archiving of the image in a non-lossy format (not JPEG).When post-processing is necessary, the commonly used applications for correction include Photoshop, and ImageJ, but a free program (GIMP) can also be used. Corrections to images include scaling the bit depth to higher and lower ranges, removing color casts from brightfield images, setting brightness and contrast, reducing color noise, reducing "grainy" noise, conversion of pure colors to grayscale, conversion of grayscale to colors typically used in fluorescence imaging, correction of uneven illumination (flatfield correction), merging color images (fluorescence), and extending the depth of focus. These corrections are explained in step-by-step procedures in the chapter that follows.
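Flatfield correction, one of the acquisition-stage procedures listed above, divides the raw image by a reference frame of the uneven illumination. A minimal pure-Python sketch (illustrative only; in practice this is done in ImageJ, Photoshop, or GIMP as the chapter describes):

```python
def flatfield_correct(raw, flat):
    """Divide a raw image (list of rows) by a flatfield reference frame,
    rescaling by the flatfield mean so overall brightness is preserved."""
    h, w = len(raw), len(raw[0])
    mean_flat = sum(sum(row) for row in flat) / (h * w)
    return [[raw[y][x] * mean_flat / flat[y][x] for x in range(w)]
            for y in range(h)]
```

Pixels that were dimmed by vignetting (low flatfield values) are boosted, and evenly lit pixels are left near their original values.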
NASA Astrophysics Data System (ADS)
Hsu, Chih-Yu; Huang, Hsuan-Yu; Lee, Lin-Tsang
2010-12-01
This paper proposes a new four-stage procedure for preserving the desired edges during noise reduction. At the first stage, a denoised image is obtained from the noisy image. At the second stage, an edge map is obtained with the Canny edge detector to find the edges of the object contours. At the third stage, optional manual modification of the edge map captures any remaining desired edges of the object contours. At the final stage, a new method called the Edge Preserved Inhomogeneous Diffusion Equation (EPIDE) smooths the noisy image, or the image denoised at the first stage, while preserving the edges. Optical character recognition (OCR) experiments show that the proposed procedure yields the best recognition results because of its edge-preservation capability.
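EPIDE itself is not specified in the abstract; the closely related classic Perona-Malik anisotropic diffusion illustrates the edge-preserving smoothing idea (pure-Python sketch; the conductance function and the constants k and dt are assumptions, and the edge-map guidance of the actual procedure is omitted):

```python
import math

def perona_malik(img, iters=10, k=20.0, dt=0.2):
    """Edge-stopping diffusion: flat regions are smoothed, but the
    conductance g = exp(-(|grad|/k)^2) shuts diffusion off across
    large intensity steps, so strong edges survive."""
    h, w = len(img), len(img[0])
    u = [row[:] for row in img]
    for _ in range(iters):
        nxt = [row[:] for row in u]
        for y in range(h):
            for x in range(w):
                c = u[y][x]
                flux = 0.0
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx_ = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx_ < w:
                        d = u[ny][nx_] - c
                        flux += math.exp(-(d / k) ** 2) * d
                nxt[y][x] = c + dt * flux
        u = nxt
    return u
```

On a test image with a strong step edge and one noisy pixel, the noise is smoothed away while the step survives essentially intact.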
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
This paper describes work on the Multimission VICAR Planner (MVP) system to automatically construct executable image processing procedures for custom image processing requests for the JPL Multimission Image Processing Lab (MIPL). This paper focuses on two issues. First, large search spaces caused by complex plans required the use of hand encoded control information. In order to address this in a manner similar to that used by human experts, MVP uses a decomposition-based planner to implement hierarchical/skeletal planning at the higher level and then uses a classical operator based planner to solve subproblems in contexts defined by the high-level decomposition.
Evaluation of noise limits to improve image processing in soft X-ray projection microscopy.
Jamsranjav, Erdenetogtokh; Kuge, Kenichi; Ito, Atsushi; Kinjo, Yasuhito; Shiina, Tatsuo
2017-03-03
Soft X-ray microscopy has been developed for high-resolution imaging of hydrated biological specimens, exploiting the availability of the water window region. In particular, a projection-type microscope offers a wide viewing area, an easy zooming function, and easy extensibility to computed tomography (CT). The blur of the projection image due to Fresnel diffraction of X-rays, which reduces spatial resolution, can be corrected by an iteration procedure, i.e., repeated Fresnel and inverse Fresnel transformations. However, the correction proved insufficient for all images, especially those with low contrast. To improve the effectiveness of image correction by computer processing, this study evaluated the influence of background noise on the iteration procedure through simulation. Images of a model specimen with known morphology substituted for the chromosome images, one of the targets of our microscope. With artificial noise distributed randomly over the images, we introduced two parameters to evaluate noise effects in each situation where the iteration procedure failed, and proposed an upper limit of the noise within which effective iteration on the chromosome images is possible. The study indicates that the new simulation and noise-evaluation method is useful for image processing in which background noise cannot be ignored relative to the specimen image.
NASA Technical Reports Server (NTRS)
Haralick, R. H. (Principal Investigator); Bosley, R. J.
1974-01-01
The author has identified the following significant results. A procedure was developed to extract cross-band textural features from ERTS MSS imagery. Evolving from a single-image texture extraction procedure which uses spatial dependence matrices to measure the relative co-occurrence of nearest-neighbor grey tones, the cross-band texture procedure uses the distribution of neighboring grey tone N-tuple differences to measure the spatial interrelationships, or co-occurrences, of the grey tone N-tuples present in a texture pattern. In both procedures, texture is characterized so as to be invariant under linear grey tone transformations. However, the cross-band procedure complements the single-image procedure by extracting both texture information and spectral information contained in ERTS multi-images. Classification experiments show that when used alone, without spectral processing, the cross-band texture procedure extracts more information than the single-image texture analysis. Results show an improvement in average correct classification from 86.2% to 88.8% for ERTS image no. 1021-16333 with the cross-band texture procedure. However, when combined with spectral features, single-image texture plus spectral features perform better than cross-band texture plus spectral features, with average correct classifications of 93.8% and 91.6%, respectively.
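The spatial dependence (co-occurrence) matrices underlying the single-image procedure can be sketched as follows, for a single offset and a single texture feature (an illustrative pure-Python version; the ERTS-specific quantization and the cross-band N-tuple extension are not reproduced here):

```python
def glcm(img, levels, dx=1, dy=0):
    """Grey-level co-occurrence matrix for offset (dx, dy), counted
    symmetrically and normalised to pair probabilities."""
    h, w = len(img), len(img[0])
    m = [[0.0] * levels for _ in range(levels)]
    n = 0
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                a, b = img[y][x], img[y2][x2]
                m[a][b] += 1
                m[b][a] += 1
                n += 2
    return [[v / n for v in row] for row in m]

def contrast(p):
    """Haralick contrast: expected squared grey-level difference of pairs."""
    k = len(p)
    return sum(p[i][j] * (i - j) ** 2 for i in range(k) for j in range(k))
```

A checkerboard texture concentrates all co-occurrence mass off the diagonal (high contrast), while a uniform patch puts it all on the diagonal (zero contrast).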
USDA-ARS?s Scientific Manuscript database
Using five centimeter resolution images acquired with an unmanned aircraft system (UAS), we developed and evaluated an image processing workflow that included the integration of resolution-appropriate field sampling, feature selection, object-based image analysis, and processing approaches for UAS i...
Digital image processing and analysis for activated sludge wastewater treatment.
Khan, Muhammad Burhan; Lee, Xue Yong; Nisar, Humaira; Ng, Choon Aun; Yeap, Kim Ho; Malik, Aamir Saeed
2015-01-01
Activated sludge systems are generally used in wastewater treatment plants to process domestic influent. Conventionally, activated sludge wastewater treatment is monitored by measuring physico-chemical parameters such as total suspended solids (TSSol), sludge volume index (SVI) and chemical oxygen demand (COD). These laboratory tests take many hours to yield a final measurement. Digital image processing and analysis offers a better alternative, not only to monitor and characterize the current state of activated sludge but also to predict its future state, by correlating the time evolution of parameters extracted from image analysis of flocs and filaments with the physico-chemical parameters. This chapter briefly reviews activated sludge wastewater treatment and the procedures of image acquisition, preprocessing, segmentation and analysis in this specific context. The latter part introduces additional preprocessing procedures, such as z-stacking and image stitching, not previously used in the context of activated sludge. Different preprocessing and segmentation techniques are proposed, along with a survey of imaging procedures reported in the literature. Finally, the morphological parameters derived from image analysis, and their correlation with the monitoring and prediction of activated sludge, are discussed. Image analysis can thus play a very useful role in the monitoring of activated sludge wastewater treatment plants.
Chromaticity based smoke removal in endoscopic images
NASA Astrophysics Data System (ADS)
Tchaka, Kevin; Pawar, Vijay M.; Stoyanov, Danail
2017-02-01
In minimally invasive surgery, image quality is a critical prerequisite to ensure a surgeon's ability to perform a procedure. In endoscopic procedures, image quality can deteriorate for a number of reasons, such as fogging due to the temperature gradient after intra-corporeal insertion, loss of focus, and smoke generated when electro-cautery is used to dissect tissues without bleeding. In this paper we investigate the use of vision processing techniques to remove surgical smoke and improve the clarity of the image. We model the image formation process by introducing a haze medium to account for the degradation of visibility. For simplicity and computational efficiency we use an adapted dark-channel prior method combined with histogram equalization to remove smoke artifacts, recover the radiance image, and enhance the contrast and brightness of the final result. Our initial results on images from robotic-assisted procedures are promising and show that the proposed approach may be used to enhance image quality during surgery without additional suction devices. In addition, the processing pipeline may be used as an important part of a robust surgical vision pipeline that can continue working in the presence of smoke.
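The paper's adaptation is not given in detail; the standard dark-channel haze model it builds on, I = J*t + A*(1-t), can be sketched as follows (pure Python; the fixed atmospheric light A, window size, and omega are illustrative assumptions, and the histogram-equalization step is omitted):

```python
def dark_channel(img, patch=1):
    """Per-pixel minimum over the RGB channels and a (2*patch+1)^2 window."""
    h, w = len(img), len(img[0])
    chan_min = [[min(img[y][x]) for x in range(w)] for y in range(h)]
    return [[min(chan_min[j][i]
                 for j in range(max(0, y - patch), min(h, y + patch + 1))
                 for i in range(max(0, x - patch), min(w, x + patch + 1)))
             for x in range(w)] for y in range(h)]

def desmoke(img, A=255.0, omega=0.95, t0=0.1):
    """Estimate transmission t from the dark channel, then invert the
    haze model I = J*t + A*(1-t) for the radiance J at each pixel."""
    dark = dark_channel(img)
    out = []
    for y, row in enumerate(img):
        new_row = []
        for x, pix in enumerate(row):
            t = max(1.0 - omega * dark[y][x] / A, t0)
            new_row.append(tuple((c - A) / t + A for c in pix))
        out.append(new_row)
    return out
```

Smoke-free dark regions (dark channel near zero) pass through unchanged, while hazy grey regions are pushed back toward their unobscured radiance.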
Terahertz reflection imaging using Kirchhoff migration.
Dorney, T D; Johnson, J L; Van Rudd, J; Baraniuk, R G; Symes, W W; Mittleman, D M
2001-10-01
We describe a new imaging method that uses single-cycle pulses of terahertz (THz) radiation. This technique emulates data-collection and image-processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a simple migration procedure to solve the inverse problem; this permits us to reconstruct the location and shape of targets. These results demonstrate the feasibility of the THz system as a test-bed for the exploration of new seismic processing methods involving complex model systems.
Towards an Intelligent Planning Knowledge Base Development Environment
NASA Technical Reports Server (NTRS)
Chien, S.
1994-01-01
ract describes work in developing knowledge base editing and debugging tools for the Multimission VICAR Planner (MVP) system. MVP uses artificial intelligence planning techniques to automatically construct executable complex image processing procedures (using models of the smaller constituent image processing requests made to the JPL Multimission Image Processing Laboratory.
SU-D-209-03: Radiation Dose Reduction Using Real-Time Image Processing in Interventional Radiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanal, K; Moirano, J; Zamora, D
Purpose: To characterize changes in radiation dose after introducing a new real-time image processing technology in interventional radiology systems. Methods: Interventional radiology (IR) procedures are increasingly complex, at times requiring substantial time and radiation dose. The risk of inducing tissue reactions as well as long-term stochastic effects such as radiation-induced cancer is not trivial. To reduce this risk, IR systems are increasingly equipped with dose reduction technologies. Recently, ClarityIQ (Philips Healthcare) technology was installed in our existing neuroradiology IR (NIR) and vascular IR (VIR) suites. ClarityIQ includes real-time image processing that reduces noise/artifacts, enhances images, and sharpens edges while also reducing radiation dose rates. We reviewed 412 NIR (175 pre- and 237 post-ClarityIQ) procedures and 329 VIR (156 pre- and 173 post-ClarityIQ) procedures performed at our institution. NIR procedures were primarily classified as interventional or diagnostic. VIR procedures included drain port, drain placement, tube change, mesenteric, and implanted venous procedures. Air Kerma (AK, in mGy) was documented for all cases using a commercial radiation exposure management system. Results: Considering all NIR procedures, median AK decreased from 1194 mGy to 561 mGy. Considering all VIR procedures, median AK decreased from 49 to 14 mGy. Both NIR and VIR exhibited a decrease in AK exceeding 50% after ClarityIQ implementation, a statistically significant (p<0.05) difference. Of the 5 most common VIR procedures, all median AK values decreased, but significance (p<0.05) was reached only in venous access (N=53), angio mesenteric (N=41), and drain placement (N=31) procedures. Conclusion: ClarityIQ can reduce dose significantly for both NIR and VIR procedures. Image quality was not assessed in conjunction with the dose reduction.
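The headline comparison, the percent drop in median Air Kerma between cohorts, is simple to compute. A sketch on toy data (the per-procedure dose lists are illustrative inventions; only the two medians echo the paper's NIR figures):

```python
from statistics import median

def percent_reduction(pre, post):
    """Percent drop in median Air Kerma (mGy) between a pre-upgrade
    and a post-upgrade cohort of procedures."""
    m_pre, m_post = median(pre), median(post)
    return 100.0 * (m_pre - m_post) / m_pre
```

With medians of 1194 and 561 mGy this gives a reduction of about 53%, consistent with the paper's ">50%" finding (a significance test such as Mann-Whitney would be needed for the p-value, and is not shown here).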
Slice-thickness evaluation in CT and MRI: an alternative computerised procedure.
Acri, G; Tripepi, M G; Causa, F; Testagrossa, B; Novario, R; Vermiglio, G
2012-04-01
The efficient use of computed tomography (CT) and magnetic resonance imaging (MRI) equipment necessitates adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice thickness (ST) requires scanning phantoms containing test objects (plane, cone or spiral). To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling determination of the full width at half maximum (FWHM) in real time. The phantom consists of a polymethyl methacrylate (PMMA) box, diagonally crossed by a PMMA septum dividing the box into two sections. The phantom images were acquired and processed using the LabView-based procedure. The LabView (LV) results were compared with those obtained by processing the same phantom images with commercial software, and the Fisher exact test (F test) was conducted on the resulting data sets to validate the proposed methodology. In all cases there was no statistically significant variation between the two procedures; the LV procedure can therefore be proposed as a valuable alternative to other commonly used procedures and be reliably used on any CT or MRI scanner.
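The FWHM determination at the heart of the procedure can be sketched as follows (a pure-Python illustration with linear interpolation at the half-maximum crossings; the LabView implementation itself is not reproduced):

```python
def fwhm(profile):
    """Full width at half maximum of a 1-D intensity profile,
    interpolating linearly where the profile crosses half the peak."""
    half = max(profile) / 2.0

    def crossing(idxs):
        prev = None
        for i in idxs:
            if profile[i] >= half:
                if prev is None:
                    return float(i)
                # interpolate between prev (below half) and i (above half)
                f = (half - profile[prev]) / (profile[i] - profile[prev])
                return prev + f * (i - prev)
            prev = i
        raise ValueError("profile never reaches half maximum")

    left = crossing(range(len(profile)))
    right = crossing(range(len(profile) - 1, -1, -1))
    return right - left
```

Applied to a profile taken across the imaged septum, this width (in pixels, times the pixel spacing and the tangent of the septum angle) yields the slice thickness.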
Automatic digital surface model (DSM) generation from aerial imagery data
NASA Astrophysics Data System (ADS)
Zhou, Nan; Cao, Shixiang; He, Hongyan; Xing, Kun; Yue, Chunyu
2018-04-01
Aerial sensors are widely used to acquire imagery for photogrammetric and remote sensing applications. In general, the images have large overlapping regions, which provide abundant redundant geometric and radiometric information for matching. This paper presents a POS-supported dense matching procedure for automatic DSM generation from aerial imagery data. The method uses a coarse-to-fine hierarchical strategy with an effective combination of several image matching algorithms: image radiation pre-processing, image pyramid generation, feature point extraction and grid point generation, multi-image geometrically constrained cross-correlation (MIG3C), global relaxation optimization, multi-image geometrically constrained least squares matching (MIGCLSM), TIN generation and point cloud filtering. The image radiation pre-processing reduces the effects of inherent radiometric problems and optimizes the images. The presented approach essentially consists of three components: a feature point extraction and matching procedure, a grid point matching procedure and a relational matching procedure. The MIGCLSM method achieves potentially sub-pixel accuracy matches and identifies inaccurate and possibly false matches. The feasibility of the method has been tested on aerial images at different scales over different land-cover types. The accuracy evaluation is based on a comparison between automatically extracted DSMs derived from the precise exterior orientation parameters (EOPs) and those derived from the POS.
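The cross-correlation matching at the base of such pipelines can be illustrated with a minimal normalised cross-correlation (NCC) search (a pure-Python sketch over a 1-D strip; the multi-image geometric constraints and least-squares refinement of MIG3C/MIGCLSM are omitted):

```python
import math

def ncc(a, b):
    """Normalised cross-correlation of two equal-length patches in [-1, 1]."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def best_match(template, strip):
    """Slide the template along a 1-D strip; return the offset with
    the highest NCC score (the candidate conjugate position)."""
    scores = [ncc(template, strip[i:i + len(template)])
              for i in range(len(strip) - len(template) + 1)]
    return max(range(len(scores)), key=scores.__getitem__)
```

NCC's invariance to mean brightness and contrast is what makes it robust to the radiometric differences between overlapping aerial images.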
Improving Performance During Image-Guided Procedures
Duncan, James R.; Tabriz, David
2015-01-01
Objective Image-guided procedures have become a mainstay of modern health care. This article reviews how human operators process imaging data and use it to plan procedures and make intraprocedural decisions. Methods A series of models from human factors research, communication theory, and organizational learning were applied to the human-machine interface that occupies the center stage during image-guided procedures. Results Together, these models suggest several opportunities for improving performance as follows: 1. Performance will depend not only on the operator’s skill but also on the knowledge embedded in the imaging technology, available tools, and existing protocols. 2. Voluntary movements consist of planning and execution phases. Performance subscores should be developed that assess quality and efficiency during each phase. For procedures involving ionizing radiation (fluoroscopy and computed tomography), radiation metrics can be used to assess performance. 3. At a basic level, these procedures consist of advancing a tool to a specific location within a patient and using the tool. Paradigms from mapping and navigation should be applied to image-guided procedures. 4. Recording the content of the imaging system allows one to reconstruct the stimulus/response cycles that occur during image-guided procedures. Conclusions When compared with traditional “open” procedures, the technology used during image-guided procedures places an imaging system and long thin tools between the operator and the patient. Taking a step back and reexamining how information flows through an imaging system and how actions are conveyed through human-machine interfaces suggest that much can be learned from studying system failures. In the same way that flight data recorders revolutionized accident investigations in aviation, much could be learned from recording video data during image-guided procedures. PMID:24921628
Tracking transcriptional activities with high-content epifluorescent imaging
NASA Astrophysics Data System (ADS)
Hua, Jianping; Sima, Chao; Cypert, Milana; Gooden, Gerald C.; Shack, Sonsoles; Alla, Lalitamba; Smith, Edward A.; Trent, Jeffrey M.; Dougherty, Edward R.; Bittner, Michael L.
2012-04-01
High-content cell imaging based on fluorescent protein reporters has recently been used to track the transcriptional activities of multiple genes under different external stimuli for extended periods. This technology enhances our ability to discover treatment-induced regulatory mechanisms, temporally order their onsets and recognize their relationships. To fully realize these possibilities and explore their potential in biological and pharmaceutical applications, we introduce a new data processing procedure to extract information about the dynamics of cell processes based on this technology. The proposed procedure contains two parts: (1) image processing, where the fluorescent images are processed to identify individual cells and allow their transcriptional activity levels to be quantified; and (2) data representation, where the extracted time course data are summarized and represented in a way that facilitates efficient evaluation. Experiments show that the proposed procedure achieves fast and robust image segmentation with sufficient accuracy. The extracted cellular dynamics are highly reproducible and sensitive enough to detect subtle activity differences and identify mechanisms responding to selected perturbations. This method should be able to help biologists identify the alterations of cellular mechanisms that allow drug candidates to change cell behavior and thereby improve the efficiency of drug discovery and treatment design.
Terahertz multistatic reflection imaging.
Dorney, Timothy D; Symes, William W; Baraniuk, Richard G; Mittleman, Daniel M
2002-07-01
We describe a new imaging method using single-cycle pulses of terahertz (THz) radiation. This technique emulates the data collection and image processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a migration procedure to solve the inverse problem; this permits us to reconstruct the location, the shape, and the refractive index of targets. We show examples for both metallic and dielectric model targets, and we perform velocity analysis on dielectric targets to estimate the refractive indices of imaged components. These results broaden the capabilities of THz imaging systems and also demonstrate the viability of the THz system as a test bed for the exploration of new seismic processing methods.
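The migration idea, backprojecting each recorded trace along its travel-time curve so that true scatterers add coherently, can be sketched in an idealised zero-offset, constant-velocity toy (not the authors' Kirchhoff implementation; geometry, sampling and velocity are illustrative assumptions):

```python
import math

def migrate(traces, xs, dt, c, grid):
    """For each candidate image point, sum every receiver's trace at the
    two-way travel time from that receiver to the point. Amplitudes from
    a real scatterer align across receivers and stack constructively."""
    img = {}
    for (px, pz) in grid:
        s = 0.0
        for trace, x in zip(traces, xs):
            t = 2.0 * math.hypot(px - x, pz) / c
            i = int(round(t / dt))
            if 0 <= i < len(trace):
                s += trace[i]
        img[(px, pz)] = s
    return img
```

In the test below, spikes synthesised for a scatterer at (1, 1) stack to the full amplitude at the true location and to zero at a wrong one.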
An automatic agricultural zone classification procedure for crop inventory satellite images
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Kux, H. J.; Velasco, F. R. D.; Deoliveira, M. O. B.
1982-01-01
A classification procedure for assessing crop areal proportion in multispectral scanner images is discussed. The procedure is divided into four parts: labeling, classification, proportion estimation, and evaluation. It also has the following characteristics: multitemporal classification, the need for only minimal field information, and a verification capability between automatic classification and analyst labeling. The processing steps and the main algorithms involved are discussed. An outlook on the future of this technology is also presented.
Bio-inspired approach to multistage image processing
NASA Astrophysics Data System (ADS)
Timchenko, Leonid I.; Pavlov, Sergii V.; Kokryatskaya, Natalia I.; Poplavska, Anna A.; Kobylyanska, Iryna M.; Burdenyuk, Iryna I.; Wójcik, Waldemar; Uvaysova, Svetlana; Orazbekov, Zhassulan; Kashaganova, Gulzhan
2017-08-01
Multistage integration of visual information in the brain allows people to respond quickly to the most significant stimuli while preserving the ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing described in this paper comprises the main types of cortical multistage convergence. One of these types occurs within each visual pathway and the other between the pathways. This approach maps input images into a flexible hierarchy which reflects the complexity of the image data. The procedures of temporal image decomposition and hierarchy formation are described in mathematical terms. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image which encapsulates, in a compact manner, structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a very quick response from the system. The result is represented as an activity pattern which can be compared with previously computed patterns on the basis of the closest match.
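The hierarchy of coarse-to-fine representations can be illustrated with a simple image pyramid, where each stage halves the resolution by block averaging (a minimal sketch only; the paper's temporal decomposition and activity-pattern coding are not reproduced):

```python
def downsample(img):
    """One pyramid level: average non-overlapping 2x2 blocks
    (dimensions assumed even for simplicity)."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4.0
             for x in range(w)] for y in range(h)]

def pyramid(img, levels):
    """Full-resolution image plus successively coarser averages."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out
```

The coarsest level supports the system's quick single-output response, while the finer levels retain the small details.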
Semi-automated Image Processing for Preclinical Bioluminescent Imaging.
Slavine, Nikolai V; McColl, Roderick W
Bioluminescent imaging is a valuable noninvasive technique for investigating tumor dynamics and specific biological molecular events in living animals, to better understand the effects of human disease in animal models. The purpose of this study was to develop and test a strategy for automated bioluminescence image processing from data acquisition through 3D image reconstruction. To optimize this procedure, a semi-automated image processing approach with a multi-modality image handling environment was developed. To identify a bioluminescent source location and strength, we used the light flux detected on the surface of the imaged object by CCD cameras. For phantom calibration tests and object surface reconstruction we used the MLEM algorithm. For internal bioluminescent sources we used the diffusion approximation, balancing the internal and external intensities on the boundary of the medium; after determining an initial approximation for the photon fluence, we applied a novel iterative deconvolution method to obtain the final reconstruction. We find that the reconstruction techniques successfully combined the depth-dependent light transport approach with semi-automated image processing to provide a realistic 3D model of a lung tumor. Our image processing software can optimize and reduce the time required for volumetric imaging and quantitative assessment. The data obtained from light phantom and mouse lung tumor images demonstrate the utility of the image reconstruction algorithms and of the semi-automated approach to bioluminescent image processing. We suggest that the developed approach can be applied to preclinical imaging studies: characterizing tumor growth, identifying metastases, and potentially determining the effectiveness of cancer treatment.
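The paper's novel iterative deconvolution is not specified in the abstract; the classic Richardson-Lucy iteration illustrates the general multiplicative-update scheme in 1-D (pure-Python sketch; the kernel, iteration count, and flat initialisation are assumptions):

```python
def convolve(x, k):
    """'Same'-size 1-D convolution with an odd-length kernel k,
    treating samples outside the signal as zero."""
    r, n = len(k) // 2, len(x)
    return [sum(x[i + j - r] * k[j] for j in range(len(k)) if 0 <= i + j - r < n)
            for i in range(n)]

def richardson_lucy(observed, kernel, iters=50):
    """Iterative deconvolution: est <- est * ((obs / (est (*) k)) (*) k_mirrored).
    Mass gradually re-concentrates where the true source was."""
    est = [1.0] * len(observed)
    for _ in range(iters):
        blur = convolve(est, kernel)
        ratio = [o / b if b > 0 else 0.0 for o, b in zip(observed, blur)]
        corr = convolve(ratio, kernel[::-1])
        est = [e * c for e, c in zip(est, corr)]
    return est
```

Deconvolving a blurred point source sharpens it back toward a spike, the 1-D analogue of recovering a compact bioluminescent source from diffuse surface flux.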
Amplitude image processing by diffractive optics.
Cagigal, Manuel P; Valle, Pedro J; Canales, V F
2016-02-22
In contrast to standard digital image processing, which operates on the detected image intensity, we propose to perform amplitude image processing. Amplitude processing, such as low-pass or high-pass filtering, is carried out using diffractive optical elements (DOEs), since they allow operation on the complex field amplitude before it has been detected. We show the procedure for designing the DOE that corresponds to each operation, and we analyze the performance of amplitude image processing. In particular, a DOE Laplacian filter is applied to simulated astronomical images to detect two stars one Airy ring apart. We also verify by numerical simulation that a Laplacian amplitude filter produces less noisy images than standard digital image processing.
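For comparison, the digital (intensity-domain) counterpart of the Laplacian filter discussed above is a simple discrete stencil (illustrative only; the DOE version operates optically on the complex amplitude before detection, which this sketch cannot reproduce):

```python
def laplacian(img):
    """3x3 discrete Laplacian (4-neighbour stencil) with zero padding:
    responds strongly at point sources and edges, zero in flat regions."""
    h, w = len(img), len(img[0])

    def at(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0.0

    return [[at(y - 1, x) + at(y + 1, x) + at(y, x - 1) + at(y, x + 1) - 4 * at(y, x)
             for x in range(w)] for y in range(h)]
```

A point source produces a strong negative central response ringed by positive values, which is why Laplacian filtering helps resolve two closely spaced stars.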
NASA Technical Reports Server (NTRS)
Brand, R. R.; Barker, J. L.
1983-01-01
A multistage sampling procedure using image processing, geographical information systems, and analytical photogrammetry is presented which can be used to guide the collection of representative, high-resolution spectra and discrete reflectance targets for future satellite sensors. The procedure is general and can be adapted to characterize areas as small as minor watersheds and as large as multistate regions. Beginning with a user-determined study area, successive reductions in size and spectral variation are performed using image analysis techniques on data from the Multispectral Scanner, orbital and simulated Thematic Mapper, low altitude photography synchronized with the simulator, and associated digital data. An integrated image-based geographical information system supports processing requirements.
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but evaluating them requires techniques sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for the evaluation of visually lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just-noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand the evaluation of visually lossless coding to high dynamic range images, slowly moving (panning) images, and image sequences. These requirements are the basis for the new amendments of the ISO/IEC 29170-2 procedures described in this paper, which promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
Statistical Signal Models and Algorithms for Image Analysis
1984-10-25
In this report, two-dimensional stochastic linear models are used in developing algorithms for image analysis such as classification, segmentation, and object detection in images characterized by textured backgrounds. These models generate two-dimensional random processes as outputs to which statistical inference procedures can naturally be applied. A common thread throughout our algorithms is the interpretation of the inference procedures in terms of linear prediction
Image processing and products for the Magellan mission to Venus
NASA Technical Reports Server (NTRS)
Clark, Jerry; Alexander, Doug; Andres, Paul; Lewicki, Scott; Mcauley, Myche
1992-01-01
The Magellan mission to Venus is providing planetary scientists with massive amounts of new data about the surface geology of Venus. Digital image processing is an integral part of the ground data system that provides data products to the investigators. The mosaicking of synthetic aperture radar (SAR) image data from the spacecraft is being performed at JPL's Multimission Image Processing Laboratory (MIPL). MIPL hosts and supports the Image Data Processing Subsystem (IDPS), which was developed in a VAXcluster environment of hardware and software that includes optical disk jukeboxes and the TAE-VICAR (Transportable Applications Executive-Video Image Communication and Retrieval) system. The IDPS is being used by processing analysts of the Image Data Processing Team to produce the Magellan image data products. Various aspects of the image processing procedure are discussed.
Automation of disbond detection in aircraft fuselage through thermal image processing
NASA Technical Reports Server (NTRS)
Prabhu, D. R.; Winfree, W. P.
1992-01-01
A procedure for interpreting thermal images obtained during the nondestructive evaluation of aircraft bonded joints is presented. The procedure operates on time-derivative thermal images and resulted in a disbond image with disbonds highlighted. The size of the 'black clusters' in the output disbond image is a quantitative measure of disbond size. The procedure is illustrated using simulation data as well as data obtained through experimental testing of fabricated samples and aircraft panels. Good results are obtained, and, except in pathological cases, 'false calls' in the cases studied appeared only as noise in the output disbond image which was easily filtered out. The thermal detection technique coupled with an automated image interpretation capability will be a very fast and effective method for inspecting bonded joints in an aircraft structure.
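The quantitative measure described, sizes of black clusters in a thresholded derivative image, can be sketched as below. This is an illustrative stand-in, not the authors' implementation: the threshold value and the simulated derivative image are assumptions, and a simple 4-connected flood fill substitutes for whatever labeling the original procedure used.

```python
import numpy as np

def cluster_sizes(binary):
    """Sizes of 4-connected clusters in a binary disbond mask,
    found with an iterative flood fill."""
    seen = np.zeros_like(binary, dtype=bool)
    sizes = []
    rows, cols = binary.shape
    for r0 in range(rows):
        for c0 in range(cols):
            if binary[r0, c0] and not seen[r0, c0]:
                stack, n = [(r0, c0)], 0
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    n += 1
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols \
                           and binary[rr, cc] and not seen[rr, cc]:
                            seen[rr, cc] = True
                            stack.append((rr, cc))
                sizes.append(n)
    return sizes

# Hypothetical time-derivative image: disbonded regions cool more slowly,
# so thresholding low derivative magnitude yields the disbond mask.
deriv = np.ones((6, 6))
deriv[1:3, 1:4] = 0.1          # simulated disbond region
mask = deriv < 0.5
sizes = cluster_sizes(mask)
```

Each entry of `sizes` is then a pixel-count proxy for one disbond's area.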
Applying industrial engineering practices to radiology.
Rosen, Len
2004-01-01
Seven hospitals in Oregon and Washington have successfully adopted the Toyota Production System (TPS). Developed by Taiichi Ohno, TPS focuses on finding efficiencies and cost savings in manufacturing processes. A similar effort has occurred in Canada, where Toronto's Hospital for Sick Children has developed a database for its diagnostic imaging department built on the principles of TPS applied to patient encounters. Developed over the last 5 years, the database currently manages all interventional patient procedures for quality assurance, inventory, equipment, and labor. By applying industrial engineering methodology to manufacturing processes, it is possible to manage these constraints, eliminate the obstacles to achieving streamlined processes, and keep the cost of delivering products and services under control. Industrial engineering methodology has encouraged all stakeholders in manufacturing plants to become participants in dealing with constraints. It has empowered those on the shop floor as well as management to become partners in the change process. Using a manufacturing process model to organize patient procedures enables imaging department and imaging centers to generate reports that can help them understand utilization of labor, materials, equipment, and rooms. Administrators can determine the cost of individual procedures as well as the total and average cost of specific procedure types. When Toronto's Hospital for Sick Children first implemented industrial engineering methodology to medical imaging interventional radiology patient encounters, it focused on materials management. Early in the process, the return on investment became apparent as the department improved its management of more than 500,000 dollars of inventory. The calculated accumulated savings over 4 years for 10,000 interventional procedures alone amounted to more than 140,000 dollars. 
The medical imaging department in this hospital is only now beginning to apply what it has learned to other factors contributing to case cost. It has started to analyze its service contracts with equipment vendors. The department also is accumulating data to measure room, equipment, and labor utilization. The hospital now has a true picture of the real cost associated with each patient encounter in medical imaging. It can now begin to manage case costs, perform better capacity planning, create more effective relationships with its material suppliers, and optimize scheduling of patients and staff.
Digital processing of the Mariner 10 images of Venus and Mercury
NASA Technical Reports Server (NTRS)
Soha, J. M.; Lynn, D. J.; Mosher, J. A.; Elliot, D. A.
1977-01-01
An extensive effort was devoted to the digital processing of the Mariner 10 images of Venus and Mercury at the Image Processing Laboratory of the Jet Propulsion Laboratory. This effort was designed to optimize the display of the considerable quantity of information contained in the images. Several image restoration, enhancement, and transformation procedures were applied; examples of these techniques are included. A particular task was the construction of large mosaics which characterize the surface of Mercury and the atmospheric structure of Venus.
Architecture of the parallel hierarchical network for fast image recognition
NASA Astrophysics Data System (ADS)
Timchenko, Leonid; Wójcik, Waldemar; Kokriatskaia, Natalia; Kutaev, Yuriy; Ivasyuk, Igor; Kotyra, Andrzej; Smailova, Saule
2016-09-01
Multistage integration of visual information in the brain allows humans to respond quickly to most significant stimuli while maintaining their ability to recognize small details in the image. Implementation of this principle in technical systems can lead to more efficient processing procedures. The multistage approach to image processing includes main types of cortical multistage convergence. The input images are mapped into a flexible hierarchy that reflects complexity of image data. Procedures of the temporal image decomposition and hierarchy formation are described in mathematical expressions. The multistage system highlights spatial regularities, which are passed through a number of transformational levels to generate a coded representation of the image that encapsulates a structure on different hierarchical levels in the image. At each processing stage a single output result is computed to allow a quick response of the system. The result is presented as an activity pattern, which can be compared with previously computed patterns on the basis of the closest match. The idea of the forecasting method is as follows: in the results synchronization block, network-processed data arrive at a database, from which a sample of the most correlated data is drawn using service parameters of the parallel-hierarchical network.
Bidgood, W D; Bray, B; Brown, N; Mori, A R; Spackman, K A; Golichowski, A; Jones, R H; Korman, L; Dove, B; Hildebrand, L; Berg, M
1999-01-01
To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. The authors introduce the notion of "image acquisition context," the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communication in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries.
A new programming metaphor for image processing procedures
NASA Technical Reports Server (NTRS)
Smirnov, O. M.; Piskunov, N. E.
1992-01-01
Most image processing systems, besides an Application Program Interface (API) which lets users write their own image processing programs, also feature a higher level of programmability. Traditionally, this is a command or macro language, which can be used to build large procedures (scripts) out of simple programs or commands. This approach, a legacy of the teletypewriter, has serious drawbacks. A command language is clumsy when (and if) it attempts to utilize the capabilities of a multitasking or multiprocessor environment, it is barely adequate for real-time data acquisition and processing, it has a fairly steep learning curve, and its user interface is very inefficient, especially when compared to a graphical user interface (GUI) that systems running under X11 or Windows should otherwise be able to provide. All these difficulties stem from one basic problem: a command language is not a natural metaphor for an image processing procedure. A more natural metaphor, an image processing factory, is described in detail. A factory is a set of programs (applications) that execute separate operations on images, connected by pipes that carry data (images and parameters) between them. The programs function concurrently, processing images as they arrive along pipes, and querying the user for whatever other input they need. From the user's point of view, programming (constructing) factories is a lot like playing with LEGO blocks, much more intuitive than writing scripts. Focus is on some of the difficulties of implementing factory support, most notably the design of an appropriate API. It is also shown that factories retain all the functionality of a command language (including loops and conditional branches), while suffering from none of the drawbacks outlined above.
Other benefits of factory programming include self-tuning factories and the process of encapsulation, which lets a factory take the shape of a standard application both from the system and the user's point of view, and thus be used as a component of other factories. A bare-bones prototype of factory programming was implemented under the PcIPS image processing system, and a complete version (on a multitasking platform) is under development.
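The factory metaphor of concurrently executing stages connected by pipes can be approximated with generator pipelines. This sketch is only an analogy (the stage names and operations are invented for illustration; the real system used separate programs and OS-level pipes): each stage consumes images as they arrive on its input "pipe" and emits results downstream, and stages snap together like blocks.

```python
def source(images):
    """Feed images into the factory one at a time."""
    for im in images:
        yield im

def scale(stream, k):
    """A processing stage: multiply every pixel by k."""
    for im in stream:
        yield [p * k for p in im]

def clip(stream, hi):
    """Another stage: clamp pixel values to an upper bound."""
    for im in stream:
        yield [min(p, hi) for p in im]

# Wiring stages together: data flows lazily through the "pipes",
# so each image is processed as soon as it arrives.
factory = clip(scale(source([[1, 2], [100, 200]]), k=3), hi=255)
results = list(factory)
```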
A robust hidden Markov Gauss mixture vector quantizer for a noisy source.
Pyun, Kyungsuk Peter; Lim, Johan; Gray, Robert M
2009-07-01
Noise is ubiquitous in real life and changes image acquisition, communication, and processing characteristics in an uncontrolled manner. Gaussian noise and Salt and Pepper noise, in particular, are prevalent in noisy communication channels, camera and scanner sensors, and medical MRI images. It is not unusual for highly sophisticated image processing algorithms developed for clean images to malfunction when used on noisy images. For example, hidden Markov Gauss mixture models (HMGMM) have been shown to perform well in image segmentation applications, but they are quite sensitive to image noise. We propose a modified HMGMM procedure specifically designed to improve performance in the presence of noise. The key feature of the proposed procedure is the adjustment of covariance matrices in Gauss mixture vector quantizer codebooks to minimize an overall minimum discrimination information distortion (MDI). In adjusting covariance matrices, we expand or shrink their elements based on the noisy image. While most results reported in the literature assume a particular noise type, we propose a framework without assuming particular noise characteristics. Without denoising the corrupted source, we apply our method directly to the segmentation of noisy sources. We apply the proposed procedure to the segmentation of aerial images with Salt and Pepper noise and with independent Gaussian noise, and we compare our results with those of the median filter restoration method and the blind deconvolution-based method, respectively. We show that our procedure has better performance than image restoration-based techniques and closely matches the performance of HMGMM for clean images in terms of both visual segmentation results and error rate.
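The covariance-adjustment idea can be caricatured as inflating each codebook covariance by an estimate of the noise power. This is a deliberately simplified stand-in for the paper's MDI-based adjustment (the function name and the horizontal-difference noise estimator are assumptions, not the authors' method):

```python
import numpy as np

def adjust_codebook(covariances, noisy_img):
    """Expand each Gauss mixture codebook covariance by the noise
    variance estimated from the noisy image itself. For i.i.d. noise,
    the variance of horizontal pixel differences is roughly twice the
    noise variance, hence the division by 2."""
    sigma2 = np.var(np.diff(noisy_img, axis=1)) / 2.0
    d = covariances[0].shape[0]
    return [c + sigma2 * np.eye(d) for c in covariances]

# With a noise-free (constant) image the estimate is zero and the
# codebook is left unchanged.
covs = [np.eye(2)]
adjusted = adjust_codebook(covs, np.zeros((4, 4)))
```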
Real-Time flare detection using guided filter
NASA Astrophysics Data System (ADS)
Lin, Jiaben; Deng, Yuanyong; Yuan, Fei; Guo, Juan
2017-04-01
A procedure is introduced for the automatic detection of solar flares using full-disk solar images from Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. We then adopt a guided filter, applied here for the first time to astronomical image detection, to enhance the edges of flares and suppress solar limb darkening. Flares are then detected by a modified Otsu algorithm and a further threshold processing technique. Compared with other automatic detection procedures, the new procedure offers advantages in real-time operation and reliability, and it requires no image division or local thresholding. It also greatly reduces the amount of computation, a benefit of the efficient guided filter algorithm. The procedure has been tested on one month (December 2013) of HSOS full-disk solar images, and the number of flares detected is in good agreement with the manual count.
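The global thresholding step can be illustrated with plain Otsu's method (the paper uses a modified variant plus guided filtering, which are omitted here; this NumPy sketch and the toy "disk" image are illustrative assumptions). Otsu picks the threshold that maximizes between-class variance of the histogram:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's method: choose the histogram cut maximizing the
    between-class variance of the two resulting pixel classes."""
    hist, edges = np.histogram(img, bins=nbins)
    p = hist.astype(float) / hist.sum()
    w0 = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(nbins))    # class-0 mean (bin units)
    mu_t = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(nbins)
    between[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return edges[int(np.argmax(between)) + 1]

# Bright "flare" pixels on a dark disk separate cleanly:
disk = np.zeros((32, 32))
disk[10:14, 10:14] = 200.0
t = otsu_threshold(disk)
flare_mask = disk > t
```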
NASA Technical Reports Server (NTRS)
1983-01-01
The approach pictures taken by the Viking 1 and Viking 2 spacecraft two days before their Mars orbital insertion maneuvers were analyzed in order to search for new satellites within the orbit of Phobos. To accomplish this task, a search procedure and analysis strategy were formulated, developed, and executed using the substantial image processing capabilities of the Image Processing Laboratory at the Jet Propulsion Laboratory. The development of these new search capabilities should prove to be valuable to NASA in processing of image data obtained from other spacecraft missions. The result of applying the search procedures to the Viking approach pictures was as follows: no new satellites of comparable size (approx. 20 km) and brightness to Phobos or Deimos were detected within the orbit of Phobos.
Onofrey, John A.; Staib, Lawrence H.; Papademetris, Xenophon
2015-01-01
This paper describes a framework for learning a statistical model of non-rigid deformations induced by interventional procedures. We make use of this learned model to perform constrained non-rigid registration of pre-procedural and post-procedural imaging. We demonstrate results applying this framework to non-rigidly register post-surgical computed tomography (CT) brain images to pre-surgical magnetic resonance images (MRIs) of epilepsy patients who had intra-cranial electroencephalography electrodes surgically implanted. Deformations caused by this surgical procedure, imaging artifacts caused by the electrodes, and the use of multi-modal imaging data make non-rigid registration challenging. Our results show that the use of our proposed framework to constrain the non-rigid registration process results in significantly improved and more robust registration performance compared to using standard rigid and non-rigid registration methods. PMID:26900569
A semi-automated image analysis procedure for in situ plankton imaging systems.
Bi, Hongsheng; Guo, Zhenhua; Benfield, Mark C; Fan, Chunlei; Ford, Michael; Shahrestani, Suzan; Sieracki, Jeffery M
2015-01-01
Plankton imaging systems are capable of providing fine-scale observations that enhance our understanding of key physical and biological processes. However, processing the large volumes of data collected by imaging systems remains a major obstacle for their employment, and existing approaches are designed either for images acquired under laboratory-controlled conditions or within clear waters. In the present study, we developed a semi-automated approach to analyze plankton taxa from images acquired by the ZOOplankton VISualization (ZOOVIS) system within turbid estuarine waters, in Chesapeake Bay. When compared to images under laboratory-controlled conditions or clear waters, images from highly turbid waters are often of relatively low quality and more variable, due to the large number of objects and nonlinear illumination within each image. We first customized a segmentation procedure to locate objects within each image and extracted them for classification. A maximally stable extremal regions algorithm was applied to segment large gelatinous zooplankton and an adaptive threshold approach was developed to segment small organisms, such as copepods. Unlike the existing approaches for images acquired under laboratory-controlled conditions or in clear waters, where the target objects are often the majority class and classification can be treated as a standard multi-class problem, here the target objects are rare. We therefore customized a two-level hierarchical classification procedure using support vector machines to classify the target objects (< 5%) and remove the non-target objects (> 95%). Histograms of oriented gradients feature descriptors were first constructed for the segmented objects. In the first step, all non-target and target objects were classified into different groups: arrow-like, copepod-like, and gelatinous zooplankton. Each object was then passed to a group-specific classifier to remove most non-target objects.
After the object was classified, an expert or non-expert then manually removed the non-target objects that could not be removed by the procedure. The procedure was tested on 89,419 images collected in Chesapeake Bay, and results were consistent with visual counts with >80% accuracy for all three groups.
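The two-level routing can be sketched with nearest-centroid decisions standing in for the SVMs (the group names follow the abstract, but the centroids, feature vectors, and rejection radius are invented for illustration):

```python
import numpy as np

def nearest_group(feat, centroids):
    """Level 1: route a HOG-like feature vector to its closest group."""
    names = list(centroids)
    dists = [np.linalg.norm(feat - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

def is_target(feat, group_centroid, reject_radius):
    """Level 2: group-specific accept/reject (a toy stand-in for the
    per-group SVM that removes non-target objects)."""
    return np.linalg.norm(feat - group_centroid) <= reject_radius

centroids = {"copepod-like": np.array([1.0, 0.0]),
             "gelatinous":   np.array([0.0, 1.0]),
             "arrow-like":   np.array([1.0, 1.0])}
obj = np.array([0.9, 0.1])               # hypothetical 2-D feature
group = nearest_group(obj, centroids)
keep = is_target(obj, centroids[group], reject_radius=0.5)
```

Objects that survive level 2 go on to the manual expert check described above.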
[Optimizing histological image data for 3-D reconstruction using an image equalizer].
Roth, A; Melzer, K; Annacker, K; Lipinski, H G; Wiemann, M; Bingmann, D
2002-01-01
Bone cells form a wired network within the extracellular bone matrix. To analyse this complex 3D structure, we employed a confocal fluorescence imaging procedure to visualize live bone cells within their native surroundings. By means of newly developed image processing software, the "Image-Equalizer", we aimed to enhance the contrast and eliminate artefacts in such a way that cell bodies as well as fine interconnecting processes were visible.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-19
... assist the office in processing your requests. See the SUPPLEMENTARY INFORMATION section for electronic... considerations for standardization of image acquisition, image interpretation methods, and other procedures to help ensure imaging data quality. The draft guidance describes two categories of image acquisition and...
Current Trends in Image Assessment. Working Paper Series.
ERIC Educational Resources Information Center
Fellers, John
Image assessment in higher education and procedures for conducting image assessments are discussed. Image assessment is the process of finding out what others think about an organization. It is proposed that when image assessments are approached objectively, the results can help determine constituent needs, anticipate vocational trends, survey…
Data mining and visualization of average images in a digital hand atlas
NASA Astrophysics Data System (ADS)
Zhang, Aifeng; Gertych, Arkadiusz; Liu, Brent J.; Huang, H. K.
2005-04-01
We have collected a digital hand atlas containing digitized left-hand radiographs of normally developed children, grouped by age, sex, and race. A set of features reflecting each patient's stage of skeletal development has been calculated by automatic image processing procedures and stored in a database. This paper addresses a new concept, the "average" image in the digital hand atlas. The "average" reference image is selected for each group of normally developed children as the one with the most representative skeletal maturity based on bony features. A data mining procedure was designed and applied to find the average image through average feature vector matching. It also provides a temporary solution to the missing-feature problem through polynomial regression. As more cases are added to the digital hand atlas, it can grow to provide clinicians with accurate reference images to aid the bone age assessment process.
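Average feature vector matching reduces to a nearest-to-mean search. A minimal sketch, assuming Euclidean distance and invented two-dimensional bony-feature vectors (the real atlas uses many more features and handles missing values by polynomial regression, omitted here):

```python
import numpy as np

def select_average_image(features):
    """Return the index of the case whose feature vector is closest
    (Euclidean) to the group mean: the 'average' reference image."""
    F = np.asarray(features, dtype=float)
    mean = F.mean(axis=0)
    return int(np.argmin(np.linalg.norm(F - mean, axis=1)))

# Hypothetical feature vectors for one age/sex/race group; the third
# case is an outlier, so a middle case is chosen as the reference.
group = [[1.0, 2.0], [1.1, 2.1], [5.0, 9.0]]
idx = select_average_image(group)
```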
Image-guided endobronchial ultrasound
NASA Astrophysics Data System (ADS)
Higgins, William E.; Zang, Xiaonan; Cheirsilp, Ronnarit; Byrnes, Patrick; Kuhlengel, Trevor; Bascom, Rebecca; Toth, Jennifer
2016-03-01
Endobronchial ultrasound (EBUS) is now recommended as a standard procedure for in vivo verification of extraluminal diagnostic sites during cancer-staging bronchoscopy. Yet, physicians vary considerably in their skills at using EBUS effectively. Regarding existing bronchoscopy guidance systems, studies have shown their effectiveness in the lung-cancer management process. With such a system, a patient's X-ray computed tomography (CT) scan is used to plan a procedure to regions of interest (ROIs). This plan is then used during follow-on guided bronchoscopy. Recent clinical guidelines for lung cancer, however, also dictate using positron emission tomography (PET) imaging for identifying suspicious ROIs and aiding in the cancer-staging process. While researchers have attempted to use guided bronchoscopy systems in tandem with PET imaging and EBUS, no true EBUS-centric guidance system exists. We now propose a full multimodal image-based methodology for guiding EBUS. The complete methodology involves two components: 1) a procedure planning protocol that gives bronchoscope movements appropriate for live EBUS positioning; and 2) a guidance strategy and associated system graphical user interface (GUI) designed for image-guided EBUS. We present results demonstrating the operation of the system.
Processing, mosaicking and management of the Monterey Bay digital sidescan-sonar images
Chavez, P.S.; Isbrecht, J.; Galanis, P.; Gabel, G.L.; Sides, S.C.; Soltesz, D.L.; Ross, Stephanie L.; Velasco, M.G.
2002-01-01
Sidescan-sonar imaging systems with digital capabilities have now been available for approximately 20 years. In this paper we present several of the digital image processing techniques developed by the U.S. Geological Survey (USGS) and used to apply intensity/radiometric and geometric corrections to, and to enhance and digitally mosaic, sidescan-sonar images of the Monterey Bay region. New software run by a WWW server was designed and implemented to allow very large image data sets, such as the digital mosaic, to be easily viewed interactively, including the ability to roam throughout the digital mosaic at the web site in either compressed or full 1-m resolution. The processing is separated into two stages: preprocessing and information extraction. In the preprocessing stage, sensor-specific algorithms are applied to correct for both geometric and intensity/radiometric distortions introduced by the sensor. This is followed by digital mosaicking of the track-line strips into quadrangle format, which can be used as input to either visual or digital image analysis and interpretation. An automatic seam removal procedure was used in combination with an interactive digital feathering/stenciling procedure to help minimize tone or seam matching problems between image strips from adjacent track-lines. The sidescan-sonar image processing package is part of the USGS Mini Image Processing System (MIPS) and has been designed to process data collected by any 'generic' digital sidescan-sonar imaging system. The USGS MIPS software, developed over the last 20 years as a public domain package, is available on the WWW at: http://terraweb.wr.usgs.gov/trs/software.html.
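The feathering idea, blending overlapping strips with a smooth weight ramp so no hard seam remains, can be sketched as follows. This is an illustrative linear-ramp blend, not the MIPS implementation (the function name, overlap handling, and toy strips are assumptions):

```python
import numpy as np

def feather_blend(strip_a, strip_b, overlap):
    """Blend two horizontally adjacent strips over `overlap` columns
    with a linear feathering ramp to suppress the visible seam."""
    w = np.linspace(1.0, 0.0, overlap)           # weight for strip_a
    left = strip_a[:, :-overlap]
    mix = w * strip_a[:, -overlap:] + (1 - w) * strip_b[:, :overlap]
    right = strip_b[:, overlap:]
    return np.hstack([left, mix, right])

# Two flat strips of different tone: the mosaic transitions across
# the overlap instead of jumping at a hard seam.
a = np.full((2, 4), 100.0)
b = np.full((2, 4), 200.0)
mosaic = feather_blend(a, b, overlap=2)
```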
Markov Processes in Image Processing
NASA Astrophysics Data System (ADS)
Petrov, E. P.; Kharina, N. L.
2018-05-01
Digital images serve as an information carrier in many sciences and technologies, and there is a persistent drive to increase the bit depth of image pixels in order to capture more information. In this paper, methods of compression and contour detection based on two-dimensional Markov chains are proposed. Greater bit depth allows finer object details to be resolved, but it significantly complicates image processing. The proposed methods do not concede efficiency to well-known analogues, yet surpass them in processing speed. An image is separated into binary images that are processed in parallel, so processing speed does not degrade as the bit depth grows. A further advantage of the methods is low energy consumption: only logical operations are used, with no arithmetic computations. The methods can be useful for processing images of any class and purpose in systems with limited time and energy resources.
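The separation of a grey-level image into binary images is standard bit-plane decomposition, which can be sketched directly (the Markov-chain processing applied to each plane is the paper's contribution and is not reproduced here):

```python
import numpy as np

def bit_planes(img, nbits=8):
    """Split a grey-level image into binary bit planes; each plane can
    then be processed independently (and hence in parallel)."""
    return [(img >> b) & 1 for b in range(nbits)]

def recombine(planes):
    """Invert the decomposition by weighting each plane by its bit value."""
    return sum(p << b for b, p in enumerate(planes))

img = np.array([[0, 255], [170, 85]], dtype=np.uint8)
planes = bit_planes(img)
restored = recombine(planes)
```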
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often cause violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes.
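As a baseline for the kind of pooling discussed, here is a standard DerSimonian-Laird random-effects meta-analysis sketch (this is the conventional technique whose small-sample limitations the paper addresses, not the authors' alternative methods; the input numbers are invented):

```python
import numpy as np

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate of a performance metric (e.g. a
    repeatability coefficient) across studies, with DerSimonian-Laird
    moment estimation of the between-study variance tau^2."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                 # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return pooled, tau2

# Three hypothetical repeatability studies with similar estimates:
pooled, tau2 = dersimonian_laird([0.10, 0.12, 0.11], [0.001, 0.002, 0.001])
```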
GEOMETRIC PROCESSING OF DIGITAL IMAGES OF THE PLANETS.
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformations of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases.
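The forward Sinusoidal Equal-Area mapping is simple enough to state directly; this sketch shows only the projection formula (the adaptive interpolation scheme and camera-orientation correction described above are not reproduced, and the function name is an assumption):

```python
import numpy as np

def sinusoidal_project(lat_deg, lon_deg, lon0_deg=0.0, radius=1.0):
    """Forward Sinusoidal Equal-Area projection: x is scaled by the
    cosine of latitude, which is what preserves area."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg - lon0_deg)
    x = radius * lon * np.cos(lat)
    y = radius * lat
    return x, y

# A meridian arc shrinks horizontally toward the pole: at 60 deg
# latitude, x is exactly half its equatorial value.
x_eq, _ = sinusoidal_project(0.0, 90.0)
x_60, _ = sinusoidal_project(60.0, 90.0)
```

The cos(lat) factor is also why the number of transformation computations can vary across the image: where the local distortion changes slowly, most pixel positions can be interpolated rather than computed.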
Three-dimensional image contrast using biospeckle
NASA Astrophysics Data System (ADS)
Godinho, Robson Pierangeli; Braga, Roberto A., Jr.
2010-09-01
The biospeckle laser (BSL) has been applied in many areas of knowledge, and a variety of approaches have been presented to obtain the best results in biological and non-biological samples, in fast or slow activities, and in defined flows of material or random activities. The methodologies reported in the literature consider the apparatus used in image assembly and the way the collected data are processed. The image processing steps in turn present a variety of procedures, with first- or second-order statistical analysis, as well as different sizes of collected data. One way to assess the biospeckle in a defined flow, such as capillary blood flow in live animals, is the image contrast technique, which uses only one image from the illuminated sample. That approach has problems related to image resolution, which is reduced during contrast processing. To aid visualization of the low-resolution image formed by the contrast technique, this work presents a three-dimensional procedure as a reliable alternative for enhancing the final image. The work is based on parallel processing, with the generation of a virtual map of amplitudes, while maintaining the quasi-online character of the contrast technique. It was therefore possible to present in the same display the observed material, the image contrast result and, in addition, the three-dimensional image with adjustable rotation options. The platform also offers the user the possibility of accessing the 3D image offline.
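The single-image contrast technique mentioned above is commonly computed as the local ratio of standard deviation to mean over a small sliding window. A minimal sketch follows; the window size is an assumption of this sketch, and the paper's parallel 3D amplitude-map rendering is not reproduced here.

```python
import numpy as np

def speckle_contrast_map(img, win=5):
    """Single-image speckle contrast K = std/mean over a sliding window.
    Lower K indicates higher activity (e.g., flow). The coarse window is
    what reduces the resolution of the resulting contrast image, which is
    the limitation the 3D display procedure above aims to mitigate."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = img[i:i + win, j:j + win]
            mu = patch.mean()
            out[i, j] = patch.std() / mu if mu > 0 else 0.0
    return out
```

The output map is smaller than the input by `win - 1` pixels in each dimension, which illustrates the resolution loss inherent in the technique.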
Auto-SEIA: simultaneous optimization of image processing and machine learning algorithms
NASA Astrophysics Data System (ADS)
Negro Maggio, Valentina; Iocchi, Luca
2015-02-01
Object classification from images is an important task for machine vision and it is a crucial ingredient for many computer vision applications, ranging from security and surveillance to marketing. Image based object classification techniques properly integrate image processing and machine learning (i.e., classification) procedures. In this paper we present a system for automatic simultaneous optimization of algorithms and parameters for object classification from images. More specifically, the proposed system is able to process a dataset of labelled images and to return a best configuration of image processing and classification algorithms and of their parameters with respect to the accuracy of classification. Experiments with real public datasets are used to demonstrate the effectiveness of the developed system.
Using normalization 3D model for automatic clinical brain quantitative analysis and evaluation
NASA Astrophysics Data System (ADS)
Lin, Hong-Dun; Yao, Wei-Jen; Hwang, Wen-Ju; Chung, Being-Tau; Lin, Kang-Ping
2003-05-01
Functional medical imaging, such as PET or SPECT, is capable of revealing physiological functions of the brain and has been broadly used in diagnosing brain disorders by clinically quantitative analysis for many years. In routine procedures, physicians manually select desired ROIs from structural MR images and then obtain physiological information from the corresponding functional PET or SPECT images. The accuracy of quantitative analysis thus relies on that of the subjectively selected ROIs. Therefore, standardizing the analysis procedure is fundamental and important in improving the analysis outcome. In this paper, we propose and evaluate a normalization procedure with a standard 3D brain model to achieve precise quantitative analysis. In the normalization process, the mutual information registration technique was applied to realign functional medical images to standard structural medical images. Then the standard 3D brain model, which shows well-defined brain regions, was used in place of manual ROIs in objective clinical analysis. To validate the performance, twenty cases of I-123 IBZM SPECT images were used in practical clinical evaluation. The results show that the quantitative analysis outcomes obtained from this automated method agree with the clinical diagnosis evaluation score, with less than 3% error on average. In summary, the method obtains precise VOI information automatically from a well-defined standard 3D brain model, sparing the manual, slice-by-slice drawing of ROIs from structural medical images required by the traditional procedure. That is, the method not only provides precise analysis results but also improves the processing rate for large volumes of medical images in clinical practice.
Budak, Umit; Şengür, Abdulkadir; Guo, Yanhui; Akbulut, Yaman
2017-12-01
Microaneurysms (MAs) are known as early signs of diabetic retinopathy and appear as red lesions in color fundus images. Detection of MAs in fundus images requires highly skilled physicians or eye angiography, and eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify MA locations in fundus images is in demand. In this paper, we propose a system to detect MAs in color fundus images. The proposed method is composed of three stages. In the first stage, a series of pre-processing steps make the input images more suitable for MA detection: green channel decomposition, Gaussian filtering, median filtering, background determination, and subtraction operations are applied to the input color fundus images. After pre-processing, a candidate MA extraction procedure detects potential regions; a five-step procedure is adopted to obtain the potential MA locations. Finally, a deep convolutional neural network (DCNN) with a reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on the ROC dataset to evaluate our proposal. The results are encouraging.
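The first-stage pre-processing chain can be sketched roughly as follows. The background window size and the use of a median filter for background estimation are assumptions of this sketch, not the paper's exact parameters, and the Gaussian smoothing step is omitted for brevity.

```python
import numpy as np

def preprocess_fundus(rgb, bg_win=15):
    """Sketch of a fundus pre-processing chain: green-channel
    decomposition, background estimation by a large median filter, and
    background subtraction. Dark lesions such as MAs become negative
    values in the output, making them easier to extract as candidates."""
    green = np.asarray(rgb, dtype=float)[..., 1]  # green channel has best vessel/lesion contrast
    pad = bg_win // 2
    padded = np.pad(green, pad, mode='edge')
    bg = np.empty_like(green)
    for i in range(green.shape[0]):               # coarse background estimate
        for j in range(green.shape[1]):
            bg[i, j] = np.median(padded[i:i + bg_win, j:j + bg_win])
    return green - bg                             # background-subtracted image
```

In practice the loop would be replaced by a library median filter; the explicit form is kept here to show what the operation computes.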
Automatic detection of solar features in HSOS full-disk solar images using guided filter
NASA Astrophysics Data System (ADS)
Yuan, Fei; Lin, Jiaben; Guo, Jingjing; Wang, Gang; Tong, Liyue; Zhang, Xinwei; Wang, Bingxiang
2018-02-01
A procedure is introduced for the automatic detection of solar features in full-disk solar images from the Huairou Solar Observing Station (HSOS), National Astronomical Observatories of China. In image preprocessing, a median filter is applied to remove noise. A guided filter, applied here to astronomical target detection for the first time, is adopted to enhance the edges of solar features and suppress solar limb darkening. Specific features are then detected by the Otsu algorithm and further threshold processing. Compared with other automatic detection procedures, ours offers real-time operation and reliability, with no need for local thresholding. It also greatly reduces the amount of computation, owing to the efficient guided filter algorithm. The procedure has been tested on one month (December 2013) of HSOS full-disk solar images, and the results show that the number of features it detects is consistent with manual detection.
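Otsu's algorithm, used above for feature detection, selects the grey-level threshold that maximizes the between-class variance of the histogram. A standard sketch (not the authors' implementation) follows; the bin count is an assumption.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: return the grey level maximizing between-class
    variance sigma_B^2(t) = (mu_T*w0 - mu0)^2 / (w0*w1), computed in one
    vectorized pass over the cumulative histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # probability of the "below" class
    w1 = 1.0 - w0                     # probability of the "above" class
    mu0 = np.cumsum(p * centers)      # unnormalized mean of the "below" class
    mu_total = mu0[-1]
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_total * w0[valid] - mu0[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

Because it uses the global histogram only, the method is fast, which is consistent with the abstract's point that no local thresholding is required.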
Vectorized Rebinning Algorithm for Fast Data Down-Sampling
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Jeffrey
2013-01-01
A vectorized rebinning (down-sampling) algorithm, applicable to N-dimensional data sets, has been developed that offers a significant reduction in computer run time when compared to conventional rebinning algorithms. For clarity, a two-dimensional version of the algorithm is discussed to illustrate some specific details of the algorithm content, and using the language of image processing, 2D data will be referred to as "images," and each value in an image as a "pixel." The new approach is fully vectorized, i.e., the down-sampling procedure is done as a single step over all image rows, and then as a single step over all image columns. Data rebinning (or down-sampling) is a procedure that uses a discretely sampled N-dimensional data set to create a representation of the same data, but with fewer discrete samples. Such data down-sampling is fundamental to digital signal processing, e.g., for data compression applications.
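The row-then-column single-step procedure described above can be sketched in NumPy terms as follows; handling of image dimensions not divisible by the bin factors (which the original algorithm presumably addresses) is omitted as a simplifying assumption.

```python
import numpy as np

def rebin2d(img, fr, fc):
    """Vectorized rebinning (down-sampling) of a 2D image by integer
    factors fr (rows) and fc (columns): a single reshape-and-sum over all
    rows, then a single reshape-and-sum over all columns, with no
    per-pixel Python loop."""
    h, w = img.shape
    rows = img.reshape(h // fr, fr, w).sum(axis=1)          # one step over all rows
    return rows.reshape(h // fr, w // fc, fc).sum(axis=2)   # one step over all columns
```

Summing preserves total flux; dividing the result by `fr * fc` instead yields block averaging, a common alternative convention.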
Designing Image Analysis Pipelines in Light Microscopy: A Rational Approach.
Arganda-Carreras, Ignacio; Andrey, Philippe
2017-01-01
With the progress of microscopy techniques and the rapidly growing amounts of acquired imaging data, there is an increased need for automated image processing and analysis solutions in biological studies. Each new application requires the design of a specific image analysis pipeline, assembled from a series of image processing operations. Many commercial and free bioimage analysis software packages are now available, and several textbooks and reviews have presented the mathematical and computational fundamentals of image processing and analysis. Tens, if not hundreds, of algorithms and methods have been developed and integrated into image analysis software, resulting in a combinatorial explosion of possible image processing sequences. This paper presents a general guideline methodology for rationally addressing the design of image processing and analysis pipelines. The originality of the proposed approach is to follow an iterative, backwards procedure from the target objectives of analysis. The proposed goal-oriented strategy should help biologists better apprehend image analysis in the context of their research and allow them to interact efficiently with image processing specialists.
A Focusing Method in the Calibration Process of Image Sensors Based on IOFBs
Fernández, Pedro R.; Lázaro, José L.; Gardel, Alfredo; Cano, Ángel E.; Bravo, Ignacio
2010-01-01
A focusing procedure in the calibration process of image sensors based on Incoherent Optical Fiber Bundles (IOFBs) is described, using information extracted from the fibers. It differs from other known focusing methods because of the non-spatial input–output correspondence between fibers, which produces a natural encoding of the transmitted image. Focus measurement is essential prior to calibration in order to guarantee accurate processing and decoding. Four algorithms have been developed to estimate the focus measure: two based on mean grey level and two based on variance. In this paper, a few simple focus measures are defined and compared, and experimental results on the focus measure and the accuracy of the developed methods are discussed to demonstrate their effectiveness. PMID:22315526
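Focus measures of the two families compared in the paper can be sketched as below. The exact normalizations the authors use are not reproduced; both functions here are generic illustrative variants, not the paper's definitions.

```python
import numpy as np

def focus_variance(img):
    """Variance-based focus measure: a better-focused image concentrates
    light into fibers, increasing grey-level variance across the sensor."""
    img = np.asarray(img, dtype=float)
    return ((img - img.mean()) ** 2).mean()

def focus_mean_gradient(img):
    """Mean-level-style measure using local horizontal differences as a
    proxy for fiber-to-fiber contrast (illustrative variant)."""
    img = np.asarray(img, dtype=float)
    return np.abs(np.diff(img, axis=1)).mean()
```

Either measure is evaluated at several lens positions and the position maximizing it is taken as best focus.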
Watanabe, Yuuki; Takahashi, Yuhei; Numazawa, Hiroshi
2014-02-01
We demonstrate intensity-based optical coherence tomography (OCT) angiography using the squared difference of two sequential frames with bulk-tissue-motion (BTM) correction. This motion correction was performed by minimization of the sum of the pixel values using axial- and lateral-pixel-shifted structural OCT images. We extract the BTM-corrected image from a total of 25 calculated OCT angiographic images. Image processing was accelerated by a graphics processing unit (GPU) with many stream processors to optimize the parallel processing procedure. The GPU processing rate was faster than that of a line scan camera (46.9 kHz). Our OCT system provides the means of displaying structural OCT images and BTM-corrected OCT angiographic images in real time.
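The BTM correction step described above, minimizing the summed squared difference over axial and lateral pixel shifts, can be sketched as follows. The shift search range and the use of wrap-around `np.roll` are simplifying assumptions of this sketch, and the GPU parallelization is not reproduced.

```python
import numpy as np

def btm_corrected_angiogram(frame1, frame2, max_shift=2):
    """Bulk-tissue-motion correction sketch: evaluate axial (dy) and
    lateral (dx) pixel shifts of the second structural frame, and keep
    the squared-difference image whose pixel sum is minimal, i.e., the
    shift that best cancels bulk motion between the two frames."""
    best, best_sum = None, np.inf
    for dy in range(-max_shift, max_shift + 1):        # axial shifts
        for dx in range(-max_shift, max_shift + 1):    # lateral shifts
            shifted = np.roll(np.roll(frame2, dy, axis=0), dx, axis=1)
            diff = (frame1 - shifted) ** 2
            s = diff.sum()
            if s < best_sum:
                best, best_sum = diff, s
    return best
```

On a GPU each candidate shift can be evaluated independently, which is why the search over the 25 shifted angiographic images parallelizes well.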
Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality
NASA Astrophysics Data System (ADS)
Lee, I.-C.; Tsai, F.
2015-05-01
A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, which can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama: focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects with a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an augmented reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system.
The results presented in this paper demonstrate the potential of using panoramic images to generate 3D point clouds and 3D models. However, the process is currently manual and labor-intensive, and research is being carried out to increase the degree of automation of these procedures.
NASA Technical Reports Server (NTRS)
Westmoreland, Sally; Stow, Douglas A.
1992-01-01
A framework is proposed for analyzing ancillary data and developing procedures for incorporating ancillary data to aid interactive identification of land-use categories in land-use updates. The procedures were developed for use within an integrated image processing/geographic information system (GIS) that permits simultaneous display of digital image data with the vector land-use data to be updated. With such systems and procedures, automated techniques are integrated with visual-based manual interpretation to exploit the capabilities of both. The procedural framework developed was applied as part of a case study to update a portion of the land-use layer in a regional-scale GIS. About 75 percent of the area in the study site that experienced a change in land use was correctly labeled into 19 categories using the combination of automated and visual interpretation procedures developed in the study.
Mendikute, Alberto; Zatarain, Mikel; Bertelsen, Álvaro; Leizea, Ibai
2017-01-01
Photogrammetry methods are being used more and more as a 3D technique for large scale metrology applications in industry. Optical targets are placed on an object and images are taken around it, where measuring traceability is provided by precise off-process pre-calibrated digital cameras and scale bars. According to the 2D target image coordinates, target 3D coordinates and camera views are jointly computed. One of the applications of photogrammetry is the measurement of raw part surfaces prior to its machining. For this application, post-process bundle adjustment has usually been adopted for computing the 3D scene. With that approach, a high computation time is observed, leading in practice to time consuming and user dependent iterative review and re-processing procedures until an adequate set of images is taken, limiting its potential for fast, easy-to-use, and precise measurements. In this paper, a new efficient procedure is presented for solving the bundle adjustment problem in portable photogrammetry. In-process bundle computing capability is demonstrated on a consumer grade desktop PC, enabling quasi real time 2D image and 3D scene computing. Additionally, a method for the self-calibration of camera and lens distortion has been integrated into the in-process approach due to its potential for highest precision when using low cost non-specialized digital cameras. Measurement traceability is set only by scale bars available in the measuring scene, avoiding the uncertainty contribution of off-process camera calibration procedures or the use of special purpose calibration artifacts. 
The developed self-calibrated in-process photogrammetry has been evaluated both in a pilot case scenario and in industrial scenarios for raw part measurement, showing a total in-process computing time typically below 1 s per image up to a maximum of 2 s during the last stages of the computed industrial scenes, along with a relative precision of 1/10,000 (e.g., 0.1 mm error in 1 m) with an error RMS below 0.2 pixels at image plane, ranging at the same performance reported for portable photogrammetry with precise off-process pre-calibrated cameras. PMID:28891946
Statistical Properties of a Two-Stage Procedure for Creating Sky Flats
NASA Astrophysics Data System (ADS)
Crawford, R. W.; Trueblood, M.
2004-05-01
Accurate flat fielding is an essential factor in image calibration and good photometry, yet no single method for creating flat fields is both practical and effective in all cases. At Winer Observatory, robotic telescope operation and the research program of Near Earth Object follow-up astrometry favor the use of sky flats formed from the many images that are acquired during a night. This paper reviews the statistical properties of the median-combine process used to create sky flats and discusses a computationally efficient procedure for two-stage combining of many images to form sky flats with relatively high signal-to-noise ratio (SNR). This procedure is in use at Winer for the flat field calibration of unfiltered images taken for NEO follow-up astrometry.
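The two-stage median combine can be sketched as below. Per-frame normalization by the frame median and the group size are assumptions of this sketch, not necessarily Winer's exact procedure.

```python
import numpy as np

def two_stage_sky_flat(images, group_size=5):
    """Two-stage median combine for sky flats: normalize each frame by
    its own median (to remove sky-level differences), median-combine
    within small groups, then median-combine the group results. This
    bounds memory use while approaching the SNR of one large median
    stack over all frames."""
    normed = [img / np.median(img) for img in images]
    groups = [normed[i:i + group_size] for i in range(0, len(normed), group_size)]
    stage1 = [np.median(np.stack(g), axis=0) for g in groups]   # stage 1: group medians
    return np.median(np.stack(stage1), axis=0)                  # stage 2: median of medians
```

Median combining rejects stars and cosmic rays that land on different pixels in different frames, which is why sky frames can serve as flats at all.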
Evaluation of a Noise Reduction Procedure for Chest Radiography
Fukui, Ryohei; Ishii, Rie; Kodani, Kazuhiko; Kanasaki, Yoshiko; Suyama, Hisashi; Watanabe, Masanari; Nakamoto, Masaki; Fukuoka, Yasushi
2013-01-01
Background: The aim of this study was to evaluate the usefulness of the noise reduction procedure (NRP), a function in the new image processing for chest radiography. Methods: A CXDI-50G Portable Digital Radiography System (Canon) was used for X-ray detection. Image noise was analyzed with the noise power spectrum (NPS), and a Burger phantom was used to evaluate density resolution. The usefulness of NRP was evaluated with chest phantom images and clinical chest radiography; the Bureau of Radiological Health method was employed for scoring the chest images during observation. Results: The NPS obtained with NRP was improved compared with conventional image processing (CIP). The image quality results showed higher density resolution with NRP, so that chest radiography examinations can be performed with a lower radiation dose. Scores were significantly higher than for CIP. Conclusion: In this study, the use of NRP received high evaluations in these tests, confirming its usefulness for clinical chest radiography. PMID:24574577
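A single-ROI two-dimensional noise power spectrum estimate, of the kind underlying the NPS analysis above, can be sketched as follows. Detrending and the averaging over many ROIs used in practice are omitted, and the pixel-pitch scaling convention is an assumption.

```python
import numpy as np

def nps_2d(noise_img, px=1.0):
    """2D noise power spectrum of a uniform-exposure image: subtract the
    mean, Fourier transform, and scale by pixel area over the number of
    pixels. By Parseval's theorem the spectrum integrates to the image
    variance (times the pixel count when px = 1)."""
    img = np.asarray(noise_img, dtype=float)
    n = img.size
    f = np.fft.fft2(img - img.mean())          # zero-mean noise spectrum
    return (px * px / n) * np.abs(f) ** 2
```

A noise reduction procedure that works should lower this spectrum, particularly at high spatial frequencies, relative to conventional processing.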
Pitfalls in classical nuclear medicine: myocardial perfusion imaging
NASA Astrophysics Data System (ADS)
Fragkaki, C.; Giannopoulou, Ch
2011-09-01
Scintigraphic imaging is a complex functional procedure subject to a variety of artefacts and pitfalls that may limit its clinical and diagnostic accuracy. It is important to be aware of them, to recognize them when present, and to eliminate them whenever possible. Pitfalls may occur at any stage of the imaging procedure and can be related to the γ-camera or other equipment, personnel handling, patient preparation, image processing, or the procedure itself. Often, potential causes of artefacts and pitfalls overlap. In this short review, special attention is given to cardiac scintigraphic imaging. The most common causes of artefacts in myocardial perfusion imaging are soft tissue attenuation and motion and gating errors. In addition, clinical problems such as cardiac abnormalities may cause interpretation pitfalls, and nuclear medicine physicians should be familiar with these in order to ensure correct evaluation of the study. Artefacts or suboptimal image quality can also result from infiltrated injections, misalignment in patient positioning, power instability or interruption, flood-field non-uniformities, a cracked crystal, and several other technical causes.
An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues
Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis
2011-01-01
Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
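The evolutionary loop behind the fine tuning described above can be illustrated with a toy genetic algorithm that tunes a single segmentation threshold against a manually traced template, using pixel agreement as fitness. The paper evolves a fuller segmentation parameter set, so everything below (population size, mutation scale, single-gene genome) is an illustrative assumption rather than the authors' configuration.

```python
import numpy as np

def ga_tune_threshold(img, target_mask, pop=20, gens=30, seed=0):
    """Toy GA: each genome is one gene (a threshold); fitness is the
    fraction of pixels where the binary segmentation matches the manual
    template. Selection keeps the top half; children are mutated copies."""
    rng = np.random.default_rng(seed)
    img = np.asarray(img, dtype=float)
    genomes = rng.uniform(img.min(), img.max(), pop)   # initial population
    def fitness(t):
        return ((img > t) == target_mask).mean()       # agreement with template
    for _ in range(gens):
        scores = np.array([fitness(t) for t in genomes])
        order = np.argsort(scores)[::-1]
        parents = genomes[order[:pop // 2]]            # selection (elitist)
        noise = rng.normal(0.0, 0.05 * (img.max() - img.min() + 1e-9), parents.size)
        genomes = np.concatenate([parents, parents + noise])  # mutation
    scores = np.array([fitness(t) for t in genomes])
    return genomes[np.argmax(scores)]
```

Keeping the parents unchanged each generation guarantees the best fitness never decreases, a standard elitism choice.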
Image processing system performance prediction and product quality evaluation
NASA Technical Reports Server (NTRS)
Stein, E. K.; Hammill, H. B. (Principal Investigator)
1976-01-01
The author has identified the following significant results. A new technique for image processing system performance prediction and product quality evaluation was developed. It was entirely objective, quantitative, and general, and should prove useful in system design and quality control. The technique and its application to determination of quality control procedures for the Earth Resources Technology Satellite NASA Data Processing Facility are described.
Advances in interpretation of subsurface processes with time-lapse electrical imaging
Singha, Kamini; Day-Lewis, Frederick D.; Johnson, Tim B.; Slater, Lee D.
2015-01-01
Electrical geophysical methods, including electrical resistivity, time-domain induced polarization, and complex resistivity, have become commonly used to image the near subsurface. Here, we outline their utility for time-lapse imaging of hydrological, geochemical, and biogeochemical processes, focusing on new instrumentation, processing, and analysis techniques specific to monitoring. We review data collection procedures, parameters measured, and petrophysical relationships and then outline the state of the science with respect to inversion methodologies, including coupled inversion. We conclude by highlighting recent research focused on innovative applications of time-lapse imaging in hydrology, biology, ecology, and geochemistry, among other areas of interest.
Constraint processing in our extensible language for cooperative imaging system
NASA Astrophysics Data System (ADS)
Aoki, Minoru; Murao, Yo; Enomoto, Hajime
1996-02-01
The extensible WELL (Window-based elaboration language) has been developed using the concept of a common platform, where client and server can communicate with each other with support from a communication manager. The language is based on an object-oriented design that introduces constraint processing, and every service in the extensible language, including imaging, is controlled by constraints. Interactive functions between client and server are extended by introducing agent functions, including a request-respond relation, and the necessary service integrations are satisfied by cooperative processes using constraints. Constraints are treated similarly to data, because the system should be flexible in executing many kinds of services; a similar control process is defined using intensional logic. There are two kinds of constraints, temporal and modal. Regarding the constraints, the predicate format, as a relation between attribute values, can warrant the validity of entities as data. As an imaging example, a processing procedure for interaction between multiple objects is shown as an image application for the extensible system. This paper describes how the procedure proceeds in the system and how the constraints work for generating moving pictures.
NASA Technical Reports Server (NTRS)
Lillesand, T. M.; Meisner, D. E. (Principal Investigator)
1980-01-01
An investigation was conducted into ways to improve the involvement of state and local user personnel in the digital image analysis process by isolating those elements of the analysis process that require extensive involvement by field personnel and providing means for performing those activities apart from a computer facility. In this way, the analysis procedure can be converted from a centralized activity focused on a computer facility to a distributed activity in which users interact with the data at the field-office level or in the field itself. General-purpose image processing software was developed on the University of Minnesota computer system (Control Data Cyber models 172 and 74). The use of color hardcopy image data as a primary medium in supervised training procedures was investigated, and digital display equipment and a coordinate digitizer were procured.
Digital enhancement of X-rays for NDT
NASA Technical Reports Server (NTRS)
Butterfield, R. L.
1980-01-01
The report is a "cookbook" for digital processing of industrial X-rays. Computer techniques previously used primarily in laboratory and developmental research have been outlined and codified into step-by-step procedures for enhancing X-ray images. Those involved in nondestructive testing should find the report a valuable asset, particularly if visual inspection is the method currently used to process X-ray images.
A Procedure for High Resolution Satellite Imagery Quality Assessment
Crespi, Mattia; De Vendictis, Laura
2009-01-01
Data products generated from High Resolution Satellite Imagery (HRSI) are routinely evaluated during the so-called in-orbit test period in order to verify that their quality meets the desired specifications and, if necessary, to obtain the image correction parameters to be used at the ground processing center. Nevertheless, it is often useful to have tools to evaluate image quality at the final-user level as well. Image quality is defined by parameters such as the radiometric resolution and its accuracy, represented by the noise level, and the geometric resolution and sharpness, described by the Modulation Transfer Function (MTF). This paper proposes a procedure to evaluate these image quality parameters; the procedure was implemented in suitable software and tested on high resolution imagery acquired by the QuickBird, WorldView-1 and Cartosat-1 satellites. PMID:22412312
Intensity dependent spread theory
NASA Technical Reports Server (NTRS)
Holben, Richard
1990-01-01
The Intensity Dependent Spread (IDS) procedure is an image-processing technique based on a model of the processing which occurs in the human visual system. IDS processing is relevant to many aspects of machine vision and image processing. For quantum limited images, it produces an ideal trade-off between spatial resolution and noise averaging, performs edge enhancement thus requiring only mean-crossing detection for the subsequent extraction of scene edges, and yields edge responses whose amplitudes are independent of scene illumination, depending only upon the ratio of the reflectance on the two sides of the edge. These properties suggest that the IDS process may provide significant bandwidth reduction while losing only minimal scene information when used as a preprocessor at or near the image plane.
Ding, George X; Alaei, Parham; Curran, Bruce; Flynn, Ryan; Gossman, Michael; Mackie, T Rock; Miften, Moyed; Morin, Richard; Xu, X George; Zhu, Timothy C
2018-05-01
With radiotherapy having entered the era of image guidance, or image-guided radiation therapy (IGRT), imaging procedures are routinely performed for patient positioning and target localization. The imaging dose delivered may result in excessive dose to sensitive organs and potentially increase the chance of secondary cancers and, therefore, needs to be managed. This task group was charged with: a) providing an overview on imaging dose, including megavoltage electronic portal imaging (MV EPI), kilovoltage digital radiography (kV DR), Tomotherapy MV-CT, megavoltage cone-beam CT (MV-CBCT) and kilovoltage cone-beam CT (kV-CBCT), and b) providing general guidelines for commissioning dose calculation methods and managing imaging dose to patients. We briefly review the dose to radiotherapy (RT) patients resulting from different image guidance procedures and list typical organ doses resulting from MV and kV image acquisition procedures. We provide recommendations for managing the imaging dose, including different methods for its calculation, and techniques for reducing it. The recommended threshold beyond which imaging dose should be considered in the treatment planning process is 5% of the therapeutic target dose. Although the imaging dose resulting from current kV acquisition procedures is generally below this threshold, the ALARA principle should always be applied in practice. Medical physicists should make radiation oncologists aware of the imaging doses delivered to patients under their care. Balancing ALARA with the requirement for effective target localization requires that imaging dose be managed based on the consideration of weighing risks and benefits to the patient. © 2018 American Association of Physicists in Medicine.
Is it possible to eliminate patient identification errors in medical imaging?
Danaher, Luke A; Howells, Joan; Holmes, Penny; Scally, Peter
2011-08-01
The aim of this article is to review a system that validates and documents the process of ensuring the correct patient, correct site and side, and correct procedure (commonly referred to as the 3 C's) within medical imaging. A 4-step patient identification and procedure matching process was developed using health care and aviation models. The process was established in medical imaging departments after a successful interventional radiology pilot program. The success of the project was evaluated using compliance audit data, incident reporting data before and after the implementation of the process, and a staff satisfaction survey. There was 95% to 100% verification of site and side and 100% verification of correct patient, procedure, and consent. Correct patient data and side markers were present in 82% to 95% of cases. The number of incidents before and after the implementation of the 3 C's was difficult to assess because of a change in reporting systems and incident underreporting. More incidents are being reported, particularly "near misses." All near misses were related to incorrect patient identification stickers being placed on request forms. The majority of staff members surveyed found the process easy (55.8%), quick (47.7%), relevant (51.7%), and useful (60.9%). Although identification error is difficult to eliminate, practical initiatives can engender significant systems improvement in complex health care environments. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.
Busse, Harald; Trampel, Robert; Gründer, Wilfried; Moche, Michael; Kahn, Thomas
2007-10-01
To evaluate the feasibility and accuracy of an automated method to determine the 3D position of MR-visible markers. Inductively coupled RF coils were imaged in a whole-body 1.5T scanner using the body coil and two conventional gradient echo sequences (FLASH and TrueFISP) with large imaging volumes of up to (300 mm)³. To minimize background signals, a flip angle of approximately 1 degree was used. Morphological 2D image processing in orthogonal scan planes was used to determine the 3D positions of a configuration of three fiducial markers (FMC). The accuracies of the marker positions and of the orientation of the plane defined by the FMC were evaluated at various distances r(M) from the isocenter. Fiducial marker detection with conventional equipment (pulse sequences, imaging coils) was very reliable and highly reproducible over a wide range of experimental conditions. For r(M) = 100 mm, the estimated maximum errors in 3D position and angular orientation were 1.7 mm and 0.33 degrees, respectively. For r(M) = 175 mm, the respective values were 2.9 mm and 0.44 degrees. Detection and localization of MR-visible markers by morphological image processing is feasible, simple, and very accurate. In combination with safe wireless markers, the method is found to be useful for image-guided procedures. (c) 2007 Wiley-Liss, Inc.
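The morphological pipeline itself is not spelled out in the abstract. A minimal single-marker sketch of the underlying idea, thresholding the near-zero-background image and taking an intensity-weighted centroid, might look as follows (the threshold and function name are assumptions; the actual method localizes three markers in orthogonal scan planes):

```python
import numpy as np

def marker_centroid(image, threshold):
    """Locate one bright fiducial marker in a low-background image:
    keep pixels above the threshold and return their
    intensity-weighted centroid as (row, col)."""
    image = np.asarray(image, dtype=float)
    mask = image > threshold
    ys, xs = np.nonzero(mask)
    w = image[ys, xs]
    return (ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()
```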
CT and MRI slice separation evaluation by LabView developed software.
Acri, Giuseppe; Testagrossa, Barbara; Sestito, Angela; Bonanno, Lilla; Vermiglio, Giuseppe
2018-02-01
The efficient use of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) equipment necessitates establishing adequate quality-control (QC) procedures. In particular, verifying the accuracy of slice separation during multislice acquisition requires scan exploration of phantoms containing test objects. To simplify such procedures, a novel phantom and a computerised LabView-based procedure have been devised, enabling real-time determination of the midpoint of the full width at half maximum (FWHM) while the distance between the profile midpoints of two successive images is evaluated and measured. The results were compared with those obtained by processing the same phantom images with commercial software. To validate the proposed methodology, the Fisher test was conducted on the resulting data sets. In all cases, there was no statistically significant variation between the commercial procedure and the LabView one, which can be used on any CT and MRI diagnostic device. Copyright © 2017. Published by Elsevier GmbH.
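The LabView implementation is not shown, but the core computation it describes, the midpoint of the FWHM of a slice-sensitivity profile, can be sketched in a few lines (single-peak profile assumed; linear interpolation at the half-maximum crossings):

```python
import numpy as np

def fwhm_midpoint(profile):
    """Return (fwhm, midpoint) of a 1-D slice profile, using linear
    interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.nonzero(p >= half)[0]
    left, right = above[0], above[-1]
    # interpolate on each flank (assumes a single peak)
    if left > 0:
        left = left - (p[left] - half) / (p[left] - p[left - 1])
    if right < len(p) - 1:
        right = right + (p[right] - half) / (p[right] - p[right + 1])
    return right - left, (left + right) / 2.0
```

The slice separation would then be the difference between the midpoints computed for two successive images.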
Digital holographic 3D imaging spectrometry (a review)
NASA Astrophysics Data System (ADS)
Yoshimori, Kyu
2017-09-01
This paper reviews recent progress in digital holographic 3D imaging spectrometry. The principle of this method is a marriage of incoherent holography and Fourier transform spectroscopy. The review covers the principle, the signal-processing procedure, and experimental results obtaining a multispectral set of 3D images of spatially incoherent, polychromatic objects.
ERIC Educational Resources Information Center
Huck-Iriart, Cristián; De-Candia, Ariel; Rodriguez, Javier; Rinaldi, Carlos
2016-01-01
In this work, we described an image processing procedure for the measurement of surface tension of the air-liquid interface using isothermal capillary action. The experiment, designed for an undergraduate course, is based on the analysis of a series of solutions with diverse surfactant concentrations at different ionic strengths. The objective of…
Imaging windows for long-term intravital imaging: General overview and technical insights
Alieva, Maria; Ritsma, Laila; Giedt, Randy J; Weissleder, Ralph; van Rheenen, Jacco
2014-01-01
Intravital microscopy is increasingly used to visualize and quantitate dynamic biological processes at the (sub)cellular level in live animals. By visualizing tissues through imaging windows, individual cells (e.g., cancer, host, or stem cells) can be tracked and studied over a time-span of days to months. Several imaging windows have been developed to access tissues including the brain, superficial fascia, mammary glands, liver, kidney, pancreas, and small intestine among others. Here, we review the development of imaging windows and compare the most commonly used long-term imaging windows for cancer biology: the cranial imaging window, the dorsal skin fold chamber, the mammary imaging window, and the abdominal imaging window. Moreover, we provide technical details, considerations, and trouble-shooting tips on the surgical procedures and microscopy setups for each imaging window and explain different strategies to assure imaging of the same area over multiple imaging sessions. This review aims to be a useful resource for establishing the long-term intravital imaging procedure. PMID:28243510
Bidgood, W. Dean; Bray, Bruce; Brown, Nicolas; Mori, Angelo Rossi; Spackman, Kent A.; Golichowski, Alan; Jones, Robert H.; Korman, Louis; Dove, Brent; Hildebrand, Lloyd; Berg, Michael
1999-01-01
Objective: To support clinically relevant indexing of biomedical images and image-related information based on the attributes of image acquisition procedures and the judgments (observations) expressed by observers in the process of image interpretation. Design: The authors introduce the notion of “image acquisition context,” the set of attributes that describe image acquisition procedures, and present a standards-based strategy for utilizing the attributes of image acquisition context as indexing and retrieval keys for digital image libraries. Methods: The authors' indexing strategy is based on an interdependent message/terminology architecture that combines the Digital Imaging and Communications in Medicine (DICOM) standard, the SNOMED (Systematized Nomenclature of Human and Veterinary Medicine) vocabulary, and the SNOMED DICOM microglossary. The SNOMED DICOM microglossary provides context-dependent mapping of terminology to DICOM data elements. Results: The capability of embedding standard coded descriptors in DICOM image headers and image-interpretation reports improves the potential for selective retrieval of image-related information. This favorably affects information management in digital libraries. PMID:9925229
[EYECUBE as 3D multimedia imaging in macular diagnostics].
Hassenstein, Andrea; Scholz, F; Richard, G
2011-11-01
In the new generation of EYECUBE devices, the angiography image and the OCT are integrated into a single 3D illustration. Other diagnostic procedures such as autofluorescence and ICG can also be correlated with the OCT. The aim was to precisely relate various two-dimensional findings to one another. The new generation of OCT devices enables imaging with a low incidence of motion artefacts and very good fundus image quality, and thus permits largely automatic registration. The integration feature of the EYECUBE was further developed with new software, so that not only the topographic image (red-free, autofluorescence) but also all other findings gathered within the same time frame can be correlated with the Cirrus OCT and with each other. These were brightened and projected onto the cube surface at a defined interval. The imaging procedures can be selected from a menu toolbar, and topographic volumetric OCT images can be overlaid. The practical application of the new method was tested on patients with macular disorders. By brightening the results from various diagnostic procedures, pathologies can now be compared directly with each other and with the OCT results. In all patients (n = 45 eyes) with good single-image quality, automated integration into the EYECUBE was largely possible. The application does not depend on the particular device used for the individual procedures. The increasing precision of imaging procedures and the handling of large data volumes make it possible to examine every macular diagnostic procedure from a comparative perspective: imaging (photo) with perfusion (FLA, ICG) and morphology (OCT). The exclusion of motion artefacts and the reliable scan position during imaging increase the informative value of OCT. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Zabarylo, U.; Minet, O.
2010-01-01
Investigations into the application of optical procedures for the diagnosis of rheumatism using scattered light images are only at the beginning, both in terms of new image-processing methods and of subsequent clinical application. For semi-automatic diagnosis using laser light, multispectral scattered light images are registered and overlaid into pseudo-coloured images, which convey the diagnostically essential content by visually highlighting pathological changes.
Milankovic, Ivan L; Mijailovic, Nikola V; Filipovic, Nenad D; Peulic, Aleksandar S
2017-01-01
Image segmentation is one of the most common procedures in medical imaging applications. It is also a very important task in breast cancer detection. The breast cancer detection procedure based on mammography can be divided into several stages. The first stage is the extraction of the region of interest from a breast image, followed by the identification of suspicious mass regions, their classification, and comparison with the existing image database. It is often the case that existing image databases hold large sets of data whose processing requires a lot of time, and thus the acceleration of each of the processing stages in breast cancer detection is a very important issue. In this paper, the implementation of an existing algorithm for region-of-interest based image segmentation of mammogram images on High-Performance Reconfigurable Dataflow Computers (HPRDCs) is proposed. As the dataflow engine (DFE) of such an HPRDC, Maxeler's acceleration card is used. The experiments examining the acceleration of that algorithm on Reconfigurable Dataflow Computers (RDCs) were performed with two types of mammogram images of different resolutions. Several DFE configurations were tested, and each gave a different acceleration of algorithm execution. Those acceleration values are presented, and the experimental results show good acceleration. PMID:28611851
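The first stage mentioned above, extraction of the region of interest from the breast image, can be illustrated with a minimal bounding-box crop over thresholded foreground. This is a stand-in sketch (the fixed threshold and function name are assumptions; the actual pipeline and its dataflow implementation are far more involved):

```python
import numpy as np

def extract_roi(image, threshold):
    """Crop the region of interest as the bounding box of all
    pixels brighter than a background threshold."""
    mask = np.asarray(image) > threshold
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.nonzero(rows)[0][[0, -1]]
    c0, c1 = np.nonzero(cols)[0][[0, -1]]
    return image[r0:r1 + 1, c0:c1 + 1]
```

Because this operation touches every pixel independently in the masking step, it is also the kind of stage that maps well onto a dataflow accelerator.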
NASA Astrophysics Data System (ADS)
Kravchenko, Alexandra; Negassa, Wakene; Guber, Andrey; Schmidt, Sonja
2014-05-01
Particulate soil organic matter (POM) is a biologically and chemically active fraction of soil organic matter. It is a source of many agricultural and ecological benefits, among which is POM's contribution to C sequestration. Most conventional research methods for studying organic matter dynamics involve measurements conducted on pre-processed, i.e., ground and sieved, soil samples. Unfortunately, grinding and sieving completely destroy soil structure, the component crucial for soil functioning and C protection. The importance of a better understanding of the role of soil structure, and of the physical protection it provides to soil C, cannot be overstated; analysis of the quantities, characteristics, and decomposition rates of POM in soil samples with intact structure is among the key elements of gaining such understanding. However, a marked difficulty hindering progress in such analyses is the lack of tools for identification and quantitative analysis of POM in intact soil samples. Recent advances in the application of X-ray computed micro-tomography (μ-CT) to soil science have made such analyses possible. The objective of the current study is to develop a procedure for identification and quantitative characterization of POM within intact soil samples using X-ray μ-CT images, and to test the performance of the proposed procedure on a set of multiple intact soil macro-aggregates. We used 16 soil aggregates, 4-6 mm in size, collected at 0-15 cm depth from a Typic Hapludalf soil at multiple field sites with diverse agricultural management histories. The aggregates were scanned at the SIMBIOS Centre, Dundee, Scotland, at 10 micron resolution. POM was determined from the aggregate images using the developed procedure, which combines image pre-processing steps with discriminant analysis classification.
The first component of the procedure consists of image pre-processing steps based on the range of gray values (GV) along with the shape and size of POM pieces. This is followed by discriminant analysis conducted using statistical and geostatistical characteristics of the POM pieces. POM identified in the intact individual soil aggregates using the proposed procedure was in good agreement with POM measured in the studied aggregates using a conventional lab method (R2 = 0.75). Of particular importance for accurate identification of POM in the images was the information on the spatial characteristics of POM's GVs. Since this is the first attempt at POM determination, future work will be needed to explore how the proposed procedure performs under a variety of potentially influential factors, such as POM's origin and decomposition stage, X-ray scanning settings, and image filtering and segmentation methods.
Geometric processing of digital images of the planets
NASA Technical Reports Server (NTRS)
Edwards, Kathleen
1987-01-01
New procedures and software have been developed for geometric transformation of images to support digital cartography of the planets. The procedures involve the correction of spacecraft camera orientation of each image with the use of ground control and the transformation of each image to a Sinusoidal Equal-Area map projection with an algorithm which allows the number of transformation calculations to vary as the distortion varies within the image. When the distortion is low in an area of an image, few transformation computations are required, and most pixels can be interpolated. When distortion is extreme, the location of each pixel is computed. Mosaics are made of these images and stored as digital databases. Completed Sinusoidal databases may be used for digital analysis and registration with other spatial data. They may also be reproduced as published image maps by digitally transforming them to appropriate map projections.
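The forward Sinusoidal Equal-Area mapping at the heart of the transformation is standard cartography; a sketch follows (planetocentric latitude and longitude in degrees, spherical body of radius `radius`; the adaptive scheme described above, computing pixel locations only where distortion is high and interpolating elsewhere, is omitted):

```python
import math

def sinusoidal(lat_deg, lon_deg, radius=1.0, lon0_deg=0.0):
    """Forward Sinusoidal Equal-Area projection: the x coordinate
    scales with the cosine of latitude, y is proportional to
    latitude. Returns map coordinates (x, y)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg - lon0_deg)
    return radius * lon * math.cos(lat), radius * lat
```

Lines of latitude map to horizontal lines of shrinking length toward the poles, which is what preserves area in the projection.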
Filtering algorithm for dotted interferences
NASA Astrophysics Data System (ADS)
Osterloh, K.; Bücherl, T.; Lierse von Gostomski, Ch.; Zscherpel, U.; Ewert, U.; Bock, S.
2011-09-01
An algorithm has been developed to reliably remove dotted interference impairing the perceptibility of objects within a radiographic image. This is a particular challenge with neutron radiographs collected at the NECTAR facility of the Forschungs-Neutronenquelle Heinz Maier-Leibnitz (FRM II): the resulting images are dominated by features resembling a snow flurry. These artefacts are caused by scattered neutrons, gamma radiation, cosmic radiation, etc., all hitting the detector CCD directly in spite of sophisticated shielding, and they make such images rather useless for further direct evaluation. One approach to this problem of random effects would be to collect a vast number of single images, combine them appropriately, and process them with common image filtering procedures. However, it has been shown that median filtering, for example, depending on the kernel size in the plane and/or the number of single shots combined, is either insufficient or tends to blur sharp line structures. This inevitably makes visually controlled processing, image by image, unavoidable. In tomographic studies in particular, it would be far too tedious to treat each projection this way. Alternatively, it would be not only more convenient but in many cases the only reasonable approach to filter a stack of images in a batch procedure to remove the disturbing interference. The algorithm presented here meets all these requirements: it reliably frees the images from the snowy pattern described above without loss of fine structures and without general blurring of the image. It is an iterative, parameter-free filtering algorithm, usable within a batch procedure, that aims to eliminate the often complex interfering artefacts while leaving the original information as untouched as possible.
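The published algorithm is not reproduced in the abstract; a toy version of the stated idea, iteratively replacing only pixels that deviate strongly from their local median while leaving everything else untouched, might look like this (the 3x3 window and the k-sigma outlier rule are my assumptions, not the NECTAR method):

```python
import numpy as np

def despeckle(image, k=3.0, max_iter=10):
    """Iteratively replace only the pixels that deviate strongly
    from their 3x3 neighborhood median; all other pixels keep their
    original values, so sharp structures are preserved."""
    img = np.asarray(image, dtype=float).copy()
    for _ in range(max_iter):
        p = np.pad(img, 1, mode='edge')
        # stack the 9 shifted views and take the per-pixel median
        stack = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                          for i in range(3) for j in range(3)])
        med = np.median(stack, axis=0)
        resid = np.abs(img - med)
        sigma = resid.std()
        outliers = resid > k * sigma
        if not outliers.any() or sigma == 0:
            break
        img[outliers] = med[outliers]
    return img
```

Selective replacement is what distinguishes this from plain median filtering, which rewrites every pixel and blurs fine lines.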
Image- and model-based surgical planning in otolaryngology.
Korves, B; Klimek, L; Klein, H M; Mösges, R
1995-10-01
Preoperative evaluation of any operating field is essential for the preparation of surgical procedures. The relationship between pathology and adjacent structures, and anatomically dangerous sites need to be analyzed for the determination of intraoperative action. For the simulation of surgery using three-dimensional imaging or individually manufactured plastic patient models, the authors have worked out different procedures. A total of 481 surgical interventions in the maxillofacial region, paranasal sinuses, orbit, and the anterior and middle skull base, in addition to neurotologic procedures were presurgically simulated using three-dimensional imaging and image manipulation. An intraoperative simulation device, part of the Aachen Computer-Assisted Surgery System, had been applied in 407 of these cases. In seven patients, stereolithography was used to create plastic patient models for the preparation of reconstructive surgery and prostheses fabrication. The disadvantages of this process include time and cost; however, the advantages included (1) a better understanding of the anatomic relationships, (2) the feasibility of presurgical simulation of the prevailing procedure, (3) an improved intraoperative localization accuracy, (4) prostheses fabrication in reconstructive procedures with an approach to more accuracy, (5) permanent recordings for future requirements or reconstructions, and (6) improved residency education.
Quality control and assurance for validation of DOS/I measurements
NASA Astrophysics Data System (ADS)
Cerussi, Albert; Durkin, Amanda; Kwong, Richard; Quang, Timothy; Hill, Brian; Tromberg, Bruce J.; MacKinnon, Nick; Mantulin, William W.
2010-02-01
Ongoing multi-center clinical trials are crucial for Biophotonics to gain acceptance in medical imaging. In these trials, quality control (QC) and assurance (QA) are key to success and provide "data insurance". Quality control and assurance deal with standardization, validation, and compliance of procedures, materials and instrumentation. Specifically, QC/QA involves systematic assessment of testing materials, instrumentation performance, standard operating procedures, data logging, analysis, and reporting. QC and QA are important for FDA accreditation and acceptance by the clinical community. Our Biophotonics research in the Network for Translational Research in Optical Imaging (NTROI) program for breast cancer characterization focuses on QA/QC issues primarily related to the broadband Diffuse Optical Spectroscopy and Imaging (DOS/I) instrumentation, because this is an emerging technology with limited standardized QC/QA in place. In the multi-center trial environment, we implement QA/QC procedures: 1. Standardize and validate calibration standards and procedures. (DOS/I technology requires both frequency domain and spectral calibration procedures using tissue simulating phantoms and reflectance standards, respectively.) 2. Standardize and validate data acquisition, processing and visualization (optimize instrument software-EZDOS; centralize data processing) 3. Monitor, catalog and maintain instrument performance (document performance; modularize maintenance; integrate new technology) 4. Standardize and coordinate trial data entry (from individual sites) into centralized database 5. Monitor, audit and communicate all research procedures (database, teleconferences, training sessions) between participants ensuring "calibration". This manuscript describes our ongoing efforts, successes and challenges implementing these strategies.
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become one of the main components of diagnostic procedures, assisting dermatologists in their medical decision-making. Computer-aided segmentation and border detection in dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field, mainly because of inter- and intra-observer variation in human interpretation. In this study, a novel approach, the graph spanner, for automatic border detection in dermoscopic images is proposed, in which a proximity graph representation of dermoscopic images is used to detect regions and borders of skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose borders, drawn manually by a dermatologist, serve as the ground truth. Error rates, false positives and false negatives, along with true positives and true negatives, are quantified by digitally comparing the results with the dermatologist's manually determined borders. The results show that the highest precision and recall rates obtained for lesion boundaries are 100%; over the whole dataset, accuracy averages 97.72% and the mean border error is 2.28%.
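The quantities reported above reduce to standard pixel-count statistics; for reference, a sketch of how precision, recall and accuracy follow from the four counts:

```python
def border_metrics(tp, fp, tn, fn):
    """Pixel-count metrics for scoring an automatic border against
    a manually drawn ground truth."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy
```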
Textural features for radar image analysis
NASA Technical Reports Server (NTRS)
Shanmugan, K. S.; Narayanan, V.; Frost, V. S.; Stiles, J. A.; Holtzman, J. C.
1981-01-01
Texture is seen as an important spatial feature useful for identifying objects or regions of interest in an image. While textural features have been widely used in analyzing a variety of photographic images, they have not been used in processing radar images. A procedure for extracting a set of textural features for characterizing small areas in radar images is presented, and it is shown that these features can be used in classifying segments of radar images corresponding to different geological formations.
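A classic way to extract such textural features is the gray-level co-occurrence matrix (GLCM); a minimal sketch for one pixel displacement, with two Haralick-style statistics, follows (this illustrates the general technique, not necessarily the paper's exact feature set):

```python
import numpy as np

def glcm_features(img, levels, dx=1, dy=0):
    """Build a gray-level co-occurrence matrix for displacement
    (dx, dy) over an integer image with values in [0, levels),
    then return (contrast, energy)."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), dtype=float)
    a = img[:img.shape[0] - dy, :img.shape[1] - dx]
    b = img[dy:, dx:]
    np.add.at(glcm, (a.ravel(), b.ravel()), 1.0)   # count pixel pairs
    glcm /= glcm.sum()                              # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = ((i - j) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy
```

A uniform region gives zero contrast and maximal energy, while a fine checkerboard gives high contrast, which matches the intuition that these statistics discriminate smooth from rough terrain in radar imagery.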
Low-cost digital image processing at the University of Oklahoma
NASA Technical Reports Server (NTRS)
Harrington, J. A., Jr.
1981-01-01
Computer-assisted instruction in remote sensing at the University of Oklahoma involves two separate approaches and depends upon initial preprocessing of a LANDSAT computer compatible tape using software developed for an IBM 370/158 computer. In-house preprocessing algorithms permit students or researchers to select a subset of a LANDSAT scene for subsequent analysis using either general purpose statistical packages or color graphic image processing software developed for Apple II microcomputers. Procedures for preprocessing the data and for image analysis using either of the two approaches to low-cost LANDSAT data processing are described.
Initial Results from Fitting Resolved Modes using HMI Intensity Observations
NASA Astrophysics Data System (ADS)
Korzennik, Sylvain G.
2017-08-01
The HMI project recently started processing the continuum intensity images following global helioseismology procedures similar to those used to process the velocity images. The spatial decomposition of these images has produced time series of spherical harmonic coefficients for degrees up to l=300, using a different apodization than the one used for velocity observations. The first 360 days of observations were processed and made available. I present initial results from fitting these time series using my state of the art fitting methodology and compare the derived mode characteristics to those estimated using co-eval velocity observations.
An interactive method for digitizing zone maps
NASA Technical Reports Server (NTRS)
Giddings, L. E.; Thompson, E. J.
1975-01-01
A method is presented for digitizing maps that consist of zones, such as contour or climatic zone maps. A color-coded map is prepared by any convenient process. The map is then read into memory of an Image 100 computer by means of its table scanner, using colored filters. Zones are separated and stored in themes, using standard classification procedures. Thematic data are written on magnetic tape and these data, appropriately coded, are combined to make a digitized image on tape. Step-by-step procedures are given for digitization of crop moisture index maps with this procedure. In addition, a complete example of the digitization of a climatic zone map is given.
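The final combination step, coding the separately stored themes into one digitized zone image, can be sketched as follows (the boolean-mask representation and overwrite rule are my assumptions; the Image 100 workflow itself was interactive):

```python
import numpy as np

def combine_themes(themes, codes):
    """Combine per-zone boolean theme masks (as produced by the
    classifier) into a single coded zone image; later themes
    overwrite earlier ones where masks overlap."""
    out = np.zeros(themes[0].shape, dtype=int)
    for mask, code in zip(themes, codes):
        out[mask] = code
    return out
```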
Modeling of polychromatic attenuation using computed tomography reconstructed images
NASA Technical Reports Server (NTRS)
Yan, C. H.; Whalen, R. T.; Beaupre, G. S.; Yen, S. Y.; Napel, S.
1999-01-01
This paper presents a procedure for estimating an accurate model of the CT imaging process including spectral effects. As raw projection data are typically unavailable to the end-user, we adopt a post-processing approach that utilizes the reconstructed images themselves. This approach includes errors from x-ray scatter and the nonidealities of the built-in soft tissue correction into the beam characteristics, which is crucial to beam hardening correction algorithms that are designed to be applied directly to CT reconstructed images. We formulate this approach as a quadratic programming problem and propose two different methods, dimension reduction and regularization, to overcome ill conditioning in the model. For the regularization method we use a statistical procedure, Cross Validation, to select the regularization parameter. We have constructed step-wedge phantoms to estimate the effective beam spectrum of a GE CT-I scanner. Using the derived spectrum, we computed the attenuation ratios for the wedge phantoms and found that the worst case modeling error is less than 3% of the corresponding attenuation ratio. We have also built two test (hybrid) phantoms to evaluate the effective spectrum. Based on these test phantoms, we have shown that the effective beam spectrum provides an accurate model for the CT imaging process. Last, we used a simple beam hardening correction experiment to demonstrate the effectiveness of the estimated beam profile for removing beam hardening artifacts. We hope that this estimation procedure will encourage more independent research on beam hardening corrections and will lead to the development of application-specific beam hardening correction algorithms.
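Of the two remedies for ill conditioning mentioned above, regularization has a compact closed form. As a generic sketch only (the authors' formulation is a constrained quadratic program with the regularization parameter chosen by cross-validation, which is omitted here):

```python
import numpy as np

def ridge_fit(A, b, lam):
    """Regularized least squares: minimize ||Ax - b||^2 + lam*||x||^2,
    solved via the normal equations (A^T A + lam*I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
```

With lam = 0 this reduces to ordinary least squares; increasing lam shrinks the solution toward zero, trading bias for stability on an ill-conditioned system matrix.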
Vasan, S N Swetadri; Ionita, Ciprian N; Titus, A H; Cartwright, A N; Bednarek, D R; Rudin, S
2012-02-23
We present the image processing upgrades implemented on a graphics processing unit (GPU) in the Control, Acquisition, Processing, and Image Display System (CAPIDS) for the custom Micro-Angiographic Fluoroscope (MAF) detector. Most of the image processing currently implemented in the CAPIDS system is pixel independent; that is, the operation on each pixel is the same and the operation on one pixel does not depend upon the result of the operation on another, allowing the entire image to be processed in parallel. GPU hardware was developed for exactly this kind of massively parallel implementation; thus, for an algorithm with a high degree of parallelism, a GPU implementation is much faster than a CPU implementation. The image processing algorithm upgrades implemented on the CAPIDS system include flat-field correction, temporal filtering, image subtraction, roadmap mask generation, and display windowing and leveling. A comparison between the previous and the upgraded version of CAPIDS is presented to demonstrate how the improvement is achieved. By performing the image processing on a GPU, significant improvements in timing and frame rate have been achieved, including stable operation of the system at 30 fps during a fluoroscopy run, a DSA run, a roadmap procedure, and automatic image windowing and leveling during each frame.
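Flat-field correction is a good example of the pixel-independent operations described above: each corrected pixel depends only on the same pixel in the raw, dark, and flat frames, which is exactly why the workload parallelizes across GPU threads. The NumPy sketch below (synthetic frames, not MAF data) stands in for the per-pixel kernel:

```python
import numpy as np

def flat_field_correct(raw, dark, flat, eps=1e-6):
    """corrected = (raw - dark) / (flat - dark), rescaled by the mean gain.
    Every output pixel is computed from the same pixel of the input frames."""
    gain = flat.astype(np.float64) - dark
    gain = np.maximum(gain, eps)          # guard against dead pixels
    return (raw - dark) / gain * gain.mean()

rng = np.random.default_rng(1)
dark = np.full((4, 4), 100.0)                           # detector offset
flat = 100.0 + 50.0 * (1.0 + 0.1 * rng.standard_normal((4, 4)))  # uneven gain
scene = rng.uniform(0.0, 1.0, (4, 4))                   # true image
raw = dark + scene * (flat - dark)                      # simulated detector response
corrected = flat_field_correct(raw, dark, flat)
```

On a GPU the same arithmetic would be one thread per pixel; the vectorized NumPy form expresses the identical data-parallel structure.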
Code of Federal Regulations, 2014 CFR
2014-07-01
... photocopying of DoD ID cards to facilitate medical care processing, check cashing, voting, tax matters... support CAC issuance, which includes fingerprints and facial images specified in FIPS Publication 201-1... the Office of the USD(AT&L), implement the capability to obtain two segmented images (primary and...
Content standards for medical image metadata
NASA Astrophysics Data System (ADS)
d'Ornellas, Marcos C.; da Rocha, Rafael P.
2003-12-01
Medical images are at the heart of healthcare diagnostic procedures. They provide not only a noninvasive means to view anatomical cross-sections of internal organs but also a means for physicians to evaluate the patient's diagnosis and monitor the effects of treatment. For a medical center, the emphasis may shift from image generation to post-processing and data management, since the medical staff may generate even more processed images and other data from the original image after various analyses and post-processing. A medical image data repository for health care information systems is becoming a critical need. This repository would contain comprehensive patient records, including information such as clinical data, related diagnostic images, and post-processed images. Due to the large volume and complexity of the data, as well as the diversified user access requirements, the implementation of a medical image archive system will be a complex and challenging task. This paper discusses content standards for medical image metadata. In addition, it focuses on the evaluation of image metadata content and on metadata quality management.
Qualification process of CR system and quantification of digital image quality
NASA Astrophysics Data System (ADS)
Garnier, P.; Hun, L.; Klein, J.; Lemerle, C.
2013-01-01
CEA Valduc uses several X-ray generators to carry out many inspections: void search, welding expertise, gap measurements, etc. Most of these inspections are carried out on silver-based plates. Several years ago, CEA/Valduc decided to qualify new devices such as digital plates and CCD/flat-panel detectors. On one hand, this technological orientation anticipates the assumed eventual disappearance of silver-based plates; on the other hand, it keeps the laboratory's expertise up to date. The main improvement brought by digital plates is the continuous progress of measurement accuracy, especially with image data processing. It is now common to measure defect thickness or depth position within a part. In such applications, image data processing yields complementary information compared to scanned silver-based plates. The scanning procedure is harmful for measurements: it degrades the resolution, adds numerical noise, and is time-consuming. Digital plates eliminate the scanning procedure and increase resolution. It is nonetheless difficult to define a single image-quality criterion for digital images. A procedure has to be defined to estimate the quality of the digital data itself; the impact of the scanning device and the configuration parameters must also be taken into account. This presentation deals with the qualification process developed by CEA/Valduc for digital plates (DUR-NDT), based on the study of quantitative criteria chosen to define a direct numerical image quality that can be compared with scanned silver-based pictures and the classical optical density. The versatility of the X-ray parameters (tube voltage, intensity, exposure time) is also discussed. The aim is to transfer CEA/Valduc's years of experience with silver-based plate inspection to these new digital plate supports. This is an industrial challenge.
A new data processing technique for Rayleigh-Taylor instability growth experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuan, Yongteng; Tu, Shaoyong; Miao, Wenyong
Typical face-on experiments for Rayleigh-Taylor instability studies involve time-resolved radiography of an accelerated foil, with the radiography line-of-sight along the direction of motion. The usual method, which derives perturbation amplitudes from the face-on images by reversing the actual image-formation procedure, yields large errors in the case of large optical depth. In order to improve the accuracy of data processing, a new technique has been developed to process the face-on images. Based on the convolution theorem, it obtains refined solutions for the optical depth by solving a system of equations. Furthermore, we discuss both techniques for image processing, including the influence of the modulation transfer function of the imaging system and the backlighter spatial profile. We applied both methods to experimental results from the Shenguang-II laser facility, and the comparison shows that the new method effectively improves the accuracy of data processing.
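The convolution-theorem step can be illustrated on a 1-D lineout. In this noise-free toy (a Gaussian point-spread function and a synthetic transmission profile, both assumptions standing in for the experimental data), dividing the measured spectrum by the system transfer function, with a small regularizer to keep the division stable, recovers the refined profile:

```python
import numpy as np

# Forward model: measured = true profile convolved with the system PSF.
n = 128
x = np.arange(n)
profile = np.exp(-0.5 + 0.4 * np.cos(2 * np.pi * 3 * x / n))  # true transmission
psf = np.exp(-0.5 * ((x - n // 2) / 2.0) ** 2)                # Gaussian PSF
psf /= psf.sum()
measured = np.real(np.fft.ifft(np.fft.fft(profile) *
                               np.fft.fft(np.fft.ifftshift(psf))))

# Inverse step via the convolution theorem: divide by the transfer function,
# with a small eps to avoid amplifying frequencies where it nearly vanishes.
H = np.fft.fft(np.fft.ifftshift(psf))
eps = 1e-6
recovered = np.real(np.fft.ifft(np.fft.fft(measured) *
                                np.conj(H) / (np.abs(H) ** 2 + eps)))
```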
USDA-ARS?s Scientific Manuscript database
Problems with assessing the efficacy of cleaning and sanitation procedures in delicatessen departments are a recognized food safety concern. Our laboratory demonstrated that cleaning procedures in produce processing plants can be enhanced using a portable fluorescence imaging device. To explore the f...
Wang, Chunliang; Ritter, Felix; Smedby, Orjan
2010-07-01
To enhance the functional expandability of a picture archiving and communication system (PACS) workstation and to facilitate the integration of third-party image-processing modules, we propose a browser-server style method. In the proposed solution, the PACS workstation shows the front-end user interface defined in an XML file, while the image processing software runs in the background as a server. Inter-process communication (IPC) techniques allow an efficient exchange of image data, parameters, and user input between the PACS workstation and the stand-alone image-processing software. Using a predefined communication protocol, the PACS workstation developer or image processing software developer does not need detailed information about the other system, but can still achieve seamless integration between the two, and the IPC procedure is totally transparent to the end user. A browser-server style solution was built between OsiriX (PACS workstation software) and MeVisLab (image-processing software). Ten example image-processing modules were easily added to OsiriX by converting existing MeVisLab image processing networks. Image data transfer using shared memory added <10 ms of processing time, while the other IPC methods cost 1-5 s in our experiments. The browser-server style communication based on IPC techniques is an appealing method that allows PACS workstation developers and image processing software developers to cooperate while focusing on different interests.
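The shared-memory result (<10 ms versus 1-5 s) follows from avoiding serialization entirely: both processes map the same buffer, so only a name and a shape need to be exchanged. Python's standard multiprocessing.shared_memory illustrates the pattern; OsiriX and MeVisLab use their own native IPC, so everything here is a generic sketch, not their protocol:

```python
import numpy as np
from multiprocessing import shared_memory

image = np.random.default_rng(8).integers(0, 4096, (512, 512), dtype=np.uint16)

# "Server" side: allocate a named shared block and write the pixels once.
shm = shared_memory.SharedMemory(create=True, size=image.nbytes)
src = np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)
src[:] = image

# "Client" side: attach to the same block by name. No pixel data crosses a
# socket or pipe; only the block name and array shape are communicated.
shm2 = shared_memory.SharedMemory(name=shm.name)
view = np.ndarray(image.shape, dtype=image.dtype, buffer=shm2.buf)
same = bool(np.array_equal(view, image))

shm2.close()
shm.close()
shm.unlink()
```

In a real integration the two sides would be separate processes; here both run in one process purely to show the attach-by-name mechanism.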
NASA Astrophysics Data System (ADS)
Richards, Lisa M.; Weber, Erica L.; Parthasarathy, Ashwin B.; Kappeler, Kaelyn L.; Fox, Douglas J.; Dunn, Andrew K.
2012-02-01
Monitoring cerebral blood flow (CBF) during neurosurgery can provide important physiological information for a variety of surgical procedures. Although multiple intraoperative vascular monitoring technologies are currently available, a quantitative method that allows for continuous monitoring is still needed. Laser speckle contrast imaging (LSCI) is an optical imaging method with high spatial and temporal resolution that has been widely used to image CBF in animal models in vivo. In this pilot clinical study, we adapted a Zeiss OPMI Pentero neurosurgical microscope to obtain LSCI images by attaching a camera and a laser diode. This LSCI-adapted instrument has been used to acquire full-field flow images from 10 patients during tumor resection procedures. The patient's ECG was recorded during acquisition, and image registration was performed in post-processing to account for pulsatile motion artifacts. Digital photographs confirmed alignment of vasculature and flow images in four cases, and a relative change in blood flow was observed in two patients after bipolar cautery. The LSCI-adapted instrument has the capability to produce real-time, full-field CBF image maps with excellent spatial resolution and minimal intervention in the surgical procedure. Results from this study demonstrate the feasibility of using LSCI to monitor blood flow during neurosurgery.
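The quantity behind an LSCI flow map is the local speckle contrast K = std/mean computed over a small sliding window; motion blurs the speckle, so lower K corresponds to higher flow. A generic sketch of the computation (synthetic speckle, not the adapted-microscope pipeline):

```python
import numpy as np

def speckle_contrast(img, win=5):
    """Local speckle contrast K = std/mean over a sliding win x win window."""
    h, w = img.shape
    r = win // 2
    K = np.zeros((h - 2 * r, w - 2 * r))
    for i in range(r, h - r):
        for j in range(r, w - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1]
            K[i - r, j - r] = patch.std() / patch.mean()
    return K

rng = np.random.default_rng(2)
# Fully developed static speckle follows an exponential intensity law (K ~ 1);
# flow blurs the pattern toward its mean (K << 1). Both fields are synthetic.
static = rng.exponential(1.0, (32, 32))
flowing = 1.0 + 0.05 * rng.standard_normal((32, 32))
K_static = speckle_contrast(static)
K_flow = speckle_contrast(flowing)
```

Relative flow maps are commonly derived from 1/K^2; the window size trades spatial resolution against the statistical reliability of K.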
Manavella, Valeria; Romano, Federica; Garrone, Federica; Terzini, Mara; Bignardi, Cristina; Aimetti, Mario
2017-06-01
The aim of this study was to present and validate a novel procedure for the quantitative volumetric assessment of extraction sockets that combines cone-beam computed tomography (CBCT) and image processing techniques. The CBCT datasets of 9 severely resorbed extraction sockets were analyzed by means of two image processing software packages, ImageJ and Mimics, using manual and automated segmentation techniques. The techniques were also applied to 5-mm spherical aluminum markers of known volume and to a polyvinyl chloride model of one alveolar socket scanned with micro-CT to test their accuracy. Statistical differences in alveolar socket volume were found between the different methods of volumetric analysis (P<0.0001). Automated segmentation using Mimics was the most reliable and accurate method, with a relative error of 1.5%, considerably smaller than the errors of 7% and 10% introduced by the manual method using Mimics and by the automated method using ImageJ. The proposed automated segmentation protocol for the three-dimensional rendering of alveolar sockets showed more accurate results, excellent inter-observer agreement, and increased user friendliness. The clinical application of this method enables a three-dimensional evaluation of extraction socket healing after reconstructive procedures and during follow-up visits.
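Once a segmentation mask exists, the volumetric step itself reduces to counting voxels inside the mask and multiplying by the voxel volume. The sketch below checks this on a synthetic sphere of known volume, in the spirit of the study's known-volume aluminum markers; the 0.25-mm isotropic voxel size is an assumption:

```python
import numpy as np

def mask_volume_mm3(mask, voxel_mm=(0.25, 0.25, 0.25)):
    """Volume of a binary segmentation mask: voxel count x voxel volume."""
    return mask.sum() * np.prod(voxel_mm)

# Synthetic test object: a sphere of radius 20 voxels (= 5 mm at 0.25 mm/voxel).
zz, yy, xx = np.mgrid[-40:40, -40:40, -40:40]
sphere = (zz ** 2 + yy ** 2 + xx ** 2) < 20 ** 2
vol = mask_volume_mm3(sphere)
expected = 4.0 / 3.0 * np.pi * (20 * 0.25) ** 3   # ideal sphere volume in mm^3
```

The residual discrepancy comes from voxelization of the sphere surface, the same partial-volume effect that limits accuracy at real socket boundaries.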
Vellmer, Sebastian; Tonoyan, Aram S; Suter, Dieter; Pronin, Igor N; Maximov, Ivan I
2018-02-01
Diffusion magnetic resonance imaging (dMRI) is a powerful tool in clinical applications, in particular in oncology screening. dMRI has demonstrated its benefit and efficiency in the localisation and detection of different types of human brain tumours. Clinical dMRI data suffer from multiple artefacts such as motion and eddy-current distortions, contamination by noise, outliers, etc. In order to increase the image quality of the derived diffusion scalar metrics and the accuracy of the subsequent data analysis, various pre-processing approaches are actively being developed and used. In the present work we assess the effect of different pre-processing procedures, such as noise correction, different smoothing algorithms, and spatial interpolation of the raw diffusion data, on the accuracy of brain glioma differentiation. As a set of sensitive biomarkers of the glioma malignancy grades, we chose the scalar metrics derived from diffusion and kurtosis tensor imaging as well as from the neurite orientation dispersion and density imaging (NODDI) biophysical model. Our results show that the application of noise correction, anisotropic diffusion filtering, and cubic-order spline interpolation resulted in the highest sensitivity and specificity for glioma malignancy grading. Thus, these pre-processing steps are recommended for the statistical analysis in brain tumour studies. Copyright © 2017. Published by Elsevier GmbH.
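Anisotropic diffusion filtering, one of the smoothing algorithms compared, can be sketched in its classic Perona-Malik form: smoothing proceeds within homogeneous regions while an edge-stopping function suppresses diffusion across strong gradients. The parameter values here are illustrative, not those used in the study:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=0.1, step=0.2):
    """Perona-Malik diffusion: the conductance g(d) = exp(-(d/kappa)^2)
    shrinks toward zero across strong edges, so they are preserved."""
    u = img.astype(np.float64)
    g = lambda d: np.exp(-(d / kappa) ** 2)
    for _ in range(n_iter):
        dn = np.roll(u, -1, 0) - u   # finite differences to the 4 neighbours
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u = u + step * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(9)
clean = np.zeros((32, 32))
clean[:, 16:] = 1.0                                   # a sharp tissue boundary
noisy = clean + 0.05 * rng.standard_normal((32, 32))
smoothed = anisotropic_diffusion(noisy)
```

The kappa threshold separates "noise-sized" gradients (smoothed) from "edge-sized" ones (kept), which is why the method suits diffusion-weighted images where boundaries carry the diagnostic signal.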
The microcomputer in the dental office: a new diagnostic aid.
van der Stelt, P F
1985-06-01
The first computer applications in the dental office were based upon standard accountancy procedures. Recently, more and more computer applications have become available to meet the specific requirements of dental practice. This implies not only business procedures, but also facilities to store patient records in the system and retrieve them easily. Another development concerns the automatic calculation of diagnostic data such as those provided in cephalometric analysis. Furthermore, growth and surgical results in the craniofacial area can be predicted by computerized extrapolation. Computers have been useful in obtaining the patient's anamnestic data objectively and for the making of decisions based on such data. Computer-aided instruction systems have been developed for undergraduate students to bridge the gap between textbook and patient interaction without the risks inherent in the latter. Radiology will undergo substantial changes as a result of the application of electronic imaging devices instead of the conventional radiographic films. Computer-assisted electronic imaging will enable image processing, image enhancement, pattern recognition and data transmission for consultation and storage purposes. Image processing techniques will increase image quality whilst still allowing low-dose systems. Standardization of software and system configuration and the development of 'user friendly' programs is the major concern for the near future.
Navigation concepts for magnetic resonance imaging-guided musculoskeletal interventions.
Busse, Harald; Kahn, Thomas; Moche, Michael
2011-08-01
Image-guided musculoskeletal (MSK) interventions are a widely used alternative to open surgical procedures for various pathological findings in different body regions. They traditionally involve one of the established x-ray imaging techniques (radiography, fluoroscopy, computed tomography) or ultrasound scanning. Over the last decades, magnetic resonance imaging (MRI) has evolved into one of the most powerful diagnostic tools for nearly the whole body and has therefore been increasingly considered for interventional guidance as well.The strength of MRI for MSK applications is a combination of well-known general advantages, such as multiplanar and functional imaging capabilities, wide choice of tissue contrasts, and absence of ionizing radiation, as well as a number of MSK-specific factors, for example, the excellent depiction of soft-tissue tumors, nonosteolytic bone changes, and bone marrow lesions. On the downside, the magnetic resonance-compatible equipment needed, restricted space in the magnet, longer imaging times, and the more complex workflow have so far limited the number of MSK procedures under MRI guidance.Navigation solutions are generally a natural extension of any interventional imaging system, in particular, because powerful hardware and software for image processing have become routinely available. They help to identify proper access paths, provide accurate feedback on the instrument positions, facilitate the workflow in an MRI environment, and ultimately contribute to procedural safety and success.The purposes of this work were to describe some basic concepts and devices for MRI guidance of MSK procedures and to discuss technical and clinical achievements and challenges for some selected implementations.
Retinal imaging analysis based on vessel detection.
Jamal, Arshad; Hazim Alkawaz, Mohammed; Rehman, Amjad; Saba, Tanzila
2017-07-01
With the advancement of digital imaging and computing power, computationally intelligent technologies are in high demand for use in ophthalmologic care and treatment. In the current research, Retina Image Analysis (RIA) was developed for optometrists at the Eye Care Center of Management and Science University. This research aims to analyze the retina through vessel detection. RIA assists in the analysis of retinal images, and specialists are offered various options, such as saving, processing, and analyzing retinal images, through its advanced interface layout. Additionally, RIA assists in the vessel segment selection process, processing these vessels by calculating their diameter, standard deviation, and length, and displaying the detected vessels on the retina. The Agile Unified Process was adopted as the development methodology in this research. To conclude, Retina Image Analysis may help optometrists to better understand and analyze a patient's retina. The Retina Image Analysis procedure was developed using MATLAB (R2011b). Promising results are attained that are comparable to the state of the art. © 2017 Wiley Periodicals, Inc.
A survey of GPU-based medical image computing techniques
Shi, Lin; Liu, Wen; Zhang, Heye; Xie, Yongming
2012-01-01
Medical imaging currently plays a crucial role across clinical applications, from medical scientific research to diagnostics and treatment planning. However, medical imaging procedures are often computationally demanding due to the large three-dimensional (3D) medical datasets that must be processed in practical clinical applications. With the rapidly improving performance of graphics processors, better programming support, and an excellent price-to-performance ratio, the graphics processing unit (GPU) has emerged as a competitive parallel computing platform for computationally expensive and demanding tasks in a wide range of medical imaging applications. The major purpose of this survey is to provide a comprehensive reference source for beginners and researchers involved in GPU-based medical image processing. Within this survey, the continuous advancement of GPU computing is reviewed and existing applications in three areas of medical image processing, namely segmentation, registration, and visualization, are surveyed. The potential advantages and associated challenges of current GPU-based medical imaging are also discussed to inspire future applications in medicine. PMID:23256080
Documentation of procedures for textural/spatial pattern recognition techniques
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bryant, W. F.
1976-01-01
A C-130 aircraft was flown over the Sam Houston National Forest on March 21, 1973 at 10,000 feet altitude to collect multispectral scanner (MSS) data. Existing textural and spatial automatic processing techniques were used to classify the MSS imagery into specified timber categories. Several classification experiments were performed on these data using features selected from the spectral bands and a textural transform band. The results indicate that (1) spatially post-processing a classified image can cut the classification error to 1/2 or 1/3 of its initial value, (2) spatially post-processing a classified image produced from combined spectral and textural features yields less error than post-processing one produced from spectral features alone, and (3) without spatial post-processing, classification using the combined spectral-textural features tends to produce about the same error rate as classification using only spectral features.
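Spatial post-processing of a classified image is commonly a neighborhood majority vote, which removes isolated misclassifications ("salt-and-pepper" label noise). The sketch below is a minimal illustration of that idea, not the original MSS processing code:

```python
import numpy as np

def majority_filter(labels):
    """Replace each interior pixel's class label by the majority label
    in its 3x3 neighbourhood; border pixels are left unchanged."""
    h, w = labels.shape
    out = labels.copy()
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = labels[i - 1:i + 2, j - 1:j + 2].ravel()
            out[i, j] = np.bincount(patch).argmax()
    return out

# Two timber classes split down the middle, plus two isolated errors.
truth = np.zeros((16, 16), dtype=int)
truth[:, 8:] = 1
noisy = truth.copy()
noisy[3, 3] = 1
noisy[10, 12] = 0
cleaned = majority_filter(noisy)
```

The vote suppresses single-pixel errors while leaving the straight class boundary intact, which is the mechanism behind the error reductions the experiments report.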
Automatic delineation of brain regions on MRI and PET images from the pig.
Villadsen, Jonas; Hansen, Hanne D; Jørgensen, Louise M; Keller, Sune H; Andersen, Flemming L; Petersen, Ida N; Knudsen, Gitte M; Svarer, Claus
2018-01-15
The increasing use of the pig as a research model in neuroimaging requires standardized processing tools. For example, extraction of regional dynamic time series from brain PET images requires parcellation procedures that benefit from being automated. Manual inter-modality spatial normalization to an MRI atlas is operator-dependent, time-consuming, and can be inaccurate when cortical radiotracer binding or skull uptake is lacking. Here, we present a parcellated PET template that allows for automatic spatial normalization to PET images of any radiotracer. MRI and [11C]Cimbi-36 PET scans obtained in sixteen pigs formed the basis for the atlas. The high-resolution MRI scans allowed for the creation of an accurately averaged MRI template. By aligning the within-subject PET scans to their MRI counterparts, an averaged PET template was created in the same space. We developed an automatic procedure for spatial normalization of the averaged PET template to new PET images, thereby facilitating transfer of the atlas regional parcellation. Evaluation of the automatic spatial normalization procedure found the median voxel displacement to be 0.22±0.08 mm using the MRI template with individual MRI images and 0.92±0.26 mm using the PET template with individual [11C]Cimbi-36 PET images. We tested the automatic procedure on eleven PET radiotracers with different kinetics and spatial distributions, using perfusion-weighted images of early PET time frames. We thus present an automatic procedure for accurate and reproducible spatial normalization and parcellation of pig PET images of any radiotracer with reasonable blood-brain barrier penetration. Copyright © 2017 Elsevier B.V. All rights reserved.
Sonification of optical coherence tomography data and images
Ahmad, Adeel; Adie, Steven G.; Wang, Morgan; Boppart, Stephen A.
2010-01-01
Sonification is the process of representing data as non-speech audio signals. In this manuscript, we describe the auditory presentation of OCT data and images. OCT acquisition rates frequently exceed our ability to visually analyze image-based data, and multi-sensory input may therefore facilitate rapid interpretation. This conversion will be especially valuable in time-sensitive surgical or diagnostic procedures. In these scenarios, auditory feedback can complement visual data without requiring the surgeon to constantly monitor the screen, or provide additional feedback in non-imaging procedures such as guided needle biopsies which use only axial-scan data. In this paper we present techniques to translate OCT data and images into sound based on the spatial and spatial frequency properties of the OCT data. Results obtained from parameter-mapped sonification of human adipose and tumor tissues are presented, indicating that audio feedback of OCT data may be useful for the interpretation of OCT images. PMID:20588846
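Parameter-mapped sonification can be reduced to a few lines: extract a feature per A-scan, map it linearly onto a pitch range, and synthesize a short tone per scan. The feature choice, frequency range, and tone length below are arbitrary illustrative values, not the mappings used in the paper:

```python
import numpy as np

def sonify(features, f_lo=220.0, f_hi=880.0, sr=8000, dur=0.05):
    """Map each feature value linearly onto [f_lo, f_hi] Hz and emit a
    short sine tone per value, concatenated into one audio signal."""
    lo, hi = features.min(), features.max()
    norm = (features - lo) / (hi - lo + 1e-12)       # normalize to [0, 1]
    t = np.arange(int(sr * dur)) / sr
    tones = [np.sin(2 * np.pi * (f_lo + n * (f_hi - f_lo)) * t) for n in norm]
    return np.concatenate(tones)

rng = np.random.default_rng(6)
ascans = rng.uniform(0, 1, (10, 64))    # toy stand-ins for OCT axial scans
audio = sonify(ascans.mean(axis=1))     # one tone per A-scan's mean intensity
```

A real system would map richer features (e.g. spatial-frequency content, as the paper describes) onto multiple audio parameters such as pitch, loudness, and timbre.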
Algorithm-Based Motion Magnification for Video Processing in Urological Laparoscopy.
Adams, Fabian; Schoelly, Reto; Schlager, Daniel; Schoenthaler, Martin; Schoeb, Dominik S; Wilhelm, Konrad; Hein, Simon; Wetterauer, Ulrich; Miernik, Arkadiusz
2017-06-01
Minimally invasive surgery is in constant further development and has replaced many conventional operative procedures. If the movement of vascular structures could be detected during these procedures, the risk of vascular injury and of conversion to open surgery could be reduced. The recently proposed motion-amplifying algorithm Eulerian Video Magnification (EVM) has been shown to substantially enhance minimal object changes in digitally recorded video that are barely perceptible to the human eye. We adapted and examined this technology for use in urological laparoscopy. Video sequences of routine urological laparoscopic interventions were recorded and further processed using spatial decomposition and filtering algorithms. The freely available EVM algorithm was investigated for its usability in real-time processing. In addition, a new image processing technology, the CRS iimotion Motion Magnification (CRSMM) algorithm, was specifically adjusted to endoscopic requirements, applied, and validated by our working group. Using EVM, no significant motion enhancement could be detected without severe impairment of image resolution, motion, and color presentation. The CRSMM algorithm significantly improved image quality in terms of motion enhancement. In particular, the pulsation of vascular structures could be displayed more accurately than with EVM. Motion magnification image processing technology has potential clinical importance as a video-optimizing modality in endoscopic and laparoscopic surgery. Barely detectable (micro)movements can be visualized using this noninvasive, marker-free method. Despite these optimistic results, the technology requires considerable further technical development and clinical testing.
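The core of Eulerian-style motion magnification, stripped down to a single pixel trace, is a temporal bandpass of the intensity signal added back with a gain. A full implementation (EVM or CRSMM) also decomposes each frame spatially; that step is omitted in this sketch, and the passband and gain are illustrative values:

```python
import numpy as np

def magnify(trace, alpha=10.0, lo=1.0, hi=3.0, fs=30.0):
    """Amplify the temporal band [lo, hi] Hz of one pixel's intensity trace
    by alpha and add it back to the original signal."""
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    spec_band = np.where(band, spec, 0.0)
    return trace + alpha * np.fft.irfft(spec_band, n=len(trace))

fs, n = 30.0, 300
t = np.arange(n) / fs
pulse = 0.01 * np.sin(2 * np.pi * 1.5 * t)   # faint pulsation at ~90 bpm
trace = 0.5 + pulse                           # barely visible on top of baseline
out = magnify(trace)
```

Amplifying only a narrow physiological band is what lets vascular pulsation become visible without amplifying broadband camera noise by the same factor.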
Full image-processing pipeline in field-programmable gate array for a small endoscopic camera
NASA Astrophysics Data System (ADS)
Mostafa, Sheikh Shanawaz; Sousa, L. Natércia; Ferreira, Nuno Fábio; Sousa, Ricardo M.; Santos, Joao; Wäny, Martin; Morgado-Dias, F.
2017-01-01
Endoscopy is an imaging procedure used for diagnosis as well as for some surgical purposes. The camera used for endoscopy should be small and able to produce a good-quality image or video, to reduce patient discomfort and to increase the efficiency of the medical team. To achieve these fundamental goals, a small endoscopy camera with a footprint of 1 mm × 1 mm × 1.65 mm is used. Due to the physical properties of the sensor and the limitations of the human vision system, different image-processing algorithms, such as noise reduction, demosaicking, and gamma correction, among others, are needed to faithfully reproduce the image or video. A full image-processing pipeline is implemented on a field-programmable gate array (FPGA) to accomplish a high frame rate of 60 fps with minimal processing delay. Along with this, a viewer has also been developed to display and control the image-processing pipeline. Control and data transfer are done by a USB 3.0 endpoint on the computer. The fully developed system achieves real-time processing of the image and fits in a Xilinx Spartan-6 LX150 FPGA.
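Gamma correction in such a pipeline is typically a precomputed lookup table, the form that maps naturally onto FPGA block RAM: compute the curve once, then route every pixel through the table. Python stands in for the hardware description here, and gamma = 2.2 is a typical display assumption rather than the paper's value:

```python
import numpy as np

def make_gamma_lut(gamma=2.2, bits=8):
    """Precompute the gamma curve as an integer lookup table, as a hardware
    pipeline would store it in block RAM (one entry per input code)."""
    levels = 2 ** bits
    x = np.arange(levels) / (levels - 1)
    return np.round((x ** (1.0 / gamma)) * (levels - 1)).astype(np.uint8)

lut = make_gamma_lut()
frame = np.array([[0, 64], [128, 255]], dtype=np.uint8)
corrected = lut[frame]   # per-pixel table lookup, one clock cycle in hardware
```

A 256-entry table is tiny; the same pattern scales to 10- or 12-bit sensor data at the cost of a proportionally larger memory.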
A novel Kalman filter based video image processing scheme for two-photon fluorescence microscopy
NASA Astrophysics Data System (ADS)
Sun, Wenqing; Huang, Xia; Li, Chunqiang; Xiao, Chuan; Qian, Wei
2016-03-01
Two-photon fluorescence microscopy (TPFM) is an ideal optical imaging technique for monitoring the interaction between fast-moving viruses and their hosts. However, due to strong, unavoidable background noise from the culture, videos obtained by this technique are too noisy to reveal this fast infection process without video image processing. In this study, we developed a novel scheme to eliminate background noise, recover background bacteria images, and improve video quality. In our scheme, we modified and implemented the following methods for both host and virus videos: a correlation method, a round-identification method, tree-structured nonlinear filters, Kalman filters, and a cell-tracking method. After these procedures, most of the noise was eliminated and the host images were recovered, with their directions of motion and speeds highlighted in the videos. From the analysis of the processed videos, 93% of bacteria and 98% of viruses were correctly detected in each frame on average.
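A pixel-wise temporal Kalman filter, the denoising core of such a scheme, models each pixel as a slowly varying value observed through additive noise. The process and measurement noise values below are illustrative choices, not those of the TPFM system:

```python
import numpy as np

def kalman_denoise(frames, q=1e-4, r=0.05):
    """Run an independent scalar Kalman filter at every pixel:
    predict (variance grows by q), then update toward the new frame."""
    x = frames[0].astype(np.float64)   # state estimate, seeded with frame 0
    p = np.ones_like(x)                # estimate variance
    out = [x.copy()]
    for z in frames[1:]:
        p = p + q                      # predict: uncertainty grows
        k = p / (p + r)                # Kalman gain
        x = x + k * (z - x)            # update with the new measurement
        p = (1.0 - k) * p
        out.append(x.copy())
    return np.stack(out)

rng = np.random.default_rng(3)
clean = np.full((8, 8), 0.5)                               # static toy scene
frames = clean + 0.2 * rng.standard_normal((50, 8, 8))     # heavy frame noise
denoised = kalman_denoise(frames)
```

The ratio q/r sets the trade-off: small q behaves like a long running average (strong denoising, slow response), large q tracks fast-moving objects at the cost of less smoothing.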
NASA Astrophysics Data System (ADS)
Wang, Jiaoyang; Wang, Lin; Yang, Ying; Gong, Rui; Shao, Xiaopeng; Liang, Chao; Xu, Jun
2016-05-01
In this paper, an integrated design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, preventing efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function of optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter is then adopted for the image-processing simulation, with mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit of the optical imaging system are not the best, the system provides image signals that are more suitable for image processing. In conclusion, the integrated design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integrated design strategy has obvious advantages: it simplifies the structure and reduces cost while simultaneously yielding high-resolution images, giving it a promising perspective for industrial application.
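The digital half of such a joint design, Wiener deconvolution evaluated by MSE, can be sketched on a 1-D signal. The blur kernel and noise level below are assumptions standing in for the low-resolution optics, not the paper's actual system:

```python
import numpy as np

# A sharp bar pattern degraded by a known blur (the "low-resolution optics")
# plus sensor noise.
n = 256
t = np.arange(n)
ideal = (np.sin(2 * np.pi * 5 * t / n) >= 0).astype(float)
psf = np.ones(9) / 9.0                       # assumed optical blur kernel
H = np.fft.fft(psf, n)                       # optical transfer function
rng = np.random.default_rng(4)
blurred = np.real(np.fft.ifft(np.fft.fft(ideal) * H))
blurred += 0.005 * rng.standard_normal(n)

# Wiener deconvolution with an assumed noise-to-signal power ratio, then the
# MSE criterion used in the paper's simulations.
nsr = 1e-3
restored = np.real(np.fft.ifft(np.fft.fft(blurred) *
                               np.conj(H) / (np.abs(H) ** 2 + nsr)))
mse_blurred = float(np.mean((blurred - ideal) ** 2))
mse_restored = float(np.mean((restored - ideal) ** 2))
```

The joint-design insight is that the optics only need to preserve the frequencies the Wiener step can recover, which relaxes the purely optical merit function.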
Automating PACS quality control with the Vanderbilt image processing enterprise resource
NASA Astrophysics Data System (ADS)
Esparza, Michael L.; Welch, E. Brian; Landman, Bennett A.
2012-02-01
Precise image acquisition is an integral part of modern patient care and medical imaging research. Periodic quality control using standardized protocols and phantoms ensures that scanners are operating according to specifications, yet such procedures do not ensure that individual datasets are free from corruption, for example due to patient motion, transient interference, or physiological variability. If unacceptable artifacts are noticed during scanning, a technologist can repeat a procedure. Yet substantial delays may be incurred if a problematic scan is not noticed until a radiologist reads it or an automated algorithm fails. Given the scores of slices in typical three-dimensional scans and the wide variety of potential use cases, a technologist cannot practically be expected to inspect all images. In large-scale research, automated pipeline systems have had great success in achieving high throughput. However, clinical and institutional workflows are largely based on DICOM and PACS technologies, and these systems are not readily compatible with research systems due to security and privacy restrictions. Hence, quantitative quality control has been relegated to individual investigators and too often neglected. Herein, we propose a scalable system, the Vanderbilt Image Processing Enterprise Resource (VIPER), to integrate modular quality control and image analysis routines with a standard PACS configuration. This server unifies image processing routines at an institutional level and provides a simple interface so that investigators can collaborate to deploy new analysis technologies. VIPER integrates with high-performance computing environments and has successfully analyzed all standard scans from our institutional research center over the course of the last 18 months.
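One kind of automated check such a server can run is flagging volumes whose per-slice statistics contain outliers, e.g. a slice corrupted by transient interference. The 3-sigma rule below is a generic illustrative choice, not VIPER's actual criterion:

```python
import numpy as np

def flag_outlier_slices(volume, k=3.0):
    """Return indices of slices whose mean intensity deviates from the
    volume-wide mean by more than k standard deviations."""
    means = volume.mean(axis=(1, 2))
    mu, sd = means.mean(), means.std()
    return np.where(np.abs(means - mu) > k * sd)[0]

rng = np.random.default_rng(7)
vol = rng.normal(100.0, 1.0, (40, 16, 16))   # synthetic, mostly clean volume
vol[23] += 50.0                              # one corrupted slice
bad = flag_outlier_slices(vol)
```

In a pipeline setting, such a flag would route the dataset back to the technologist before the patient leaves the scanner, which is the latency problem the abstract describes.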
Cryo-balloon catheter localization in fluoroscopic images
NASA Astrophysics Data System (ADS)
Kurzendorfer, Tanja; Brost, Alexander; Jakob, Carolin; Mewes, Philip W.; Bourier, Felix; Koch, Martin; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert
2013-03-01
Minimally invasive catheter ablation has become the preferred treatment option for atrial fibrillation. Although the standard ablation procedure involves ablation points set by radio-frequency catheters, cryo-balloon catheters have been reported to be even more advantageous in certain cases. As electro-anatomical mapping systems do not support cryo-balloon ablation procedures, X-ray guidance is needed. However, current methods to support cryo-balloon catheters in fluoroscopically guided ablation procedures rely heavily on manual user interaction. To improve this, we propose a first method for automatic cryo-balloon catheter localization in fluoroscopic images based on a blob detection algorithm. Our method is evaluated on 24 clinical images from 17 patients. The method successfully detected the cryo-balloon in 22 out of 24 images, yielding a success rate of 91.6%. The successful localizations achieved an accuracy of 1.00 mm +/- 0.44 mm. Even though our method currently fails in 8.4% of the available images, it still offers a significant improvement over manual methods. Furthermore, detecting a landmark point along the cryo-balloon catheter can be an important step for additional post-processing operations.
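Blob detection for a dark, roughly circular balloon can be sketched with a difference of Gaussians (a common approximation to the Laplacian of Gaussian) at a scale matching the balloon radius. The image below is synthetic, and the scale, geometry, and noise level are assumptions, not the clinical setup:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur using 1-D convolutions along each axis."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def detect_dark_blob(img, sigma=6.0):
    """Difference of Gaussians peaks at the centre of a dark disc whose
    radius roughly matches sigma; return that peak location."""
    dog = gaussian_blur(img, 1.6 * sigma) - gaussian_blur(img, sigma)
    return np.unravel_index(np.argmax(dog), dog.shape)

# Synthetic fluoroscopy-like frame: a dark balloon-sized disc plus noise.
yy, xx = np.mgrid[0:128, 0:128]
image = np.ones((128, 128))
image[(yy - 40) ** 2 + (xx - 90) ** 2 < 8 ** 2] = 0.2
image += 0.05 * np.random.default_rng(5).standard_normal((128, 128))
cy, cx = detect_dark_blob(image)
```

Running the detector over several sigmas would also estimate the balloon radius, which is useful for the post-processing steps the abstract mentions.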
Steerable Principal Components for Space-Frequency Localized Images
Landa, Boris; Shkolnisky, Yoel
2017-01-01
As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWFs expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods, and more importantly, provides us with rigorous error bounds on the entire procedure. PMID:29081879
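The key computational claim, that PCA over a dataset together with all of its rotations reduces to an eigen-decomposition in an appropriate basis, can be illustrated in a simplified 1D analogue in which planar rotation becomes cyclic shift and the appropriate basis is the discrete Fourier basis. This sketch is not the authors' PSWF construction; it only demonstrates the underlying diagonalization.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 5, 16
X = rng.standard_normal((n, d))

# Covariance of the dataset augmented with ALL cyclic shifts
# (the 1D analogue of "all planar rotations" of each image).
C = np.zeros((d, d))
for x in X:
    for s in range(d):
        xs = np.roll(x, s)
        C += np.outer(xs, xs)
C /= n * d

# The shift-averaged covariance is circulant, so its eigenvalues are
# the mean power spectrum of the data -- no dense eigendecomposition needed.
lam_fft = np.mean(np.abs(np.fft.fft(X, axis=1)) ** 2, axis=0) / d
lam_eig = np.linalg.eigvalsh(C)
```

The two eigenvalue sets coincide up to ordering, which is the 1D version of the block-diagonal structure the abstract exploits.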
Interactive boundary delineation of agricultural lands using graphics workstations
NASA Technical Reports Server (NTRS)
Cheng, Thomas D.; Angelici, Gary L.; Slye, Robert E.; Ma, Matt
1992-01-01
A review is presented of the computer-assisted stratification and sampling (CASS) system developed to delineate the boundaries of sample units for survey procedures. CASS stratifies the sampling units by land-cover and land-use type, employing image-processing software and hardware. This procedure generates coverage areas and the boundaries of stratified sampling units that are utilized for subsequent sampling procedures from which agricultural statistics are developed.
Mihaylova, Milena; Manahilov, Velitchko
2010-11-24
Research has shown that the processing time for discriminating illusory contours is longer than for real contours. We know little, however, about whether the visual processes associated with detecting regions of illusory surfaces are also slower than those responsible for detecting luminance-defined images. Using a speed-accuracy trade-off (SAT) procedure, we measured accuracy as a function of processing time for detecting illusory Kanizsa-type and luminance-defined squares embedded in 2D static luminance noise. The data revealed that the illusory images were detected at a slower processing speed than the real images, while the points in time when accuracy departed from chance were not significantly different for the two stimuli. The classification images for detecting illusory and real squares showed that observers employed similar detection strategies, using surface regions of the real and illusory squares. The lack of significant differences between the x-intercepts of the SAT functions for illusory and luminance-modulated stimuli suggests that the detection of surface regions of both images could be based on activation of a single mechanism (the dorsal magnocellular visual pathway). The slower speed for detecting illusory images as compared to luminance-defined images could be attributed to slower processes of filling-in of regions of illusory images within the dorsal pathway.
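SAT data of the kind described are commonly summarized by fitting a shifted-exponential model, d'(t) = λ(1 − exp(−β(t − δ))) for t > δ and 0 otherwise, where δ is the x-intercept and λ the asymptote discussed in the abstract. The sketch below fits that generic model to fabricated data; the model form is standard in the SAT literature, but the parameter values are illustrative assumptions, not the authors' fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def sat(t, lam, beta, delta):
    """Shifted-exponential SAT model: chance until delta, then rise to asymptote lam."""
    rise = lam * (1.0 - np.exp(-beta * np.clip(t - delta, 0.0, None)))
    return np.where(t > delta, rise, 0.0)

t = np.linspace(0.05, 2.0, 40)             # processing times (s)
true_lam, true_beta, true_delta = 2.5, 4.0, 0.35
rng = np.random.default_rng(2)
dprime = sat(t, true_lam, true_beta, true_delta) + 0.02 * rng.standard_normal(t.size)

# Recover the asymptote, rate, and intercept from the noisy curve
(lam, beta, delta), _ = curve_fit(sat, t, dprime, p0=(2.0, 3.0, 0.3))
```

Comparing fitted x-intercepts (δ) across conditions is exactly the comparison the abstract reports as non-significant.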
Efficacy of a novel IGS system in atrial septal defect repair
NASA Astrophysics Data System (ADS)
Mefleh, Fuad N.; Baker, G. Hamilton; Kwartowitz, David M.
2013-03-01
Congenital heart disease occurs in 107.6 out of 10,000 live births, with Atrial Septal Defects (ASD) accounting for 10% of these conditions. Historically, ASDs were treated with open heart surgery using cardiopulmonary bypass, allowing a patch to be sewn over the defect. In 1976, King et al. demonstrated use of a transcatheter occlusion procedure, thus reducing the invasiveness of ASD repair. Localization during these catheter-based procedures has traditionally relied on bi-plane fluoroscopy; more recently, trans-esophageal echocardiography (TEE) and intra-cardiac echocardiography (ICE) have been used to navigate these procedures. Although there is a high success rate using the transcatheter occlusion procedure, fluoroscopy poses a radiation dose risk to both patient and clinician. The impact of this dose on patients is important, as many of those undergoing this procedure are children, who have an increased risk associated with radiation exposure. Their longer life expectancy compared with adults provides a larger window of opportunity for expressing the damaging effects of ionizing radiation. In addition, epidemiologic studies of exposed populations have demonstrated that children are considerably more sensitive to the carcinogenic effects of radiation. Image-guided surgery (IGS) uses pre-operative and intra-operative images to guide surgery or an interventional procedure. Central to every IGS system is a software application capable of processing and displaying patient images, registering between multiple coordinate systems, and interfacing with a tool tracking system. We have developed a novel image-guided surgery framework called Kit for Navigation by Image Focused Exploration (KNIFE). In this work we assess the efficacy of this image-guided navigation system for ASD repair using a series of mock clinical experiments designed to simulate ASD repair device deployment.
Interventional spinal procedures guided and controlled by a 3D rotational angiographic unit.
Pedicelli, Alessandro; Verdolotti, Tommaso; Pompucci, Angelo; Desiderio, Flora; D'Argento, Francesco; Colosimo, Cesare; Bonomo, Lorenzo
2011-12-01
The aim of this paper is to demonstrate the usefulness of 2D multiplanar reformatted (MPR) images obtained from rotational acquisitions with cone-beam computed tomography technology during percutaneous extra-vascular spinal procedures performed in the angiography suite. We used a 3D rotational angiographic unit with a flat panel detector. MPR images were obtained from a rotational acquisition of 8 s (240 images at 30 fps), a tube rotation of 180°, and post-processing of 5 s on a local workstation. Multislice CT (MSCT) is the best guidance system for spinal approaches, permitting direct tomographic visualization of each spinal structure. Many operators, however, are trained with fluoroscopy; it is less expensive, allows real-time guidance, and in many centers the angiography suite is more frequently available for percutaneous procedures. We present our 6-year experience of fluoroscopy-guided spinal procedures, which were performed under different conditions using MPR images. We illustrate cases of vertebroplasty, epidural injections, selective foraminal nerve root block, facet block, percutaneous treatment of disc herniation and spine biopsy, all performed with the help of MPR images for guidance and control in the event of difficult or anatomically complex access. The integrated use of "CT-like" MPR images allows the execution of spinal procedures under fluoroscopy guidance alone in all cases of dorso-lumbar access, with an evident limitation of risks and complications and without recourse to MSCT guidance, thus eliminating CT-room time (often bearing high diagnostic charges) and avoiding organizational problems for procedures that need, for example, combined use of a C-arm in the CT room.
NASA Astrophysics Data System (ADS)
Akiyama, Akira; Mutoh, Eiichiro; Kumagai, Hideo
2014-09-01
We have developed stereo matching image processing based on synthesized color, using corresponding areas of the same synthesized color for ranging an object and for image recognition. Typical images from a pair of stereo imagers may disagree with each other due to size changes, missed placement, appearance changes and deformation of characteristic areas. We constructed the synthesized color and the corresponding color areas of the same synthesized color to make the stereo matching distinct. The construction proceeds in three steps. The first step is making a binary edge image by differentiating the focused image from each imager and verifying that the differentiated image has a normal frequency distribution, in order to find the threshold level for the binary procedure. We used the Daubechies wavelet transformation for the differentiating procedures in this study. The second step is deriving the synthesized color by averaging color brightness between binary edge points, alternately with respect to the horizontal and vertical directions. The color-averaging procedure is repeated until the fluctuation of the averaged color becomes negligible with respect to the 256 brightness levels. The third step is extracting areas of the same synthesized color by collecting pixels of the same synthesized color and grouping these pixels by 4-directional connectivity relations. The matching areas for the stereo matching are determined using the synthesized color areas; the matching point is the center of gravity of each synthesized color area. The parallax between a pair of images is then derived easily from the centers of gravity of the synthesized color areas. A stereo matching experiment was performed on a toy soccer ball. This experiment showed that stereo matching by the synthesized color technique is simple and effective.
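The third step described in the abstract, grouping same-color pixels by 4-directional connectivity and matching on centers of gravity, can be sketched with standard labelling tools. This is a simplified stand-in on binary masks, not the authors' synthesized-color implementation.

```python
import numpy as np
from scipy import ndimage

def region_centroids(mask):
    """Group pixels with 4-directional connectivity; return centroids of the regions."""
    four = np.array([[0, 1, 0],
                     [1, 1, 1],
                     [0, 1, 0]])           # 4-connectivity structuring element
    labels, n = ndimage.label(mask, structure=four)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

left = np.zeros((20, 30), bool)
right = np.zeros((20, 30), bool)
left[8:12, 10:14] = True     # a matched color area in the left image
right[8:12, 16:20] = True    # the same area shifted horizontally in the right image

(ly, lx), = region_centroids(left)
(ry, rx), = region_centroids(right)
parallax = rx - lx           # disparity between the centers of gravity
```

Here the disparity comes out as 6 pixels, the exact horizontal shift between the two masks.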
Segmentation of bone and soft tissue regions in digital radiographic images of extremities
NASA Astrophysics Data System (ADS)
Pakin, S. Kubilay; Gaborski, Roger S.; Barski, Lori L.; Foos, David H.; Parker, Kevin J.
2001-07-01
This paper presents an algorithm for segmentation of computed radiography (CR) images of extremities into bone and soft tissue regions. The algorithm is region-based: regions are constructed using a growing procedure with two different statistical tests. Following the growing process, a tissue classification procedure is employed. The purpose of the classification is to label each region as either bone or soft tissue. This binary classification goal is achieved by using a voting procedure that clusters the regions in each neighborhood system into two classes. The voting procedure provides a crucial compromise between local and global analysis of the image, which is necessary due to the strong exposure variations seen on the imaging plate. Also, the existence of regions large enough that exposure variations can be observed across them makes it necessary to use overlapping blocks during the classification. After the classification step, the resulting bone and soft tissue regions are refined by fitting a second-order surface to each tissue and re-evaluating the label of each region according to the distance between the region and the surfaces. The performance of the algorithm was tested on a variety of extremity images using manually segmented images as the gold standard. The experiments showed that our algorithm provided a bone boundary with an average area overlap of 90% compared to the gold standard.
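The refinement step fits a second-order surface to each tissue class. A generic least-squares version of such a fit might look like the following sketch; the coefficients and grid are made up for illustration and are not the paper's data.

```python
import numpy as np

def fit_quadratic_surface(x, y, z):
    """Least-squares fit of z = a + b*x + c*y + d*x**2 + e*x*y + f*y**2."""
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Synthetic smooth "exposure trend" over a 32x32 patch
yy, xx = np.mgrid[0:32, 0:32]
x, y = xx.ravel().astype(float), yy.ravel().astype(float)
true = np.array([100.0, 0.5, -0.3, 0.01, 0.002, -0.008])
z = np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2]) @ true

coeffs = fit_quadratic_surface(x, y, z)
```

On noise-free data the six coefficients are recovered exactly; in the paper's setting the residual distance of each region to such surfaces drives the re-labelling.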
PI2GIS: processing image to geographical information systems, a learning tool for QGIS
NASA Astrophysics Data System (ADS)
Correia, R.; Teodoro, A.; Duarte, L.
2017-10-01
To perform an accurate interpretation of remote sensing images, it is necessary to extract information using different image processing techniques. Nowadays, it has become usual to use image processing plugins to add new capabilities/functionalities integrated in Geographical Information System (GIS) software. The aim of this work was to develop an open source application to automatically process and classify remote sensing images from a set of satellite input data. The application was integrated in a GIS software package (QGIS), automating several image processing steps. The use of QGIS for this purpose is justified since it is easy and quick to develop new plugins using the Python language. This plugin is inspired by the Semi-Automatic Classification Plugin (SCP) developed by Luca Congedo. SCP allows the supervised classification of remote sensing images, the calculation of vegetation indices such as NDVI (Normalized Difference Vegetation Index) and EVI (Enhanced Vegetation Index), and other image processing operations. When analysing SCP, it was realized that a set of operations that are very useful in teaching remote sensing and image processing classes were lacking, such as the visualization of histograms, the application of filters, different image corrections, unsupervised classification and the computation of several environmental indices. The new set of operations included in the PI2GIS plugin can be divided into three groups: pre-processing, processing, and classification procedures. The application was tested using a Landsat 8 OLI image of a northern area of Portugal.
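Among the indices mentioned, NDVI has a simple closed form, (NIR − Red) / (NIR + Red). A minimal NumPy sketch follows; the band values are made up, and real plugins would read the bands from the Landsat 8 OLI raster (NIR is band 5, red is band 4).

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index; eps guards against 0/0 pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Tiny mock reflectance rasters (2x2 pixels)
nir = np.array([[0.5, 0.4],
                [0.3, 0.1]])
red = np.array([[0.1, 0.2],
                [0.3, 0.1]])
index = ndvi(nir, red)   # dense vegetation gives values toward +1
```

Pixel (0, 0) yields (0.5 − 0.1) / (0.5 + 0.1) ≈ 0.667, while the equal-reflectance pixels yield 0.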
Fixed-Cell Imaging of Schizosaccharomyces pombe.
Hagan, Iain M; Bagley, Steven
2016-07-01
The acknowledged genetic malleability of fission yeast has been matched by impressive cytology to drive major advances in our understanding of basic molecular cell biological processes. In many of the more recent studies, traditional approaches of fixation followed by processing to accommodate classical staining procedures have been superseded by live-cell imaging approaches that monitor the distribution of fusion proteins between a molecule of interest and a fluorescent protein. Although such live-cell imaging is uniquely informative for many questions, fixed-cell imaging remains the better option for others and is an important, sometimes critical, complement to the analysis of fluorescent fusion proteins by live-cell imaging. Here, we discuss the merits of fixed- and live-cell imaging as well as specific issues for fluorescence microscopy imaging of fission yeast. © 2016 Cold Spring Harbor Laboratory Press.
3D X-Ray Nanotomography of Cells Grown on Electrospun Scaffolds.
Bradley, Robert S; Robinson, Ian K; Yusuf, Mohammed
2017-02-01
Here, it is demonstrated that X-ray nanotomography with Zernike phase contrast can be used for 3D imaging of cells grown on electrospun polymer scaffolds. The scaffold fibers and cells are simultaneously imaged, enabling the influence of scaffold architecture on cell location and morphology to be studied. The high resolution enables subcellular details to be revealed. The X-ray imaging conditions were optimized to reduce scan times, making it feasible to scan multiple regions of interest in relatively large samples. An image processing procedure is presented which enables scaffold characteristics and cell location to be quantified. The procedure is demonstrated by comparing the ingrowth of cells after culture for 3 and 6 days. © 2016 The Authors. Published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A learning tool for optical and microwave satellite image processing and analysis
NASA Astrophysics Data System (ADS)
Dashondhi, Gaurav K.; Mohanty, Jyotirmoy; Eeti, Laxmi N.; Bhattacharya, Avik; De, Shaunak; Buddhiraju, Krishna M.
2016-04-01
This paper presents a self-learning tool, which contains a number of virtual experiments for the processing and analysis of Optical/Infrared and Synthetic Aperture Radar (SAR) images. The tool is named the Virtual Satellite Image Processing and Analysis Lab (v-SIPLAB). Experiments included in the learning tool are related to: Optical/Infrared - image and edge enhancement, smoothing, PCT, vegetation indices, mathematical morphology, accuracy assessment, supervised/unsupervised classification, etc.; Basic SAR - parameter extraction and range spectrum estimation, range compression, Doppler centroid estimation, azimuth reference function generation and compression, multilooking, image enhancement, texture analysis, edge detection, etc.; SAR Interferometry - baseline calculation, extraction of single-look SAR images, registration, resampling, and interferogram generation; SAR Polarimetry - conversion of AirSAR or Radarsat data to S2/C3/T3 matrices, speckle filtering, power/intensity image generation, decomposition of S2/C3/T3, and classification of S2/C3/T3 using the Wishart classifier [3]. A professional-quality polarimetric SAR software package can be found at [8], a part of whose functionality can be found in our system. The learning tool also contains other modules, besides the executable software experiments, such as aim, theory, procedure, interpretation, quizzes, links to additional reading material and user feedback. Students can gain an understanding of Optical and SAR remotely sensed images through discussion of basic principles, supported by structured procedures for running and interpreting the experiments. Quizzes for self-assessment and a provision for online feedback are also provided to make this learning tool self-contained. Results can be downloaded after performing the experiments.
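One of the listed Basic SAR experiments, range compression, is classically a matched filter applied in the frequency domain: correlate the received echo with the transmitted chirp and read the target delay off the correlation peak. The sketch below is a toy 1D version with assumed chirp parameters, not the tool's implementation.

```python
import numpy as np

# Reference linear-FM chirp; bandwidth kept comfortably below Nyquist
n_pulse, n_samples, delay = 64, 256, 70
t = np.linspace(-1.0, 1.0, n_pulse)
chirp = np.exp(1j * np.pi * 10.0 * t ** 2)

ref = np.zeros(n_samples, complex)
ref[:n_pulse] = chirp
rx = np.roll(ref, delay)          # noiseless echo delayed by 70 samples

# Range compression = circular cross-correlation with the reference, via FFT
compressed = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(ref)))
peak = int(np.argmax(np.abs(compressed)))
```

The correlation magnitude peaks at sample 70, i.e. the target's range bin, which is what the range-compression experiment visualizes.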
A methodology for evaluation of an interactive multispectral image processing system
NASA Technical Reports Server (NTRS)
Kovalick, William M.; Newcomer, Jeffrey A.; Wharton, Stephen W.
1987-01-01
Because of the considerable cost of an interactive multispectral image processing system, an evaluation of a prospective system should be performed to ascertain if it will be acceptable to the anticipated users. Evaluation of a developmental system indicated that the important system elements include documentation, user friendliness, image processing capabilities, and system services. The criteria and evaluation procedures for these elements are described herein. The following factors contributed to the success of the evaluation of the developmental system: (1) careful review of documentation prior to program development, (2) construction and testing of macromodules representing typical processing scenarios, (3) availability of other image processing systems for referral and verification, and (4) use of testing personnel with an applications perspective and experience with other systems. This evaluation was done in addition to and independently of program testing by the software developers of the system.
NASA Technical Reports Server (NTRS)
Hepner, George F.; Logan, Thomas; Ritter, Niles; Bryant, Nevin
1990-01-01
Recent research has shown an artificial neural network (ANN) to be capable of pattern recognition and the classification of image data. This paper examines the potential for the application of neural network computing to satellite image processing. A second objective is to provide a preliminary comparison of conventional and ANN classification. An artificial neural network can be trained to do land-cover classification of satellite imagery using selected sites representative of each class, in a manner similar to conventional supervised classification. One of the major problems associated with recognition and classification of patterns from remotely sensed data is the time and cost of developing a set of training sites. This research compares the use of an ANN back-propagation classification procedure with a conventional supervised maximum likelihood classification procedure using a minimal training set. When using a minimal training set, the neural network is able to provide a land-cover classification superior to the classification derived from the conventional classification procedure. This research is the foundation for developing application parameters for further prototyping of software and hardware implementations for artificial neural networks in satellite image and geographic information processing.
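The conventional baseline in this comparison, supervised maximum likelihood classification, assigns each pixel to the class whose Gaussian model (per-class mean and covariance estimated from training sites) gives the highest likelihood. A compact sketch with made-up two-band training data:

```python
import numpy as np

def train_ml(X, y):
    """Estimate per-class mean and covariance for Gaussian ML classification."""
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return stats

def classify_ml(X, stats):
    """Assign each sample to the class with the highest Gaussian log-likelihood."""
    classes = sorted(stats)
    scores = []
    for c in classes:
        mu, cov = stats[c]
        inv, (_, logdet) = np.linalg.inv(cov), np.linalg.slogdet(cov)
        d = X - mu
        scores.append(-0.5 * (np.einsum('ij,jk,ik->i', d, inv, d) + logdet))
    return np.array(classes)[np.argmax(scores, axis=0)]

# Hypothetical two-band reflectances for two well-separated land-cover classes
rng = np.random.default_rng(3)
water = rng.normal([0.1, 0.05], 0.02, (40, 2))
forest = rng.normal([0.3, 0.5], 0.02, (40, 2))
X = np.vstack([water, forest])
y = np.array([0] * 40 + [1] * 40)

pred = classify_ml(X, train_ml(X, y))
```

With classes this well separated, the ML rule labels every training pixel correctly; the paper's point is how the two approaches compare when the training set shrinks.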
Hajihosseini, Payman; Anzehaee, Mohammad Mousavi; Behnam, Behzad
2018-05-22
Early fault detection and isolation in industrial systems is a critical factor in preventing equipment damage. In the proposed method, instead of using the time signals of sensors directly, the 2D image obtained by placing these signals next to each other in a matrix has been used, and a novel fault detection and isolation procedure has then been carried out based on image processing techniques. Different features, including texture, the wavelet transform, and the mean and standard deviation of the image, together with MLP and RBF neural network based classifiers, have been used for this purpose. The obtained results indicate the notable efficacy and success of the proposed method in detecting and isolating faults of the Tennessee Eastman benchmark process and its superiority over previous techniques. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
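The signals-to-image construction and a simple feature mix can be sketched as follows. The Haar step stands in for the paper's (unspecified) wavelet features, and the sensor channels are mock sine waves; none of this reproduces the authors' exact feature set.

```python
import numpy as np

def signals_to_image(signals):
    """Stack sensor time signals row by row into a 2D 'image' matrix."""
    return np.vstack(signals)

def haar_level1(img):
    """One level of a horizontal Haar transform: averages and details."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0
    return a, d

def feature_vector(img):
    """Mean/std of the image plus detail-band energy, echoing the abstract's feature mix."""
    _, d = haar_level1(img)
    return np.array([img.mean(), img.std(), np.mean(d ** 2)])

t = np.linspace(0.0, 1.0, 100)
sensors = [np.sin(2 * np.pi * f * t) for f in (1, 2, 3)]   # three mock sensor channels
img = signals_to_image(sensors)
feats = feature_vector(img)
```

The resulting feature vector would then feed an MLP or RBF classifier as in the paper.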
Automated and unsupervised detection of malarial parasites in microscopic images.
Purwar, Yashasvi; Shah, Sirish L; Clarke, Gwen; Almugairi, Areej; Muehlenbachs, Atis
2011-12-13
Malaria is a serious infectious disease. According to the World Health Organization, it is responsible for nearly one million deaths each year. There are various techniques to diagnose malaria, of which manual microscopy is considered to be the gold standard. However, due to the number of steps required in manual assessment, this diagnostic method is time consuming (leading to late diagnosis) and prone to human error (leading to erroneous diagnosis), even in experienced hands. The focus of this study is to develop a robust, unsupervised and sensitive malaria screening technique with low material cost and one that has an advantage over other techniques in that it minimizes human reliance and is, therefore, more consistent in applying diagnostic criteria. A method based on digital image processing of Giemsa-stained thin smear images is developed to facilitate the diagnostic process. The diagnosis procedure is divided into two parts: enumeration and identification. The image-based method presented here is designed to automate the process of enumeration and identification, with the main advantage being its ability to carry out the diagnosis in an unsupervised manner while maintaining high sensitivity, thus reducing cases of false negatives. The image-based method is tested on more than 500 images from two independent laboratories. The aim is to distinguish between positive and negative cases of malaria using thin smear blood slide images. Due to the unsupervised nature of the method, it requires minimal human intervention, thus speeding up the whole process of diagnosis. Overall sensitivity to capture cases of malaria is 100%, and specificity ranges from 50% to 88% for all species of malaria parasites. This image-based screening method will speed up the whole process of diagnosis and is advantageous over laboratory procedures that are prone to errors and where pathological expertise is minimal. Furthermore, this method provides a consistent and robust way of generating parasite clearance curves.
Toward a perceptual image quality assessment of color quantized images
NASA Astrophysics Data System (ADS)
Frackiewicz, Mariusz; Palus, Henryk
2018-04-01
Color image quantization is an important operation in the field of color image processing. In this paper, we consider new perceptual image quality metrics for the assessment of quantized images. These types of metrics, e.g., DSCSI, MDSIs, MDSIm and HPSI, achieve the highest correlation coefficients with MOS during tests on six publicly available image databases. The research was limited to images distorted by two types of compression: JPG and JPG2K. Statistical analysis of the correlation coefficients, based on the Friedman test and post-hoc procedures, showed that the differences between the four new perceptual metrics are not statistically significant.
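The Friedman test used for the statistical comparison is available in SciPy. A sketch with fabricated per-database scores follows; the metric names come from the abstract, but the numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical correlation-with-MOS scores of four metrics on six databases.
# A shared per-database "difficulty" plus small metric-specific noise mimics
# the paper's situation of closely matched metrics.
rng = np.random.default_rng(5)
base = rng.uniform(0.80, 0.95, size=6)
dscsi, mdsis, mdsim, hpsi = (np.clip(base + rng.normal(0, 0.01, 6), 0, 1)
                             for _ in range(4))

stat, p = friedmanchisquare(dscsi, mdsis, mdsim, hpsi)
```

When the Friedman test does reject, post-hoc pairwise procedures (as in the paper) identify which metrics differ.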
Coltelli, Primo; Barsanti, Laura; Evangelista, Valter; Frassanito, Anna Maria; Gualtieri, Paolo
2016-12-01
A novel procedure for deriving the absorption spectrum of an object spot from the colour values of the corresponding pixel(s) in its image is presented. Any digital image acquired by a microscope can be used; typical applications are the analysis of cellular/subcellular metabolic processes under physiological conditions and in response to environmental stressors (e.g. heavy metals), and the measurement of chromophore composition, distribution and concentration in cells. In this paper, we challenged the procedure with images of algae acquired by means of a CCD camera mounted on a microscope. The many colours algae display result from combinations of chromophores whose spectroscopic information is limited to organic-solvent extracts, which suffer from displacements, amplifications, and contraction/dilatation with respect to spectra recorded inside the cell. Hence, preliminary processing is necessary, which consists of in vivo measurement of the absorption spectra of the photosynthetic compartments of algal cells and determination of the spectra of the single chromophores inside the cell. The final step of the procedure consists in the reconstruction of the absorption spectrum of the cell spot from the colour values of the corresponding pixel(s) in its digital image by minimization of a system of transcendental equations based on the absorption spectra of the chromophores under physiological conditions. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
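The final reconstruction step solves for chromophore contributions given pixel colour. The authors minimize a system of transcendental equations; the sketch below substitutes a much simpler linear-mixing model solved by non-negative least squares, with made-up chromophore responses, purely to illustrate the inversion idea.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: assumed RGB responses of two hypothetical chromophores
A = np.array([[0.9, 0.1],
              [0.4, 0.7],
              [0.1, 0.8]])
true_conc = np.array([0.6, 0.3])
pixel_rgb = A @ true_conc          # colour an ideal camera would record

# Non-negative least squares recovers the (non-negative) contributions
conc, residual = nnls(A, pixel_rgb)
```

With a full-rank mixing matrix and noise-free colour values, the two contributions are recovered exactly; the cell's absorption spectrum would then be the correspondingly weighted sum of chromophore spectra.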
Leong, Siow Hoo; Ong, Seng Huat
2017-01-01
This paper considers three crucial issues in processing scaled-down images: the representation of partial images, similarity measures and domain adaptation. Two Gaussian mixture model based algorithms are proposed to effectively preserve image details and avoid image degradation. Multiple partial images are clustered separately through Gaussian mixture model clustering with a scan-and-select procedure to enhance the inclusion of small image details. The local image features, represented by maximum likelihood estimates of the mixture components, are classified by using the modified Bayes factor (MBF) as a similarity measure. The detection of novel local features by the MBF will suggest domain adaptation, i.e. changing the number of components of the Gaussian mixture model. The performance of the proposed algorithms is evaluated with simulated data and real images, and they are shown to perform much better than existing Gaussian mixture model based algorithms in reproducing images with higher structural similarity index.
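The Gaussian mixture fitting that the paper builds on can be sketched with a minimal two-component EM loop in 1D. The full method clusters multiple partial images and adapts the component count via the modified Bayes factor, which this sketch does not attempt.

```python
import numpy as np

def em_gmm_1d(x, n_iter=200):
    """Minimal EM for a two-component 1D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], float)    # spread-out initialization
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        dens = w / np.sqrt(2 * np.pi * var) * np.exp(-(x[:, None] - mu) ** 2 / (2 * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(0.0, 0.5, 300), rng.normal(5.0, 0.5, 300)])
w, mu, var = em_gmm_1d(x)
```

On well-separated data the estimated means converge close to the true component means (0 and 5) with roughly equal weights.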
The Viking Mosaic Catalog, Volume 2
NASA Technical Reports Server (NTRS)
Evans, N.
1982-01-01
A collection of more than 500 mosaics prepared from Viking Orbiter images is given. Accompanying each mosaic is a footprint plot, which identifies by location, picture number, and order number, each frame in the mosaic. Corner coordinates and pertinent imaging information are also included. A short text provides the camera characteristics, image format, and data processing information necessary for using the mosaic plates as a research aide. Procedures for ordering mosaic enlargements and individual images are also provided.
Pre- and Postoperative Imaging of the Aortic Root
Chan, Frandics P.; Mitchell, R. Scott; Miller, D. Craig; Fleischmann, Dominik
2016-01-01
Three-dimensional datasets acquired using computed tomography and magnetic resonance imaging are ideally suited for characterization of the aortic root. These modalities offer different advantages and limitations, which must be weighed according to the clinical context. This article provides an overview of current aortic root imaging, highlighting normal anatomy, pathologic conditions, imaging techniques, measurement thresholds, relevant surgical procedures, postoperative complications and potential imaging pitfalls. Patients with a range of clinical conditions are predisposed to aortic root disease, including Marfan syndrome, bicuspid aortic valve, vascular Ehlers-Danlos syndrome, and Loeys-Dietz syndrome. Various surgical techniques may be used to repair the aortic root, including placement of a composite valve graft, such as the Bentall and Cabrol procedures; placement of an aortic root graft with preservation of the native valve, such as the Yacoub and David techniques; and implantation of a biologic graft, such as a homograft, autograft, or xenograft. Potential imaging pitfalls in the postoperative period include mimickers of pathologic processes such as felt pledgets, graft folds, and nonabsorbable hemostatic agents. Postoperative complications that may be encountered include pseudoaneurysms, infection, and dehiscence. Radiologists should be familiar with normal aortic root anatomy, surgical procedures, and postoperative complications, to accurately interpret pre- and postoperative imaging performed for evaluation of the aortic root. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26761529
Automation of Cassini Support Imaging Uplink Command Development
NASA Technical Reports Server (NTRS)
Ly-Hollins, Lisa; Breneman, Herbert H.; Brooks, Robert
2010-01-01
"Support imaging" is imagery requested by other Cassini science teams to aid in the interpretation of their data. The generation of the spacecraft command sequences for these images is performed by the Cassini Instrument Operations Team. The process initially established for doing this was very labor-intensive, tedious and prone to human error. Team management recognized this process as one that could easily benefit from automation. Team members were tasked to document the existing manual process, develop a plan and strategy to automate the process, implement the plan and strategy, test and validate the new automated process, and deliver the new software tools and documentation to Flight Operations for use during the Cassini extended mission. In addition to the goals of higher efficiency and lower risk in the processing of support imaging requests, an effort was made to maximize adaptability of the process to accommodate uplink procedure changes and the potential addition of new capabilities outside the scope of the initial effort.
36 CFR 1238.14 - What are the microfilming requirements for permanent and unscheduled records?
Code of Federal Regulations, 2014 CFR
2014-07-01
... accordance with ISO 18901 (incorporated by reference, see § 1238.5) and use the processing procedures in ANSI... § 1238.5). (2) Background density of images. Agencies must use the background ISO standard visual diffuse... transmission density. (i) Recommended visual diffuse transmission background densities for images of documents...
Optical disk processing of solar images.
NASA Astrophysics Data System (ADS)
Title, A.; Tarbell, T.
The current generation of space and ground-based experiments in solar physics produces many megabyte-sized image data arrays. Optical disk technology is the leading candidate for convenient analysis, distribution, and archiving of these data. The authors have been developing data analysis procedures which use both analog and digital optical disks for the study of solar phenomena.
Amateur Image Pipeline Processing using Python plus PyRAF
NASA Astrophysics Data System (ADS)
Green, Wayne
2012-05-01
A template pipeline spanning from observing planning to publishing is offered as a basis for establishing a long-term observing program. The data reduction pipeline encapsulates all policy and procedures, providing an accountable framework for data analysis and a teaching framework for IRAF. This paper introduces the technical details of a complete pipeline processing environment using Python, PyRAF and a few other languages. The pipeline encapsulates all processing decisions within an auditable framework, which quickly handles the heavy lifting of image processing. It also serves as an excellent teaching environment for astronomical data management and IRAF reduction decisions.
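The auditable-pipeline idea described above can be sketched as a minimal step runner that records every processing decision it applies. The `Pipeline` class and step names below are illustrative assumptions, not the paper's actual PyRAF code.

```python
# Minimal sketch of an auditable reduction pipeline (illustrative, not the
# paper's implementation): steps run in order and every decision is logged.

class Pipeline:
    """Applies reduction steps in order and records each one applied."""

    def __init__(self):
        self.steps = []   # (name, function) pairs
        self.log = []     # audit trail of applied steps

    def add_step(self, name, func):
        self.steps.append((name, func))
        return self       # allow chaining

    def run(self, frame):
        for name, func in self.steps:
            frame = func(frame)
            self.log.append(name)   # the audit trail named above
        return frame

# Toy "reduction" on a list of pixel values (hypothetical step names).
pipe = Pipeline()
pipe.add_step("bias_subtract", lambda f: [v - 100 for v in f])
pipe.add_step("flat_field", lambda f: [v / 2.0 for v in f])
result = pipe.run([300, 500])
```

The log makes the processing accountable: any output frame can be traced back to the exact sequence of steps that produced it.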
CRT image recording evaluation
NASA Technical Reports Server (NTRS)
1971-01-01
Performance capabilities and limitations of a fiber optic coupled line scan CRT image recording system were investigated. The test program evaluated the following components: (1) P31 phosphor CRT with EMA faceplate; (2) P31 phosphor CRT with clear clad faceplate; (3) Type 7743 semi-gloss dry process positive print paper; (4) Type 777 flat finish dry process positive print paper; (5) Type 7842 dry process positive film; and (6) Type 1971 semi-gloss wet process positive print paper. Detailed test procedures used in each test are provided along with a description of each test, the test data, and an analysis of the results.
Divers-Operated Underwater Photogrammetry: Applications in the Study of Antarctic Benthos
NASA Astrophysics Data System (ADS)
Piazza, P.; Cummings, V.; Lohrer, D.; Marini, S.; Marriott, P.; Menna, F.; Nocerino, E.; Peirano, A.; Schiaparelli, S.
2018-05-01
Ecological studies of marine benthic communities have benefited greatly from the application of a variety of non-destructive sampling and mapping techniques based on underwater image and video recording. The well-established scientific diving practice consists of acquiring a single path or `round-trip' over elongated transects, with the imaging device oriented in a nadir-looking direction. As may be expected, applying automatic image processing procedures to data not specifically acquired for 3D modelling can be risky, especially if proper tools for assessing the quality of the produced results are not employed. This paper, born from an international cooperation, focuses on this topic, which is of great interest for ecological and monitoring benthic studies in Antarctica. Several video footages recorded by different scientific teams in different years are processed with an automatic photogrammetric procedure, and salient statistical features are reported to critically analyse the derived results. As expected, the inclusion of oblique images from additional lateral strips may improve the expected accuracy in the object space, without altering too much the current video recording practices.
Neural network face recognition using wavelets
NASA Astrophysics Data System (ADS)
Karunaratne, Passant V.; Jouny, Ismail I.
1997-04-01
The recognition of human faces is a phenomenon that has been mastered by the human visual system and that has been researched extensively in the domain of computer neural networks and image processing. This research studies neural networks and wavelet image processing techniques in the application of human face recognition. The objective of the system is to acquire a digitized still image of a human face, carry out pre-processing on the image as required, and then, given a prior database of images of possible individuals, recognize the individual in the image. The pre-processing segment of the system includes several procedures, namely image compression, denoising, and feature extraction. The image processing is carried out using Daubechies wavelets. Once the images have been passed through the wavelet-based image processor they can be efficiently analyzed by means of a neural network. A back-propagation neural network is used for the recognition segment of the system. The main constraint of the system concerns the characteristics of the images being processed. The system should be able to carry out effective recognition of human faces irrespective of the individual's facial expression, the presence of extraneous objects such as head-gear or spectacles, and face/head orientation. A potential application of this face recognition system would be as a secondary verification method in an automated teller machine.
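The wavelet pre-processing stage described above can be sketched with one analysis level of the 2D Haar transform (the simplest Daubechies wavelet, db1); the paper used higher-order Daubechies wavelets, so this is a simplified stand-in. One level splits the image into an approximation sub-band and three detail sub-bands; keeping only the approximation compresses the image fourfold.

```python
import numpy as np

# One level of the 2D Haar transform (Daubechies db1), shown as a simplified
# stand-in for the higher-order Daubechies filters used in the paper.

def haar2d_level(img):
    """Return (LL, LH, HL, HH) sub-bands for an even-sized 2D array."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail
    return ll, lh, hl, hh

img = np.arange(16, dtype=float).reshape(4, 4)
ll, lh, hl, hh = haar2d_level(img)   # ll is a quarter-size approximation
```

In a recognition system of this kind, the compact LL band (or selected detail coefficients) would be fed to the neural network instead of the raw pixels.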
On-line range images registration with GPGPU
NASA Astrophysics Data System (ADS)
Będkowski, J.; Naruniec, J.
2013-03-01
This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while the latter relies on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first segmentation technique uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method to define pre-requisites of the known categories. The second technique uses the adapted nearest neighbour search procedure to obtain normal vectors for each range point.
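The cubic-bucket nearest-neighbour idea above can be sketched in a few lines: points are hashed into cubic cells so each query inspects only the 27 cells around it, which is what makes the matching time deterministic. Bucket size and the sequential (non-CUDA) form are illustrative assumptions.

```python
import numpy as np

# Sketch of nearest-neighbour search by cubic-bucket decomposition of 3D
# space (sequential here; the paper runs this in parallel on CUDA).

def build_buckets(points, cell):
    """Hash each point index into its cubic bucket."""
    buckets = {}
    for i, p in enumerate(points):
        key = tuple((p // cell).astype(int))
        buckets.setdefault(key, []).append(i)
    return buckets

def nearest(query, points, buckets, cell):
    """Inspect only the 27 buckets around the query point."""
    kx, ky, kz = (query // cell).astype(int)
    best_i, best_d = -1, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in buckets.get((kx + dx, ky + dy, kz + dz), []):
                    d = np.sum((points[i] - query) ** 2)
                    if d < best_d:
                        best_i, best_d = i, d
    return best_i

pts = np.array([[0.1, 0.1, 0.1], [1.2, 0.0, 0.0], [5.0, 5.0, 5.0]])
b = build_buckets(pts, cell=1.0)
idx = nearest(np.array([1.0, 0.1, 0.0]), pts, b, cell=1.0)
```

Because each query touches a fixed number of cells, the per-point cost inside an ICP iteration is bounded, unlike a naive all-pairs search.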
USDA-ARS?s Scientific Manuscript database
Contamination of food with pathogenic bacteria can lead to foodborne illnesses. Food processing surfaces can serve as a medium for cross-contamination if sanitization procedures are inadequate. Ensuring that food processing surfaces are correctly cleaned and sanitized is important in the food indust...
Teaching People and Machines to Enhance Images
NASA Astrophysics Data System (ADS)
Berthouzoz, Floraine Sara Martianne
Procedural tasks such as following a recipe or editing an image are very common. They require a person to execute a sequence of operations (e.g. chop onions, or sharpen the image) in order to achieve the goal of the task. People commonly use step-by-step tutorials to learn these tasks. We focus on software tutorials, more specifically photo manipulation tutorials, and present a set of tools and techniques to help people learn, compare and automate photo manipulation procedures. We describe three different systems that are each designed to help with a different stage in acquiring procedural knowledge. Today, people primarily rely on hand-crafted tutorials in books and on websites to learn photo manipulation procedures. However, putting together a high quality step-by-step tutorial is a time-consuming process. As a consequence, many online tutorials are poorly designed which can lead to confusion and slow down the learning process. We present a demonstration-based system for automatically generating succinct step-by-step visual tutorials of photo manipulations. An author first demonstrates the manipulation using an instrumented version of GIMP (GNU Image Manipulation Program) that records all changes in interface and application state. From the example recording, our system automatically generates tutorials that illustrate the manipulation using images, text, and annotations. It leverages automated image labeling (recognition of facial features and outdoor scene structures in our implementation) to generate more precise text descriptions of many of the steps in the tutorials. A user study finds that our tutorials are effective for learning the steps of a procedure; users are 20-44% faster and make 60-95% fewer errors when using our tutorials than when using screencapture video tutorials or hand-designed tutorials. We also demonstrate a new interface that allows learners to navigate, explore and compare large collections (i.e. 
thousands) of photo manipulation tutorials based on their command-level structure. Sites such as tutorialized.com or good-tutorials.com collect tens of thousands of photo manipulation tutorials. These collections typically contain many different tutorials for the same task. For example, there are many different tutorials that describe how to recolor the hair of a person in an image. Learners often want to compare these tutorials to understand the different ways a task can be done. They may also want to identify common strategies that are used across tutorials for a variety of tasks. However, the large number of tutorials in these collections and their inconsistent formats can make it difficult for users to systematically explore and compare them. Current tutorial collections do not exploit the underlying command-level structure of tutorials, and to explore the collection users have to either page through long lists of tutorial titles or perform keyword searches on the natural language tutorial text. We present a new browsing interface to help learners navigate, explore and compare collections of photo manipulation tutorials based on their command-level structure. Our browser indexes tutorials by their commands, identifies common strategies within the tutorial collection, and highlights the similarities and differences between sets of tutorials that execute the same task. User feedback suggests that our interface is easy to understand and use, and that users find command-level browsing to be useful for exploring large tutorial collections. They strongly preferred to explore tutorial collections with our browser over keyword search. Finally, we present a framework for generating content-adaptive macros (programs) that can transfer complex photo manipulation procedures to new target images. After learners master a photo manipulation procedure, they often repeatedly apply it to multiple images. 
For example, they might routinely apply the same vignetting effect to all their photographs. This process can be very tedious, especially for procedures that involve many steps. While image manipulation programs provide basic macro authoring tools that allow users to record and then replay a sequence of operations, these macros are very brittle and cannot adapt to new images. We present a more comprehensive approach for generating content-adaptive macros that can automatically transfer operations to new target images. To create these macros, we make use of multiple training demonstrations. Specifically, we use automated image labeling and machine learning techniques to adapt the parameters of each operation to the new image content. We show that our framework is able to learn a large class of the most commonly-used manipulations using as few as 20 training demonstrations. Our content-adaptive macros allow users to transfer photo manipulation procedures with a single button click and thereby significantly simplify repetitive procedures.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only for wider diffusion and on-line transmission, but also for the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or highly texturized flat materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the original item's properties as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.
1989-03-01
Markov random field (MRF) theory provides a powerful alternative texture model and has resulted in intensive research activity in MRF model-based texture analysis within the automated photointerpretation testbed. Additional, and perhaps more powerful, features have to be incorporated into the image segmentation procedure, together with object detection, as part of the interpretation process.
Image recognition of clipped stigma traces in rice seeds
NASA Astrophysics Data System (ADS)
Cheng, F.; Ying, YB
2005-11-01
The objective of this research is to develop an algorithm to recognize clipped stigma traces in rice seeds using image processing. First, the micro-configuration of clipped stigma traces was observed with an electron scanning microscope. Then images of rice seeds were acquired with a color machine vision system. A digital image processing algorithm based on morphological operations and the Hough transform was developed to inspect for the occurrence of clipped stigma traces. Five varieties (Jinyou402, Shanyou10, Zhongyou207, Jiayou and you3207) were evaluated. The algorithm was implemented for all image sets as a Matlab 6.5 procedure. The results showed that the algorithm achieved an average accuracy of 96% and proved insensitive to the different rice seed varieties.
Biostatistical analysis of quantitative immunofluorescence microscopy images.
Giles, C; Albrecht, M A; Lam, V; Takechi, R; Mamo, J C
2016-12-01
Semiquantitative immunofluorescence microscopy has become a key methodology in biomedical research. Typical statistical workflows are considered in the context of avoiding pseudo-replication and marginalising experimental error. However, immunofluorescence microscopy naturally generates hierarchically structured data that can be leveraged to improve statistical power and enrich biological interpretation. Herein, we describe a robust distribution fitting procedure and compare several statistical tests, outlining their potential advantages/disadvantages in the context of biological interpretation. Further, we describe tractable procedures for power analysis that incorporates the underlying distribution, sample size and number of images captured per sample. The procedures outlined have significant potential for increasing understanding of biological processes and decreasing both ethical and financial burden through experimental optimization. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
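The power-analysis procedure described above, which incorporates both sample size and images captured per sample, can be approximated by simulation: generate hierarchical data (per-sample random effects plus per-image noise), analyse sample means, and count significant replicates. The effect size, variance components, and normal-approximation test below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Simulation-based power estimate for hierarchically structured imaging data:
# n_samples biological samples per group, n_images images per sample.
# All parameter values here are illustrative.

def simulate_power(n_samples, n_images, effect=1.0, sd_sample=1.0,
                   sd_image=1.0, reps=500, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        # per-sample random effects + per-image noise; analyse sample means
        a = (rng.normal(0, sd_sample, n_samples)[:, None]
             + rng.normal(0, sd_image, (n_samples, n_images))).mean(axis=1)
        b = (effect + rng.normal(0, sd_sample, n_samples)[:, None]
             + rng.normal(0, sd_image, (n_samples, n_images))).mean(axis=1)
        # two-sample test statistic on sample means (normal approximation)
        t = (b.mean() - a.mean()) / np.sqrt(a.var(ddof=1) / n_samples
                                            + b.var(ddof=1) / n_samples)
        if abs(t) > 1.96:   # two-sided, approx. alpha = 0.05
            hits += 1
    return hits / reps

power = simulate_power(n_samples=10, n_images=5)
```

Varying `n_samples` versus `n_images` in such a simulation shows the point made in the abstract: adding samples raises power faster than adding images per sample once the per-sample variance dominates.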
Localization of wood floor structure by infrared thermography
NASA Astrophysics Data System (ADS)
Cochior Plescanu, C.; Klein, M.; Ibarra-Castanedo, C.; Bendada, A.; Maldague, X.
2008-03-01
One of our industrial partners, Assek Technologie, is interested in developing a technique that would improve the drying process of wood floors in basements after flooding. In order to optimize the procedure, the floor structure and the extent of the damaged (wet) area must first be determined with minimum intrusion (minimum or no dismantling). The present study demonstrates the use of infrared thermography to reveal the structure of (flooded) wood floors. The procedure involves opening holes in the floor; injecting hot air through those holes reveals the framing structure even if the floor is covered by vinyl or ceramic tiles. This study indicates that thermal imaging can also be used as a tool to validate the decontamination process after drying. Thermal images were obtained on small-scale models and in a demonstration room.
Crowdsourcing Based 3d Modeling
NASA Astrophysics Data System (ADS)
Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.
2016-06-01
Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.
Semi-automated camera trap image processing for the detection of ungulate fence crossing events.
Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija
2017-09-27
Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% in the number of images requiring further human operator characterization, while retaining 72.6% of the known fence crossing events. This program can give researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
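The background-subtraction rule at the heart of the program above can be sketched as follows: compare each trap image against a background frame and classify it by the fraction of pixels that changed. The thresholds and category names here are illustrative assumptions, not the published program's values.

```python
import numpy as np

# Sketch of camera-trap triage by background subtraction: an image is a
# "candidate" event if enough pixels differ from the empty-scene background.
# Threshold values are illustrative, not the published program's.

def classify(image, background, diff_thresh=30, frac_thresh=0.02):
    """Return 'candidate' if the changed-pixel fraction exceeds the rule."""
    diff = np.abs(image.astype(int) - background.astype(int))
    changed = np.mean(diff > diff_thresh)   # fraction of changed pixels
    return "candidate" if changed > frac_thresh else "empty"

bg = np.full((100, 100), 120, dtype=np.uint8)   # synthetic empty scene
animal = bg.copy()
animal[40:60, 40:60] = 200                      # a bright 20x20 "animal"
```

Applied over an image sequence, rules like this discard a large share of non-target frames while retaining most frames containing real events, which is the data-reduction trade-off the abstract quantifies.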
PET/CT (and CT) instrumentation, image reconstruction and data transfer for radiotherapy planning.
Sattler, Bernhard; Lee, John A; Lonsdale, Markus; Coche, Emmanuel
2010-09-01
Positron emission tomography combined with CT in hybrid, cross-modality imaging systems (PET/CT) gains more and more importance as part of the treatment-planning procedure in radiotherapy. Positron emission tomography (PET), as an integral part of nuclear medicine imaging and a non-invasive imaging technique, offers the visualization and quantification of pre-selected tracer metabolism. In combination with the structural information from CT, this molecular imaging technique has great potential to support and improve the outcome of the treatment-planning procedure prior to radiotherapy. Through the choice of PET tracer, a variety of different metabolic processes can be visualized: first and foremost the glucose metabolism of a tissue, as well as, for instance, hypoxia or cell proliferation. This paper comprises the system characteristics of hybrid PET/CT systems. Acquisition and processing protocols are described in general, along with modifications to cope with the special needs of radiooncology. This starts with the different position of the patient on a special table top, continues with the use of the same fixation material as used for positioning the patient during simulation and irradiation in radiooncology, and leads to special processing protocols that include the delineation of the volumes subject to treatment planning and irradiation (PTV, GTV, CTV, etc.). General CT acquisition and processing parameters, as well as the use of contrast enhancement in CT, are described. The possible risks and pitfalls the investigator could face during the hybrid-imaging procedure are explained and listed. The interdisciplinary use of different imaging modalities implies an increase in the volume of data created. These data need to be stored and communicated quickly, safely and correctly. Therefore, the DICOM standard provides objects and classes for this purpose (DICOM RT). 
Furthermore, the standard DICOM objects and classes for nuclear medicine (NM, PT) and computed tomography (CT) are used to communicate the actual image data created by the modalities. Care must be taken with data security, especially when transferring data across the (network) borders of different hospitals. Overall, the most important precondition for successful integration of functional imaging in RT treatment planning is goal-oriented, close and thorough communication between nuclear medicine and radiotherapy departments on all levels of interaction (personnel, imaging protocols, GTV delineation, and selection of the data transfer method). Copyright 2010 European Society for Therapeutic Radiology and Oncology and European Association of Nuclear Medicine. Published by Elsevier Ireland Ltd. All rights reserved.
Digital radiography: spatial and contrast resolution
NASA Astrophysics Data System (ADS)
Bjorkholm, Paul; Annis, M.; Frederick, E.; Stein, J.; Swift, R.
1981-07-01
The addition of digital image collection and storage to standard and newly developed x-ray imaging techniques has allowed spectacular improvements in some diagnostic procedures. There is no reason to expect that the developments in this area are yet complete. But no matter what further developments occur in this field, all the techniques will share a common element, digital image storage and processing. This common element alone determines some of the important imaging characteristics. These will be discussed using one system, the Medical MICRODOSE System as an example.
Real-time inspection by submarine images
NASA Astrophysics Data System (ADS)
Tascini, Guido; Zingaretti, Primo; Conte, Giuseppe
1996-10-01
A real-time application of computer vision concerning tracking and inspection of a submarine pipeline is described. The objective is to develop automatic procedures for supporting human operators in the real-time analysis of images acquired by means of cameras mounted on underwater remotely operated vehicles (ROVs). Implementation of such procedures gives rise to a human-machine system for underwater pipeline inspection that can automatically detect and signal the presence of the pipe, of its structural or accessory elements, and of dangerous or alien objects in its neighborhood. The possibility of modifying the image acquisition rate in the simulations performed on video-recorded images is used to prove that the system performs all necessary processing with acceptable robustness, working in real time up to a speed of about 2.5 kn, well above what actual ROVs and safety requirements allow.
Optical head tracking for functional magnetic resonance imaging using structured light.
Zaremba, Andrei A; MacFarlane, Duncan L; Tseng, Wei-Che; Stark, Andrew J; Briggs, Richard W; Gopinath, Kaundinya S; Cheshkov, Sergey; White, Keith D
2008-07-01
An accurate motion-tracking technique is needed to compensate for subject motion during functional magnetic resonance imaging (fMRI) procedures. Here, a novel approach to motion metrology is discussed. A structured light pattern specifically coded for digital signal processing is positioned onto a fiduciary of the patient. As the patient undergoes spatial transformations in 6 DoF (degrees of freedom), a high-resolution CCD camera captures successive images for analysis on a computing platform. A high-speed image processing algorithm is used to calculate spatial transformations in a time frame commensurate with patient movements (10-100 ms) and with a precision of at least 0.5 microm for translations and 0.1 deg for rotations.
Improving safety in CT through the use of educational media.
Mattingly, Melisa
2011-01-01
With a grant from the AHRA and Toshiba Putting Patients First program, Community Hospital in Indianapolis, IN set out to reduce the need for patient sedation, mechanical restraint, additional radiation dosage, and repeat procedures for pediatric patients. An online video was produced to educate pediatric patients and their caregivers about the diagnostic imaging process, enabling them to be more comfortable and compliant during the procedure. Early information and results indicate a safer experience for the patient. The goal is for the video to become a new best practice tool for improving patient care and safety in diagnostic imaging.
A Novel Image Acquisition and Processing Procedure for Fast Tunnel DSM Production
NASA Astrophysics Data System (ADS)
Roncella, R.; Umili, G.; Forlani, G.
2012-07-01
In mining operations the evaluation of the stability condition of the excavated front is critical to ensure safe and correct planning of the subsequent activities. The procedure currently used to this aim has some shortcomings: safety of the geologist, completeness of data collection, and objective documentation of the results. In the last decade it has been shown that the geostructural parameters necessary for stability analysis can be derived from high resolution digital surface models (DSM) of rock faces. With the objective of overcoming the limitations of the traditional survey and minimizing data capture times, thus reducing delays in mining site operations, a photogrammetric system to generate high resolution DSMs of tunnels has been realized. A fast, effective and complete data capture method has been developed, and the orientation and restitution phases have been largely automated. The survey operations take no more time than the traditional ones; no additional topographic measurements other than those already available are required. To make the data processing fast and economic, our Structure from Motion procedure has been slightly modified to adapt to the peculiar block geometry, while the DSM of the tunnel is created using automatic image correlation techniques. The geomechanical data are sampled on the DSM, using the acquired images in a GUI and a segmentation procedure to select discontinuity planes. To allow easier and faster identification of relevant features of the tunnel surface, an orthophoto of the tunnel is produced, again using an automatic procedure. A case study in which a tunnel section of ca. 130 m has been surveyed is presented.
NASA Technical Reports Server (NTRS)
Szepesi, Z.
1978-01-01
The fabrication process and transfer characteristics for solid state radiographic image transducers (radiographic amplifier screens) are described. These screens are for use in realtime nondestructive evaluation procedures that require large format radiographic images with contrast and resolution capabilities unavailable with conventional fluoroscopic screens. The screens are suitable for in-motion, on-line radiographic inspection by means of closed circuit television. Experimental effort was made to improve image quality and response to low energy (5 kV and up) X-rays.
NASA Technical Reports Server (NTRS)
Kao, M. H.; Bodenheimer, R. E.
1976-01-01
The tse computer's capability of achieving image congruence between temporal and multiple images with misregistration due to rotational differences is reported. The coordinate transformations are obtained and a general algorithm is devised to perform image rotation very efficiently using tse operations. The details of this algorithm as well as its theoretical implications are presented. Step-by-step procedures of image registration are described in detail. Numerous examples demonstrate the correctness and effectiveness of the algorithm, and conclusions and recommendations are made.
Techniques for Interventional MRI Guidance in Closed-Bore Systems.
Busse, Harald; Kahn, Thomas; Moche, Michael
2018-02-01
Efficient image guidance is the basis for minimally invasive interventions. In comparison with X-ray, computed tomography (CT), or ultrasound imaging, magnetic resonance imaging (MRI) provides the best soft tissue contrast without ionizing radiation and is therefore predestined for procedural control. But MRI is also characterized by spatial constraints, electromagnetic interactions, long imaging times, and resulting workflow issues. Although many technical requirements have been met over the years, most notably magnetic resonance (MR) compatibility of tools, interventional pulse sequences, and powerful processing hardware and software, there is still a large variety of stand-alone devices and systems for specific procedures only. Stereotactic guidance with the table outside the magnet is common and relies on proper registration of the guiding grids or manipulators to the MR images. Instrument tracking, often by optical sensing, can be added to provide the physicians with proper eye-hand coordination during their navigated approach. Only in very short wide-bore systems can needles be advanced at the extended arm under near real-time imaging. In standard magnets, control and workflow may be improved by remote operation using robotic or manual driving elements. This work highlights a number of devices and techniques for different interventional settings, with a focus on percutaneous, interstitial procedures in different organ regions. The goal is to identify technical and procedural elements that might be relevant for interventional guidance in a broader context, independent of the clinical application given here. Key challenges remain the seamless integration into the interventional workflow, safe clinical translation, and proper cost effectiveness.
PyDBS: an automated image processing workflow for deep brain stimulation surgery.
D'Albis, Tiziano; Haegelen, Claire; Essert, Caroline; Fernández-Vidal, Sara; Lalys, Florent; Jannin, Pierre
2015-02-01
Deep brain stimulation (DBS) is a surgical procedure for treating motor-related neurological disorders. DBS clinical efficacy hinges on precise surgical planning and accurate electrode placement, which in turn call upon several image processing and visualization tasks, such as image registration, image segmentation, image fusion, and 3D visualization. These tasks are often performed by a heterogeneous set of software tools, which adopt differing formats and geometrical conventions and require patient-specific parameterization or interactive tuning. To overcome these issues, we introduce in this article PyDBS, a fully integrated and automated image processing workflow for DBS surgery. PyDBS consists of three image processing pipelines and three visualization modules assisting clinicians through the entire DBS surgical workflow, from the preoperative planning of electrode trajectories to the postoperative assessment of electrode placement. The system's robustness, speed, and accuracy were assessed by means of a retrospective validation, based on 92 clinical cases. The complete PyDBS workflow achieved satisfactory results in 92% of tested cases, with a median processing time of 28 min per patient. The results obtained are compatible with the adoption of PyDBS in clinical practice.
Object Segmentation and Ground Truth in 3D Embryonic Imaging.
Rajasekaran, Bhavna; Uriu, Koichiro; Valentin, Guillaume; Tinevez, Jean-Yves; Oates, Andrew C
2016-01-01
Many questions in developmental biology depend on measuring the position and movement of individual cells within developing embryos. Yet, tools that provide this data are often challenged by high cell density and their accuracy is difficult to measure. Here, we present a three-step procedure to address this problem. Step one is a novel segmentation algorithm based on image derivatives that, in combination with selective post-processing, reliably and automatically segments cell nuclei from images of densely packed tissue. Step two is a quantitative validation using synthetic images to ascertain the efficiency of the algorithm with respect to signal-to-noise ratio and object density. Finally, we propose an original method to generate reliable and experimentally faithful ground truth datasets: Sparse-dense dual-labeled embryo chimeras are used to unambiguously measure segmentation errors within experimental data. Together, the three steps outlined here establish a robust, iterative procedure to fine-tune image analysis algorithms and microscopy settings associated with embryonic 3D image data sets.
Span graphics display utilities handbook, first edition
NASA Technical Reports Server (NTRS)
Gallagher, D. L.; Green, J. L.; Newman, R.
1985-01-01
The Space Physics Analysis Network (SPAN) is a computer network connecting scientific institutions throughout the United States. This network provides an avenue for timely, correlative research between investigators, in a multidisciplinary approach to space physics studies. An objective in the development of SPAN is to make available direct and simplified procedures that scientists can use, without specialized training, to exchange information over the network. Information exchanges include raw and processed data, analysis programs, correspondence, documents, and graphic images. This handbook details procedures that can be used to exchange graphic images over SPAN. The intent is to periodically update this handbook to reflect the constantly changing facilities available on SPAN. The utilities described within reflect an earnest attempt to provide useful descriptions of working utilities that can be used to transfer graphic images across the network. Whether graphic images are representative of satellite observations or theoretical modeling, and whether they are of device-dependent or device-independent type, the SPAN graphics display utilities handbook will be the user's guide to graphic image exchange.
A simple method for imaging axonal transport in aging neurons using the adult Drosophila wing.
Vagnoni, Alessio; Bullock, Simon L
2016-09-01
There is growing interest in the link between axonal cargo transport and age-associated neuronal dysfunction. The study of axonal transport in neurons of adult animals requires intravital or ex vivo imaging approaches, which are laborious and expensive in vertebrate models. We describe simple, noninvasive procedures for imaging cargo motility within axons using sensory neurons of the translucent Drosophila wing. A key aspect is a method for mounting the intact fly that allows detailed imaging of transport in wing neurons. Coupled with existing genetic tools in Drosophila, this is a tractable system for studying axonal transport over the life span of an animal and thus for characterization of the relationship between cargo dynamics, neuronal aging and disease. Preparation of a sample for imaging takes ∼5 min, with transport typically filmed for 2-3 min per wing. We also document procedures for the quantification of transport parameters from the acquired images and describe how the protocol can be adapted to study other cell biological processes in aging neurons.
A phase space model of Fourier ptychographic microscopy
Horstmeyer, Roarke; Yang, Changhuei
2014-01-01
A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment. PMID:24514995
Kirkwood, Melissa L; Guild, Jeffrey B; Arbique, Gary M; Tsai, Shirling; Modrall, J Gregory; Anderson, Jon A; Rectenwald, John; Timaran, Carlos
2016-11-01
A new proprietary image-processing system known as AlluraClarity, developed by Philips Healthcare (Best, The Netherlands) for radiation-based interventional procedures, claims to lower radiation dose while preserving image quality using noise-reduction algorithms. This study determined whether the surgeon and patient radiation dose during complex endovascular procedures (CEPs) is decreased after the implementation of this new operating system. Radiation dose to operators, procedure type, reference air kerma, kerma area product, and patient body mass index were recorded during CEPs on two Philips Allura FD 20 fluoroscopy systems with and without Clarity. Operator dose during CEPs was measured using optically stimulable, luminescent nanoDot (Landauer Inc, Glenwood, Ill) detectors placed outside the lead apron at the left upper chest position. nanoDots were read using a microStar ii (Landauer Inc) medical dosimetry system. For the CEPs in the Clarity group, the radiation dose to surgeons was also measured by the DoseAware (Philips Healthcare) personal dosimetry system. Side-by-side measurements of DoseAware and nanoDots allowed for cross-calibration between systems. Operator effective dose was determined using a modified Niklason algorithm. To control for patient size and case complexity, the average fluoroscopy dose rate and the dose per radiographic frame were adjusted for body mass index differences and then compared between the groups with and without Clarity by procedure. Additional factors, for example, physician practice patterns, that may have affected operator dose were inferred by comparing the ratio of the operator dose to procedural kerma area product with and without Clarity. A one-sided Wilcoxon rank sum test was used to compare groups for radiation doses, reference air kermas, and operating practices for each procedure type. The analysis included 234 CEPs; 95 performed without Clarity and 139 with Clarity. 
Practice patterns of operators during procedures with and without Clarity were not significantly different. For all cases, the procedure radiation dose to the patient and to the primary and assistant operators was significantly decreased in the Clarity group, by 60% compared with the non-Clarity group. By procedure type, fluorography dose rates decreased by 44% for fenestrated endovascular repair and by up to 70% for lower extremity interventions. Fluoroscopy dose rates also significantly decreased, by about 37% to 47%, depending on procedure type. The AlluraClarity system reduces the patient's and primary operator's radiation dose by more than half during CEPs. This feature appears to be an effective tool for lowering radiation dose while maintaining image quality. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R; Murshudov, Garib N; Short, Judith M; Scheres, Sjors H W; Henderson, Richard
2013-12-01
Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. 
A simple formula can be used to calculate an unbiased FSC from the two curves, even when a substantial amount of overfitting is present. The approach is software independent. The user is therefore completely free to use any established method or novel combination of methods, provided the HR-noise test is carried out in parallel. Applying this procedure to cryoEM images of beta-galactosidase shows how overfitting varies greatly depending on the procedure, but in the best case shows no overfitting and a resolution of ~6 Å. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
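The phase-randomisation variant of the HR-noise substitution can be sketched as follows (an illustrative NumPy reconstruction, not the authors' code; taking phases from the FFT of real noise keeps the randomised spectrum Hermitian-symmetric, so the substituted image stays real):

```python
import numpy as np

def randomize_high_res_phases(img, freq_cutoff, rng=None):
    """Replace the phases of all Fourier components beyond freq_cutoff
    (cycles/pixel) with random ones, keeping amplitudes unchanged, so the
    substituted noise has the same spectral power as the original data."""
    rng = np.random.default_rng(rng)
    F = np.fft.fft2(img)
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    high = np.sqrt(fx**2 + fy**2) > freq_cutoff
    # Phases taken from the FFT of a real noise image are automatically
    # Hermitian-symmetric, so the inverse transform remains (numerically) real.
    N = np.fft.fft2(rng.standard_normal(img.shape))
    rand_phase = N / np.abs(N)
    F_sub = np.where(high, np.abs(F) * rand_phase, F)
    return np.fft.ifft2(F_sub).real
```

Comparing FSC curves computed from maps refined against the original and the phase-randomised stacks then exposes overfitting beyond the chosen cutoff.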
The PDS-based Data Processing, Archiving and Management Procedures in Chang'e Mission
NASA Astrophysics Data System (ADS)
Zhang, Z. B.; Li, C.; Zhang, H.; Zhang, P.; Chen, W.
2017-12-01
PDS is adopted as the standard format of scientific data and the foundation of all data-related procedures in the Chang'e mission. Unlike the geographically distributed nature of the Planetary Data System, all procedures of data processing, archiving, management and distribution are carried out at the headquarters of the Ground Research and Application System of the Chang'e mission in a centralized manner. The raw data acquired by the ground stations is transmitted to and processed by the data preprocessing subsystem (DPS) for the production of PDS-compliant Level 0–Level 2 data products using established algorithms, with each product file being described by an attached label. All products with the same orbit number are then put together into a scheduled task for archiving, along with an XML archive list file recording all product files' properties such as file name, file size, etc. After receiving the archive request from the DPS, the data management subsystem (DMS) is invoked to parse the XML list file, to validate all the claimed files and their compliance with PDS using a prebuilt data dictionary, and then to extract the metadata of each data product file from its PDS label and the fields of its normalized filename. Various requirements of data management, retrieval, distribution and application can be well met using flexible combinations of the rich metadata empowered by PDS.
In the forthcoming CE-5 mission, the design of data structures and procedures will be updated from PDS version 3, used in the previous CE-1, CE-2 and CE-3 missions, to the new version 4. The main changes are: 1) a dedicated detached XML label will be used to describe the corresponding scientific data acquired by the 4 instruments carried; the XML parsing framework used in archive list validation will be reused for the label after some necessary adjustments; 2) all image data acquired by the panorama camera, landing camera and lunar mineralogical spectrometer will use an Array_2D_Image/Array_3D_Image object to store image data and a Table_Character object to store the image frame header; the tabulated data acquired by the lunar regolith penetrating radar will use a Table_Binary object to store measurements.
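The archive-list validation step, parsing an XML list of claimed product files into metadata, might look like this (the element names below are hypothetical placeholders, not the actual Chang'e schema):

```python
import xml.etree.ElementTree as ET

# A hypothetical archive list; the real Chang'e schema and tag names differ.
archive_xml = """
<archive_list orbit="1234">
  <product><file_name>CE5_L2_0001.img</file_name><file_size>2048</file_size></product>
  <product><file_name>CE5_L2_0002.img</file_name><file_size>4096</file_size></product>
</archive_list>
"""

def parse_archive_list(xml_text):
    """Extract (file_name, file_size) metadata for each claimed product,
    ready to be checked against the files actually delivered."""
    root = ET.fromstring(xml_text)
    return [(p.findtext("file_name"), int(p.findtext("file_size")))
            for p in root.iter("product")]
```

Each extracted pair can then be compared with the delivered files' names and on-disk sizes before the archive request is accepted.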
78 FR 67076 - Practices and Procedures
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-08
... as an attachment in any common electronic format, including word processing applications, HTML and PDF. If possible, commenters are asked to use a text format and not an image format for attachments...
Applications of satellite image processing to the analysis of Amazonian cultural ecology
NASA Technical Reports Server (NTRS)
Behrens, Clifford A.
1991-01-01
This paper examines the application of satellite image processing towards identifying and comparing resource exploitation among indigenous Amazonian peoples. The use of statistical and heuristic procedures for developing land cover/land use classifications from Thematic Mapper satellite imagery will be discussed along with actual results from studies of relatively small (100 - 200 people) settlements. Preliminary research indicates that analysis of satellite imagery holds great potential for measuring agricultural intensification, comparing rates of tropical deforestation, and detecting changes in resource utilization patterns over time.
NASA Astrophysics Data System (ADS)
Amalia, A.; Rachmawati, D.; Lestari, I. A.; Mourisa, C.
2018-03-01
Colposcopy is used primarily to diagnose precancerous and cancerous lesions because the procedure gives a magnified view of the tissues of the vagina and the cervix. However, the poor quality of colposcopy images sometimes makes them challenging for physicians to recognize and analyze. Implementations of image processing for identifying cervical cancer generally rely on complex classification or clustering methods. In this study, we wanted to show that cervical cancer can be identified by applying only edge detection to the colposcopy image. We implement and compare two edge-detection operators, the isotropic and Canny operators. The research methodology in this paper comprises image processing, training, and testing stages. In the image processing step, the colposcopy image is transformed by an nth-root power transformation to improve the detection result, followed by the edge detection process. Training is the process of labelling all dataset images with the cervical cancer stage; this process involved a pathologist as an expert in diagnosing the colposcopy images as a reference. Testing is the process of deciding the cancer stage classification by comparing the similarity of colposcopy images in the testing stage with the images resulting from the training process. We used 30 images as a dataset. Both operators achieve the same accuracy of 80%. The average running time of the Canny operator is 0.36 ms, while the isotropic operator takes 1.49 ms. The results show that the Canny operator outperforms the isotropic operator because it generates a more precise edge in less time.
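The isotropic operator used in the comparison can be sketched directly (a generic NumPy illustration using the usual 3x3 isotropic kernels with sqrt(2) centre weights; the paper's preprocessing and its Canny implementation are not reproduced):

```python
import numpy as np

def isotropic_edges(img, thresh):
    """Gradient magnitude with the 3x3 isotropic operator, then a binary
    edge map by thresholding."""
    r2 = np.sqrt(2.0)
    kx = np.array([[-1, 0, 1], [-r2, 0, r2], [-1, 0, 1]])
    ky = kx.T
    p = np.pad(img.astype(float), 1, mode="edge")
    # Manual 3x3 correlation via shifted slices (no SciPy dependency).
    win = lambda dy, dx: p[1 + dy:1 + dy + img.shape[0], 1 + dx:1 + dx + img.shape[1]]
    gx = sum(kx[1 + dy, 1 + dx] * win(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    gy = sum(ky[1 + dy, 1 + dx] * win(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    mag = np.hypot(gx, gy)
    return mag > thresh
```

Canny adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding on top of a similar gradient stage, which is why it yields thinner, more precise edges.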
NASA Astrophysics Data System (ADS)
Bethmann, F.; Jepping, C.; Luhmann, T.
2013-04-01
This paper reports on a method for the generation of synthetic image data for almost arbitrary static or dynamic 3D scenarios. Image data generation is based on pre-defined 3D objects, object textures, camera orientation data and their imaging properties. The procedure does not focus on the creation of photo-realistic images under consideration of complex imaging and reflection models as used by common computer graphics programs. In contrast, the method is designed with the main emphasis on geometrically correct synthetic images without radiometric impact. The calculation process includes photogrammetric distortion models, hence cameras with arbitrary geometric imaging characteristics can be applied. Consequently, image sets can be created that are consistent with mathematical photogrammetric models, to be used as sub-pixel accurate data for the assessment of high-precision photogrammetric processing methods. In the first instance the paper describes the process of image simulation under consideration of colour value interpolation, MTF/PSF and so on. Subsequently, the geometric quality of the synthetic images is evaluated with ellipse operators. Finally, simulated image sets are used to investigate matching and tracking algorithms as they have been developed at IAPG for deformation measurement in car safety testing.
Different binarization processes validated against manual counts of fluorescent bacterial cells.
Tamminga, Gerrit G; Paulitsch-Fuchs, Astrid H; Jansen, Gijsbert J; Euverink, Gert-Jan W
2016-09-01
State-of-the-art software methods (such as fixed-value approaches or statistical approaches) to create a binary image of fluorescent bacterial cells are not as accurate and precise as they should be for counting bacteria and measuring their area. To overcome these bottlenecks, we introduce biological significance to obtain a binary image from a greyscale microscopic image. Using our biological-significance approach we are able to automatically count about the same number of cells as an individual researcher would by manual/visual counting. Using the fixed-value or statistical approach to obtain a binary image leads to about 20% fewer cells in automatic counting. In our procedure we included the area measurements of the bacterial cells to determine the right parameters for background subtraction and threshold values. In an iterative process, the threshold and background-subtraction values are incremented until the number of particles smaller than a typical bacterial cell is less than the number of bacterial cells with a certain area. This research also shows that every image has a specific threshold with respect to the optical system, magnification and staining procedure, as well as the exposure time. The biological-significance approach shows that automatic counting can be performed with the same accuracy, precision and reproducibility as manual counting. The same approach can be used to count bacterial cells using different optical systems (Leica, Olympus and Navitar), magnification factors (200× and 400×), staining procedures (DNA (propidium iodide) and RNA (FISH)) and substrates (polycarbonate filter or glass). Copyright © 2016 Elsevier B.V. All rights reserved.
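The iterative threshold search described above might be sketched as follows (a schematic reconstruction assuming SciPy's connected-component labelling; the stopping criterion and the typical-cell-area parameter are simplified placeholders, not the published values):

```python
import numpy as np
from scipy import ndimage

def iterative_threshold(img, min_cell_area, t0=0, step=1):
    """Raise the threshold until fewer sub-cellular specks remain than
    objects of at least a typical bacterial-cell area (schematic version
    of the biological-significance criterion)."""
    t = t0
    while t < img.max():
        labels, n = ndimage.label(img > t)
        areas = np.bincount(labels.ravel())[1:]   # pixel area of each object
        small = int((areas < min_cell_area).sum())
        large = int((areas >= min_cell_area).sum())
        if n > 0 and small < large:
            return t, img > t
        t += step
    return t, img > t
```

The same loop structure can drive the background-subtraction parameter; in both cases the cell-area statistics, rather than a fixed grey value, decide when to stop.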
NASA Astrophysics Data System (ADS)
Lee, I.-Chieh
Shoreline delineation and shoreline change detection are expensive processes in terms of data source acquisition and manual shoreline delineation. These costs confine the frequency and interval of shoreline mapping. In this dissertation, a new shoreline delineation approach was developed targeting lower data source cost and reduced human labor. To lower the cost of data sources, we used public domain LiDAR data sets and satellite images to delineate shorelines without the requirement that the data sets be acquired simultaneously, which is a new concept in this field. To reduce the labor cost, we made improvements in classifying LiDAR points and satellite images. Analyzing shadow relations with topography to improve the satellite image classification performance is also a brand-new concept. The extracted shoreline of the proposed approach achieved an accuracy of 1.495 m RMSE, or 4.452 m at the 95% confidence level. Consequently, the proposed approach can successfully lower the cost and shorten the processing time, in other words, increase the shoreline mapping frequency, with a reasonable accuracy. However, the extracted shoreline may not compete with a shoreline extracted by aerial photogrammetric procedures in terms of accuracy; hence, this is a trade-off between cost and accuracy. The approach consists of three phases: first, a shoreline extraction procedure based mainly on LiDAR point cloud data with multispectral information from satellite images; second, an object-oriented shoreline extraction procedure to delineate the shoreline solely from satellite images, in this case WorldView-2 images; third, a shoreline integration procedure combining these two shorelines based on actual shoreline changes and physical terrain properties. The actual data source cost would only be from the acquisition of satellite images. On the other hand, only two processes needed human attention.
First, the shoreline within harbor areas needed to be manually connected; its length was less than 3% of the total shoreline length in our dataset. Second, the parameters for satellite image classification needed to be manually determined. The need for manpower was significantly less compared to ground surveying or aerial photogrammetry. The first phase of shoreline extraction utilized the Normalized Difference Vegetation Index (NDVI) and Mean-Shift segmentation on the coordinates (X, Y, Z) and attributes (multispectral bands from satellite images) of the LiDAR points to classify each LiDAR point into land or water surface. The boundary of the land points was then traced to create the shoreline. The second phase of shoreline extraction, solely from satellite images, utilized spectrum, NDVI, and shadow analysis to classify the satellite images into classes. These classes were then refined by mean-shift segmentation on the panchromatic band. By tracing the boundary of the water surface, the shoreline can be created. Since these two shorelines may represent different shoreline instances in time, the changes of the shoreline were evaluated first. Then an independent scenario analysis and procedure were performed for the shoreline under each of three conditions: in the process of erosion, in the process of accretion, and remaining unchanged. With these three conditions, we could analyze the actual terrain type and correct the classification errors to obtain a more accurate shoreline. Meanwhile, methods of evaluating the quality of shorelines were also discussed. The experiments showed that three indicators could best represent the quality of the shoreline: (1) shoreline accuracy, (2) land area difference between the extracted shoreline and the ground truth shoreline, and (3) the bias factor from shoreline quality metrics.
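The NDVI-based land/water step can be illustrated with a minimal sketch (the zero threshold is an illustrative default, not the dissertation's calibrated value):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel."""
    return (nir - red) / (nir + red + eps)

def classify_land_water(nir, red, threshold=0.0):
    """Label pixels as land (True) or water (False) by NDVI thresholding.
    Water absorbs near-infrared strongly, so its NDVI is typically negative."""
    return ndvi(nir, red) > threshold
```

In the dissertation's pipeline this spectral rule is only one attribute among several (elevation, shadow analysis, mean-shift segments) feeding the final land/water decision.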
Automatic tracking of laparoscopic instruments for autonomous control of a cameraman robot.
Khoiy, Keyvan Amini; Mirbagheri, Alireza; Farahmand, Farzam
2016-01-01
An automated instrument tracking procedure was designed and developed for autonomous control of a cameraman robot during laparoscopic surgery. The procedure was based on an innovative marker-free segmentation algorithm for detecting the tip of the surgical instruments in laparoscopic images. A compound measure of Saturation and Value components of HSV color space was incorporated that was enhanced further using the Hue component and some essential characteristics of the instrument segment, e.g., crossing the image boundaries. The procedure was then integrated into the controlling system of the RoboLens cameraman robot, within a triple-thread parallel processing scheme, such that the tip is always kept at the center of the image. Assessment of the performance of the system on prerecorded real surgery movies revealed an accuracy rate of 97% for high quality images and about 80% for those suffering from poor lighting and/or blood, water and smoke noises. A reasonably satisfying performance was also observed when employing the system for autonomous control of the robot in a laparoscopic surgery phantom, with a mean time delay of 200 ms. It was concluded that with further developments, the proposed procedure can provide a practical solution for autonomous control of cameraman robots during laparoscopic surgery operations.
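The compound Saturation/Value measure is not specified in detail here, but the underlying idea, that bright yet unsaturated pixels are instrument candidates, can be sketched as follows (an illustrative weighting, not the paper's exact measure):

```python
import numpy as np

def instrument_score(rgb):
    """Per-pixel score for metallic-instrument candidates from an RGB image
    with channels in [0, 1]. Instruments tend to be bright but unsaturated,
    so low Saturation is combined with high Value; this particular V*(1-S)
    weighting is illustrative, not the published compound measure."""
    v = rgb.max(axis=-1)                                         # HSV Value
    mn = rgb.min(axis=-1)
    s = np.where(v > 0, (v - mn) / np.where(v > 0, v, 1), 0.0)   # HSV Saturation
    return v * (1.0 - s)
```

Thresholding such a score yields the candidate segment, which the paper then refines with the Hue component and shape cues such as crossing the image boundary.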
NASA Technical Reports Server (NTRS)
Roth, Don J.; Hendricks, J. Lynne; Whalen, Mike F.; Bodis, James R.; Martin, Katherine
1996-01-01
This article describes the commercial implementation of ultrasonic velocity imaging methods developed and refined at NASA Lewis Research Center on the Sonix c-scan inspection system. Two velocity imaging methods were implemented: thickness-based and non-thickness-based reflector plate methods. The article demonstrates capabilities of the commercial implementation and gives the detailed operating procedures required for Sonix customers to achieve optimum velocity imaging results. This commercial implementation of velocity imaging provides a 100x speed increase in scanning and processing over the lab-based methods developed at LeRC. The significance of this cooperative effort is that the aerospace and other materials development-intensive industries which use extensive ultrasonic inspection for process control and failure analysis will now have an alternative, highly accurate imaging method commercially available.
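The thickness-based method rests on the standard pulse-echo relation, which can be written as a one-liner (a textbook illustration, not the Sonix implementation):

```python
def pulse_echo_velocity(thickness_m, t_front_s, t_back_s):
    """Thickness-based ultrasonic velocity: the pulse travels the sample
    thickness twice (down and back) between the front- and back-surface
    echoes, so v = 2 * d / (t_back - t_front)."""
    return 2.0 * thickness_m / (t_back_s - t_front_s)
```

A velocity image is then built by repeating this estimate at every scan position; the reflector-plate variants instead infer the transit time from shifts of an echo off a plate behind the sample.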
Improving the Performance of the Prony Method Using a Wavelet Domain Filter for MRI Denoising
Jaramillo, Rodney; Lentini, Marianela; Paluszny, Marco
2014-01-01
The Prony methods are used for exponential fitting. We use a variant of the Prony method for abnormal brain tissue detection in sequences of T 2 weighted magnetic resonance images. Here, MR images are considered to be affected only by Rician noise, and a new wavelet domain bilateral filtering process is implemented to reduce the noise in the images. This filter is a modification of Kazubek's algorithm and we use synthetic images to show the ability of the new procedure to suppress noise and compare its performance with respect to the original filter, using quantitative and qualitative criteria. The tissue classification process is illustrated using a real sequence of T 2 MR images, and the filter is applied to each image before using the variant of the Prony method. PMID:24834108
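The exponential-fitting core of a Prony-type method can be sketched via linear prediction (a generic Prony sketch, not the paper's specific variant):

```python
import numpy as np

def prony_decay_rates(signal, order):
    """Estimate the z_k in s_n = sum_k c_k * z_k**n by linear prediction:
    solve for LP coefficients with least squares, then take the roots of
    the characteristic polynomial. (A generic Prony sketch.)"""
    s = np.asarray(signal, dtype=float)
    n = len(s)
    # Model: s[m] = a_1 s[m-1] + ... + a_p s[m-p]  for m = p .. n-1
    A = np.column_stack([s[order - k - 1:n - k - 1] for k in range(order)])
    b = s[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Roots of z^p - a_1 z^(p-1) - ... - a_p give the decay factors z_k.
    return np.roots(np.concatenate(([1.0], -a)))
```

For T2-weighted MR sequences, each recovered decay factor z corresponds to a relaxation rate, which is why denoising the image sequence beforehand (here, with the wavelet-domain bilateral filter) directly improves the tissue classification.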
MATHEMATICAL METHODS IN MEDICAL IMAGE PROCESSING
ANGENENT, SIGURD; PICHON, ERIC; TANNENBAUM, ALLEN
2013-01-01
In this paper, we describe some central mathematical problems in medical imaging. The subject has been undergoing rapid changes driven by better hardware and software. Much of the software is based on novel methods utilizing geometric partial differential equations in conjunction with standard signal/image processing techniques as well as computer graphics facilitating man/machine interactions. As part of this enterprise, researchers have been trying to base biomedical engineering principles on rigorous mathematical foundations for the development of software methods to be integrated into complete therapy delivery systems. These systems support the more effective delivery of many image-guided procedures such as radiation therapy, biopsy, and minimally invasive surgery. We will show how mathematics may impact some of the main problems in this area, including image enhancement, registration, and segmentation. PMID:23645963
Image processing system for the measurement of timber truck loads
NASA Astrophysics Data System (ADS)
Carvalho, Fernando D.; Correia, Bento A. B.; Davies, Roger; Rodrigues, Fernando C.; Freitas, Jose C. A.
1993-01-01
The paper industry uses wood as its raw material. To determine the quantity of wood in a pile of sawn tree trunks, every truck load entering the plant is measured for volume. The objective of this procedure is to know the solid volume of wood stocked in the plant. Weighing the tree trunks has its own problems, due to their high capacity for absorbing water. Image processing techniques were used to evaluate the volume of a truck load of logs. The system is based on a PC equipped with an image processing board using data flow processors. Three cameras allow image acquisition of the sides and rear of the truck. The lateral images contain information about the sectional area of the logs, and the rear image contains information about their length. The machine vision system and the implemented algorithms are described. The results being obtained with the industrial prototype now installed in a paper mill are also presented.
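The volume estimate implied above, sectional area from the lateral views times length from the rear view, can be sketched as follows (the scale factors and the simple area-times-length model are illustrative assumptions, not the system's calibrated algorithm):

```python
import numpy as np

def load_volume(side_mask, rear_mask, m_per_px_side, m_per_px_rear):
    """Estimate solid wood volume as (sectional area from the lateral view)
    x (load length from the rear view). Masks are binary images of the wood
    regions; per-pixel metric scales come from camera calibration."""
    area_m2 = side_mask.sum() * m_per_px_side ** 2
    length_m = rear_mask.any(axis=0).sum() * m_per_px_rear   # horizontal extent
    return area_m2 * length_m
```

A production system would additionally correct for air gaps between logs and for perspective, which this sketch ignores.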
Medical image analysis with artificial neural networks.
Jiang, J; Trundle, P; Ren, J
2010-12-01
Given that neural networks have been widely reported in the research community of medical imaging, we provide a focused literature survey on recent neural network developments in computer-aided diagnosis, medical image segmentation and edge detection towards visual content analysis, and medical image registration for its pre-processing and post-processing, with the aims of increasing awareness of how neural networks can be applied to these areas and to provide a foundation for further research and practical development. Representative techniques and algorithms are explained in detail to provide inspiring examples illustrating: (i) how a known neural network with fixed structure and training procedure could be applied to resolve a medical imaging problem; (ii) how medical images could be analysed, processed, and characterised by neural networks; and (iii) how neural networks could be expanded further to resolve problems relevant to medical imaging. In the concluding section, a highlight of comparisons among many neural network applications is included to provide a global view on computational intelligence with neural networks in medical imaging. Copyright © 2010 Elsevier Ltd. All rights reserved.
Deng, Yufeng; Rouze, Ned C.; Palmeri, Mark L.; Nightingale, Kathryn R.
2017-01-01
Ultrasound elasticity imaging has been developed over the last decade to estimate tissue stiffness. Shear wave elasticity imaging (SWEI) quantifies tissue stiffness by measuring the speed of propagating shear waves following acoustic radiation force excitation. This work presents the sequencing and data processing protocols of SWEI using a Verasonics system. The selection of the sequence parameters in a Verasonics programming script is discussed in detail. The data processing pipeline to calculate group shear wave speed (SWS), including tissue motion estimation, data filtering, and SWS estimation is demonstrated. In addition, the procedures for calibration of beam position, scanner timing, and transducer face heating are provided to avoid SWS measurement bias and transducer damage. PMID:28092508
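The group SWS estimation step described above can be sketched in a few lines: the shear wave's arrival time at several lateral positions is regressed against position, and the inverse slope gives the speed. This is an illustrative sketch under simple assumptions (the function and variable names are ours), not the authors' Verasonics processing code.

```python
def estimate_sws(lateral_mm, arrival_ms):
    """Estimate group shear wave speed (m/s) by least-squares regression
    of wave arrival time against lateral position, a common group-SWS
    estimator. Illustrative sketch only."""
    n = len(lateral_mm)
    mx = sum(lateral_mm) / n
    my = sum(arrival_ms) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(lateral_mm, arrival_ms))
    sxx = sum((x - mx) ** 2 for x in lateral_mm)
    slope_ms_per_mm = sxy / sxx           # ms of delay per mm of travel
    return 1.0 / slope_ms_per_mm          # mm/ms equals m/s

# Example: a wave needing 0.5 ms per mm of travel propagates at 2 m/s
positions = [2.0, 3.0, 4.0, 5.0]          # mm from the push location
arrivals = [1.0, 1.5, 2.0, 2.5]           # ms (e.g., time-to-peak displacement)
print(estimate_sws(positions, arrivals))  # → 2.0
```

In practice the arrival times would come from the tissue-motion estimation and filtering stages that the paper describes, which are omitted here.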
Quantitative optical diagnostics in pathology recognition and monitoring of tissue reaction to PDT
NASA Astrophysics Data System (ADS)
Kirillin, Mikhail; Shakhova, Maria; Meller, Alina; Sapunov, Dmitry; Agrba, Pavel; Khilov, Alexander; Pasukhin, Mikhail; Kondratieva, Olga; Chikalova, Ksenia; Motovilova, Tatiana; Sergeeva, Ekaterina; Turchin, Ilya; Shakhova, Natalia
2017-07-01
Optical coherence tomography (OCT) is currently being actively introduced into clinical practice. Besides diagnostics, it can be efficiently employed for treatment monitoring, allowing for timely correction of the treatment procedure. In monitoring of photodynamic therapy (PDT), the traditionally employed fluorescence imaging (FI) can benefit from complementary use of OCT. Additional diagnostic efficiency can be derived from numerical processing of optical diagnostic data, which provides more information than visual evaluation. In this paper we report on the application of OCT together with numerical processing for clinical diagnostics in gynecology and otolaryngology, for monitoring of PDT in otolaryngology, and on OCT and FI applications in clinical and aesthetic dermatology. Numerical processing and quantification of images increases diagnostic accuracy. Keywords: optical coherence tomography, fluorescence imaging, photodynamic therapy
Techniques for using diazo materials in remote sensor data analysis
NASA Technical Reports Server (NTRS)
Whitebay, L. E.; Mount, S.
1978-01-01
The use of data derived from LANDSAT is facilitated when special products or computer-enhanced images can be analyzed. However, the facilities required to produce and analyze such products prevent many users from taking full advantage of the LANDSAT data. A simple, low-cost method is presented by which users can make their own specially enhanced composite images from the four-band black-and-white LANDSAT images by using the diazo process. The diazo process is described, and a detailed procedure for making various color composites, such as color infrared, false natural color, and false color, is provided. The advantages and limitations of the diazo process are discussed. A brief discussion of the interpretation of diazo composites for land use mapping, with some typical examples, is included.
32 CFR 513.2 - Administrative procedures for processing complaints.
Code of Federal Regulations, 2010 CFR
2010-07-01
... OF CIVIL AUTHORITIES AND PUBLIC RELATIONS INDEBTEDNESS OF MILITARY PERSONNEL § 513.2 Administrative... affects the Army's public image. Also, explain that the willful failure to resolve unpaid debts may result...
How much can a single webcam tell to the operation of a water system?
NASA Astrophysics Data System (ADS)
Giuliani, Matteo; Castelletti, Andrea; Fedorov, Roman; Fraternali, Piero
2017-04-01
Recent advances in environmental monitoring are making a wide range of hydro-meteorological data available with a great potential to enhance understanding, modelling and management of environmental processes. Despite this progress, continuous monitoring of highly spatiotemporal heterogeneous processes is not well established yet, especially in inaccessible sites. In this context, the unprecedented availability of user-generated data on the web might open new opportunities for enhancing real-time monitoring and modeling of environmental systems based on data that are public, low-cost, and spatiotemporally dense. In this work, we focus on snow and contribute a novel crowdsourcing procedure for extracting snow-related information from public web images, either produced by users or generated by touristic webcams. A fully automated process fetches mountain images from multiple sources, identifies the peaks present therein, and estimates virtual snow indexes representing a proxy of the snow-covered area. The operational value of the obtained virtual snow indexes is then assessed for a real-world water-management problem, where we use these indexes for informing the daily control of a regulated lake supplying water for multiple purposes. Numerical results show that such information is effective in extending the anticipation capacity of the lake operations, ultimately improving the system performance. Our procedure has the potential for complementing traditional snow-related information, minimizing costs and efforts for obtaining the virtual snow indexes and, at the same time, maximizing the portability of the procedure to several locations where such public images are available.
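A virtual snow index of the kind described could, under simple assumptions, be computed as the fraction of bright pixels in the mountain region below the detected skyline. The function name, brightness threshold, and input format here are illustrative assumptions, not the paper's actual crowdsourcing pipeline.

```python
def virtual_snow_index(pixels, skyline_row, bright=200):
    """Proxy for snow-covered area: fraction of mountain pixels (rows at
    and below the detected skyline) brighter than a threshold.
    Illustrative sketch; peak identification and skyline detection,
    which the paper automates, are assumed done upstream."""
    mountain = [v for row in pixels[skyline_row:] for v in row]
    if not mountain:
        return 0.0
    return sum(v >= bright for v in mountain) / len(mountain)

# 4x4 grayscale frame; skyline at row 1, 3 of 12 mountain pixels snow-bright
frame = [
    [90, 90, 90, 90],        # sky
    [210, 40, 40, 40],
    [220, 230, 40, 40],
    [40, 40, 40, 40],
]
print(virtual_snow_index(frame, skyline_row=1))  # → 0.25
```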
GPU-Based Real-Time Volumetric Ultrasound Image Reconstruction for a Ring Array
Choe, Jung Woo; Nikoozadeh, Amin; Oralkan, Ömer; Khuri-Yakub, Butrus T.
2014-01-01
Synthetic phased array (SPA) beamforming with Hadamard coding and aperture weighting is an optimal option for real-time volumetric imaging with a ring array, a particularly attractive geometry in intracardiac and intravascular applications. However, the imaging frame rate of this method is limited by the immense computational load required in synthetic beamforming. For fast imaging with a ring array, we developed graphics processing unit (GPU)-based, real-time image reconstruction software that exploits massive data-level parallelism in beamforming operations. The GPU-based software reconstructs and displays three cross-sectional images at 45 frames per second (fps). This frame rate is 4.5 times higher than that for our previously-developed multi-core CPU-based software. In an alternative imaging mode, it shows one B-mode image rotating about the axis and its maximum intensity projection (MIP), processed at a rate of 104 fps. This paper describes the image reconstruction procedure on the GPU platform and presents the experimental images obtained using this software. PMID:23529080
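The maximum intensity projection mentioned above is a good example of why such reconstruction maps well onto a GPU: every output pixel is an independent reduction along one ray. As a hedged illustration, the same reduction is shown here with NumPy on the CPU; this is not the authors' GPU software.

```python
import numpy as np

def max_intensity_projection(volume, axis=0):
    """Maximum intensity projection (MIP) of a 3-D volume: each output
    pixel is the brightest voxel along one ray. The per-pixel
    independence is the data-level parallelism that GPU beamforming and
    MIP kernels exploit."""
    return volume.max(axis=axis)

vol = np.zeros((4, 3, 3))
vol[2, 1, 1] = 7.0           # a single bright voxel
mip = max_intensity_projection(vol, axis=0)
print(mip[1, 1])             # → 7.0
```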
Shinbane, Jerold S; Saxon, Leslie A
Advances in imaging technology have led to a paradigm shift from planning cardiovascular procedures and surgeries with the actual patient in a "brick and mortar" hospital to utilization of the digitalized patient in the virtual hospital. Cardiovascular computed tomographic angiography (CCTA) and cardiovascular magnetic resonance (CMR) provide a digitalized 3-D representation of individual patient anatomy and physiology that serves as an avatar, allowing for virtual delineation of the most optimal approaches to cardiovascular procedures and surgeries prior to actual hospitalization. Pre-hospitalization reconstruction and analysis of anatomy and pathophysiology previously accessible only during the actual procedure could potentially limit the intrinsic risks related to time in the operating room, cardiac procedural laboratory, and overall hospital environment. Although applications are specific to areas of cardiovascular specialty focus, there are unifying themes related to the utilization of these technologies. The virtual patient avatar can also be used for procedural planning, computational modeling of anatomy, simulation of predicted therapeutic results, printing of 3-D models, and augmentation of real-time procedural performance. Examples of the above techniques are at various stages of development for application to the spectrum of cardiovascular disease processes, including percutaneous, surgical, and hybrid minimally invasive interventions. A multidisciplinary approach within medicine and engineering is necessary for the creation of robust algorithms for maximal utilization of the virtual patient avatar in the digital medical center. Utilization of the virtual advanced cardiac imaging patient avatar will play an important role in the virtual health care system.
Although there has been a rapid proliferation of early data, advanced imaging applications require further assessment and validation of accuracy, reproducibility, standardization, safety, efficacy, quality, cost effectiveness, and overall value to medical care. Copyright © 2018 Society of Cardiovascular Computed Tomography. Published by Elsevier Inc. All rights reserved.
A multiscale Markov random field model in wavelet domain for image segmentation
NASA Astrophysics Data System (ADS)
Dai, Peng; Cheng, Yu; Wang, Shengchun; Du, Xinyu; Wu, Dan
2017-07-01
The human vision system has abilities for feature detection, learning, and selective attention, with properties of hierarchy and bidirectional connection in the form of neural populations. In this paper, a multiscale Markov random field (MRF) model in the wavelet domain is proposed by mimicking some image processing functions of the vision system. For an input scene, the model provides sparse representations using wavelet transforms and extracts topological organization using the MRF. In addition, the hierarchy property of the vision system is simulated using a pyramid framework. There are two information flows in the model: a bottom-up procedure to extract input features and a top-down procedure to provide feedback control. The two procedures are controlled by just two pyramidal parameters, and some Gestalt laws are also integrated implicitly. Equipped with such biologically inspired properties, the model can be used to accomplish different image segmentation tasks, such as edge detection and region segmentation.
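The multiscale scaffolding of such a model can be sketched as a pyramid built by 2x2 block averaging (the Haar approximation band). This shows only the pyramid framework, not the wavelet-domain MRF labeling itself, and the function name is ours.

```python
import numpy as np

def build_pyramid(image, levels):
    """Build a coarse-to-fine pyramid by 2x2 block averaging, i.e. the
    Haar wavelet approximation band at each scale. Illustrative sketch
    of the multiscale framework only; the MRF energy and its top-down
    feedback are not shown."""
    pyramid = [image]
    for _ in range(levels):
        img = pyramid[-1]
        h, w = img.shape
        img = img[: h - h % 2, : w - w % 2]       # crop to even size
        coarse = (img[0::2, 0::2] + img[0::2, 1::2]
                  + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(coarse)
    return pyramid

levels = build_pyramid(np.ones((8, 8)), 2)
print([p.shape for p in levels])  # → [(8, 8), (4, 4), (2, 2)]
```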
Inter-laboratory comparison of the in vivo comet assay including three image analysis systems.
Plappert-Helbig, Ulla; Guérard, Melanie
2015-12-01
To compare the extent of potential inter-laboratory variability and the influence of different comet image analysis systems, in vivo comet experiments were conducted using the genotoxicants ethyl methanesulfonate and methyl methanesulfonate. Tissue samples from the same animals were processed and analyzed-including independent slide evaluation by image analysis-in two laboratories with extensive experience in performing the comet assay. The analysis revealed low inter-laboratory experimental variability. Neither the use of different image analysis systems, nor the staining procedure of DNA (propidium iodide vs. SYBR® Gold), considerably impacted the results or sensitivity of the assay. In addition, relatively high stability of the staining intensity of propidium iodide-stained slides was found in slides that were refrigerated for over 3 months. In conclusion, following a thoroughly defined protocol and standardized routine procedures ensures that the comet assay is robust and generates comparable results between different laboratories. © 2015 Wiley Periodicals, Inc.
Adaptive segmentation of nuclei in H&E stained tendon microscopy
NASA Astrophysics Data System (ADS)
Chuang, Bo-I.; Wu, Po-Ting; Hsu, Jian-Han; Jou, I.-Ming; Su, Fong-Chin; Sun, Yung-Nien
2015-12-01
Tendinopathy has become a common clinical issue in recent years. In most cases, such as trigger finger or tennis elbow, the pathological change can be observed under H and E stained tendon microscopy. However, qualitative analysis is subjective, so the results depend heavily on the observers. We developed an automatic segmentation procedure that segments and counts the nuclei in H and E stained tendon microscopy quickly and precisely. The procedure first determines the complexity of each image and then segments the nuclei. For complex images, the proposed method adopts sampling-based thresholding to segment the nuclei, while for simple images, Laplacian-based thresholding is employed to re-segment the nuclei more accurately. In the experiments, the proposed method was compared with results outlined by experts. The nuclei count obtained by the proposed method is close to the experts' count, and its processing time is much shorter.
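As a hedged stand-in for the thresholding step, the classic Otsu criterion below picks the global threshold that maximizes between-class variance. The paper's own sampling-based and Laplacian-based thresholding variants are not shown; this only illustrates the general idea of separating dark nuclei from a brighter background.

```python
import numpy as np

def otsu_threshold(gray):
    """Global threshold maximizing between-class variance (Otsu's
    method) on an 8-bit image. Illustrative stand-in for the adaptive
    thresholding steps described in the abstract."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = hist[:t].sum() / total            # background weight
        w1 = 1.0 - w0                          # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:t] * np.arange(t)).sum() / hist[:t].sum()
        m1 = (hist[t:] * np.arange(t, 256)).sum() / hist[t:].sum()
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Dark nuclei (~30) on a bright background (~200) separate cleanly
img = np.full((10, 10), 200, dtype=np.uint8)
img[2:4, 2:4] = 30
t = otsu_threshold(img)
print(30 < t < 200)  # → True
```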
Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko
2010-01-01
Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574
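The Dice Similarity Coefficient reported in this study is a standard overlap metric, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch (masks flattened to 0/1 lists for simplicity; not the authors' evaluation code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2*|A intersect B| / (|A| + |B|). Returns 1.0 for two empty
    masks by convention."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(dice_coefficient(a, b))  # → 0.6666666666666666
```

A DSC of 0.97, as measured for the optimized non-rigid registration, indicates near-complete overlap of the registered liver contours.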
Different methods of image segmentation in the process of meat marbling evaluation
NASA Astrophysics Data System (ADS)
Ludwiczak, A.; Ślósarz, P.; Lisiak, D.; Przybylak, A.; Boniecki, P.; Stanisz, M.; Koszela, K.; Zaborowicz, M.; Przybył, K.; Wojcieszak, D.; Janczak, D.; Bykowska, M.
2015-07-01
Assessment of the level of marbling in meat based on digital images is becoming increasingly popular as computer vision tools grow more advanced. However, when muscle cross-sections are used as the data source for marbling evaluation, several problems remain. An accurate method is needed that would facilitate this evaluation procedure and increase its accuracy. The presented research compared the effect of different image segmentation tools in terms of their usefulness for evaluating meat marbling on anatomical muscle cross-sections. This study is an initial trial in the presented field of research and an introduction to the processing and analysis of ultrasonic images.
Duval, Joseph S.
1985-01-01
Because the display and interpretation of satellite and aircraft remote-sensing data make extensive use of color film products, accurate reproduction of the color images is important. To achieve accurate color reproduction, the exposure and chemical processing of the film must be monitored and controlled. By using a combination of sensitometry, densitometry, and transfer functions that control film response curves, all of the different steps in the making of film images can be monitored and controlled. Because a sensitometer produces a calibrated exposure, the resulting step wedge can be used to monitor the chemical processing of the film. Step wedges put on film by image recording machines provide a means of monitoring the film exposure and color balance of the machines.
Digital methods of recording color television images on film tape
NASA Astrophysics Data System (ADS)
Krivitskaya, R. Y.; Semenov, V. M.
1985-04-01
Three methods are now available for recording color television images on film tape, directly or after appropriate signal processing. Conventional recording of images from the screens of three kinescopes with synthetic-crystal face plates is still most effective for high fidelity. This method has been improved by digital preprocessing of the brightness and color-difference signals. Frame-by-frame storage of these signals in memory in digital form is followed by gamma and aperture correction and electronic correction of crossover distortions in the color layers of the film, with fixing in accordance with specific emulsion procedures. The newer method of recording color television images with line arrays of light-emitting diodes involves dichroic superposing mirrors and a movable scanning mirror. This method allows the use of standard movie cameras, simplifies interlacing-to-linewise conversion and the mechanical equipment, and lengthens exposure time while shortening recording time. The latest image transform method requires an audio-video recorder, a memory disk, a digital computer, and a decoder. The 9-step procedure includes preprocessing the total color television signal with reduction of noise level and time errors, followed by frame-frequency conversion and setting of the number of lines. The total signal is then resolved into its brightness and color-difference components, and phase errors and image blurring are also reduced. After extraction of the R, G, B signals and colorimetric matching of the TV camera and film tape, the simultaneous R, G, B signals are converted from interlacing to sequential triads of color-quotient frames with linewise scanning at triple frequency. The color-quotient signals are recorded with an electron beam on a smoothly moving black-and-white film tape under vacuum. While digital techniques improve signal quality and simplify process control, not requiring stabilization of circuits, image processing is still analog.
36 CFR § 1238.14 - What are the microfilming requirements for permanent and unscheduled records?
Code of Federal Regulations, 2013 CFR
2013-07-01
... processing procedures in ANSI/AIIM MS1 and ANSI/AIIM MS23 (both incorporated by reference, see § 1238.5). (d... reference, see § 1238.5). (2) Background density of images. Agencies must use the background ISO standard... densities for images of documents are as follows: Classification Description of document Background density...
Development of a Hampton University Program for Novel Breast Cancer Imaging and Therapy Research
2013-04-01
intracavitary brachytherapy procedures during laboratory pre-clinical imaging and dosimetry equipment testing, calibration and data processing, in collaboration... electronics and detector instrumentation development; 4) breast phantom construction and implantation; 5) laboratory pre-clinical device testing...such as the ionization chamber, diode, radiographic verification films and thermoluminescent dosimeters (TLD), but the scintillator fiber detectors
Clean Up Your Image: A Beginner's Guide to Scanning and Photoshop
ERIC Educational Resources Information Center
Stitzer, Michael S.
2005-01-01
In this article, the author addresses the key steps of scanning and illustrates the process with screen shots taken from a Macintosh G4 Powerbook computer running OSX and Adobe Photoshop 7.0. After reviewing scanning procedures, the author describes how to use Photoshop 7.0 to manipulate a scanned image. This activity gives students a good general…
Implications of electronic health record downtime: an analysis of patient safety event reports.
Larsen, Ethan; Fong, Allan; Wernz, Christian; Ratwani, Raj M
2018-02-01
We sought to understand the types of clinical processes, such as image and medication ordering, that are disrupted during electronic health record (EHR) downtime periods by analyzing the narratives of patient safety event report data. From a database of 80 381 event reports, 76 reports were identified as explicitly describing a safety event associated with an EHR downtime period. These reports were analyzed and categorized based on a developed code book to identify the clinical processes that were impacted by downtime. We also examined whether downtime procedures were in place and followed. The reports were coded into categories related to their reported clinical process: Laboratory, Medication, Imaging, Registration, Patient Handoff, Documentation, History Viewing, Delay of Procedure, and General. A majority of reports (48.7%, n = 37) were associated with lab orders and results, followed by medication ordering and administration (14.5%, n = 11). Incidents commonly involved patient identification and communication of clinical information. A majority of reports (46%, n = 35) indicated that downtime procedures either were not followed or were not in place. Only 27.6% of incidents (n = 21) indicated that downtime procedures were successfully executed. Patient safety report data offer a lens into EHR downtime-related safety hazards. Important areas of risk during EHR downtime periods were patient identification and communication of clinical information; these should be a focus of downtime procedure planning to reduce safety hazards. EHR downtime events pose patient safety hazards, and we highlight critical areas for downtime procedure improvement. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Aligning HST Images to Gaia: A Faster Mosaicking Workflow
NASA Astrophysics Data System (ADS)
Bajaj, V.
2017-11-01
We present a fully programmatic workflow for aligning HST images using the high-quality astrometry provided by Gaia Data Release 1. Code provided in a Jupyter Notebook works through this procedure, including parsing the data to determine the query area parameters, querying Gaia for the coordinate catalog, and using the catalog with TweakReg as the reference catalog. This workflow greatly simplifies the normally time-consuming process of aligning HST images, especially those taken as part of mosaics.
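The Gaia query step can be expressed as an ADQL cone search around the image footprint center. The sketch below only builds the query string; the table and column names follow Gaia archive conventions, but treat this as an illustration of the notebook's query step, not its exact code.

```python
def gaia_cone_query(ra_deg, dec_deg, radius_deg, table="gaiadr1.gaia_source"):
    """Build an ADQL cone-search query for Gaia sources around a field
    center. Illustrative sketch; the notebook derives the center and
    radius from the HST image WCS, which is omitted here."""
    return (
        "SELECT ra, dec, phot_g_mean_mag "
        f"FROM {table} "
        "WHERE 1=CONTAINS(POINT('ICRS', ra, dec), "
        f"CIRCLE('ICRS', {ra_deg}, {dec_deg}, {radius_deg}))"
    )

q = gaia_cone_query(150.1, 2.2, 0.05)
print("gaiadr1.gaia_source" in q)  # → True
```

The RA/Dec catalog returned by such a query can then be handed to TweakReg as an external reference catalog for fitting the alignment solution.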
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hakime, Antoine, E-mail: thakime@yahoo.com; Yevich, Steven; Tselikas, Lambros
Purpose: To assess whether fusion imaging-guided percutaneous microwave ablation (MWA) can improve visibility and targeting of liver metastases that were deemed inconspicuous on ultrasound (US). Materials and Methods: MWA of liver metastases not judged conspicuous enough on US was performed under CT/US fusion imaging guidance. The conspicuity before and after fusion imaging was graded on a five-point scale, and significance was assessed by the Wilcoxon test. Technical success, procedure time, and procedure-related complications were evaluated. Results: A total of 35 patients with 40 liver metastases (mean size 1.3 ± 0.4 cm) were enrolled. Image fusion improved conspicuity sufficiently to allow fusion-targeted MWA in 33 patients. The time required for image fusion processing and tumor identification averaged 10 ± 2.1 min (range 5–14). Initial conspicuity on US by inclusion criteria was 1.2 ± 0.4 (range 0–2), while conspicuity after localization on fusion imaging was 3.5 ± 1 (range 1–5, p < 0.001). The technical success rate was 83% (33/40) in intention-to-treat analysis and 100% in analysis of treated tumors. There were no major procedure-related complications. Conclusions: Fusion imaging broadens the scope of US-guided MWA to metastases lacking adequate conspicuity on conventional US. Fusion imaging is an effective tool for increasing the conspicuity of liver metastases initially deemed non-visualizable on conventional US imaging.
Concepts for on-board satellite image registration, volume 1
NASA Technical Reports Server (NTRS)
Ruedger, W. H.; Daluge, D. R.; Aanstoos, J. V.
1980-01-01
The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data that have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems, such as the Global Positioning System (GPS), together with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed. Emphasis is placed on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on board the satellite.
An image-processing software package: UU and Fig for optical metrology applications
NASA Astrophysics Data System (ADS)
Chen, Lujie
2013-06-01
Modern optical metrology applications are largely supported by computational methods, such as phase shifting [1], Fourier transform [2], digital image correlation [3], and camera calibration [4], in which image processing is a critical and indispensable component. While it is not difficult to obtain a wide variety of image-processing programs from the internet, few cater to the relatively specialized area of optical metrology. This paper introduces an image-processing software package, UU (data processing) and Fig (data rendering), that incorporates many useful functions for processing optical metrological data. The cross-platform programs UU and Fig are developed based on wxWidgets; at the time of writing, they have been tested on Windows, Linux, and Mac OS. The user interface is designed to offer precise control of the underlying processing procedures in a scientific manner. The data input/output mechanism is designed to accommodate diverse file formats and to facilitate interaction with other independent programs. In terms of robustness, although the software was initially developed for personal use, it is comparable in stability and accuracy to most commercial software of a similar nature. In addition to functions for optical metrology, the software package has a rich collection of useful tools in the following areas: real-time image streaming from USB and GigE cameras, computational geometry, computer vision, fitting of data, 3D image processing, vector image processing, precision device control (rotary stage, PZT stage, etc.), point-cloud-to-surface reconstruction, volume rendering, and batch processing. The software package is currently used in a number of universities for teaching and research.
Azizian, Mahdi; Khoshnam, Mahta; Najmaei, Nima; Patel, Rajni V
2014-09-01
Intra-operative imaging is widely used to provide visual feedback to a clinician when he/she performs a procedure. In visual servoing, surgical instruments and parts of tissue/body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. A comprehensive search of electronic databases was completed for the period 2000-2013 to provide a survey of the visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. Detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images are presented and summarized in tables and diagrams. The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field. Copyright © 2013 John Wiley & Sons, Ltd.
The application of time series models to cloud field morphology analysis
NASA Technical Reports Server (NTRS)
Chin, Roland T.; Jau, Jack Y. C.; Weinman, James A.
1987-01-01
A modeling method for the quantitative description of remotely sensed cloud field images is presented. A two-dimensional texture modeling scheme based on one-dimensional time series procedures is adopted for this purpose. The time series procedure used is the seasonal autoregressive moving average (ARMA) process of Box and Jenkins. Cloud field properties such as directionality, clustering, and cloud coverage can be retrieved by this method. It has been demonstrated that a cloud field image can be quantitatively defined by a small set of parameters and that synthesized surrogates can be reconstructed from these model parameters. This method enables cloud climatology to be studied quantitatively.
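The simplest member of the ARMA family, an AR(1) process, illustrates the kind of parameter fitting involved. The sketch below estimates the AR(1) coefficient from sample moments; the paper's two-dimensional seasonal ARMA fitting is far more involved, so this is only a one-dimensional building block.

```python
def fit_ar1(series):
    """Estimate phi in x[t] = phi*x[t-1] + e[t] via the ratio of the
    lag-1 to lag-0 (uncentered) sample moments -- the Yule-Walker
    estimate for a zero-mean AR(1). Minimal illustration of ARMA
    parameter estimation, not the paper's 2-D texture model."""
    c0 = sum(x * x for x in series)
    c1 = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    return c1 / c0

xs = [16.0, 8.0, 4.0, 2.0, 1.0]    # noiseless decay x[t] = 0.5 * x[t-1]
print(round(fit_ar1(xs), 2))       # → 0.5
```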
Quantification of fibre polymerization through Fourier space image analysis
Nekouzadeh, Ali; Genin, Guy M.
2011-01-01
Quantification of changes in the total length of randomly oriented and possibly curved lines appearing in an image is a necessity in a wide variety of biological applications. Here, we present an automated approach based upon Fourier space analysis. Scaled, band-pass filtered power spectral densities of greyscale images are integrated to provide a quantitative measurement of the total length of lines of a particular range of thicknesses appearing in an image. A procedure is presented to correct for changes in image intensity. The method is most accurate for two-dimensional processes with fibres that do not occlude one another. PMID:24959096
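The band-pass integration step described above can be sketched with a 2-D FFT: the power spectral density is summed over an annulus of spatial frequencies corresponding to a range of line thicknesses. The scaling and intensity correction of the published method are omitted, so this is an illustrative proxy rather than the authors' algorithm.

```python
import numpy as np

def bandpass_power(image, r_lo, r_hi):
    """Integrate the power spectral density over an annulus of spatial
    frequencies (radii r_lo to r_hi from the DC term), a proxy for the
    total length of lines in a given thickness range. Normalization and
    intensity correction are omitted in this sketch."""
    f = np.fft.fftshift(np.fft.fft2(image))
    psd = np.abs(f) ** 2
    h, w = image.shape
    y, x = np.ogrid[:h, :w]
    r = np.hypot(y - h // 2, x - w // 2)       # radial frequency grid
    band = (r >= r_lo) & (r < r_hi)
    return psd[band].sum()

blank = np.zeros((32, 32))
fibre = blank.copy()
fibre[16, 4:28] = 1.0                          # one horizontal fibre
print(bandpass_power(fibre, 2, 12) > bandpass_power(blank, 2, 12))  # → True
```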
Puccio, Benjamin; Pooley, James P; Pellman, John S; Taverna, Elise C; Craddock, R Cameron
2016-10-25
Skull-stripping is the procedure of removing non-brain tissue from anatomical MRI data. This procedure can be useful for calculating brain volume and for improving the quality of other image processing steps. Developing new skull-stripping algorithms and evaluating their performance requires gold standard data from a variety of different scanners and acquisition methods. We complement existing repositories with manually corrected brain masks for 125 T1-weighted anatomical scans from the Nathan Kline Institute Enhanced Rockland Sample Neurofeedback Study. Skull-stripped images were obtained using a semi-automated procedure that involved skull-stripping the data using the brain extraction based on nonlocal segmentation technique (BEaST) software, and manually correcting the worst results. Corrected brain masks were added into the BEaST library and the procedure was repeated until acceptable brain masks were available for all images. In total, 85 of the skull-stripped images were hand-edited and 40 were deemed to not need editing. The results are brain masks for the 125 images along with a BEaST library for automatically skull-stripping other data. Skull-stripped anatomical images from the Neurofeedback sample are available for download from the Preprocessed Connectomes Project. The resulting brain masks can be used by researchers to improve preprocessing of the Neurofeedback data, as training and testing data for developing new skull-stripping algorithms, and for evaluating the impact on other aspects of MRI preprocessing. We have illustrated the utility of these data as a reference for comparing various automatic methods and evaluated the performance of the newly created library on independent data.
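Once a corrected brain mask like those in this repository is available, applying it is the easy part: voxels outside the mask are zeroed. A minimal sketch (producing the mask itself, via BEaST plus manual correction, is the hard part the dataset addresses):

```python
import numpy as np

def skull_strip(anat, brain_mask):
    """Apply a binary brain mask to an anatomical volume: voxels
    outside the mask are set to zero. Sketch of the final masking step
    only."""
    return np.where(brain_mask > 0, anat, 0)

anat = np.full((4, 4), 100)
mask = np.zeros((4, 4), dtype=int)
mask[1:3, 1:3] = 1                 # "brain" region
stripped = skull_strip(anat, mask)
print(int(stripped.sum()))         # → 400
```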
Magnetic resonance-guided prostate interventions.
Haker, Steven J; Mulkern, Robert V; Roebuck, Joseph R; Barnes, Agnieska Szot; Dimaio, Simon; Hata, Nobuhiko; Tempany, Clare M C
2005-10-01
We review our experience using an open 0.5-T magnetic resonance (MR) interventional unit to guide procedures in the prostate. This system allows access to the patient and real-time MR imaging simultaneously and has made it possible to perform prostate biopsy and brachytherapy under MR guidance. We review MR imaging of the prostate and its use in targeted therapy, and describe our use of image processing methods such as image registration to further facilitate precise targeting. We describe current developments with a robot assist system being developed to aid radioactive seed placement.
Attenuation-emission alignment in cardiac PET/CT based on consistency conditions
Alessio, Adam M.; Kinahan, Paul E.; Champley, Kyle M.; Caldwell, James H.
2010-01-01
Purpose: In cardiac PET and PET/CT imaging, misaligned transmission and emission images are a common problem due to respiratory and cardiac motion. This misalignment leads to erroneous attenuation correction and can cause errors in perfusion mapping and quantification. This study develops and tests a method for automated alignment of attenuation and emission data. Methods: The CT-based attenuation map is iteratively transformed until the attenuation corrected emission data minimize an objective function based on the Radon consistency conditions. The alignment process is derived from previous work by Welch et al. ["Attenuation correction in PET using consistency information," IEEE Trans. Nucl. Sci. 45, 3134–3141 (1998)] for stand-alone PET imaging. The process was evaluated with simulated data and with measured patient data from multiple cardiac ammonia PET/CT exams. The alignment procedure was applied to simulations of five different noise levels with three different initial attenuation maps. For the measured patient data, the alignment procedure was applied to eight attenuation-emission combinations with initially acceptable alignment and eight combinations with unacceptable alignment. The initially acceptable alignment studies were forced out of alignment a known amount and quantitatively evaluated for alignment and perfusion accuracy. The initially unacceptable studies were compared to the proposed aligned images in a blinded side-by-side review. Results: The proposed automatic alignment procedure reduced errors in the simulated data and iteratively approached global-minimum solutions with the patient data. In simulations, the alignment procedure reduced the root mean square error to less than 5 mm and the axial translation error to less than 1 mm. In patient studies, the procedure reduced the translation error by >50% and resolved perfusion artifacts after a known misalignment for the eight initially acceptable patient combinations. The side-by-side review of the proposed aligned attenuation-emission maps and initially misaligned attenuation-emission maps revealed that reviewers preferred the proposed aligned maps in all cases, except one inconclusive case. Conclusions: The proposed alignment procedure offers an automatic method to reduce attenuation correction artifacts in cardiac PET/CT and provides a viable supplement to subjective manual realignment tools. PMID:20384256
TANGO: a generic tool for high-throughput 3D image analysis for studying nuclear organization.
Ollion, Jean; Cochennec, Julien; Loll, François; Escudé, Christophe; Boudier, Thomas
2013-07-15
The cell nucleus is a highly organized cellular organelle that contains the genetic material. The study of nuclear architecture has become an important field of cellular biology. Extracting quantitative data from 3D fluorescence imaging helps understand the functions of different nuclear compartments. However, such approaches are limited by the requirement for processing and analyzing large sets of images. Here, we describe Tools for Analysis of Nuclear Genome Organization (TANGO), an image analysis tool dedicated to the study of nuclear architecture. TANGO is a coherent framework allowing biologists to perform the complete analysis process of 3D fluorescence images by combining two environments: ImageJ (http://imagej.nih.gov/ij/) for image processing and quantitative analysis and R (http://cran.r-project.org) for statistical processing of measurement results. It includes an intuitive user interface providing the means to precisely build a segmentation procedure and set up analyses, without requiring programming skills. TANGO is a versatile tool able to process large sets of images, allowing quantitative study of nuclear organization. TANGO is composed of two programs: (i) an ImageJ plug-in and (ii) a package (rtango) for R. Both are free and open source, available (http://biophysique.mnhn.fr/tango) for Linux, Microsoft Windows and Macintosh OSX. Distribution is under the GPL v.2 licence. thomas.boudier@snv.jussieu.fr Supplementary data are available at Bioinformatics online.
CONSUMER PREFERENCES FOR SCANNING MODALITY TO DIAGNOSE FOCAL LIVER LESIONS.
Whitty, Jennifer; Filby, Alexandra; Smith, Adam B; Carr, Louise M
2015-01-01
Differences in the process of using liver imaging technologies might be important to patients. This study aimed to investigate preferences for scanning modalities used in diagnosing focal liver lesions. A discrete choice experiment was administered to 504 adults aged ≥25 years. Respondents made repeated choices between two hypothetical scans, described according to waiting time for scan and results, procedure type, the chance of minor side-effects, and whether further scanning procedures were likely to be required. Choice data were analyzed using mixed-logit models with respondent characteristics used to explain preference heterogeneity. Respondents preferred shorter waiting times, the procedure to be undertaken with a handheld scanner on a couch instead of within a body scanner, no side-effects, and no follow-up scans (p≤.01). The average respondent was willing to wait an additional 2 weeks for the scan if it resulted in avoiding side-effects, 1.5 weeks to avoid further procedures or to be told the results immediately, and 1 week to have the scan performed on a couch with a handheld scanner. However, substantial heterogeneity was observed in the strength of preference for desirable imaging characteristics. An average individual belonging to a general population sub-group most likely to require imaging to characterize focal liver lesions in the United Kingdom would prefer contrast-enhanced ultrasound over magnetic resonance imaging or computed tomography. Insights into the patient perspective around differential characteristics of imaging modalities have the potential to be used to guide recommendations around the use of these technologies.
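The willingness-to-wait figures above are marginal rates of substitution: each attribute's utility change divided by the disutility of one week of waiting. A minimal sketch with hypothetical mixed-logit coefficients, chosen only so the ratios reproduce the reported 2 / 1.5 / 1-week pattern (these are not the study's estimates):

```python
# Hypothetical mean coefficients (illustrative values only).
beta_wait = -0.40  # disutility per extra week of waiting

# Utility gains from obtaining each desirable attribute change.
gains = {
    "avoid side-effects": 0.80,
    "avoid further procedures": 0.60,
    "handheld scanner on couch": 0.40,
}

# Willingness to wait (weeks) = utility gain / disutility of one week.
wtw = {attr: gain / -beta_wait for attr, gain in gains.items()}
print({attr: round(weeks, 2) for attr, weeks in wtw.items()})
# reproduces the 2 / 1.5 / 1-week pattern reported in the abstract
```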
Experimental modelling of fragmentation applied to volcanic explosions
NASA Astrophysics Data System (ADS)
Haug, Øystein Thordén; Galland, Olivier; Gisler, Galen R.
2013-12-01
Explosions during volcanic eruptions cause fragmentation of magma and host rock, resulting in fragments with sizes ranging from boulders to fine ash. The products can be described by fragment size distributions (FSD), which commonly follow power laws with exponent D. The processes that lead to power-law distributions and the physical parameters that control D remain unknown. We developed a quantitative experimental procedure to study the physics of the fragmentation process through time. The apparatus consists of a Hele-Shaw cell containing a layer of cohesive silica flour that is fragmented by a rapid injection of pressurized air. The evolving fragmentation of the flour is monitored with a high-speed camera, and the images are analysed to obtain the evolution of the number of fragments (N), their average size (A), and the FSD. Using the results from our image-analysis procedure, we find transient empirical laws for N, A and the exponent D of the power-law FSD as functions of the initial air pressure. We show that our experimental procedure is a promising tool for unravelling the complex physics of fragmentation during phreatomagmatic and phreatic eruptions.
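The exponent D of a power-law FSD can be estimated with the continuous maximum-likelihood (Hill-type) estimator D = 1 + n / Σ ln(x_i / x_min); a sketch on synthetic fragment sizes, where the true exponent, x_min, and sample size are illustrative choices rather than experimental values:

```python
import numpy as np

rng = np.random.default_rng(0)
D_true, xmin, n = 2.5, 1.0, 50_000

# sample fragment sizes from a power law via inverse-transform sampling
u = rng.random(n)
sizes = xmin * (1.0 - u) ** (-1.0 / (D_true - 1.0))

def fit_power_law_exponent(x, xmin):
    """Continuous maximum-likelihood estimate of the power-law exponent:
    D = 1 + n / sum(log(x / xmin))."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    return 1.0 + x.size / np.log(x / xmin).sum()

D_hat = fit_power_law_exponent(sizes, xmin)
print(D_hat)  # close to the true exponent 2.5
```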
İntepe, Yavuz Selim; Metin, Bayram; Şahin, Sevinç; Kaya, Buğra; Okur, Aylin
2016-08-01
The objective of this study was to compare the results of transthoracic biopsies performed with the use of FDG PET/CT imaging against the results of transthoracic needle biopsies performed without FDG PET/CT imaging. The medical files of a total of 58 patients with pulmonary and mediastinal masses were reviewed. A total of 20 patients, who were suspected of malignancy with an SUVmax value over 2.5 in FDG PET/CT, underwent a biopsy procedure. Twelve patients with no suspicion of malignancy according to CT images and with an SUVmax value below 2.5 underwent no biopsy procedure and hence were excluded from the study. On the other hand, 26 patients proceeded directly to biopsy on suspicion of malignancy according to CT imaging, without any FDG PET/CT imaging. According to the biopsy results, the number of patients diagnosed with cancer was 20 (43.5%), while the number of non-cancerous patients was 26 (56.5%). With these findings considered, the sensitivity of the whole TTNB (transthoracic needle biopsy) cohort was 80.8%, and the specificity was 100%. The positive predictive value of the whole TTNB cohort was 100%, while its negative predictive value was 80%. The sensitivity of TTNB performed together with FDG PET/CT was 90.9%, whereas the specificity was 100%. The positive predictive value of TTNB with FDG PET/CT was 100%, while its negative predictive value was 81.8%. The sensitivity of TTNB performed without FDG PET/CT was 73.3%, whereas the specificity was 100%. Performing FDG PET/CT imaging prior to a transthoracic biopsy, and using FDG PET/CT to select the site on which the biopsy is performed, increases the rate of accurate diagnosis.
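The reported accuracy figures follow from a standard 2x2 confusion table. A sketch with hypothetical counts (not taken from the study's tables, but chosen so the derived metrics happen to match the reported whole-cohort figures):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts: 21 true positives, 5 false negatives,
# 20 true negatives, 0 false positives.
m = diagnostic_metrics(tp=21, fp=0, fn=5, tn=20)
print({k: round(v, 3) for k, v in m.items()})
# sensitivity 0.808, specificity 1.0, ppv 1.0, npv 0.8
```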
Processing techniques for software based SAR processors
NASA Technical Reports Server (NTRS)
Leung, K.; Wu, C.
1983-01-01
Software SAR processing techniques defined to treat Shuttle Imaging Radar-B (SIR-B) data are reviewed. The algorithms are devised for data processing procedure selection, SAR correlation function implementation, utilization of multiple array processors, corner turning, variable-reference-length azimuth processing, and range migration handling. The Interim Digital Processor (IDP) originally implemented for handling Seasat SAR data has been adapted for the SIR-B, and offers a resolution of 100 km using a processing procedure based on the Fast Fourier Transform fast correlation approach. Peculiarities of the Seasat SAR data processing requirements are reviewed, along with modifications introduced for the SIR-B. An Advanced Digital SAR Processor (ADSP) is under development for use with the SIR-B in the 1986 time frame as an upgrade for the IDP, which will be in service in 1984-5.
HPT: A High Spatial Resolution Multispectral Sensor for Microsatellite Remote Sensing
Takahashi, Yukihiro; Sakamoto, Yuji; Kuwahara, Toshinori
2018-01-01
Although nano/microsatellites have great potential as remote sensing platforms, the spatial and spectral resolutions of an optical payload instrument are limited. In this study, a high spatial resolution multispectral sensor, the High-Precision Telescope (HPT), was developed for the RISING-2 microsatellite. The HPT has four image sensors: three in the visible region of the spectrum used for the composition of true color images, and a fourth in the near-infrared region, which employs liquid crystal tunable filter (LCTF) technology for wavelength scanning. Band-to-band image registration methods have also been developed for the HPT and implemented in the image processing procedure. The processed images were compared with other satellite images, and proven to be useful in various remote sensing applications. Thus, LCTF technology can be considered an innovative tool that is suitable for future multi/hyperspectral remote sensing by nano/microsatellites. PMID:29463022
Quantification of chromatin condensation level by image processing.
Irianto, Jerome; Lee, David A; Knight, Martin M
2014-03-01
The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated, unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
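A simplified, hypothetical variant of such an edge-based measure can be sketched in NumPy: the Sobel gradient magnitude is thresholded and the edge density serves as a condensation parameter. The published CCP definition may differ in detail; the threshold and toy images below are illustrative assumptions.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via 3x3 Sobel kernels (zero-padded borders)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(img.astype(float), 1)
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):
        for j in range(3):
            patch = p[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * patch
            gy += ky[i, j] * patch
    return np.hypot(gx, gy)

def chromatin_condensation_parameter(nucleus, thresh=1.0):
    """Edge-density surrogate for a CCP: fraction of pixels whose Sobel
    gradient magnitude exceeds a threshold."""
    return (sobel_magnitude(nucleus) > thresh).mean()

# condensed chromatin shows more intensity discontinuities -> higher CCP
smooth = np.full((64, 64), 0.5)
speckled = smooth.copy()
speckled[::4, ::4] = 2.0  # bright condensed foci
ccp_hi = chromatin_condensation_parameter(speckled)
ccp_lo = chromatin_condensation_parameter(smooth)
print(ccp_hi > ccp_lo)
```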
Color separation in forensic image processing using interactive differential evolution.
Mushtaq, Harris; Rahnamayan, Shahryar; Siddiqi, Areeb
2015-01-01
Color separation is an image processing technique that has often been used in forensic applications to differentiate among variant colors and to remove unwanted image interference. This process can reveal important information such as covered text or fingerprints in forensic investigation procedures. However, several limitations prevent users from selecting the appropriate parameters pertaining to the desired and undesired colors. This study proposes the hybridization of an interactive differential evolution (IDE) and a color separation technique that no longer requires users to guess the required control parameters. The IDE algorithm optimizes these parameters in an interactive manner by utilizing human visual judgment to uncover desired objects. A comprehensive experimental verification has been conducted on various sample test images, including heavily obscured texts, texts with subtle color variations, and fingerprint smudges. The advantage of IDE is apparent as it effectively optimizes the color separation parameters at a level indiscernible to the naked eye. © 2014 American Academy of Forensic Sciences.
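The optimization loop can be sketched with a compact DE/rand/1/bin variant. For a self-contained, reproducible example, an automated F1 score on a synthetic image replaces the interactive human judgment that the paper relies on; the image, window parameterization, and DE settings are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic greyscale "evidence": faint covered text (~0.7) on noisy paper (~0.3)
truth = np.zeros((40, 40), dtype=bool)
truth[10:30, 5:35] = True
img = np.where(truth, 0.7, 0.3) + rng.normal(0.0, 0.05, (40, 40))

def fitness(params):
    """Quality of the separation window [lo, hi]; this automated F1 score
    stands in for the human visual judgment used by the interactive DE."""
    lo, hi = np.sort(params)
    mask = (img >= lo) & (img <= hi)
    tp = (mask & truth).sum()
    if tp == 0:
        return 0.0
    prec, rec = tp / mask.sum(), tp / truth.sum()
    return 2 * prec * rec / (prec + rec)

# compact DE/rand/1/bin loop over the two window parameters
pop = rng.random((20, 2))
F, CR = 0.7, 0.9
for _ in range(60):
    for i in range(len(pop)):
        a, b, c = pop[rng.choice(len(pop), 3, replace=False)]
        mutant = np.clip(a + F * (b - c), 0.0, 1.0)
        trial = np.where(rng.random(2) < CR, mutant, pop[i])
        if fitness(trial) >= fitness(pop[i]):
            pop[i] = trial

best = max(pop, key=fitness)
print(fitness(best) > 0.8)  # the evolved window recovers the covered text
```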
A New Parallel Approach for Accelerating the GPU-Based Execution of Edge Detection Algorithms
Emrani, Zahra; Bateni, Soroosh; Rabbani, Hossein
2017-01-01
Real-time image processing is used in a wide variety of applications, such as medical care and industrial processes. In medical care, this technique can display important patient information graphically, which can supplement and support the treatment process. Medical decisions made based on real-time images are more accurate and reliable. According to recent research, graphics processing unit (GPU) programming is a useful method for improving the speed and quality of medical image processing and is one route to real-time image processing. Edge detection is an early stage in most image processing methods for the extraction of features and object segments from a raw image. The Canny method, Sobel and Prewitt filters, and the Roberts' Cross technique are some examples of edge detection algorithms that are widely used in image processing and machine vision. In this work, these algorithms are implemented using the Compute Unified Device Architecture (CUDA), Open Source Computer Vision (OpenCV), and Matrix Laboratory (MATLAB) platforms. An existing parallel method for the Canny approach has been modified to run in a fully parallel manner by replacing the breadth-first search procedure with a parallel method. These algorithms have been compared by testing them on a database of optical coherence tomography images. The comparison of results shows that the proposed implementation of the Canny method on GPU using the CUDA platform improves the speed of execution by 2–100× compared to the central processing unit-based implementation using the OpenCV and MATLAB platforms. PMID:28487831
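The data decomposition behind such GPU implementations can be illustrated on the CPU: the image is split into row strips with one-pixel halos, and the strips are filtered concurrently, a thread pool standing in for the CUDA thread grid. The Prewitt filter is used here for brevity; the strip count and image are illustrative.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def prewitt_mag(img):
    """Prewitt gradient magnitude on the interior of a 2-D array."""
    gx = (img[:-2, 2:] + img[1:-1, 2:] + img[2:, 2:]
          - img[:-2, :-2] - img[1:-1, :-2] - img[2:, :-2])
    gy = (img[2:, :-2] + img[2:, 1:-1] + img[2:, 2:]
          - img[:-2, :-2] - img[:-2, 1:-1] - img[:-2, 2:])
    return np.hypot(gx, gy)

def tiled_edges(img, n_tiles=4):
    """Split the image into row strips with 1-pixel halos and filter the
    strips concurrently: the same decomposition a GPU kernel grid uses."""
    h = img.shape[0] - 2  # number of interior rows
    bounds = np.linspace(0, h, n_tiles + 1, dtype=int)
    strips = [img[b0:b1 + 2] for b0, b1 in zip(bounds[:-1], bounds[1:])]
    with ThreadPoolExecutor() as ex:
        parts = list(ex.map(prewitt_mag, strips))
    return np.vstack(parts)

rng = np.random.default_rng(0)
img = rng.random((129, 200))
print(np.allclose(tiled_edges(img), prewitt_mag(img)))  # tiling is exact
```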
Image restoration techniques as applied to Landsat MSS and TM data
Meyer, David
1987-01-01
Two factors are primarily responsible for the loss of image sharpness in processing digital Landsat images. The first factor is inherent in the data because the sensor's optics and electronics, along with other sensor elements, blur and smear the data. Digital image restoration can be used to reduce this degradation. The second factor, which further degrades the image by blurring or aliasing, is the resampling performed during geometric correction. An image restoration procedure, when used in place of typical resampling techniques, reduces sensor degradation without introducing the artifacts associated with resampling. The EROS Data Center (EDC) has implemented the restoration procedure for Landsat multispectral scanner (MSS) and thematic mapper (TM) data. This capability, developed at the University of Arizona by Dr. Robert Schowengerdt and Lynette Wood, combines restoration and resampling in a single step to produce geometrically corrected MSS and TM imagery. As with resampling, restoration demands that a tradeoff be made between aliasing, which occurs when attempting to extract maximum sharpness from an image, and blurring, which reduces the aliasing problem but sacrifices image sharpness. The restoration procedure used at EDC minimizes these artifacts by being adaptive, tailoring the tradeoff to be optimal for individual images.
2011-01-01
When applying echo-Doppler imaging for either clinical or research purposes it is very important to select the most adequate modality/technology and choose the most reliable and reproducible measurements. Quality control is a mainstay to reduce variability among institutions and operators and must be obtained by using appropriate procedures for data acquisition, storage and interpretation of echo-Doppler data. This goal can be achieved by employing an echo core laboratory (ECL), with the responsibility for standardizing image acquisition processes (performed at the peripheral echo-labs) and analysis (by monitoring and optimizing the internal intra- and inter-reader variability of measurements). Accordingly, the Working Group of Echocardiography of the Italian Society of Cardiology decided to design standardized procedures for image acquisition in peripheral laboratories and for reading, and to propose a methodological approach to assess the reproducibility of echo-Doppler parameters of cardiac structure and function using both standard and advanced technologies. A number of cardiologists experienced in cardiac ultrasound were involved in setting up an ECL available for future studies involving complex imaging or including echo-Doppler measures as primary or secondary efficacy or safety end-points. The present manuscript describes the methodology of the procedures (image acquisition and measurement reading) and documents the work done so far to test the reproducibility of the different echo-Doppler modalities (standard and advanced). These procedures can also be suggested for use in non-referral echocardiographic laboratories as an "inside" quality check, with the aim of optimizing the clinical consistency of echo-Doppler data. PMID:21943283
Video-guided calibration of an augmented reality mobile C-arm.
Chen, Xin; Naik, Hemal; Wang, Lejing; Navab, Nassir; Fallavollita, Pascal
2014-11-01
The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. The current "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time consuming and radiation intensive. An improved process was developed and tested for C-arm calibration. Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users having varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.
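A plane-to-plane homography like the one described can be estimated from four or more point correspondences with the direct linear transform (DLT). A sketch with an invented ground-truth homography and hypothetical ring positions (not values from the paper):

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: 3x3 homography H with dst ~ H @ src,
    from >= 4 point correspondences (x, y) -> (u, v)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)  # null-space vector of the DLT system
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:]

# hypothetical ring positions in the X-ray image and the corresponding
# video-image positions under a known ground-truth homography
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
xray = np.array([[10., 10.], [200., 15.], [190., 180.], [12., 210.], [100., 90.]])
video = apply_h(H_true, xray)

H_est = estimate_homography(xray, video)
print(np.allclose(apply_h(H_est, xray), video, atol=1e-4))
```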
Two-dimensional imaging via a narrowband MIMO radar system with two perpendicular linear arrays.
Wang, Dang-wei; Ma, Xiao-yan; Su, Yi
2010-05-01
This paper presents a system model and method for 2-D imaging via a narrowband multiple-input multiple-output (MIMO) radar system with two perpendicular linear arrays. The imaging formulation for the method is developed through Fourier integral processing, and the antenna array parameters, including the cross-range resolution, required size, and sampling interval, are also examined. Unlike the spatially sequential procedure of inverse synthetic aperture radar (ISAR) imaging, which samples the scattered echoes over multiple snapshot illuminations, the proposed method uses a spatially parallel procedure to sample the scattered echoes during a single snapshot illumination. Consequently, the complex motion compensation of ISAR imaging can be avoided. Moreover, in this array configuration, multiple narrowband spectrum-shared waveforms coded with orthogonal polyphase sequences are employed. The mainlobes of the compressed echoes from the different filter bands can be located in the same range bin, and thus the range alignment of classical ISAR imaging is not necessary. Numerical simulations based on synthetic data are provided to test the proposed method.
Restoration of distorted depth maps calculated from stereo sequences
NASA Technical Reports Server (NTRS)
Damour, Kevin; Kaufman, Howard
1991-01-01
A model-based Kalman estimator is developed for spatial-temporal filtering of noise and other degradations in velocity and depth maps derived from image sequences or cinema. As an illustration of the proposed procedures, edge information from image sequences of rigid objects is used in the processing of the velocity maps by selecting from a series of models for directional adaptive filtering. Adaptive filtering then allows for noise reduction while preserving sharpness in the velocity maps. Results from several synthetic and real image sequences are given.
Zhang, T; Godavarthi, C; Chaumet, P C; Maire, G; Giovannini, H; Talneau, A; Prada, C; Sentenac, A; Belkebir, K
2015-02-15
Tomographic diffractive microscopy is a marker-free optical digital imaging technique in which three-dimensional samples are reconstructed from a set of holograms recorded under different angles of incidence. We show experimentally that, by processing the holograms with singular value decomposition, it is possible to image objects in a noisy background that are invisible with classical wide-field microscopy and a conventional tomographic reconstruction procedure. The targets can be further characterized with a selective quantitative inversion.
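The core idea, separating a coherent low-rank target response from noise by keeping only the dominant singular modes, can be sketched on synthetic data; the rank-1 target model, noise level, and array sizes below are illustrative assumptions, not the experimental configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# stack of "holograms": one row per angle of incidence, one column per
# detector pixel; a target contributes a rank-1 component buried in noise
# of roughly the same per-pixel magnitude.
angles, pixels = 64, 256
illum = rng.random(angles)                              # per-angle weights
target = np.sin(np.linspace(0.0, 8.0 * np.pi, pixels))  # target signature
data = 3.0 * np.outer(illum, target) + rng.normal(size=(angles, pixels))

# SVD filtering: the dominant singular vector isolates the coherent
# (target) part, while the noise spreads over the remaining modes
U, s, Vt = np.linalg.svd(data, full_matrices=False)
recovered = Vt[0]  # sign-ambiguous, so compare with |correlation|

corr = abs(np.corrcoef(recovered, target)[0, 1])
print(corr > 0.9)  # the target signature emerges after SVD filtering
```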
Ground control requirements for precision processing of ERTS images
Burger, Thomas C.
1973-01-01
With the successful flight of the ERTS-1 satellite, orbital height images are available for precision processing into products such as 1:1,000,000-scale photomaps and enlargements up to 1:250,000 scale. In order to maintain positional error below 100 meters, control points for the precision processing must be carefully selected, clearly definable on photos in both X and Y. Coordinates of selected control points measured on existing 7½- and 15-minute standard maps provide sufficient accuracy for any space imaging system thus far defined. This procedure references the points to accepted horizontal and vertical datums. Maps as small as 1:250,000 scale can be used as source material for coordinates, but to maintain the desired accuracy, maps of 1:100,000 and larger scale should be used when available.
Two-dimensional signal processing with application to image restoration
NASA Technical Reports Server (NTRS)
Assefi, T.
1974-01-01
A recursive technique for modeling and estimating a two-dimensional signal contaminated by noise is presented. A two-dimensional signal is assumed to be an undistorted picture, where the noise introduces the distortion. Both the signal and the noise are assumed to be wide-sense stationary processes with known statistics. Thus, to estimate the two-dimensional signal is to enhance the picture. The picture representing the two-dimensional signal is converted to one dimension by scanning the image horizontally one line at a time. The scanner output becomes a nonstationary random process due to the periodic nature of the scanner operation. Procedures to obtain a dynamical model corresponding to the autocorrelation function of the scanner output are derived. Utilizing the model, a discrete Kalman estimator is designed to enhance the image.
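A scalar sketch of such an estimator, with a first-order autoregressive model standing in for the dynamical model derived from the scanner-output autocorrelation; the model coefficients and noise variances are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) model of the scanned line: x[k+1] = a*x[k] + w[k], giving the
# exponential autocorrelation typical of scanned imagery; the observed
# line y adds white measurement noise (the "distortion").
a, q, r, n = 0.95, 0.1, 1.0, 5000
x = np.zeros(n)
for k in range(1, n):
    x[k] = a * x[k - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), n)

# discrete Kalman estimator: predict, then update with the measurement
xhat, p = 0.0, 1.0
est = np.empty(n)
for k in range(n):
    xhat, p = a * xhat, a * a * p + q  # time update (predict)
    kgain = p / (p + r)                # Kalman gain
    xhat += kgain * (y[k] - xhat)      # measurement update
    p *= 1.0 - kgain
    est[k] = xhat

mse_filtered = np.mean((est - x) ** 2)
mse_raw = np.mean((y - x) ** 2)
print(mse_filtered < mse_raw)  # the estimator enhances the noisy line
```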
High resolution astrophysical observations using speckle imaging
NASA Astrophysics Data System (ADS)
Noyes, R. W.; Nisenson, P.; Papaliolios, C.; Stachnik, R. V.
1986-04-01
This report describes progress under a contract to develop a complete astronomical speckle image reconstruction facility and to apply that facility to the solution of astronomical problems. During the course of the contract we developed the procedures, algorithms, theory, and hardware required to perform that function, and made and interpreted astronomical observations of substantial significance. A principal result of the program was the development of a photon-counting camera of innovative design, the PAPA detector. Development of this device was, in our view, essential to making speckle imaging a useful astronomical tool: the principal past impediment was the need for photon-noise compensation procedures that were difficult, if not impossible, to calibrate. The photon camera made such compensation unnecessary and permitted precision image recovery. This effort and the associated algorithm development led to an active program of astronomical observation that included investigations of young stellar objects, supergiant structure, and measurements of the helium abundance of the early universe. We have also continued research on the recovery of high angular resolution images of the solar surface, working with scientists at the Sacramento Peak Observatory.
A hybrid image fusion system for endovascular interventions of peripheral artery disease.
Lalys, Florent; Favre, Ketty; Villena, Alexandre; Durrmann, Vincent; Colleaux, Mathieu; Lucas, Antoine; Kaladji, Adrien
2018-07-01
Interventional endovascular treatment has become the first line of management in the treatment of peripheral artery disease (PAD). However, contrast and radiation exposure continue to limit the feasibility of these procedures. This paper presents a novel hybrid image fusion system for endovascular intervention of PAD. We present two different roadmapping methods from intra- and pre-interventional imaging that can be used either simultaneously or independently, constituting the navigation system. The navigation system is decomposed into several steps that can be entirely integrated within the procedure workflow without modifying it to benefit from the roadmapping. First, a 2D panorama of the entire peripheral artery system is automatically created based on a sequence of stepping fluoroscopic images acquired during the intra-interventional diagnosis phase. During the interventional phase, the live image can be synchronized on the panorama to form the basis of the image fusion system. Two types of augmented information are then integrated. First, an angiography panorama is proposed to avoid contrast media re-injection. Information exploiting the pre-interventional computed tomography angiography (CTA) is also brought to the surgeon by means of semiautomatic 3D/2D registration on the 2D panorama. Each step of the workflow was independently validated. Experiments for both the 2D panorama creation and the synchronization processes showed very accurate results (errors of 1.24 and [Formula: see text] mm, respectively), similarly to the registration on the 3D CTA (errors of [Formula: see text] mm), with minimal user interaction and very low computation time. First results of an on-going clinical study highlighted its major clinical added value on intraoperative parameters. No image fusion system has been proposed yet for endovascular procedures of PAD in lower extremities. 
More globally, such a navigation system, combining image fusion from different 2D and 3D image sources, is novel in the field of endovascular procedures.
Stereo matching and view interpolation based on image domain triangulation.
Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce
2013-09-01
This paper presents a new approach to the stereo matching and view interpolation problems based on triangular tessellations, suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities, generating a full 3D mesh for each camera (view); these meshes are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes, which require some kind of post-processing procedure to fill holes.
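The piecewise-linear disparity map described above assigns a disparity to each triangle vertex and interpolates linearly inside the triangle. A minimal sketch of that interpolation follows; it is not the authors' code, and the triangle coordinates and disparity values are made up for illustration.

```python
import numpy as np

def barycentric_disparity(p, tri, d):
    """Piecewise-linear disparity: interpolate the disparities d stored at
    the three vertices of triangle tri at an interior point p.
    Illustrative sketch, not the authors' implementation."""
    a, b, c = (np.asarray(v, float) for v in tri)
    p = np.asarray(p, float)
    # Solve for barycentric weights w1, w2 (w0 = 1 - w1 - w2)
    T = np.column_stack((b - a, c - a))
    w12 = np.linalg.solve(T, p - a)
    w = np.array([1.0 - w12.sum(), w12[0], w12[1]])
    return float(w @ np.asarray(d, float))

# Hypothetical triangle with disparities 1, 3, 5 at its vertices
tri = [(0, 0), (4, 0), (0, 4)]
d = [1.0, 3.0, 5.0]
print(barycentric_disparity((0, 0), tri, d))      # vertex a: ~1.0
print(barycentric_disparity((4/3, 4/3), tri, d))  # centroid: ~3.0
```

Rendering such triangles as a polygonal mesh is exactly why view synthesis becomes a fast rasterization task on a GPU.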
An innovative and shared methodology for event reconstruction using images in forensic science.
Milliet, Quentin; Jendly, Manon; Delémont, Olivier
2015-09-01
This study presents an innovative methodology for forensic science image analysis for event reconstruction. The methodology is based on experiences from real cases. It provides real added value to technical guidelines such as standard operating procedures (SOPs) and enriches the community of practices at stake in this field. This bottom-up solution outlines the many facets of analysis and the complexity of the decision-making process. Additionally, the methodology provides a backbone for articulating more detailed and technical procedures and SOPs. It emerged from a grounded theory approach; data from individual and collective interviews with eight Swiss and nine European forensic image analysis experts were collected and interpreted in a continuous, circular and reflexive manner. Throughout the process of conducting interviews and panel discussions, similarities and discrepancies were discussed in detail to provide a comprehensive picture of practices and points of view and to ultimately formalise shared know-how. Our contribution sheds light on the complexity of the choices, actions and interactions along the path of data collection and analysis, enhancing both the researchers' and participants' reflexivity. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Recording 360 Degree Holograms in the Undergraduate Laboratory
ERIC Educational Resources Information Center
Stirn, Bradley A.
1975-01-01
Describes an experiment for recording holograms using a minimum of costly apparatus. Includes a description of apparatus and materials, the procedure for recording the hologram, the processing of the hologram, and the reconstruction of the image. (GS)
Mapping sea ice leads with a coupled numeric/symbolic system
NASA Technical Reports Server (NTRS)
Key, J.; Schweiger, A. J.; Maslanik, J. A.
1990-01-01
A method is presented which facilitates the detection and delineation of leads in single-channel Landsat data by coupling numeric and symbolic procedures. The procedure consists of three steps: (1) using the dynamic threshold method, an image is mapped to a lead/no-lead binary image; (2) the likelihood of fragments being real leads is examined with a set of numeric rules; and (3) pairs of objects are examined geometrically and merged where possible. Processing ends when all fragments have been merged and statistical characteristics determined, leaving a map of valid lead objects that summarizes useful physical information about the lead complexes. Direct implementation of domain knowledge and rapid prototyping are two benefits of the rule-based system. The approach is found to apply most successfully to mid- and high-level processing, and the system can retrieve statistics about sea-ice leads as well as detect the leads.
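Step (1), the mapping to a lead/no-lead binary image, can be sketched as a dynamic (locally adaptive) threshold that compares each pixel against the mean of its surrounding window. The window size and offset below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def dynamic_threshold(img, win=15, offset=0.0):
    """Map an image to a lead/no-lead binary mask by comparing each pixel
    to the mean of its local window -- a stand-in for the dynamic
    threshold step (window size and offset are hypothetical)."""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Local means via a summed-area table (integral image)
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = img.shape
    s = (ii[win:win + h, win:win + w] - ii[:h, win:win + w]
         - ii[win:win + h, :w] + ii[:h, :w])
    local_mean = s / (win * win)
    return img > local_mean + offset

# A bright linear feature on a dark background is flagged as a lead
img = np.zeros((20, 20))
img[:, 10] = 10.0
mask = dynamic_threshold(img)
```

The integral image keeps the cost independent of the window size, which matters when the threshold window must be large relative to lead width.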
Time-lapse microscopy and image processing for stem cell research: modeling cell migration
NASA Astrophysics Data System (ADS)
Gustavsson, Tomas; Althoff, Karin; Degerman, Johan; Olsson, Torsten; Thoreson, Ann-Catrin; Thorlin, Thorleif; Eriksson, Peter
2003-05-01
This paper presents hardware and software procedures for automated cell tracking and migration modeling. A time-lapse microscopy system equipped with a computer controllable motorized stage was developed. The performance of this stage was improved by incorporating software algorithms for stage motion displacement compensation and auto focus. The microscope is suitable for in-vitro stem cell studies and allows for multiple cell culture image sequence acquisition. This enables comparative studies concerning rate of cell splits, average cell motion velocity, cell motion as a function of cell sample density and many more. Several cell segmentation procedures are described as well as a cell tracking algorithm. Statistical methods for describing cell migration patterns are presented. In particular, the Hidden Markov Model (HMM) was investigated. Results indicate that if the cell motion can be described as a non-stationary stochastic process, then the HMM can adequately model aspects of its dynamic behavior.
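The HMM-based migration model scores an observed motion track by the likelihood of a hidden state sequence. A minimal sketch of the scaled forward algorithm for a two-state Gaussian HMM is given below; the states ("resting" vs "migrating") and all parameter values are hypothetical, not taken from the paper.

```python
import numpy as np

def forward_loglik(obs, pi, A, means, var):
    """Log-likelihood of a 1-D speed track under a two-state Gaussian HMM
    via the scaled forward algorithm. Illustrative sketch only."""
    obs = np.asarray(obs, float)
    pi, A, means = (np.asarray(v, float) for v in (pi, A, means))
    # Gaussian emission densities: one column per hidden state
    B = np.exp(-(obs[:, None] - means) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    alpha = pi * B[0]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[t]
        c = alpha.sum()              # scaling avoids numerical underflow
        logp += np.log(c)
        alpha /= c
    return float(logp)

# Hypothetical parameters: slow "resting" state and fast "migrating" state
pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
means, var = [0.0, 5.0], 1.0
# A persistently fast track is more likely than one that keeps switching
print(forward_loglik([5, 5, 5], pi, A, means, var) >
      forward_loglik([0, 5, 0], pi, A, means, var))  # True
```

The sticky diagonal of A is what lets the model capture non-stationary behavior: state persistence rewards sustained migration phases over noise-like switching.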
Moore, Sarah; Kailasapathy, Kasipathy; Phillips, Michael; Jones, Mark R
2015-07-01
Microencapsulation is proposed to protect probiotic strains from food processing procedures and to maintain probiotic viability. Little research has described the in situ viability of microencapsulated probiotics. This study successfully developed a real-time viability standard curve for microencapsulated bacteria using confocal microscopy, fluorescent dyes and image analysis software. Copyright © 2015 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Q; Yan, D
2014-06-01
Purpose: Evaluate the accuracy of atlas-based auto segmentation of organs at risk (OARs) on both helical CT (HCT) and cone beam CT (CBCT) images in head and neck (HN) cancer adaptive radiotherapy (ART). Methods: Six HN patients treated in the ART process were included in this study. For each patient, three images were selected: pretreatment planning CT (PreTx-HCT), in-treatment CT for replanning (InTx-HCT) and a CBCT acquired on the same day as the InTx-HCT. Three clinical procedures of auto segmentation and deformable registration performed in the ART process were evaluated: a) auto segmentation on PreTx-HCT using multi-subject atlases, b) intra-patient propagation of OARs from PreTx-HCT to InTx-HCT using deformable HCT-to-HCT image registration, and c) intra-patient propagation of OARs from PreTx-HCT to CBCT using deformable CBCT-to-HCT image registration. Seven OARs (brainstem, cord, L/R parotid, L/R submandibular gland and mandible) were manually contoured on PreTx-HCT and InTx-HCT for comparison. In addition, manual contours on InTx-CT were copied onto the same-day CBCT, and a local-region rigid body registration was performed accordingly for each individual OAR. For procedures a) and b), auto contours were compared to manual contours, and for c) auto contours were compared to the rigidly transferred contours on CBCT. Dice similarity coefficients (DSC) and mean surface distances of agreement (MSDA) were calculated for evaluation. Results: For procedure a), the mean DSC/MSDA of most OARs are >80%/±2mm. For intra-patient HCT-to-HCT propagation, the results improved to >85%/±1.5mm. Compared to HCT-to-HCT, the mean DSC for HCT-to-CBCT propagation drops ∼2–3% and MSDA increases ∼0.2mm. This result indicates that the inferior image quality of CBCT only slightly degrades auto propagation performance. Conclusion: Auto segmentation and deformable propagation can generate OAR structures on HCT and CBCT images with clinically acceptable accuracy.
Therefore, they can be reliably implemented in the clinical HN ART process.
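The Dice similarity coefficient used above to score the agreement between auto and manual contours is straightforward to compute from two binary masks. A small sketch with made-up masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A∩B| / (|A| + |B|), the overlap metric used to score OAR contours."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two hypothetical 6x6-pixel structures, offset by one pixel
auto = np.zeros((10, 10), bool)
auto[2:8, 2:8] = True
manual = np.zeros((10, 10), bool)
manual[3:9, 3:9] = True
print(dice(auto, manual))  # 2*25/(36+36) ≈ 0.694
```

A mean surface distance would complement this with a boundary-based view, since Dice alone is insensitive to small edge deviations on large organs.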
TU-C-218-01: Effective Medical Imaging Physics Education.
Sprawls, P
2012-06-01
A practical and applied knowledge of physics and the associated technology is required for the clinically effective and safe use of the various medical imaging modalities. This is needed by all involved in the imaging process, including radiologists, especially residents in training, technologists, and physicists who provide consultation on optimum and safe procedures and serve as educators for the other imaging professionals. This area of education is undergoing considerable change and evolution for three reasons: 1. Increasing capabilities and complexity of medical imaging technology and procedures, 2. Expanding scope and availability of educational resources, especially on the internet, and 3. A significant increase in our knowledge of the mental learning process and the design of learning activities to optimize effectiveness and efficiency, especially for clinically applied physics. This course will address those three issues by providing guidance on establishing appropriate clinically focused learning outcomes, a review of the brain functions involved in learning clinically applied physics, and the design and delivery of effective learning activities, beginning with the classroom and continuing through learning physics during the clinical practice of radiology. Characteristics of each type of learning activity will be considered with respect to effectiveness and efficiency in achieving appropriate learning outcomes. A variety of available resources will be identified and demonstrated for use in the different phases of the learning process. A major focus is on enhancing the role of the medical physicist in clinical radiology, both as a resource and educator, with contemporary technology being the tool, but not the teacher. Learning objectives: 1. Develop physics learning objectives that will support effective and safe medical imaging procedures. 2. Understand specific brain functions that are involved in learning and applying physics. 3. Describe the characteristics and development of mental knowledge structures for applied clinical physics. 4. List the established levels of learning and associate each with specific functions that can be performed. 5. Analyze the different types of learning activities (classroom, individual study, clinical, etc.) with respect to effectiveness and efficiency. 6. Design and provide a comprehensive physics education program with each activity optimized with respect to outcomes and available resources. © 2012 American Association of Physicists in Medicine.
Augmented microscopy with near-infrared fluorescence detection
NASA Astrophysics Data System (ADS)
Watson, Jeffrey R.; Martirosyan, Nikolay; Skoch, Jesse; Lemole, G. Michael; Anton, Rein; Romanowski, Marek
2015-03-01
Near-infrared (NIR) fluorescence has become a frequently used intraoperative technique for image-guided surgical interventions. In procedures such as cerebral angiography, surgeons use the optical surgical microscope for the color view of the surgical field, and then switch to an electronic display for the NIR fluorescence images. However, the lack of stereoscopic, real-time, and on-site coregistration adds time and uncertainty to image-guided surgical procedures. To address these limitations, we developed the augmented microscope, whereby the electronically processed NIR fluorescence image is overlaid with the anatomical optical image in real time within the optical path of the microscope. In vitro, the augmented microscope can detect and display indocyanine green (ICG) concentrations down to 94.5 nM, overlaid with the anatomical color image. We prepared polyacrylamide tissue phantoms with embedded polystyrene beads, yielding scattering properties similar to brain matter. In this model, a 194 μM solution of ICG was detectable up to depths of 5 mm. ICG angiography was then performed in anesthetized rats. A dynamic process of ICG distribution in the vascular system overlaid with anatomical color images was observed and recorded. In summary, the augmented microscope demonstrates NIR fluorescence detection with superior real-time coregistration displayed within the ocular of the stereomicroscope. In comparison to other techniques, the augmented microscope retains full stereoscopic vision and optical controls including magnification and focus, camera capture, and multiuser access. Augmented microscopy may find application in surgeries where the use of traditional microscopes can be enhanced by contrast agents and image-guided delivery of therapeutics, including oncology, neurosurgery, and ophthalmology.
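The core overlay step, blending the processed NIR fluorescence signal into the color anatomical view, can be sketched as a simple weighted addition. The green pseudo-color and the blending weight below are illustrative assumptions, not the device's actual parameters.

```python
import numpy as np

def augment(color, nir, alpha=0.5):
    """Blend an NIR fluorescence image into a color anatomical image as a
    green pseudo-color overlay. Channel choice and weight are hypothetical."""
    color = np.asarray(color, float)
    overlay = np.zeros_like(color)
    overlay[..., 1] = np.asarray(nir, float)   # fluorescence shown in green
    out = color + alpha * overlay
    return np.clip(out, 0, 255).astype(np.uint8)

anatomy = np.zeros((2, 2, 3), np.uint8)        # dark surgical field
fluor = np.full((2, 2), 255, np.uint8)         # strong ICG signal
out = augment(anatomy, fluor)                  # green-tinted pixels
```

In the actual instrument this fusion happens optically within the microscope's path rather than on a display, which is what preserves stereopsis.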
TESS Data Processing and Quick-look Pipeline
NASA Astrophysics Data System (ADS)
Fausnaugh, Michael; Huang, Xu; Glidden, Ana; Guerrero, Natalia; TESS Science Office
2018-01-01
We describe the data analysis procedures and pipelines for the Transiting Exoplanet Survey Satellite (TESS). We briefly review the processing pipeline developed and implemented by the Science Processing Operations Center (SPOC) at NASA Ames, including pixel/full-frame image calibration, photometric analysis, pre-search data conditioning, transiting planet search, and data validation. We also describe data-quality diagnostic analyses and photometric performance assessment tests. Finally, we detail a "quick-look pipeline" (QLP) that has been developed by the MIT branch of the TESS Science Office (TSO) to provide a fast and adaptable routine to search for planet candidates in the 30 minute full-frame images.
Processing of multispectral thermal IR data for geologic applications
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Madura, D. P.; Soha, J. M.
1979-01-01
Multispectral thermal IR data were acquired with a 24-channel scanner flown in an aircraft over the East Tintic mining district, Utah. These digital image data required extensive computer processing in order to put the information into a format useful to a geologic photointerpreter. Simple enhancement procedures were not sufficient to reveal the total information content because the data were highly correlated in all channels. The data were shown to be dominated by temperature variations across the scene, while the much more subtle spectral variations between the different rock types were of interest. The image processing techniques employed to analyze these data are described.
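A standard remedy for bands dominated by a shared signal (here, temperature) is the decorrelation stretch: rotate the bands into principal components, equalize their variances, and rotate back, so the subtle inter-band spectral differences become visible. The sketch below implements the generic technique; it is not necessarily the exact processing used in the study.

```python
import numpy as np

def decorrelation_stretch(cube):
    """Decorrelation stretch of an (h, w, bands) image cube: whiten the
    band covariance along its principal axes. Generic sketch of the
    classic method, not the authors' code."""
    h, w, nb = cube.shape
    X = cube.reshape(-1, nb).astype(float)
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # Whiten along principal axes, then rotate back to band space
    T = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    return (Xc @ T).reshape(h, w, nb)

# Synthetic highly correlated bands sharing a "temperature" signal
rng = np.random.default_rng(0)
base = rng.normal(size=(16, 16))
cube = np.stack([base + 0.05 * rng.normal(size=(16, 16)) for _ in range(3)],
                axis=-1)
stretched = decorrelation_stretch(cube)
```

After the stretch the band covariance is (numerically) the identity, so the residual spectral variation no longer rides on the dominant temperature component.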
Satellite land use acquisition and applications to hydrologic planning models
NASA Technical Reports Server (NTRS)
Algazi, V. R.; Suk, M.
1977-01-01
A developing operational procedure for use by the Corps of Engineers in the acquisition of land use information for hydrologic planning purposes was described. The operational conditions preclude the use of dedicated, interactive image processing facilities. Given the constraints, an approach to land use classification based on clustering seems promising and was explored in detail. The procedure is outlined and examples of application to two watersheds given.
NASA Technical Reports Server (NTRS)
Rowan, L. C.; Abrams, M. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Positive findings of earlier evaluations of the color-ratio compositing (CRC) technique for mapping limonitic altered rocks in south-central Nevada are confirmed, but important limitations in the approach used are pointed out. These limitations arise from environmental, geologic, and image processing factors. The greater vegetation density in the East Tintic Mountains required several modifications in procedures to improve the overall mapping accuracy of the CRC approach. Large-format ratio images provide better internal registration of the diazo films and avoid the problems associated with the magnifications required in the original procedure. Use of the Linoscan 204 color recognition scanner permits accurate, consistent extraction of the green pixels representing limonitic bedrock, yielding maps that can be used for mapping at large scales as well as for small-scale reconnaissance.
Deformable image registration for multimodal lung-cancer staging
NASA Astrophysics Data System (ADS)
Cheirsilp, Ronnarit; Zang, Xiaonan; Bascom, Rebecca; Allen, Thomas W.; Mahraj, Rickhesvar P. M.; Higgins, William E.
2016-03-01
Positron emission tomography (PET) and X-ray computed tomography (CT) serve as major diagnostic imaging modalities in the lung-cancer staging process. Modern scanners provide co-registered whole-body PET/CT studies, collected while the patient breathes freely, and high-resolution chest CT scans, collected under a brief patient breath hold. Unfortunately, no method exists for registering a PET/CT study into the space of a high-resolution chest CT scan. If this could be done, vital diagnostic information offered by the PET/CT study could be brought seamlessly into the procedure plan used during live cancer-staging bronchoscopy. We propose a method for the deformable registration of whole-body PET/CT data into the space of a high-resolution chest CT study. We then demonstrate its potential for procedure planning and subsequent use in multimodal image-guided bronchoscopy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Washington University in St Louis, St Louis, MO; Li, H. Harlod
Purpose: In RT patient setup 2D images, tissues often cannot be seen well due to the lack of image contrast. Contrast enhancement features provided by image reviewing software, e.g. Mosaiq and ARIA, require manual selection of the image processing filters and parameters and are thus inefficient and cannot be automated. In this work, we developed a novel method to automatically enhance the 2D RT image contrast to allow automatic verification of patient daily setups as a prerequisite step of automatic patient safety assurance. Methods: The new method is based on contrast limited adaptive histogram equalization (CLAHE) and high-pass filtering algorithms. The most important innovation is to automatically select the optimal parameters by optimizing the image contrast. The image processing procedure includes the following steps: 1) background and noise removal, 2) high-pass filtering by subtracting the Gaussian-smoothed result, and 3) histogram equalization using the CLAHE algorithm. Three parameters were determined through an iterative optimization based on the interior-point constrained optimization algorithm: the Gaussian smoothing weighting factor, and the CLAHE algorithm block size and clip limiting parameters. The goal of the optimization is to maximize the entropy of the processed result. Results: A total of 42 RT images were processed. The results were visually evaluated by RT physicians and physicists. About 48% of the images processed by the new method were ranked as excellent. In comparison, only 29% and 18% of the images processed by the basic CLAHE algorithm and by the basic window level adjustment process, respectively, were ranked as excellent. Conclusion: This new image contrast enhancement method is robust and automatic, and is able to significantly outperform the basic CLAHE algorithm and the manual window-level adjustment process that are currently used in clinical 2D image review software tools.
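The optimization objective, the Shannon entropy of the processed image's gray-level histogram, can be computed as follows. This is a sketch of the metric only; the CLAHE step and the interior-point parameter search are not reproduced.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of an 8-bit image's gray-level histogram --
    the quantity the parameter optimization maximizes."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.full((64, 64), 128, np.uint8)          # single gray level
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64)).astype(np.uint8)
print(image_entropy(flat))   # 0.0
print(image_entropy(noisy))  # close to 8 bits
```

Maximizing this entropy pushes the processed histogram toward a uniform spread of gray levels, i.e. maximal use of the available dynamic range.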
Medical three-dimensional printing opens up new opportunities in cardiology and cardiac surgery.
Bartel, Thomas; Rivard, Andrew; Jimenez, Alejandro; Mestres, Carlos A; Müller, Silvana
2018-04-14
Advanced percutaneous and surgical procedures in structural and congenital heart disease require precise pre-procedural planning and continuous quality control. Although current imaging modalities and post-processing software assists with peri-procedural guidance, their capabilities for spatial conceptualization remain limited in two- and three-dimensional representations. In contrast, 3D printing offers not only improved visualization for procedural planning, but provides substantial information on the accuracy of surgical reconstruction and device implantations. Peri-procedural 3D printing has the potential to set standards of quality assurance and individualized healthcare in cardiovascular medicine and surgery. Nowadays, a variety of clinical applications are available showing how accurate 3D computer reformatting and physical 3D printouts of native anatomy, embedded pathology, and implants are and how they may assist in the development of innovative therapies. Accurate imaging of pathology including target region for intervention, its anatomic features and spatial relation to the surrounding structures is critical for selecting optimal approach and evaluation of procedural results. This review describes clinical applications of 3D printing, outlines current limitations, and highlights future implications for quality control, advanced medical education and training.
A New Color Image Encryption Scheme Using CML and a Fractional-Order Chaotic System
Wu, Xiangjun; Li, Yang; Kurths, Jürgen
2015-01-01
The chaos-based image cryptosystems have been widely investigated in recent years to provide real-time encryption and transmission. In this paper, a novel color image encryption algorithm using coupled-map lattices (CML) and a fractional-order chaotic system is proposed to enhance the security and robustness of encryption algorithms with a permutation-diffusion structure. To make the encryption procedure more confusing and complex, an image division-shuffling process is put forward, where the plain-image is first divided into four sub-images, and then the position of the pixels in the whole image is shuffled. In order to generate initial conditions and parameters of the two chaotic systems, a 280-bit long external secret key is employed. Key space analysis, various statistical analyses, information entropy analysis, differential analysis and key sensitivity analysis are used to test the security of the new image encryption algorithm. The cryptosystem speed is analyzed and tested as well. Experimental results confirm that, in comparison to other image encryption schemes, the new algorithm has higher security and is fast for practical image encryption. Moreover, an extensive tolerance analysis of some common image processing operations, such as noise adding, cropping, JPEG compression, rotation, brightening and darkening, has been performed on the proposed image encryption technique. Corresponding results reveal that the proposed image encryption method has good robustness against some image processing operations and geometric attacks. PMID:25826602
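The permutation-diffusion structure the scheme builds on can be illustrated with a toy round driven by a plain logistic map. The actual algorithm uses CML, a fractional-order chaotic system, a 280-bit key, and an image division-shuffling step; everything below is a simplified stand-in for the structure only and offers no real security.

```python
import numpy as np

def _logistic_stream(n, key, r):
    """Chaotic sequence from the logistic map x -> r*x*(1-x)."""
    x, seq = key, np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        seq[i] = x
    return seq

def encrypt(img, key=0.3141592, r=3.99):
    """One toy permutation-diffusion round (illustrative, not the paper's scheme)."""
    flat = np.asarray(img, np.uint8).ravel()
    seq = _logistic_stream(flat.size, key, r)
    perm = np.argsort(seq)                   # permutation (confusion)
    ks = (seq * 256).astype(np.uint8)        # keystream (diffusion)
    return np.bitwise_xor(flat[perm], ks)

def decrypt(cipher, shape, key=0.3141592, r=3.99):
    seq = _logistic_stream(cipher.size, key, r)
    perm = np.argsort(seq)
    ks = (seq * 256).astype(np.uint8)
    flat = np.empty(cipher.size, np.uint8)
    flat[perm] = np.bitwise_xor(cipher, ks)  # undo diffusion, then permutation
    return flat.reshape(shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
restored = decrypt(encrypt(img), img.shape)
```

The round trip recovers the plain-image exactly because both sides regenerate the same chaotic sequence from the shared key.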
Dynamic heart phantom with functional mitral and aortic valves
NASA Astrophysics Data System (ADS)
Vannelli, Claire; Moore, John; McLeod, Jonathan; Ceh, Dennis; Peters, Terry
2015-03-01
Cardiac valvular stenosis, prolapse and regurgitation are increasingly common conditions, particularly in an elderly population with limited potential for on-pump cardiac surgery. NeoChord©, MitraClip, and numerous stent-based transcatheter aortic valve implantation (TAVI) devices provide an alternative to intrusive cardiac operations; performed while the heart is beating, these procedures require surgeons and cardiologists to learn new image-guidance based techniques. Developing these visual aids and protocols is a challenging task that benefits from sophisticated simulators. Existing models lack features needed to simulate off-pump valvular procedures: functional, dynamic valves, apical and vascular access, and user flexibility for different activation patterns such as variable heart rates and rapid pacing. We present a left ventricle phantom with these characteristics. The phantom can be used to simulate valvular repair and replacement procedures with magnetic tracking, augmented reality, fluoroscopy and ultrasound guidance. This tool serves as a platform to develop image-guidance and image processing techniques required for a range of minimally invasive cardiac interventions. The phantom mimics in vivo mitral and aortic valve motion, permitting realistic ultrasound images of these components to be acquired. It also has a physiologically realistic left ventricular ejection fraction of 50%. Given its realistic imaging properties and non-biodegradable composition (silicone for tissue, water for blood), the system promises to reduce the number of animal trials required to develop image guidance applications for valvular repair and replacement. The phantom has been used in validation studies for both TAVI image-guidance techniques [1] and image-based mitral valve tracking algorithms [2].
The Spectral Image Processing System (SIPS): Software for integrated analysis of AVIRIS data
NASA Technical Reports Server (NTRS)
Kruse, F. A.; Lefkoff, A. B.; Boardman, J. W.; Heidebrecht, K. B.; Shapiro, A. T.; Barloon, P. J.; Goetz, A. F. H.
1992-01-01
The Spectral Image Processing System (SIPS) is a software package developed by the Center for the Study of Earth from Space (CSES) at the University of Colorado, Boulder, in response to a perceived need to provide integrated tools for analysis of imaging spectrometer data both spectrally and spatially. SIPS was specifically designed to deal with data from the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and the High Resolution Imaging Spectrometer (HIRIS), but was tested with other datasets including the Geophysical and Environmental Research Imaging Spectrometer (GERIS), GEOSCAN images, and Landsat TM. SIPS was developed using the 'Interactive Data Language' (IDL). It takes advantage of high speed disk access and fast processors running under the UNIX operating system to provide rapid analysis of entire imaging spectrometer datasets. SIPS allows analysis of single or multiple imaging spectrometer data segments at full spatial and spectral resolution. It also allows visualization and interactive analysis of image cubes derived from quantitative analysis procedures such as absorption band characterization and spectral unmixing. SIPS consists of three modules: SIPS Utilities, SIPS_View, and SIPS Analysis. SIPS version 1.1 is described below.
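Spectral unmixing, one of the quantitative analyses SIPS supports, amounts in its linear form to a least-squares fit of endmember spectra to each pixel spectrum. The sketch below shows the generic technique with made-up three-band spectra; it is not SIPS code.

```python
import numpy as np

def unmix(pixel, endmembers):
    """Linear spectral unmixing: solve for the endmember abundances that
    best reproduce a pixel spectrum in the least-squares sense.
    Generic sketch with hypothetical spectra, not SIPS's implementation."""
    E = np.asarray(endmembers, float).T       # bands x endmembers
    abundances, *_ = np.linalg.lstsq(E, np.asarray(pixel, float), rcond=None)
    return abundances

# Hypothetical three-band endmember spectra
grass = [0.1, 0.4, 0.6]
soil = [0.3, 0.3, 0.3]
mixed = 0.7 * np.array(grass) + 0.3 * np.array(soil)
print(unmix(mixed, [grass, soil]))  # ≈ [0.7, 0.3]
```

Practical unmixing usually adds non-negativity and sum-to-one constraints on the abundances; the unconstrained fit above is the simplest starting point.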
Special Software for Planetary Image Processing and Research
NASA Astrophysics Data System (ADS)
Zubarev, A. E.; Nadezhdina, I. E.; Kozlova, N. A.; Brusnikin, E. S.; Karachevtseva, I. P.
2016-06-01
Special modules for photogrammetric processing of remote sensing data were developed that make it possible to organize and optimize planetary studies effectively. The commercial software package PHOTOMOD™ is used as the base application. Special modules were created to perform various types of data processing: calculation of preliminary navigation parameters, calculation of shape parameters of a celestial body, global view image orthorectification, and estimation of Sun illumination and Earth visibility from the planetary surface. For photogrammetric processing, different types of data have been used, including images of the Moon, Mars, Mercury, Phobos, the Galilean satellites and Enceladus obtained by frame or push-broom cameras. We used modern planetary data and images taken over the years from orbit under various illumination conditions and at various resolutions, as well as images obtained by planetary rovers on the surface. Planetary image processing is a complex task that can take from a few months to years. We present an efficient pipeline procedure that makes it possible to obtain different data products and covers the long path from planetary images to celestial-body maps. The obtained data - new three-dimensional control point networks, elevation models, orthomosaics - supported accurate map production: a new Phobos atlas (Karachevtseva et al., 2015) and various thematic maps derived from studies of the planetary surface (Karachevtseva et al., 2016a).
Rodríguez, Jaime; Martín, María T; Herráez, José; Arias, Pedro
2008-12-10
Photogrammetry is a science with many fields of application in civil engineering where image processing is used for different purposes. In most cases, multiple images are used simultaneously for the reconstruction of 3D scenes. However, the use of isolated images is becoming more and more frequent, for which it is necessary to calculate the orientation of the image with respect to the object space (exterior orientation), which is usually done by means of three rotations defined from known points in the object space (Euler angles). We describe the resolution of this problem by means of a single rotation derived from the vanishing line of the image space, completely external to the object, that is, without any contact with it. The results obtained appear to be optimal, and the procedure is simple and of great utility, since no points over the object are required, which is very useful in situations where access is difficult.
Complex noise suppression using a sparse representation and 3D filtering of images
NASA Astrophysics Data System (ADS)
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Palacios-Enriquez, A.
2017-08-01
A novel method for the filtering of images corrupted by complex noise, composed of randomly distributed impulses and additive Gaussian noise, is substantiated for the first time. The method consists of three main stages: the detection and filtering of pixels corrupted by impulsive noise; subsequent image processing to suppress the additive noise based on 3D filtering and a sparse representation of signals in a wavelet basis; and a concluding image processing procedure to clean the final image of errors that emerged at the previous stages. A physical interpretation of the filtering method under complex noise conditions is given. A filtering block diagram has been developed in accordance with the novel approach. Simulations of the novel image filtering method have shown the advantage of the proposed filtering scheme in terms of generally recognized criteria, such as the structural similarity index measure and the peak signal-to-noise ratio, and when visually comparing the filtered images.
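The first stage, detecting and replacing impulse-corrupted pixels, is commonly done by comparing each pixel to its neighborhood median and correcting only the outliers, so that uncorrupted detail is left untouched for the later stages. A minimal sketch follows; the 3x3 window and the threshold value are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def remove_impulses(img, thresh=40.0):
    """Flag pixels deviating strongly from their 3x3 neighborhood median
    as impulses and replace only those (threshold is hypothetical)."""
    img = np.asarray(img, float)
    pad = np.pad(img, 1, mode="edge")
    # Stack the nine shifted views and take a per-pixel median
    stack = np.stack([pad[i:i + img.shape[0], j:j + img.shape[1]]
                      for i in range(3) for j in range(3)])
    med = np.median(stack, axis=0)
    out = img.copy()
    mask = np.abs(img - med) > thresh
    out[mask] = med[mask]
    return out

# A lone bright impulse on a flat background is detected and replaced
img = np.full((8, 8), 100.0)
img[4, 4] = 255.0
out = remove_impulses(img)
```

Selective replacement is what distinguishes this stage from a plain median filter, which would also blur uncorrupted edges.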
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure of image processing and analysis. This paper presents a new technique using a modified measure and a blending of cuckoo search and particle swarm optimization (CS-PSO) to enhance low-contrast images adaptively. In this way, contrast enhancement is obtained by global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality considering three factors: threshold, entropy value, and gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary computing based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
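The global transformation uses the incomplete Beta function as its transfer curve. The sketch below applies that transform for fixed shape parameters a and b; in the paper those parameters are found by the CS-PSO search, which is not reproduced here.

```python
import numpy as np
from scipy.special import betainc

def beta_transform(img, a, b):
    """Global gray-level transform through the regularized incomplete Beta
    function. Sketch of the transfer curve only; a and b are given here,
    not optimized as in the paper."""
    x = np.asarray(img, float) / 255.0
    return (255.0 * betainc(a, b, x)).astype(np.uint8)

# a = b = 2 yields an S-shaped curve that stretches mid-tone contrast
ramp = np.arange(0, 256, 5, dtype=np.uint8)
out = beta_transform(ramp, 2.0, 2.0)
```

Varying a and b tilts the S-curve toward brightening or darkening, which is exactly the degree of freedom the optimizer exploits.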
Real-time stereo generation for surgical vision during minimal invasive robotic surgery
NASA Astrophysics Data System (ADS)
Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod
2016-03-01
This paper proposes a framework for 3D surgical vision for minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection using interlacing of the two images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
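The polarized-projection step relies on row interlacing of the rectified left/right views, as used by line-interleaved passive 3-D displays. A minimal sketch, assuming two equal-size HxWx3 arrays:

```python
import numpy as np

def interlace(left, right):
    """Row-interlace two equal-size rectified views for a passive
    polarized (line-interleaved) 3-D display."""
    out = left.copy()
    out[1::2] = right[1::2]   # odd rows carry the right-eye image
    return out
```

Even rows carry the left-eye image and odd rows the right-eye image; a matching line-polarizer film on the display then separates the two views for the viewer's glasses.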
A simple method for panretinal imaging with the slit lamp.
Gellrich, Marcus-Matthias
2016-12-01
Slit lamp biomicroscopy of the retina with a convex lens is a key procedure in clinical practice. The methods presented enable ophthalmologists to adequately image large and peripheral parts of the fundus using a video slit lamp and freely available stitching software. A routine examination of the fundus with a slit lamp and a +90 D lens is recorded on video. Later, sufficiently sharp still images are identified in the video sequence. These still images are imported into a freely available image-processing program (Hugin, for stitching mosaics together digitally) and corresponding points are marked on adjacent still images with some overlap. Using Hugin, panoramic overviews of the retina can be built that extend to the equator. This makes it possible to image diseases involving the whole retina or its periphery by performing a structured fundus examination with a video slit lamp. Similar images based on a video slit lamp fundus examination through a hand-held non-contact lens have not been demonstrated before. The methods presented enable ophthalmologists without high-end imaging equipment to monitor pathological fundus findings. The suggested procedure might even be of interest to retinological departments when peripheral findings are to be documented, which can be difficult with fundus cameras.
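The core of the stitching step is estimating a planar projective transform from the user-marked corresponding points on overlapping stills. A minimal sketch of that estimation via the direct linear transform (DLT), ignoring the lens-distortion and exposure models that Hugin also fits:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping (x, y) -> (u, v) from
    four or more point correspondences, via the DLT and SVD."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)     # null vector of A, reshaped
    return H / H[2, 2]
```

Each overlapping pair of stills contributes one such homography; chaining them into a common frame yields the panoramic mosaic.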
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows performing large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices, taking advantage of HTML5 technology; participants do not need to install any application, and the assessment can be performed in a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can control the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates) may be collected and subsequently analyzed.
Brahme, Anders; Nyman, Peter; Skatt, Björn
2008-05-01
A four-dimensional (4D) laser camera (LC) has been developed for accurate patient imaging in diagnostic and therapeutic radiology. A complementary metal-oxide semiconductor camera images the intersection of a scanned fan-shaped laser beam with the surface of the patient and allows real-time recording of movements in a three-dimensional (3D) or four-dimensional (4D) format (3D + time). The LC system was first designed as an accurate patient setup tool during diagnostic and therapeutic applications but was found to be of much wider applicability as a general 4D photon "tag" for the surface of the patient in different clinical procedures. It is presently used as a 3D or 4D optical benchmark or tag for accurate delineation of the patient surface, as demonstrated for patient auto setup and breathing and heart motion detection. Furthermore, its future potential applications in gating, adaptive therapy, 3D or 4D image fusion between most imaging modalities and image processing are discussed. It is shown that the LC system has a geometrical resolution of about 0.1 mm and that the rigid-body repositioning accuracy is about 0.5 mm for displacements below 20 mm, 1 mm below 40 mm, and better than 2 mm at 70 mm. This indicates a slight need for repeated repositioning when the initial error is larger than about 50 mm. The positioning accuracy with standard patient setup procedures for prostate cancer at Karolinska was found to be about 5-6 mm when independently measured using the LC system. The system was found valuable for positron emission tomography-computed tomography (PET-CT) in vivo tumor and dose delivery imaging, where it potentially may allow effective correction for breathing artifacts in 4D PET-CT and image fusion with lymph node atlases for accurate target volume definition in oncology.
With a LC system in all imaging and radiation therapy rooms, auto setup during repeated diagnostic and therapeutic procedures may save around 5 min per session, increase accuracy and allow efficient image fusion between all imaging modalities employed.
Photogrammetric 3d Building Reconstruction from Thermal Images
NASA Astrophysics Data System (ADS)
Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.
2017-08-01
This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial computer vision software package can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data on the position and attitude of the images or any camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information derived from TIR images. The process can be carried out entirely by the aforementioned software in a simple and efficient way.
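The RGB/TIR fusion step rests on ICP alignment of the two point clouds. A minimal point-to-point ICP sketch with a brute-force nearest-neighbour search (real pipelines use k-d trees, subsampling, and outlier rejection):

```python
import numpy as np

def kabsch(P, Q):
    """Best rigid transform (R, t) mapping point set P onto Q (N x 3)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

def icp(src, dst, n_iter=30):
    """Iteratively align src to dst: match nearest neighbours, then
    solve the rigid transform for the matches, and repeat."""
    P = src.copy()
    for _ in range(n_iter):
        d2 = ((P[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]      # nearest dst point per src point
        R, t = kabsch(P, matches)
        P = P @ R.T + t
    return P
```

ICP only converges from a reasonable initial guess, which is why the paper first orients both image blocks photogrammetrically before running it.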
NASA Astrophysics Data System (ADS)
Tokareva, Victoria
2018-04-01
New-generation medicine demands better quality of analysis, increasing the amount of data collected during checkups while simultaneously decreasing the invasiveness of procedures. Thus it becomes urgent not only to develop advanced modern hardware, but also to implement special software infrastructure for using it in everyday clinical practice, so-called Picture Archiving and Communication Systems (PACS). Developing distributed PACS is a challenging task in modern medical informatics. The paper discusses the architecture of a distributed PACS server for processing large high-quality medical images, with respect to the technical specifications of modern medical imaging hardware as well as international standards in medical imaging software. The MapReduce paradigm is proposed for image reconstruction by the server, and the details of utilizing the Hadoop framework for this task are discussed in order to make the design of the distributed PACS as ergonomic and adapted to the needs of end users as possible.
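The MapReduce decomposition proposed for server-side processing can be illustrated on a toy task: mappers emit per-tile grey-level histograms of an image, and a reducer sums the partial results by key. In the paper this pattern runs on Hadoop over much larger volumes; the tile size and the histogram task here are illustrative assumptions.

```python
from collections import defaultdict
import numpy as np

def map_tiles(img, tile=64):
    """Mapper: emit (key, partial histogram) for each image tile."""
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = img[i:i + tile, j:j + tile]
            yield "hist", np.bincount(block.ravel(), minlength=256)

def reduce_hist(pairs):
    """Reducer: sum partial histograms that share a key."""
    acc = defaultdict(lambda: np.zeros(256, dtype=np.int64))
    for key, partial in pairs:
        acc[key] += partial
    return dict(acc)
```

Because each tile is processed independently, the map phase parallelizes trivially across Hadoop workers, and only the small partial histograms travel to the reducer.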
Heuristic Enhancement of Magneto-Optical Images for NDE
NASA Astrophysics Data System (ADS)
Cacciola, Matteo; Megali, Giuseppe; Pellicanò, Diego; Calcagno, Salvatore; Versaci, Mario; Morabito, Francesco Carlo
2010-12-01
The quality of measurements in nondestructive testing and evaluation plays a key role in assessing the reliability of different inspection techniques. Each technique, like the magneto-optic imaging treated here, is affected by particular types of noise related to the specific device used for acquisition. Therefore, the design of ever more accurate image processing is often required by relevant applications, for instance, in implementing integrated solutions for flaw detection and characterization. The aim of this paper is to propose a preprocessing procedure based on independent component analysis (ICA) to ease the detection of rivets and/or flaws in the specimens under test. A comparison of the proposed approach with some other advanced image processing methodologies used for denoising magneto-optic images (MOIs) is carried out, in order to show the advantages and weaknesses of ICA in improving the accuracy and performance of rivet/flaw detection.
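The ICA preprocessing step can be sketched with a minimal symmetric FastICA (tanh nonlinearity, numpy only). This is a generic illustration of the blind source separation idea, not the paper's full MOI denoising pipeline:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Symmetric FastICA: X is (n_signals, n_samples) of mixed
    observations; returns the estimated independent components."""
    X = X - X.mean(axis=1, keepdims=True)
    # whiten via eigendecomposition of the covariance matrix
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(1.0 / np.sqrt(d)) @ E.T) @ X
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Z.shape[0], Z.shape[0]))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)                    # contrast nonlinearity
        Gp = 1.0 - G ** 2                     # its derivative
        W_new = (G @ Z.T) / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W_new)       # symmetric decorrelation
        W = U @ Vt
    return W @ Z
```

In the MOI setting, the rows of X would be vectorized image observations, and the component carrying the rivet/flaw structure is kept while noise components are discarded.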
Software and Algorithms for Biomedical Image Data Processing and Visualization
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Lambert, James; Lam, Raymond
2004-01-01
A new software equipped with novel image processing algorithms and graphical-user-interface (GUI) tools has been designed for automated analysis and processing of large amounts of biomedical image data. The software, called PlaqTrak, has been specifically used for analysis of plaque on teeth of patients. New algorithms have been developed and implemented to segment teeth of interest from surrounding gum, and a real-time image-based morphing procedure is used to automatically overlay a grid onto each segmented tooth. Pattern recognition methods are used to classify plaque from surrounding gum and enamel, while ignoring glare effects due to the reflection of camera light and ambient light from enamel regions. The PlaqTrak system integrates these components into a single software suite with an easy-to-use GUI (see Figure 1) that allows users to do an end-to-end run of a patient's record, including tooth segmentation of all teeth, grid morphing of each segmented tooth, and plaque classification of each tooth image. The automated and accurate processing of the captured images to segment each tooth [see Figure 2(a)] and then detect plaque on a tooth-by-tooth basis is a critical component of the PlaqTrak system to do clinical trials and analysis with minimal human intervention. These features offer distinct advantages over other competing systems that analyze groups of teeth or synthetic teeth. PlaqTrak divides each segmented tooth into eight regions using an advanced graphics morphing procedure [see results on a chipped tooth in Figure 2(b)], and a pattern recognition classifier is then used to locate plaque [red regions in Figure 2(d)] and enamel regions. The morphing allows analysis within regions of teeth, thereby facilitating detailed statistical analysis such as the amount of plaque present on the biting surfaces on teeth.
This software system is applicable to a host of biomedical applications, such as cell analysis and life detection, or robotic applications, such as product inspection or assembly of parts in space and industry.
Image processing techniques for noise removal, enhancement and segmentation of cartilage OCT images
NASA Astrophysics Data System (ADS)
Rogowska, Jadwiga; Brezinski, Mark E.
2002-02-01
Osteoarthritis, whose hallmark is the progressive loss of joint cartilage, is a major cause of morbidity worldwide. Recently, optical coherence tomography (OCT) has demonstrated considerable promise for the assessment of articular cartilage. Among the most important parameters to be assessed is cartilage width. However, detection of the bone cartilage interface is critical for the assessment of cartilage width. At present, the quantitative evaluations of cartilage thickness are being done using manual tracing of cartilage-bone borders. Since data is being obtained near video rate with OCT, automated identification of the bone-cartilage interface is critical. In order to automate the process of boundary detection on OCT images, there is a need for developing new image processing techniques. In this paper we describe the image processing techniques for speckle removal, image enhancement and segmentation of cartilage OCT images. In particular, this paper focuses on rabbit cartilage since this is an important animal model for testing both chondroprotective agents and cartilage repair techniques. In this study, a variety of techniques were examined. Ultimately, by combining an adaptive filtering technique with edge detection (vertical gradient, Sobel edge detection), cartilage edges can be detected. The procedure requires several steps and can be automated. Once the cartilage edges are outlined, the cartilage thickness can be measured.
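The vertical-gradient Sobel step mentioned above can be sketched as a direct 2-D correlation (numpy only; production code would use a vectorized or library convolution):

```python
import numpy as np

def sobel_vertical(img):
    """Vertical-gradient (d/dy) Sobel response over the valid region
    of a 2-D grayscale image."""
    k = np.array([[-1, -2, -1],
                  [ 0,  0,  0],
                  [ 1,  2,  1]], dtype=float)
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = (k * img[i:i + 3, j:j + 3]).sum()
    return out
```

Horizontal interfaces such as the cartilage-bone boundary produce strong responses, which can then be traced row by row to outline the cartilage edge; adaptive speckle filtering beforehand keeps OCT noise from triggering false edges.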
Digital analysis of wind tunnel imagery to measure fluid thickness
NASA Technical Reports Server (NTRS)
Easton, Roger L., Jr.; Enge, James
1992-01-01
Documented here are the procedure and results obtained from the application of digital image processing techniques to the problem of measuring the thickness of a deicing fluid on a model airfoil during simulated takeoffs. The fluid contained a fluorescent dye and the images were recorded under flash illumination on photographic film. The films were digitized and analyzed on a personal computer to obtain maps of the fluid thickness.
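Once the films are digitized, mapping fluorescence intensity to fluid thickness amounts to interpolating a measured calibration curve. A sketch with entirely hypothetical calibration values (real values would come from imaging known fluid depths under the same flash illumination):

```python
import numpy as np

# hypothetical calibration: digitized film intensity vs thickness (mm)
CAL_INTENSITY = np.array([0.0, 40.0, 90.0, 150.0, 200.0, 230.0])
CAL_THICKNESS = np.array([0.0, 0.1, 0.25, 0.5, 0.8, 1.0])

def thickness_map(img):
    """Convert a digitized intensity image to a fluid-thickness map
    by piecewise-linear interpolation of the calibration curve."""
    return np.interp(img.astype(float), CAL_INTENSITY, CAL_THICKNESS)
```

Applying this pixel-wise to each digitized frame yields the thickness maps of the deicing fluid over the airfoil during the simulated takeoff.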
Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment
Espinoza-Cuadros, Fernando; Fernández-Pozo, Rubén; Toledano, Doroteo T.; Alcázar-Ramírez, José D.; López-Gonzalo, Eduardo; Hernández-Gómez, Luis A.
2015-01-01
Obstructive sleep apnea (OSA) is a common sleep disorder characterized by recurring breathing pauses during sleep caused by a blockage of the upper airway (UA). OSA is generally diagnosed through a costly procedure requiring an overnight stay of the patient at the hospital. This has led to proposing less costly procedures based on the analysis of patients' facial images and voice recordings to help in OSA detection and severity assessment. In this paper we investigate the use of both image and speech processing to estimate the apnea-hypopnea index, AHI (which describes the severity of the condition), over a population of 285 male Spanish subjects suspected to suffer from OSA and referred to a Sleep Disorders Unit. Photographs and voice recordings were collected in a supervised but not highly controlled way trying to test a scenario close to an OSA assessment application running on a mobile device (i.e., smartphones or tablets). Spectral information in speech utterances is modeled by a state-of-the-art low-dimensional acoustic representation, called i-vector. A set of local craniofacial features related to OSA are extracted from images after detecting facial landmarks using Active Appearance Models (AAMs). Support vector regression (SVR) is applied on facial features and i-vectors to estimate the AHI. PMID:26664493
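As a simplified stand-in for the paper's support vector regression, ordinary least squares on the concatenated craniofacial features and i-vector components shows the shape of the estimation step (the feature layout and helper names here are assumptions for illustration):

```python
import numpy as np

def fit_linear_ahi(F, y):
    """Fit a linear AHI predictor by ordinary least squares.
    F: (n_subjects, n_features) matrix of craniofacial measurements
    and i-vector components; y: measured AHI per subject."""
    X = np.hstack([F, np.ones((F.shape[0], 1))])   # append bias column
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def predict_ahi(F, w):
    """Predict AHI for new subjects with the fitted weights."""
    return np.hstack([F, np.ones((F.shape[0], 1))]) @ w
```

SVR differs by its epsilon-insensitive loss and kernel, which the paper prefers for robustness on this modest, noisy cohort, but the fit/predict structure is the same.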
Effect of the image resolution on the statistical descriptors of heterogeneous media.
Ledesma-Alonso, René; Barbosa, Romeli; Ortegón, Jaime
2018-02-01
The characterization and reconstruction of heterogeneous materials, such as porous media and electrode materials, involve the application of image processing methods to data acquired by scanning electron microscopy or other microscopy techniques. Among them, binarization and decimation are critical in order to compute the correlation functions that characterize the microstructure of the above-mentioned materials. In this study, we present a theoretical analysis of the effects of the image-size reduction, due to the progressive and sequential decimation of the original image. Three different decimation procedures (random, bilinear, and bicubic) were implemented and their consequences on the discrete correlation functions (two-point, line-path, and pore-size distribution) and the coarseness (derived from the local volume fraction) are reported and analyzed. The chosen statistical descriptors (correlation functions and coarseness) are typically employed to characterize and reconstruct heterogeneous materials. A normalization for each of the correlation functions has been performed. When the loss of statistical information has not been significant for a decimated image, its normalized correlation function is forecast by the trend of the original image (reference function). In contrast, when the decimated image does not hold statistical evidence of the original one, the normalized correlation function deviates from the reference function. Moreover, the equally weighted sum of the average of the squared difference, between the discrete correlation functions of the decimated images and the reference functions, leads to a definition of an overall error. During the first stages of the gradual decimation, the error remains relatively small and independent of the decimation procedure. Above a threshold defined by the correlation length of the reference function, the error becomes a function of the number of decimation steps. At this stage, some statistical information is lost and the error becomes dependent on the decimation procedure. These results may help us to restrict the amount of information that one can afford to lose during a decimation process, in order to reduce the computational and memory cost, when one aims to diminish the time consumed by a characterization or reconstruction technique, yet maintaining the statistical quality of the digitized sample.
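One of the statistical descriptors discussed, the two-point probability function, can be estimated along one axis of a binary microstructure image as follows (a directional sketch only; the paper's analysis also covers the line-path and pore-size distributions):

```python
import numpy as np

def two_point_x(img, max_r):
    """Two-point probability S2(r) along the x axis of a binary
    (0/1) image: the probability that both ends of a horizontal
    segment of length r land in phase 1."""
    s2 = np.empty(max_r + 1)
    for r in range(max_r + 1):
        a = img[:, : img.shape[1] - r]   # left endpoints
        b = img[:, r:]                   # right endpoints, shifted by r
        s2[r] = (a * b).mean()
    return s2
```

S2(0) equals the phase-1 volume fraction, and the decay of S2(r) toward its squared value sets the correlation length against which the decimation error threshold in the study is defined.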
NASA Astrophysics Data System (ADS)
Wang, N.; Yang, R.
2018-04-01
Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets, such as GF-1, GF-2 and ZY-3 images, have emerged in recent years; their panchromatic (PAN) resolutions are 2 m, 1 m and 2.1 m, and their multispectral (MS) resolutions are 8 m, 4 m and 5.8 m, respectively. Chinese HR satellite imagery has been made freely downloadable for public welfare purposes, and local governments have begun to employ more professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in the Guangxi Autonomous Region. 66 counties in the Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Define illegal land utilization spot types. B. Data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were batch preprocessed through an ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with the ArcGIS Geoprocessor (GP) model. F. Thematic mapping and surveying. Through analysing results from 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspicious illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.
Assistive technology for ultrasound-guided central venous catheter placement.
Ikhsan, Mohammad; Tan, Kok Kiong; Putra, Andi Sudjana
2018-01-01
This study evaluated the existing technology used to improve the safety and ease of ultrasound-guided central venous catheterization. Electronic database searches were conducted in Scopus, IEEE, Google Patents, and relevant conference databases (SPIE, MICCAI, and IEEE conferences) for related articles on assistive technology for ultrasound-guided central venous catheterization. A total of 89 articles were examined and pointed to several fields that are currently the focus of improvements to ultrasound-guided procedures. These include improving needle visualization, needle guides and localization technology, image processing algorithms to enhance and segment important features within the ultrasound image, robotic assistance using probe-mounted manipulators, and improving procedure ergonomics through in situ projections of important information. Probe-mounted robotic manipulators provide a promising avenue for assistive technology developed for freehand ultrasound-guided percutaneous procedures. However, there is currently a lack of clinical trials to validate the effectiveness of these devices.
Recent developments in computer vision-based analytical chemistry: A tutorial review.
Capitán-Vallvey, Luis Fermín; López-Ruiz, Nuria; Martínez-Olmos, Antonio; Erenas, Miguel M; Palma, Alberto J
2015-10-29
Chemical analysis based on colour changes recorded with imaging devices is gaining increasing interest. This is due to its several significant advantages, such as simplicity of use, and the fact that it is easily combinable with portable and widely distributed imaging devices, resulting in friendly analytical procedures in many areas that demand out-of-lab applications for in situ and real-time monitoring. This tutorial review covers computer vision-based analytical (CVAC) procedures and systems from 2005 to 2015, a period of time when 87.5% of the papers on this topic were published. The background regarding colour spaces and recent analytical system architectures of interest in analytical chemistry is presented in the form of a tutorial. Moreover, issues regarding images, such as the influence of illuminants, and the most relevant techniques for processing and analysing digital images are addressed. Some of the most relevant applications are then detailed, highlighting their main characteristics. Finally, our opinion about future perspectives is discussed.
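A typical first step in such computer-vision-based assays is converting the colour of the region of interest to a hue angle, which tracks the colour change of the indicator largely independently of brightness. A minimal RGB-to-hue sketch (assumes float RGB in [0, 1]; real systems also correct for the illuminant, as the review discusses):

```python
import numpy as np

def hue_angle(rgb):
    """Hue (degrees, 0-360) of the mean colour of an HxWx3 float
    RGB image region, via the standard hexcone formula."""
    r, g, b = rgb.reshape(-1, 3).mean(axis=0)
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:                      # achromatic: hue undefined, use 0
        return 0.0
    if mx == r:
        return (60.0 * (g - b) / (mx - mn)) % 360.0
    if mx == g:
        return 60.0 * (b - r) / (mx - mn) + 120.0
    return 60.0 * (r - g) / (mx - mn) + 240.0
```

Calibrating hue against known analyte concentrations then yields the quantitative readout.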
High-performance computing in image registration
NASA Astrophysics Data System (ADS)
Zanin, Michele; Remondino, Fabio; Dalla Mura, Mauro
2012-10-01
Thanks to recent technological advances, a large variety of image data is at our disposal with variable geometric, radiometric and temporal resolution. In many applications the processing of such images requires high-performance computing techniques in order to deliver timely responses, e.g. for rapid decisions or real-time actions. Thus, parallel or distributed computing methods, Digital Signal Processor (DSP) architectures, Graphical Processing Unit (GPU) programming and Field-Programmable Gate Array (FPGA) devices have become essential tools for the challenging task of processing large amounts of geo-data. The article focuses on the processing and registration of large datasets of terrestrial and aerial images for 3D reconstruction, diagnostic purposes and monitoring of the environment. For the image alignment procedure, sets of corresponding feature points need to be automatically extracted in order to subsequently compute the geometric transformation that aligns the data. Feature extraction and matching are among the most computationally demanding operations in the processing chain; thus, a great degree of automation and speed is mandatory. The details of the implemented operations (named LARES) exploiting parallel architectures and GPUs are thus presented. The innovative aspects of the implementation are (i) its effectiveness on a large variety of unorganized and complex datasets, (ii) its capability to work with high-resolution images and (iii) the speed of the computations. Examples and comparisons with standard CPU processing are also reported and commented on.
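The feature-matching stage that LARES accelerates can be illustrated with brute-force nearest-neighbour matching of descriptor vectors plus Lowe's ratio test. The GPU implementation parallelizes exactly this distance computation; the 0.75 threshold is a conventional choice, not a value taken from the article.

```python
import numpy as np

def match_ratio(desc1, desc2, ratio=0.75):
    """Match descriptor rows of desc1 to desc2: keep a match only if
    the nearest neighbour is clearly closer than the second nearest
    (Lowe's ratio test), which suppresses ambiguous matches."""
    matches = []
    for i, d in enumerate(desc1):
        dist = np.linalg.norm(desc2 - d, axis=1)
        j, k = np.argsort(dist)[:2]
        if dist[j] < ratio * dist[k]:
            matches.append((i, int(j)))
    return matches
```

The surviving correspondences then feed the robust estimation of the aligning transformation (typically inside a RANSAC loop).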
MO-DE-207-04: Imaging educational program on solutions to common pediatric imaging challenges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnamurthy, R.
This imaging educational program will focus on solutions to common pediatric imaging challenges. The speakers will present collective knowledge on best practices in pediatric imaging from their experience at dedicated children’s hospitals. The educational program will begin with a detailed discussion of the optimal configuration of fluoroscopes for general pediatric procedures. Following this introduction will be a focused discussion on the utility of Dual Energy CT for imaging children. The third lecture will address the substantial challenge of obtaining consistent image post-processing in pediatric digital radiography. The fourth and final lecture will address best practices in pediatric MRI, including a discussion of ancillary methods to reduce sedation and anesthesia rates. Learning Objectives: To learn techniques for optimizing radiation dose and image quality in pediatric fluoroscopy; to become familiar with the unique challenges and applications of Dual Energy CT in pediatric imaging; to learn solutions for consistent post-processing quality in pediatric digital radiography; to understand the key components of an effective MRI safety and quality program for the pediatric practice.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kohlbrenner, R; Kolli, KP; Taylor, A
2014-06-01
Purpose: To quantify the patient radiation dose reduction achieved during transarterial chemoembolization (TACE) procedures performed in a body interventional radiology suite equipped with the Philips Allura Clarity imaging acquisition and processing platform, compared to TACE procedures performed in the same suite equipped with the Philips Allura Xper platform. Methods: Total fluoroscopy time, cumulative dose area product, and cumulative air kerma were recorded for the first 25 TACE procedures performed to treat hepatocellular carcinoma (HCC) in a Philips body interventional radiology suite equipped with Philips Allura Clarity. The same data were collected for the prior 85 TACE procedures performed to treat HCC in the same suite equipped with Philips Allura Xper. Mean values from these cohorts were compared using two-tailed t tests. Results: Following installation of the Philips Allura Clarity platform, a 42.8% reduction in mean cumulative dose area product (3033.2 versus 1733.6 mGy·cm^2, p < 0.0001) and a 31.2% reduction in mean cumulative air kerma (1445.4 versus 994.2 mGy, p < 0.001) was achieved compared to similar procedures performed in the same suite equipped with the Philips Allura Xper platform. Mean total fluoroscopy time was not significantly different between the two cohorts (1679.3 versus 1791.3 seconds, p = 0.41). Conclusion: This study demonstrates a significant patient radiation dose reduction during TACE procedures performed to treat HCC after a body interventional radiology suite was converted to the Philips Allura Clarity platform from the Philips Allura Xper platform. Future work will focus on evaluation of patient dose reduction in a larger cohort of patients across a broader range of procedures and in specific populations, including obese patients and pediatric patients, and comparison of image quality between the two platforms. Funding for this study was provided by Philips Healthcare, with 5% salary support provided to authors K. Pallav Kolli and Robert G. Gould for time devoted to the study. Data acquisition and analysis was performed by the authors independent of the funding source.
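The cohort comparison described in the Results can be reproduced in shape with Welch's two-sample t statistic (numpy only; the sample values in the test are made up for illustration and are not the study's data):

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of
    freedom (Welch-Satterthwaite), for unequal-variance cohorts."""
    ma, mb = np.mean(a), np.mean(b)
    va = np.var(a, ddof=1) / len(a)   # squared standard error of a
    vb = np.var(b, ddof=1) / len(b)   # squared standard error of b
    t = (ma - mb) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

The p-value then follows from the t distribution with df degrees of freedom; Welch's form is the safer default when the two cohorts (25 vs 85 procedures) may have unequal variances.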
NASA Astrophysics Data System (ADS)
Sahoo, Sujit Kumar; Tang, Dongliang; Dang, Cuong
2018-02-01
Large field-of-view multispectral imaging through a scattering medium is a fundamental quest in the optics community. It has gained special attention from researchers in recent years for its wide range of potential applications. However, the main bottlenecks of current imaging systems are the requirements on specific illumination, poor image quality, and limited field of view. In this work, we demonstrate single-shot high-resolution colour imaging through scattering media using a monochromatic camera. This novel imaging technique is enabled by the spatial and spectral decorrelation properties and the optical memory effect of the scattering media. Moreover, the use of deconvolution image processing further removes the above-mentioned drawbacks that arise from iterative refocusing, scanning, or phase-retrieval procedures.
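The deconvolution step can be sketched as a frequency-domain Wiener filter, given an estimate of the system's point spread function (the noise-to-signal constant k is an assumed tuning parameter, and the PSF must be zero-padded to the image size):

```python
import numpy as np

def wiener_deconv(blurred, psf, k=0.01):
    """Wiener deconvolution in the frequency domain: divide by the
    transfer function H, regularized by the noise-to-signal ratio k."""
    B = np.fft.fft2(blurred)
    H = np.fft.fft2(np.fft.ifftshift(psf))   # PSF centre moved to origin
    G = np.conj(H) / (np.abs(H) ** 2 + k)    # regularized inverse filter
    return np.real(np.fft.ifft2(G * B))
```

In the scattering-media setting, the speckle autocorrelation within the memory-effect range plays the role of the PSF, so one calibration shot enables single-shot reconstruction afterwards.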
Neural Implants, Packaging for Biocompatible Implants, and Improving Fabricated Capacitors
NASA Astrophysics Data System (ADS)
Agger, Elizabeth Rose
We have completed the circuit design and packaging procedure for an NIH-funded neural implant, called a MOTE (Microscale Optoelectronically Transduced Electrode). Neural recording implants for mice have greatly advanced neuroscience, but they are often damaging and limited in their recording location. This project will result in free-floating implants that cause less damage, provide rapid electronic recording, and increase the range of recording across the cortex. A low-power silicon IC containing amplification and digitization sub-circuits is powered by a dual-function gallium arsenide photovoltaic and LED. Through thin film deposition, photolithography, and chemical and physical etching, the Molnar Group and the McEuen Group (Applied and Engineering Physics department) will package the IC and LED into a biocompatible implant of approximately 100 μm³. The IC and LED are complete, and we have begun refining this packaging procedure in the Cornell NanoScale Science & Technology Facility. ICs with 3D time-resolved imaging capabilities can image microorganisms and other biological samples given proper packaging. A portable, flat, easily manufactured package would enable scientists to place biological samples on slides directly above the Molnar group's imaging chip. We have developed a packaging procedure using laser cutting, photolithography, epoxies, and metal deposition. Using a flip-chip method, we verified the process by aligning and adhering a sample chip to a holder wafer. In the CNF, we have worked on a long-term metal-insulator-metal (MIM) capacitor characterization project. Former Fellow and continuing CNF user Kwame Amponsah developed the original procedure for the capacitor fabrication, and another former fellow, Jonilyn Longenecker, revised the procedure and began the arduous process of characterization. MIM capacitors are useful to cleanroom users as testing devices to verify the electronic characteristics of their active circuitry.
This project's objective is to determine differences in current-voltage (IV) and capacitance-voltage (CV) relationships across variations in capacitor size and dielectric type. This effort requires an approximately 20-step process repeated for two to six varieties (depending on temperature and thermal versus plasma options) of the following dielectrics: HfO2, SiO2, Al2O3, TaOx, and TiO2.
Mannoji, Chikato; Murakami, Masazumi; Kinoshita, Tomoaki; Hirayama, Jiro; Miyashita, Tomohiro; Eguchi, Yawara; Yamazaki, Masashi; Suzuki, Takane; Aramomi, Masaaki; Ota, Mitsutoshi; Maki, Satoshi; Takahashi, Kazuhisa; Furuya, Takeo
2016-01-01
Study Design Retrospective case-control study. Purpose To determine whether kissing spine is a risk factor for recurrence of sciatica after lumbar posterior decompression using a spinous process floating approach. Overview of Literature Kissing spine is defined by apposition and sclerotic change of the facing spinous processes as shown in X-ray images, and is often accompanied by marked disc degeneration and decreased disc height. If kissing spine contributes significantly to weight bearing and the stability of the lumbar spine, trauma to the spinous process might induce a breakdown of lumbar spine stability after posterior decompression surgery in cases of kissing spine. Methods The present study included 161 patients who had undergone posterior decompression surgery for lumbar canal stenosis using a spinous process floating approach. We defined recurrence of sciatica as sciatica that resolved after the initial surgery and then recurred. Kissing spine was defined as sclerotic change and apposition of the spinous processes on plain radiographs. Preoperative foraminal stenosis was determined by the decrease of perineural fat intensity detected by parasagittal T1-weighted magnetic resonance imaging. Preoperative percentage slip, segmental range of motion, and segmental scoliosis were analyzed on preoperative radiographs. Univariate analysis followed by stepwise logistic regression analysis determined factors independently associated with recurrence of sciatica. Results Stepwise logistic regression revealed kissing spine (p=0.024; odds ratio, 3.80) and foraminal stenosis (p<0.01; odds ratio, 17.89) as independent risk factors for the recurrence of sciatica after posterior lumbar spinal decompression with spinous process floating procedures for lumbar spinal canal stenosis. Conclusions When a patient shows kissing spine and concomitant subclinical foraminal stenosis at the affected level, the selection of an appropriate surgical procedure should be carefully discussed.
PMID:27994785
Koda, Masao; Mannoji, Chikato; Murakami, Masazumi; Kinoshita, Tomoaki; Hirayama, Jiro; Miyashita, Tomohiro; Eguchi, Yawara; Yamazaki, Masashi; Suzuki, Takane; Aramomi, Masaaki; Ota, Mitsutoshi; Maki, Satoshi; Takahashi, Kazuhisa; Furuya, Takeo
2016-12-01
Retrospective case-control study. To determine whether kissing spine is a risk factor for recurrence of sciatica after lumbar posterior decompression using a spinous process floating approach. Kissing spine is defined by apposition and sclerotic change of the facing spinous processes as shown in X-ray images, and is often accompanied by marked disc degeneration and decreased disc height. If kissing spine contributes significantly to weight bearing and the stability of the lumbar spine, trauma to the spinous process might induce a breakdown of lumbar spine stability after posterior decompression surgery in cases of kissing spine. The present study included 161 patients who had undergone posterior decompression surgery for lumbar canal stenosis using a spinous process floating approach. We defined recurrence of sciatica as sciatica that resolved after the initial surgery and then recurred. Kissing spine was defined as sclerotic change and apposition of the spinous processes on plain radiographs. Preoperative foraminal stenosis was determined by the decrease of perineural fat intensity detected by parasagittal T1-weighted magnetic resonance imaging. Preoperative percentage slip, segmental range of motion, and segmental scoliosis were analyzed on preoperative radiographs. Univariate analysis followed by stepwise logistic regression analysis determined factors independently associated with recurrence of sciatica. Stepwise logistic regression revealed kissing spine (p=0.024; odds ratio, 3.80) and foraminal stenosis (p<0.01; odds ratio, 17.89) as independent risk factors for the recurrence of sciatica after posterior lumbar spinal decompression with spinous process floating procedures for lumbar spinal canal stenosis. When a patient shows kissing spine and concomitant subclinical foraminal stenosis at the affected level, the selection of an appropriate surgical procedure should be carefully discussed.
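For readers unfamiliar with how an odds ratio is read, a minimal sketch follows. The counts in the 2x2 table are invented for illustration; the paper's odds ratios of 3.80 and 17.89 come from an adjusted stepwise logistic regression, not from a raw table like this.

```python
import math

def odds_ratio(a, b, c, d):
    """Unadjusted odds ratio from a 2x2 table
    (a: exposed cases, b: exposed controls,
     c: unexposed cases, d: unexposed controls),
    with a 95% confidence interval via the Woolf (log) method."""
    orr = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    lo = math.exp(math.log(orr) - 1.96 * se)
    hi = math.exp(math.log(orr) + 1.96 * se)
    return orr, (lo, hi)

# Hypothetical counts: 8 of 38 kissing-spine patients had recurrence
# versus 6 of 123 patients without kissing spine.
orr, ci = odds_ratio(8, 30, 6, 117)  # -> 5.2
```

A confidence interval that excludes 1 (as here) is what marks the factor as a statistically significant risk factor.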
Counting pollen grains using readily available, free image processing and analysis software.
Costa, Clayton M; Yang, Suann
2009-10-01
Although many methods exist for quantifying the number of pollen grains in a sample, there are few standard methods that are user-friendly, inexpensive and reliable. The present contribution describes a new method of counting pollen using readily available, free image processing and analysis software. Pollen was collected from anthers of two species, Carduus acanthoides and C. nutans (Asteraceae), then illuminated on slides and digitally photographed through a stereomicroscope. Using ImageJ (NIH), these digital images were processed to remove noise and sharpen individual pollen grains, then analysed to obtain a reliable total count of the number of grains present in the image. A macro was developed to analyse multiple images together. To assess the accuracy and consistency of pollen counting by ImageJ analysis, counts were compared with those made by the human eye. Image analysis produced pollen counts in 60 s or less per image, considerably faster than counting with the human eye (5-68 min). In addition, counts produced with the ImageJ procedure were similar to those obtained by eye. Because count parameters are adjustable, this image analysis protocol may be used for many other plant species. Thus, the method provides a quick, inexpensive and reliable solution to counting pollen from digital images, not only reducing the chance of error but also substantially lowering labour requirements.
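The counting step that ImageJ performs after thresholding amounts to labelling connected foreground regions. A minimal pure-Python sketch of that idea (not the authors' actual ImageJ macro) on a toy binary image:

```python
from collections import deque

def count_grains(binary):
    """Count connected foreground regions (4-connectivity) in a
    binary image given as a list of lists of 0/1 values."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                count += 1                     # new grain found
                q = deque([(y, x)])
                seen[y][x] = True
                while q:                       # flood-fill the grain
                    cy, cx = q.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return count

img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 1, 1]]
# two separated blobs -> two "grains"
```

Touching grains merge into one region under this naive scheme; the size and circularity parameters mentioned in the protocol are what let ImageJ's particle analysis mitigate that.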
Real-time Interpolation for True 3-Dimensional Ultrasound Image Volumes
Ji, Songbai; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.
2013-01-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1–2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery. PMID:21266563
Real-time interpolation for true 3-dimensional ultrasound image volumes.
Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D
2011-02-01
We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm³ voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
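Trilinear interpolation itself is three successive linear interpolations along the volume axes. A minimal sketch, assuming a nested-list volume indexed volume[z][y][x] and a query point strictly inside the grid:

```python
def trilinear(volume, x, y, z):
    """Trilinearly interpolate a scalar volume (nested lists indexed
    volume[z][y][x]) at a fractional coordinate (x, y, z).
    Assumes 0 <= coordinate < axis length - 1."""
    x0, y0, z0 = int(x), int(y), int(z)       # lower corner of the cell
    x1, y1, z1 = x0 + 1, y0 + 1, z0 + 1
    dx, dy, dz = x - x0, y - y0, z - z0       # fractional offsets

    def v(zz, yy, xx):
        return volume[zz][yy][xx]

    # interpolate along x, then y, then z
    c00 = v(z0, y0, x0) * (1 - dx) + v(z0, y0, x1) * dx
    c10 = v(z0, y1, x0) * (1 - dx) + v(z0, y1, x1) * dx
    c01 = v(z1, y0, x0) * (1 - dx) + v(z1, y0, x1) * dx
    c11 = v(z1, y1, x0) * (1 - dx) + v(z1, y1, x1) * dx
    c0 = c00 * (1 - dy) + c10 * dy
    c1 = c01 * (1 - dy) + c11 * dy
    return c0 * (1 - dz) + c1 * dz

vol = [[[0, 1], [2, 3]], [[4, 5], [6, 7]]]    # 2x2x2 cube, value = x + 2y + 4z
# centre of the cube: mean of the 8 corners = 3.5
```

The real-time figures reported in the paper presumably come from an optimized implementation of these same eight-corner fetches and weights applied over millions of voxels.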
Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim
2009-01-01
This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078
GAP: yet another image processing system for solar observations.
NASA Astrophysics Data System (ADS)
Keller, C. U.
GAP is a versatile, interactive image processing system for analyzing solar observations, in particular extended time sequences, and for preparing publication quality figures. It consists of an interpreter that is based on a language with a control flow similar to PASCAL and C. The interpreter may be accessed from a command line editor and from user-supplied functions, procedures, and command scripts. GAP is easily expandable via external FORTRAN programs that are linked to the GAP interface routines. The current version of GAP runs on VAX, DECstation, Sun, and Apollo computers. Versions for MS-DOS and OS/2 are in preparation.
NASA Astrophysics Data System (ADS)
Loreggia, D.; Tataranni, F.; Trivero, P.; Biamino, W.; Di Matteo, L.
2017-10-01
We present the implementation of a procedure that adapts an Asymmetric Wiener Filtering (AWF) methodology, aimed at detecting and discarding ghost signals due to azimuth ambiguities in SAR images, to X-band COSMO-SkyMed (CSK) images in the framework of the SEASAFE (Slick Emissions And Ship Automatic Features Extraction) project, developed at the Department of Science and Technology Innovation of the University of Piemonte Orientale, Alessandria, Italy. SAR is a useful tool for day-and-night monitoring of the sea surface in all weather conditions. The SEASAFE project is a software platform, developed in IDL, able to process C- and X-band SAR images with enhanced algorithm modules for land masking, sea pollution (oil spill) detection, and ship detection; wind and wave evaluation are also available. In this context, the need to identify and discard false alarms is a critical requirement. Azimuth ambiguity is one of the main causes of false alarms in the ship detection procedure, and many methods to address it have been proposed in the recent literature. After a review of different approaches to this problem, we describe the procedure used to adapt the AWF approach presented in [1,2] to X-band CSK images by implementing a selective-blocks approach.
The collaboration of grouping laws in vision.
Grompone von Gioi, Rafael; Delon, Julie; Morel, Jean-Michel
2012-01-01
Gestalt theory gives a list of geometric grouping laws that could, in principle, give a complete account of human image perception. Based on an extensive thesaurus of clever graphical images, this theory discusses how grouping laws collaborate and conflict toward a global image understanding. Unfortunately, as shown in the bibliographical analysis herein, the attempts to formalize the grouping laws in computer vision and psychophysics have at best succeeded in computing individual partial structures (or partial gestalts), such as alignments or symmetries. Nevertheless, we show here that a clever but never formalized Gestalt experimental procedure, the Nachzeichnung, suggests a numerical setup to implement and test the collaboration of partial gestalts. The new computational procedure proposed here analyzes a digital image and performs a numerical simulation that we call Nachtanz, or Gestaltic dance. In this dance, the analyzed digital image is gradually deformed in a random way, while maintaining the detected partial gestalts. The resulting dancing images should be perceptually indistinguishable if and only if the grouping process was complete. Like the Nachzeichnung, the Nachtanz permits a visual exploration of the degrees of freedom still available to a figure after all partial groups (or gestalts) have been detected. In the newly proposed procedure, instead of drawing themselves, subjects will be shown samples of the automatic Gestalt dances and asked to evaluate whether the figures are similar. Several preliminary numerical results with this new Gestaltic experimental setup are thoroughly discussed. Copyright © 2012 Elsevier Ltd. All rights reserved.
Huang, Huajun; Xiang, Chunling; Zeng, Canjun; Ouyang, Hanbin; Wong, Kelvin Kian Loong; Huang, Wenhua
2015-12-01
We improved the geometrical modeling procedure for fast and accurate reconstruction of orthopedic structures. This procedure consists of medical image segmentation, three-dimensional geometrical reconstruction, and assignment of material properties. The patient-specific orthopedic structures reconstructed by this improved procedure can be used in virtual surgical planning, 3D printing of real orthopedic structures, and finite element analysis. Conventional modeling consists of image segmentation, geometrical reconstruction, mesh generation, and assignment of material properties. The present study modified the conventional method to enhance the software operating procedures. A patient's CT images of different bones were acquired and subsequently reconstructed to give models. The reconstruction procedures were three-dimensional image segmentation, modification of the edge length and quantity of meshes, and assignment of material properties according to the intensity of grey values. We compared the performance of our procedure to the conventional modeling procedure in terms of software operating time, success rate, and mesh quality. Our proposed framework has the following improvements in the geometrical modeling: (1) processing time (femur: 87.16 ± 5.90%; pelvis: 80.16 ± 7.67%; thoracic vertebra: 17.81 ± 4.36%; P < 0.05); (2) least volume reduction (femur: 0.26 ± 0.06%; pelvis: 0.70 ± 0.47%; thoracic vertebra: 3.70 ± 1.75%; P < 0.01); and (3) mesh quality in terms of aspect ratio (femur: 8.00 ± 7.38%; pelvis: 17.70 ± 9.82%; thoracic vertebra: 13.93 ± 9.79%; P < 0.05) and maximum angle (femur: 4.90 ± 5.28%; pelvis: 17.20 ± 19.29%; thoracic vertebra: 3.86 ± 3.82%; P < 0.05). Our proposed patient-specific geometrical modeling requires less operating time and workload, yet the orthopedic structures were generated at a higher rate of success compared with the conventional method. It is expected to benefit the surgical planning of orthopedic structures with less operating time and high modeling accuracy.
Characterization of laser beam transmission through a High Density Polyethylene (HDPE) plate
NASA Astrophysics Data System (ADS)
Genna, S.; Leone, C.; Tagliaferri, V.
2017-02-01
Infrared (IR) light propagation in semicrystalline polymers involves mechanisms such as reflection, transmission, absorption, and internal scattering. The rates of these mechanisms determine both the interaction mechanism and the temperatures reached in IR heating processes. Consequently, knowledge of these rates is fundamental in the development of IR heating processes in order to avoid damage to the polymer and to increase the process energy efficiency. The aim of this work is to establish a simple procedure to determine the rates of absorbed, reflected, transmitted, and scattered energy in the case of an unfilled High Density Polyethylene (HDPE) plate. Experimental tests were performed by exposing an HDPE plate, 3 mm in thickness, to a diode laser source working at the fundamental wavelength of 975 nm. The transmitted power was measured with a power meter, and the reflected power by applying the Beer-Lambert law to samples of different thicknesses. IR thermal images were used to measure the absorbed ratio. The scattered ratio was obtained by energy balance, as the difference between the incoming power and the other ratios, and the IR thermal images were also used to validate the procedure.
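The closing energy balance is simple enough to state in code. A sketch with invented power readings (the abstract does not report numeric values):

```python
def scattered_ratio(incident, transmitted, reflected, absorbed):
    """Scattered power fraction by energy balance: whatever remains of
    the incident power after transmission, reflection and absorption
    is attributed to internal scattering."""
    scattered = incident - transmitted - reflected - absorbed
    return scattered / incident

# Illustrative numbers only: a 10 W incident beam, with 4.5 W
# transmitted, 0.8 W reflected and 3.2 W absorbed.
r = scattered_ratio(10.0, 4.5, 0.8, 3.2)  # -> 0.15
```

In practice each of the three measured terms carries its own uncertainty, so the balance residual should be sanity-checked (e.g. it must stay non-negative) before being reported as the scattered fraction.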
Ink-constrained halftoning with application to QR codes
NASA Astrophysics Data System (ADS)
Bayeh, Marzieh; Compaan, Erin; Lindsey, Theodore; Orlow, Nathan; Melczer, Stephen; Voller, Zachary
2014-01-01
This paper examines adding visually significant, human-recognizable data to QR codes without affecting their machine readability, by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of a secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for the examples given in the paper. Numerous examples of QR codes with embedded images are included.
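The paper's exact halftoning scheme is not detailed in the abstract, but the general idea used in QR-code halftoning can be sketched: dither the secondary image within each module while locking the module's centre pixels to the QR colour, since decoders sample near module centres. A toy version for a 4x4-pixel module with an ordered (Bayer) threshold matrix:

```python
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def halftone_module(gray, module_black):
    """Halftone one 4x4-pixel QR module: dither a secondary image's
    grey level (0..1) with an ordered Bayer matrix, but lock the
    centre 2x2 pixels to the module's QR colour (1 = black) so a
    scanner, which samples near module centres, still reads the code."""
    out = []
    for y in range(4):
        row = []
        for x in range(4):
            if 1 <= y <= 2 and 1 <= x <= 2:   # centre: keep the QR data bit
                row.append(1 if module_black else 0)
            else:                             # border: dithered image pixel
                row.append(1 if gray > (BAYER4[y][x] + 0.5) / 16 else 0)
        out.append(row)
    return out
```

The information loss the paper quantifies with entropy corresponds here to the border pixels no longer carrying QR data, plus the image detail sacrificed in each locked centre.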
Adaptive near-field beamforming techniques for sound source imaging.
Cho, Yong Thung; Roan, Michael J
2009-02-01
Phased array signal processing techniques such as beamforming have a long history in applications such as sonar for the detection and localization of far-field sound sources. Two sometimes competing challenges arise in any type of spatial processing: minimizing contributions from directions other than the look direction and minimizing the width of the main lobe. To tackle this problem, a large body of work has been devoted to the development of adaptive procedures that attempt to minimize side lobe contributions to the spatial processor output. In this paper, two adaptive beamforming procedures, minimum variance distortionless response and weight optimization to minimize maximum side lobes, are modified for use in source visualization applications to estimate beamforming pressure and intensity using near-field pressure measurements. These adaptive techniques are compared to a fixed near-field focusing technique (both use near-field beamforming weightings focused at source locations estimated from spherical-wave array manifold vectors with spatial windows). The sound source resolution accuracies of the near-field imaging procedures with different weighting strategies are compared using numerical simulations in both anechoic and reverberant environments with random measurement noise. Experimental results are also given for near-field sound pressure measurements of an enclosed loudspeaker.
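The minimum variance distortionless response weights mentioned above solve w = R⁻¹d / (dᵀR⁻¹d): unit gain is kept in the look direction while output power (and hence side lobe energy) is minimized. A real-valued two-sensor sketch; practical arrays use complex steering vectors and sample covariance estimates:

```python
def mvdr_weights_2x2(R, d):
    """MVDR beamformer weights w = R^-1 d / (d^T R^-1 d) for a
    two-sensor array with real 2x2 covariance R and steering vector d
    (a real-valued toy case for illustration)."""
    (a, b), (c, e) = R
    det = a * e - b * c
    Rinv = [[ e / det, -b / det],
            [-c / det,  a / det]]                 # explicit 2x2 inverse
    Rd = [Rinv[0][0] * d[0] + Rinv[0][1] * d[1],
          Rinv[1][0] * d[0] + Rinv[1][1] * d[1]]  # R^-1 d
    denom = d[0] * Rd[0] + d[1] * Rd[1]           # d^T R^-1 d
    return [Rd[0] / denom, Rd[1] / denom]

# With uncorrelated, equal-power noise (R = I), MVDR reduces to
# plain delay-and-sum weighting.
w = mvdr_weights_2x2([[1.0, 0.0], [0.0, 1.0]], [1.0, 1.0])  # -> [0.5, 0.5]
```

Whatever R is, the distortionless constraint w·d = 1 holds by construction, which is the property the near-field adaptation in the paper relies on.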
Lim, Byoung-Gyun; Woo, Jea-Choon; Lee, Hee-Young; Kim, Young-Soo
2008-01-01
Synthetic wideband waveforms (SWW) combine a stepped frequency CW waveform and a chirp signal waveform to achieve high range resolution without requiring a large bandwidth or the consequent very high sampling rate. If an efficient algorithm like the range-Doppler algorithm (RDA) is used to acquire the SAR images for synthetic wideband signals, errors occur due to approximations, so the images may not show the best possible result. This paper proposes a modified subpulse SAR processing algorithm for synthetic wideband signals which is based on RDA. An experiment with an automobile-based SAR system showed that the proposed algorithm is quite accurate with a considerable improvement in resolution and quality of the obtained SAR image. PMID:27873984
High-speed video capillaroscopy method for imaging and evaluation of moving red blood cells
NASA Astrophysics Data System (ADS)
Gurov, Igor; Volkov, Mikhail; Margaryants, Nikita; Pimenov, Aleksei; Potemkin, Andrey
2018-05-01
The video capillaroscopy system with a high image recording rate, able to resolve red blood cells moving with velocities up to 5 mm/s in a capillary, is considered. The proposed procedures for processing the recorded video sequence allow evaluating the spatial capillary area, capillary diameter, and central line with high accuracy and reliability, independently of the properties of the individual capillary. A two-dimensional inter-frame procedure is applied to find the lateral shift between neighboring images in the blood flow area with moving red blood cells and to measure directly the blood flow velocity along the capillary central line. The developed method opens new opportunities for biomedical diagnostics, particularly due to long-term continuous monitoring of red blood cell velocity in a capillary. A spatio-temporal representation of capillary blood flow is considered. Experimental results of direct measurement of blood flow velocity in a single capillary as well as in a capillary network are presented and discussed.
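The inter-frame velocity measurement reduces to finding the displacement that best aligns consecutive frames. A 1-D sketch of the idea (the paper's procedure is two-dimensional); the intensity profiles along the capillary centre line are invented:

```python
def best_shift(a, b, max_shift):
    """Estimate the integer shift of signal b relative to a by
    maximising the mean product over the overlapping samples
    (a 1-D analogue of 2-D inter-frame correlation)."""
    n = len(a)
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(a[i], b[i + s]) for i in range(n) if 0 <= i + s < n]
        score = sum(x * y for x, y in pairs) / len(pairs)
        if score > best_score:
            best, best_score = s, score
    return best

frame1 = [0, 0, 1, 5, 1, 0, 0, 0]   # bright cell at sample 3
frame2 = [0, 0, 0, 0, 1, 5, 1, 0]   # same cell, moved 2 samples
# velocity = shift * pixel pitch * frame rate
```

Subpixel accuracy, if needed, can be obtained by fitting a parabola to the correlation peak; the principle is unchanged.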
Potential and limitations of webcam images for snow cover monitoring in the Swiss Alps
NASA Astrophysics Data System (ADS)
Dizerens, Céline; Hüsler, Fabia; Wunderle, Stefan
2017-04-01
In Switzerland, several thousand outdoor webcams are currently connected to the Internet. They deliver freely available images that can be used to analyze snow cover variability at high spatio-temporal resolution. To make use of this big data source, we have implemented a webcam-based snow cover mapping procedure, which allows snow cover maps to be derived almost automatically from such webcam images. As there is usually no information available about the webcams and their parameters, our registration approach automatically resolves these parameters (camera orientation, principal point, field of view) using an estimate of the webcam's position, the mountain silhouette, and a high-resolution digital elevation model (DEM). Combined with automatic snow classification and image alignment using SIFT features, our procedure can be applied to arbitrary images to generate snow cover maps with a minimum of effort. The resulting snow cover maps have the same resolution as the digital elevation model and indicate whether each grid cell is snow-covered, snow-free, or hidden from the webcam's position. Up to now, we have processed images from about 290 webcams in our archive, and evaluated images from 20 webcams using manually selected ground control points (GCPs) to assess the mapping accuracy of our procedure. We present methodological limitations and ongoing improvements, show some applications of our snow cover maps, and demonstrate that webcams not only offer a great opportunity to complement satellite-derived snow retrieval under cloudy conditions, but also serve as a reference for improved validation of satellite-based approaches.
A three-image algorithm for hard x-ray grating interferometry.
Pelliccia, Daniele; Rigon, Luigi; Arfelli, Fulvia; Menk, Ralf-Hendrik; Bukreeva, Inna; Cedola, Alessia
2013-08-12
A three-image method to extract absorption, refraction, and scattering information in hard x-ray grating interferometry is presented. The method comprises a post-processing approach that is an alternative to the conventional phase-stepping procedure and is inspired by a similar three-image technique developed for analyzer-based x-ray imaging. Results obtained with this algorithm are quantitatively comparable with phase stepping. The method can be further extended to samples with negligible scattering, where only two images are needed to separate the absorption and refraction signals. Thanks to the limited number of images required, this technique is a viable route to bio-compatible imaging with an x-ray grating interferometer. In addition, our method elucidates and strengthens the formal and practical analogies between grating interferometry and the (non-interferometric) diffraction enhanced imaging technique.
Zikmund, T; Kvasnica, L; Týč, M; Křížová, A; Colláková, J; Chmelík, R
2014-11-01
Transmitted light holographic microscopy is used in particular for quantitative phase imaging of transparent microscopic objects such as living cells. The study of the cell is based on extraction of dynamic data on cell behaviour from the time-lapse sequence of phase images. However, the phase images are affected by phase aberrations that make the analysis particularly difficult, because the phase deformation is prone to change during long-term experiments. Here, we present a novel algorithm for sequential processing of phase images of living cells in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting. Moreover, it identifies and segments the individual cells in the phase image. All these procedures are performed automatically and applied immediately after every single phase image is obtained. This property of the algorithm is important for real-time cell quantitative phase imaging and instantaneous control of the course of the experiment by playback of the recorded sequence up to the actual time. Such operator intervention is a forerunner of process automation derived from image analysis. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells using an off-axis holographic microscope. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
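The aberration-compensation step can be illustrated with an unweighted plane fit, a simplified stand-in for the paper's weighted least-squares surface fitting (which typically uses a higher-order polynomial and per-pixel weights):

```python
def fit_plane(points):
    """Least-squares fit of z = a*x + b*y + c to (x, y, z) samples,
    solving the 3x3 normal equations by Gaussian elimination."""
    # build normal equations  (A^T A) p = A^T z,  rows of A = (x, y, 1)
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y, z in points:
        row = (x, y, 1.0)
        for i in range(3):
            for j in range(3):
                M[i][j] += row[i] * row[j]
            v[i] += row[i] * z
    # forward elimination with partial pivoting
    for i in range(3):
        piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[piv] = M[piv], M[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for j in range(i, 3):
                M[r][j] -= f * M[i][j]
            v[r] -= f * v[i]
    # back substitution
    p = [0.0] * 3
    for i in (2, 1, 0):
        p[i] = (v[i] - sum(M[i][j] * p[j] for j in range(i + 1, 3))) / M[i][i]
    return p  # a, b, c

# A tiny synthetic "phase image" with a pure tilted-plane aberration;
# subtracting the fitted plane would flatten it exactly.
pts = [(x, y, 0.3 * x - 0.2 * y + 1.0) for x in range(4) for y in range(4)]
a, b, c = fit_plane(pts)
```

In the weighted variant, pixels covered by cells would receive low weight so that only the background drives the fit, which is what makes the compensation robust in a time-lapse sequence.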
Referral criteria and clinical decision support: radiological protection aspects for justification.
Pérez, M del Rosario
2015-06-01
Advanced imaging technology has opened new horizons for medical diagnostics and improved patient care. However, many procedures are unjustified and do not provide a net benefit. An area of particular concern is the unnecessary use of radiation when clinical evaluation or other imaging modalities could provide an accurate diagnosis. Referral criteria for medical imaging are consensus statements based on the best-available evidence to assist the decision-making process when choosing the best imaging procedure for a given patient. Although they are advisory rather than compulsory, physicians should have good reasons for deviation from these criteria. Voluntary use of referral criteria has shown limited success compared with integration into clinical decision support systems. These systems support good medical practice, can improve health service delivery, and foster safer, more efficient, fair, cost-effective care, thus contributing to the strengthening of health systems. Justification of procedures and optimisation of protection, the two pillars of radiological protection in health care, are implicit in the notion of good medical practice. However, some health professionals are not familiar with these principles, and have low awareness of radiological protection aspects of justification. A stronger collaboration between radiation protection and healthcare communities could contribute to improve the radiation protection culture in medical practice. © The Chartered Institution of Building Services Engineers 2014.
Anti-nuclear antibody screening using HEp-2 cells.
Buchner, Carol; Bryant, Cassandra; Eslami, Anna; Lakos, Gabriella
2014-06-23
The American College of Rheumatology position statement on ANA testing stipulates the use of IIF as the gold standard method for ANA screening (1). Although IIF is an excellent screening test in expert hands, the technical difficulties of processing and reading IIF slides (labor-intensive slide processing, manual reading, the need for experienced, trained technologists, and the use of a darkroom) make the IIF method difficult to fit into the workflow of modern, automated laboratories. The first and crucial step towards high-quality ANA screening is careful slide processing. This procedure is labor intensive and requires full understanding of the process, as well as attention to detail and experience. Slide reading is performed by fluorescence microscopy in darkrooms by trained technologists who are familiar with the various patterns in the context of the cell cycle and the morphology of interphase and dividing cells. Given that IIF is the first-line screening tool for SARD, understanding the steps to correctly perform this technique is critical. Recently, digital imaging systems have been developed for the automated reading of IIF slides. These systems, such as the NOVA View Automated Fluorescent Microscope, are designed to streamline the routine IIF workflow. NOVA View acquires and stores high-resolution digital images of the wells, thereby separating image acquisition from interpretation; images are viewed and interpreted on high-resolution computer monitors. It stores images for future reference and supports the operator's interpretation by providing fluorescent light intensity data on the images. It also preliminarily categorizes results as positive or negative, and provides pattern recognition for positive samples. In summary, it eliminates the need for a darkroom, and automates and streamlines the IIF reading and interpretation workflow. Most importantly, it increases consistency between readers and readings.
Moreover, with the use of barcoded slides, transcription errors are eliminated by providing sample traceability and positive patient identification. This results in increased patient data integrity and safety. The overall goal of this video is to demonstrate the IIF procedure, including slide processing, identification of common IIF patterns, and the introduction of new advancements to simplify and harmonize this technique.
NASA Astrophysics Data System (ADS)
Kuckein, C.; Denker, C.; Verma, M.; Balthasar, H.; González Manrique, S. J.; Louis, R. E.; Diercke, A.
2017-10-01
A huge amount of data has been acquired with the GREGOR Fabry-Pérot Interferometer (GFPI), large-format facility cameras, and, since 2016, with the High-resolution Fast Imager (HiFI). These data are processed in standardized procedures with the aim of providing science-ready data for the solar physics community. For this purpose, we have developed a user-friendly data reduction pipeline called "sTools", based on the Interactive Data Language (IDL) and licensed under a Creative Commons license. The pipeline delivers reduced and image-reconstructed data with a minimum of user interaction. Furthermore, quick-look data are generated, as well as a webpage with an overview of the observations and their statistics. All the processed data are stored online at the GREGOR GFPI and HiFI data archive of the Leibniz Institute for Astrophysics Potsdam (AIP). The principles of the pipeline are presented together with selected high-resolution spectral scans and images processed with sTools.
Analysis of x-ray hand images for bone age assessment
NASA Astrophysics Data System (ADS)
Serrat, Joan; Vitria, Jordi M.; Villanueva, Juan J.
1990-09-01
In this paper we describe a model-based system for the assessment of skeletal maturity on hand radiographs by the TW2 method. The problem consists in classifying each of a set of bones appearing in an image into one of several stages described in an atlas. A first approach, consisting of independent pre-processing, segmentation, and classification phases, is also presented. However, it is only well suited to well-contrasted, low-noise images without superimposed bones, where edge detection by zero crossings of second directional derivatives is able to extract all bone contours, perhaps with small gaps and a few false edges on the background. Hence, the use of all available knowledge about the problem domain is needed to build a reasonably general system. We have designed a rule-based system to narrow down the range of possible stages for each bone and to guide the analysis process. It calls procedures written in conventional languages for matching stage models against the image and extracting the features needed in the classification process.
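The rule-based narrowing of candidate stages described above can be sketched as follows. This is a hypothetical illustration only: the feature names, thresholds, and numeric stage labels are invented for the example (TW2 actually uses lettered stages), and the real system's rules are far richer.

```python
def narrow_stages(features, candidate_stages=None):
    """Prune the set of plausible maturity stages for one bone.

    `features` is a dict of hypothetical measurements extracted from the
    radiograph; the two rules below are illustrative stand-ins for the
    domain knowledge encoded in the actual rule base.
    """
    stages = set(candidate_stages or range(1, 9))  # numeric stand-ins for TW2 stages
    # Rule 1: an epiphysis narrower than half the metaphysis rules out late stages.
    if features["epiphysis_width"] < 0.5 * features["metaphysis_width"]:
        stages -= {6, 7, 8}
    # Rule 2: visible epiphysis-metaphysis fusion rules out early stages.
    if features["fused"]:
        stages -= {1, 2, 3, 4}
    return sorted(stages)
```

Only the surviving stages then need expensive model matching against the image.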
Automated processing for proton spectroscopic imaging using water reference deconvolution.
Maudsley, A A; Wu, Z; Meyerhoff, D J; Weiner, M W
1994-06-01
Automated formation of MR spectroscopic images (MRSI) is necessary before routine application of these methods is possible for in vivo studies; however, this task is complicated by the presence of spatially dependent instrumental distortions and the complex nature of the MR spectrum. A data processing method is presented for completely automated formation of in vivo proton spectroscopic images, and applied for analysis of human brain metabolites. This procedure uses the water reference deconvolution method (G. A. Morris, J. Magn. Reson. 80, 547(1988)) to correct for line shape distortions caused by instrumental and sample characteristics, followed by parametric spectral analysis. Results for automated image formation were found to compare favorably with operator dependent spectral integration methods. While the water reference deconvolution processing was found to provide good correction of spatially dependent resonance frequency shifts, it was found to be susceptible to errors for correction of line shape distortions. These occur due to differences between the water reference and the metabolite distributions.
The Landsat Data Continuity Mission Operational Land Imager (OLI) Radiometric Calibration
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Dabney, Philip W.; Murphy-Morris, Jeanine E.; Knight, Edward J.; Kvaran, Geir; Barsi, Julia A.
2010-01-01
The Operational Land Imager (OLI) on the Landsat Data Continuity Mission (LDCM) has a comprehensive radiometric characterization and calibration program beginning with the instrument design, and extending through integration and test, on-orbit operations, and science data processing. Key instrument design features for radiometric calibration include dual solar diffusers and multi-lamped on-board calibrators. The radiometric calibration transfer procedure from NIST standards has multiple checks on the radiometric scale throughout the process and uses a heliostat as part of the transfer to orbit of the radiometric calibration. On-orbit lunar imaging will be used to track the instrument's stability, and side-slither maneuvers will be used, in addition to the solar diffuser, to flat-field across the thousands of detectors per band. A Calibration Validation Team is continuously involved in the process from design to operations. This team uses an Image Assessment System (IAS), part of the ground system, to characterize and calibrate the on-orbit data.
NASA Astrophysics Data System (ADS)
Otoum, Nesreen A.; Edirisinghe, Eran A.; Dua, Harminder; Faraj, Lana
2012-06-01
Corneal ulcers are a common eye disease that requires prompt treatment. Recently a number of treatment approaches have been introduced that have proven to be very effective. Unfortunately, the monitoring process of the treatment procedure remains manual and hence time consuming and prone to human errors. In this research we propose an automatic image analysis based approach to measure the size of an ulcer and its subsequent further investigation to determine the effectiveness of any treatment process followed. In ophthalmology an ulcer area is detected for further inspection via luminous excitation of a dye. Usually in the imaging systems utilised for this purpose (i.e. a slit lamp with an appropriate dye) the ulcer area is excited to be luminous green in colour as compared to the rest of the cornea, which appears blue/brown. In the proposed approach we analyse the image in the HSV colour space. Initially a pre-processing stage that carries out a local histogram equalisation is used to bring back detail in any over- or under-exposed areas. Secondly we deal with the removal of potential reflections from the affected areas by making use of image registration of two candidate corneal images based on the detected corneal areas. Thirdly the exact corneal boundary is detected by initially registering an ellipse to the candidate corneal boundary detected via edge detection and subsequently allowing the user to modify the boundary to overlap with the boundary of the ulcer being observed. Although this step makes the approach semi-automatic, it removes the impact of breakages of the corneal boundary due to occlusion, noise, or image quality degradations. The ratio of the ulcer area confined within the corneal area to the corneal area is used as a measure of comparison. We demonstrate the use of the proposed tool in the analysis of the effectiveness of a treatment procedure adopted for corneal ulcers in patients by comparing the variation of corneal size over time.
NASA Technical Reports Server (NTRS)
Guseman, L. F., Jr.; Decell, H. P., Jr.
1975-01-01
An outline for an Image 100 procedures manual for Earth Resources Program image analysis was developed which sets forth guidelines that provide a basis for the preparation and updating of an Image 100 Procedures Manual. The scope of the outline was limited to definition of general features of a procedures manual together with special features of an interactive system. Computer programs were identified which should be implemented as part of an applications oriented library for the system.
Colony image acquisition and segmentation
NASA Astrophysics Data System (ADS)
Wang, W. X.
2007-12-01
Counting of colonies and plaques has a large number of applications, including food, dairy, beverages, hygiene, environmental monitoring, water, toxicology, sterility testing, Ames testing, pharmaceuticals, paints, sterile fluids, and fungal contamination. Recently, many researchers and developers have worked on systems of this kind; investigation shows that some existing systems have problems, mainly in image acquisition and image segmentation. In order to acquire colony images of good quality, an illumination box was constructed: the box includes front lighting and back lighting, which can be selected by users based on the properties of the colony dishes. With the illumination box, lighting is uniform, and the colony dish can be placed in the same position every time, which makes image processing easier. The developed colony image segmentation algorithm consists of three sub-algorithms: (1) image classification; (2) image processing; and (3) colony delineation. The colony delineation algorithm mainly contains procedures based on grey-level similarity, boundary tracing, shape information, and colony exclusion. In addition, a number of algorithms were developed for colony analysis. The system has been tested with satisfactory results.
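The threshold-then-delineate idea at the core of such systems can be sketched in a few lines. This is a minimal illustration, not the paper's algorithm: it assumes a plain 2D grey-level array, a fixed global threshold, and 4-connectivity, whereas the actual system adds grey-level similarity, boundary tracing, and shape-based exclusion.

```python
def count_colonies(image, threshold):
    """Count connected bright regions (candidate colonies) in a 2D image.

    Pixels above `threshold` are foreground; 4-connected components are
    counted with an iterative flood fill.
    """
    rows, cols = len(image), len(image[0])
    mask = [[image[r][c] > threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                count += 1                      # new component found
                stack = [(r, c)]
                while stack:                    # flood-fill the component
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count
```

With uniform back lighting, a single global threshold like this is often adequate; uneven illumination is what forces the more elaborate delineation steps.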
Fusion Imaging for Procedural Guidance.
Wiley, Brandon M; Eleid, Mackram F; Thaden, Jeremy J
2018-05-01
The field of percutaneous structural heart interventions has grown tremendously in recent years. This growth has fueled the development of new imaging protocols and technologies in parallel to help facilitate these minimally-invasive procedures. Fusion imaging is an exciting new technology that combines the strength of 2 imaging modalities and has the potential to improve procedural planning and the safety of many commonly performed transcatheter procedures. In this review we discuss the basic concepts of fusion imaging along with the relative strengths and weaknesses of static vs dynamic fusion imaging modalities. This review will focus primarily on echocardiographic-fluoroscopic fusion imaging and its application in commonly performed transcatheter structural heart procedures. Copyright © 2017 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Simulators for training in ultrasound guided procedures.
Farjad Sultan, Syed; Shorten, George; Iohom, Gabrielle
2013-06-01
The four major categories of skill sets associated with proficiency in ultrasound guided regional anaesthesia are 1) understanding device operations, 2) image optimization, 3) image interpretation and 4) visualization of needle insertion and injection of the local anaesthetic solution. Of these, visualization of needle insertion and injection of local anaesthetic solution can be practiced using simulators and phantoms. This survey of existing simulators summarizes advantages and disadvantages of each. Current deficits pertain to the validation process.
Wang, Juan; Nishikawa, Robert M; Yang, Yongyi
2017-07-01
Mammograms acquired with full-field digital mammography (FFDM) systems are provided in both "for-processing" and "for-presentation" image formats. For-presentation images are traditionally intended for visual assessment by the radiologists. In this study, we investigate the feasibility of using for-presentation images in computerized analysis and diagnosis of microcalcification (MC) lesions. We make use of a set of 188 matched mammogram image pairs of MC lesions from 95 cases (biopsy proven), in which both for-presentation and for-processing images are provided for each lesion. We then analyze and characterize the MC lesions from for-presentation images and compare them with their counterparts in for-processing images. Specifically, we consider three important aspects in computer-aided diagnosis (CAD) of MC lesions. First, we quantify each MC lesion with a set of 10 image features of clustered MCs and 12 textural features of the lesion area. Second, we assess the detectability of individual MCs in each lesion from the for-presentation images by a commonly used difference-of-Gaussians (DoG) detector. Finally, we study the diagnostic accuracy in discriminating between benign and malignant MC lesions from the for-presentation images by a pretrained support vector machine (SVM) classifier. To accommodate the underlying background suppression and image enhancement in for-presentation images, a normalization procedure is applied. The quantitative image features of MC lesions from for-presentation images are highly consistent with those from for-processing images. The values of Pearson's correlation coefficient between features from the two formats range from 0.824 to 0.961 for the 10 MC image features, and from 0.871 to 0.963 for the 12 textural features. In detection of individual MCs, the FROC curve from for-presentation is similar to that from for-processing.
In particular, at a sensitivity level of 80%, the average number of false positives (FPs) per image region is 9.55 for both for-presentation and for-processing images. Finally, for classifying MC lesions as malignant or benign, the area under the ROC curve is 0.769 in for-presentation, compared to 0.761 in for-processing (P = 0.436). The quantitative results demonstrate that MC lesions in for-presentation images are highly consistent with those in for-processing images in terms of image features, detectability of individual MCs, and classification accuracy between malignant and benign lesions. These results indicate that for-presentation images can be compatible with for-processing images for use in CAD algorithms for MC lesions. © 2017 American Association of Physicists in Medicine.
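The feature-consistency comparison above rests on Pearson's correlation coefficient, which is computed as follows (a standard textbook formula, not code from the study):

```python
def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length feature lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

Applied to a feature measured on matched for-processing/for-presentation pairs, values near 1 (as in the reported 0.824-0.963 range) indicate the two formats carry essentially the same information for that feature.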
NASA Astrophysics Data System (ADS)
De Silva, Tharindu; Cool, Derek W.; Romagnoli, Cesare; Fenster, Aaron; Ward, Aaron D.
2014-03-01
In targeted 3D transrectal ultrasound (TRUS)-guided biopsy, patient and prostate movement during the procedure can cause target misalignments that hinder accurate sampling of pre-planned suspicious tissue locations. Multiple solutions have been proposed for motion compensation via registration of intra-procedural TRUS images to a baseline 3D TRUS image acquired at the beginning of the biopsy procedure. While 2D TRUS images are widely used for intra-procedural guidance, some solutions utilize richer intra-procedural images such as bi- or multi-planar TRUS or 3D TRUS, acquired by specialized probes. In this work, we measured the impact of such richer intra-procedural imaging on motion compensation accuracy, to evaluate the tradeoff between cost and complexity of intra-procedural imaging versus improved motion compensation. We acquired baseline and intra-procedural 3D TRUS images from 29 patients at standard sextant-template biopsy locations. We used the planes extracted from the 3D intra-procedural scans to simulate 2D and 3D information available in different clinically relevant scenarios for registration. The registration accuracy was evaluated by calculating the target registration error (TRE) using manually identified homologous fiducial markers (micro-calcifications). Our results indicate that TRE improves gradually when the number of intra-procedural imaging planes used in registration is increased. Full 3D TRUS information helps the registration algorithm to robustly converge to more accurate solutions. These results can also inform the design of a fail-safe workflow during motion compensation in a system using a tracked 2D TRUS probe, by prescribing rotational acquisitions that can be performed quickly and easily by the physician immediately prior to needle targeting.
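The evaluation metric above, target registration error from homologous fiducials, can be sketched directly. This is a generic definition (mean distance between matched marker positions), not the study's evaluation code:

```python
def target_registration_error(fixed, registered):
    """Mean 3D distance between homologous fiducial pairs after registration.

    `fixed` and `registered` are lists of (x, y, z) marker positions, e.g.
    micro-calcifications identified in the baseline and the registered
    intra-procedural image.
    """
    dists = [
        ((fx - rx) ** 2 + (fy - ry) ** 2 + (fz - rz) ** 2) ** 0.5
        for (fx, fy, fz), (rx, ry, rz) in zip(fixed, registered)
    ]
    return sum(dists) / len(dists)
```

Comparing this value across registrations driven by one, several, or all image planes quantifies the cost/accuracy tradeoff the study measures.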
Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination
NASA Astrophysics Data System (ADS)
Spigulis, Janis; Oshina, Ilze; Berzina, Anna; Bykov, Alexander
2017-09-01
Chromophore distribution maps are useful tools for skin malformation severity assessment and for monitoring of skin recovery after burns, surgeries, and other interventions. The chromophore maps can be obtained by processing several spectral images of skin, e.g., captured by hyperspectral or multispectral cameras over seconds or even minutes. To avoid motion artifacts and simplify the procedure, a single-snapshot technique for mapping melanin, oxyhemoglobin, and deoxyhemoglobin of in-vivo skin by a smartphone under simultaneous three-wavelength (448-532-659 nm) laser illumination is proposed and examined. Three monochromatic spectral images related to the illumination wavelengths were extracted from the smartphone camera RGB image data set with respect to crosstalk between the RGB detection bands. Spectral images were further processed according to Beer's law in a three-chromophore approximation. Photon absorption path lengths in skin at the exploited wavelengths were estimated by means of Monte Carlo simulations. The technique was validated clinically on three kinds of skin lesions: nevi, hemangiomas, and seborrheic keratosis. Design of the developed add-on laser illumination system, image-processing details, and the results of clinical measurements are presented and discussed.
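The three-chromophore Beer's-law step amounts to solving, per pixel, a 3x3 linear system relating absorbances at the three wavelengths to the three chromophore concentrations. A minimal sketch using Cramer's rule follows; the extinction-coefficient matrix and absorbances below are illustrative numbers, not calibrated skin data:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def chromophore_concentrations(eps, absorbance):
    """Solve the Beer's-law system eps @ c = A by Cramer's rule.

    eps[i][j]: extinction coefficient of chromophore j at wavelength i
    (already multiplied by the path length); absorbance[i]: measured
    absorbance at wavelength i. Returns [c_melanin, c_oxy, c_deoxy].
    """
    d = det3(eps)
    conc = []
    for j in range(3):
        # replace column j with the absorbance vector
        mj = [[absorbance[i] if k == j else eps[i][k] for k in range(3)] for i in range(3)]
        conc.append(det3(mj) / d)
    return conc
```

Repeating this solve for every pixel of the three extracted spectral images yields the melanin, oxyhemoglobin, and deoxyhemoglobin maps.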
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable with manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Memory-effect based deconvolution microscopy for super-resolution imaging through scattering media
NASA Astrophysics Data System (ADS)
Edrei, Eitan; Scarcelli, Giuliano
2016-09-01
High-resolution imaging through turbid media is a fundamental challenge of optical sciences that has attracted a lot of attention in recent years for its wide range of potential applications. Here, we demonstrate that the resolution of imaging systems looking behind a highly scattering medium can be improved below the diffraction limit. To achieve this, we demonstrate a novel microscopy technique enabled by the optical memory effect that uses deconvolution image processing and thus does not require iterative focusing, scanning, or phase retrieval procedures. We show that this newly established ability of direct imaging through turbid media provides fundamental and practical advantages such as three-dimensional refocusing and unambiguous object reconstruction.
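The deconvolution step can be illustrated with a standard iterative scheme. The sketch below is a 1D Richardson-Lucy deconvolution with circular boundaries and a hypothetical symmetric PSF; the paper's method works on 2D speckle images via the memory effect, so this only illustrates the generic deconvolution idea, not the authors' implementation:

```python
def conv_circ(x, psf):
    """Circular convolution of signal x with a psf given as {offset: weight}."""
    n = len(x)
    return [sum(w * x[(j - k) % n] for k, w in psf.items()) for j in range(n)]

def richardson_lucy(observed, psf, iterations=50):
    """1D Richardson-Lucy deconvolution (circular boundaries, normalized psf)."""
    n = len(observed)
    est = [sum(observed) / n] * n               # flat initial estimate
    psf_flip = {-k: w for k, w in psf.items()}  # mirrored psf for the correction step
    for _ in range(iterations):
        blur = conv_circ(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blur)]
        corr = conv_circ(ratio, psf_flip)
        est = [e * c for e, c in zip(est, corr)]
    return est
```

On noiseless data the iteration progressively sharpens a blurred point source back toward a spike while conserving total intensity.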
Multiresolution 3-D reconstruction from side-scan sonar images.
Coiras, Enrique; Petillot, Yvan; Lane, David M
2007-02-01
In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
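The Lambertian diffuse model that the inversion starts from can be sketched as a forward simulator for a single across-track profile. The geometry below (flat-earth, 1D profile, normals from finite-difference slopes) is a deliberate simplification for illustration, not the paper's full model:

```python
def lambertian_image(elevation, reflectivity, sonar_height, dx=1.0):
    """Forward Lambertian model for a 1D across-track seabed profile.

    Returned intensity at sample i is reflectivity[i] * cos(theta), where
    theta is the angle between the beam direction (surface point back to
    the sonar) and the local surface normal estimated from the elevation
    gradient.
    """
    out = []
    n = len(elevation)
    for i in range(n):
        # finite-difference slope of the seabed
        j, k = min(i + 1, n - 1), max(i - 1, 0)
        slope = (elevation[j] - elevation[k]) / ((j - k) * dx)
        # unit surface normal (pointing up)
        nx, nz = -slope, 1.0
        norm = (nx * nx + nz * nz) ** 0.5
        nx, nz = nx / norm, nz / norm
        # unit vector from the surface point back to the sonar
        x = (i + 1) * dx                        # ground range from nadir
        vx, vz = -x, sonar_height - elevation[i]
        vnorm = (vx * vx + vz * vz) ** 0.5
        vx, vz = vx / vnorm, vz / vnorm
        cos_theta = max(0.0, nx * vx + nz * vz)
        out.append(reflectivity[i] * cos_theta)
    return out
```

The multiresolution optimization then adjusts elevation (and reflectivity and beam pattern) until this predicted intensity matches the observed side-scan return.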
Monitoring radiation use in cardiac fluoroscopy imaging procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Nathaniel T.; Steiner, Stefan H.; Smith, Ian R.
2011-01-15
Purpose: Timely identification of systematic changes in radiation delivery of an imaging system can lead to a reduction in risk for the patients involved. However, existing quality assurance programs involving the routine testing of equipment performance using phantoms are limited in their ability to effectively carry out this task. To address this issue, the authors propose the implementation of an ongoing monitoring process that utilizes procedural data to identify unexpected large or small radiation exposures for individual patients, as well as to detect persistent changes in the radiation output of imaging platforms. Methods: Data used in this study were obtained from records routinely collected during procedures performed in the cardiac catheterization imaging facility at St. Andrew's War Memorial Hospital, Brisbane, Australia, over the period January 2008-March 2010. A two-stage monitoring process employing individual and exponentially weighted moving average (EWMA) control charts was developed and used to identify unexpectedly high or low radiation exposure levels for individual patients, as well as detect persistent changes in the radiation output delivered by the imaging systems. To increase sensitivity of the charts, we account for variation in dose area product (DAP) values due to other measured factors (patient weight, fluoroscopy time, and digital acquisition frame count) using multiple linear regression. Control charts are then constructed using the residual values from this linear regression. The proposed monitoring process was evaluated using simulation to model the performance of the process under known conditions. Results: Retrospective application of this technique to actual clinical data identified a number of cases in which the DAP result could be considered unexpected. Most of these, upon review, were attributed to data entry errors.
The charts monitoring the overall system radiation output trends demonstrated changes in equipment performance associated with relocation of the equipment to a new department. When tested under simulated conditions, the EWMA chart was capable of detecting a sustained 15% increase in average radiation output within 60 cases (<1 month of operation), while a 33% increase would be signaled within 20 cases. Conclusions: This technique offers a valuable enhancement to existing quality assurance programs in radiology that rely upon the testing of equipment radiation output at discrete time frames to ensure performance security.
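The EWMA chart applied to regression residuals can be sketched as follows. This is the standard textbook EWMA with time-varying control limits; the smoothing constant, sigma, and limit width below are illustrative, not the values tuned in the study:

```python
def ewma_chart(residuals, lam=0.2, sigma=1.0, L=3.0):
    """EWMA control chart over regression residuals.

    Returns (ewma_values, signals); a point signals when the EWMA statistic
    z_t = lam * r_t + (1 - lam) * z_{t-1} leaves the control limits
    +/- L * sigma * sqrt(lam / (2 - lam) * (1 - (1 - lam)**(2 t))).
    """
    z = 0.0
    values, signals = [], []
    for t, r in enumerate(residuals, start=1):
        z = lam * r + (1 - lam) * z
        var = (lam / (2 - lam)) * (1 - (1 - lam) ** (2 * t))
        limit = L * sigma * var ** 0.5
        values.append(z)
        signals.append(abs(z) > limit)
    return values, signals
```

Because the EWMA accumulates small shifts over successive cases, it flags a sustained change in mean output much sooner than a chart of individual values would.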
Liu, Keyin; Kong, Xiuqi; Ma, Yanyan; Lin, Weiying
2018-05-01
Carbon monoxide (CO) is a key gaseous signaling molecule in living cells and organisms. This protocol illustrates the synthesis of a highly sensitive Nile Red (NR)-Pd-based fluorescent probe, NR-PdA, and its applications for detecting endogenous CO in tissue culture cells, ex vivo organs, and zebrafish embryos. In the NR-PdA synthesis process, 3-diethylamine phenol reacts with sodium nitrite under acidic conditions to afford 5-(diethylamino)-2-nitrosophenol hydrochloride (compound 1), which is further treated with 1-naphthalenol at a high temperature to provide the NR dye via a cyclization reaction. Finally, NR is reacted with palladium acetate to obtain the desired Pd-based fluorescent probe NR-PdA. NR-PdA possesses excellent two-photon excitation and near-IR emission properties, high stability, low background fluorescence, and a low detection limit. In addition to the chemical synthesis procedures, we provide step-by-step procedures for imaging endogenous CO in RAW 264.7 cells, mouse organs ex vivo, and live zebrafish embryos. The synthesis process for the probe requires ∼4 d, and the biological imaging experiments take ∼14 d.
Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic
NASA Astrophysics Data System (ADS)
Hata, Yutaka
2010-04-01
First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in a region-growing step, the whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and right cerebral hemispheres, the cerebellum, and the brain stem. Our method then identified the whole brain, the left cerebral hemisphere, the right cerebral hemisphere, the cerebellum, and the brain stem. Second, we describe a transskull sonography system that can visualize the shape of the skull and brain surface from any point, to examine skull fractures and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surface. A phantom model, an animal model with soft tissue, an animal model with brain tissue, and human subjects' foreheads are used with our system. The shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are all successfully determined.
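The histogram-peak step that seeds the thresholding can be sketched as follows. The bin count and intensity range are illustrative, and real MR histograms would need smoothing before peak picking; this only shows the basic local-maximum search:

```python
def histogram_peaks(values, bins=16, lo=0, hi=256):
    """Build an intensity histogram and locate its interior local maxima.

    Returns (hist, peak_bin_indices). Peak bins can then serve as anchors
    for choosing segmentation thresholds between tissue classes.
    """
    width = (hi - lo) / bins
    hist = [0] * bins
    for v in values:
        idx = min(bins - 1, int((v - lo) / width))
        hist[idx] += 1
    peaks = [
        i for i in range(1, bins - 1)
        if hist[i] > hist[i - 1] and hist[i] >= hist[i + 1]
    ]
    return hist, peaks
```

A threshold placed in the valley between two detected peaks then separates, e.g., brain tissue from background before region growing refines the result.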
Cryo-EM Structure Determination Using Segmented Helical Image Reconstruction.
Fromm, S A; Sachse, C
2016-01-01
Treating helices as single-particle-like segments followed by helical image reconstruction has become the method of choice for high-resolution structure determination of well-ordered helical viruses as well as flexible filaments. In this review, we will illustrate how the combination of the latest hardware developments with optimized image processing routines has led to a series of near-atomic resolution structures of helical assemblies. Originally, the treatment of helices as a sequence of segments followed by Fourier-Bessel reconstruction revealed the potential to determine near-atomic resolution structures from helical specimens. In the meantime, real-space image processing of helices in a stack of single particles was developed, which enabled the structure determination of specimens that resisted classical Fourier helical reconstruction and also facilitated high-resolution structure determination. Despite the progress in real-space analysis, the combination of Fourier and real-space processing is still commonly used to better estimate the symmetry parameters, as the imposition of the correct helical symmetry is essential for high-resolution structure determination. Recent hardware advancement through the introduction of direct electron detectors has significantly enhanced image quality and, together with improved image processing procedures, has made segmented helical reconstruction a very productive cryo-EM structure determination method. © 2016 Elsevier Inc. All rights reserved.
Vaccaro, G; Pelaez, J I; Gil, J A
2016-07-01
Objective masticatory performance assessment using two-coloured specimens relies on image processing techniques; however, just a few approaches have been tested and no comparative studies are reported. The aim of this study was to present a selection procedure for the optimal image analysis method for masticatory performance assessment with a given two-coloured chewing gum. Dentate participants (n = 250; 25 ± 6·3 years) chewed red-white chewing gums for 3, 6, 9, 12, 15, 18, 21 and 25 cycles (2000 samples). Digitalised images of retrieved specimens were analysed using 122 image processing methods (IPMs) based on feature extraction algorithms (pixel values and histogram analysis). All IPMs were tested against the criteria of normality of measurements (Kolmogorov-Smirnov), ability to detect differences among mixing states (ANOVA corrected with post hoc Bonferroni) and moderate-to-high correlation with the number of cycles (Spearman's rho). The optimal IPM was chosen using multiple criteria decision analysis (MCDA). Measurements provided by all IPMs proved to be normally distributed (P < 0·05), 116 proved sensitive to mixing states (P < 0·05), and 35 showed moderate-to-high correlation with the number of cycles (|ρ| > 0·5; P < 0·05). The variance of the histogram of the Hue showed the highest correlation with the number of cycles (ρ = 0·792; P < 0·0001) and the highest MCDA score (optimal). The proposed procedure proved to be reliable and able to select the optimal approach among multiple IPMs. This experiment may be reproduced to identify the optimal approach for each case of locally available test foods. © 2016 John Wiley & Sons Ltd.
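The winning feature, the variance of the hue histogram, can be sketched with a standard RGB-to-hue conversion. The bin count is illustrative, and the study's exact normalization may differ:

```python
def hue(r, g, b):
    """Hue in degrees [0, 360) from 8-bit RGB (standard HSV conversion)."""
    r, g, b = r / 255.0, g / 255.0, b / 255.0
    mx, mn = max(r, g, b), min(r, g, b)
    if mx == mn:
        return 0.0                      # achromatic pixel: hue undefined, use 0
    d = mx - mn
    if mx == r:
        h = (g - b) / d % 6
    elif mx == g:
        h = (b - r) / d + 2
    else:
        h = (r - g) / d + 4
    return 60.0 * h

def hue_histogram_variance(pixels, bins=36):
    """Variance of the hue histogram of a list of (r, g, b) pixels.

    An unmixed specimen concentrates hues in few bins (high variance of
    bin counts); chewing spreads hue across bins, lowering the value.
    """
    hist = [0] * bins
    for r, g, b in pixels:
        hist[min(bins - 1, int(hue(r, g, b) / 360.0 * bins))] += 1
    mean = sum(hist) / bins
    return sum((c - mean) ** 2 for c in hist) / bins
```

Tracking this single number against the chewing-cycle count reproduces the monotone relationship the study exploits.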
Ex-vivo multiphoton analysis of rabbit corneal wound healing following photorefractive keratectomy
NASA Astrophysics Data System (ADS)
Wang, Tsung-Jen; Lo, Wen; Dong, Chen-Yuan; Hu, Fung-Rong
2008-02-01
The aim of this study is to assess the application of multiphoton autofluorescence and second harmonic generation (SHG) microscopy for investigating corneal wound healing after high myopic (-10.0D) photorefractive keratectomy (PRK) procedures on rabbit eyes. The effect of PRK on the morphology and distribution of keratocytes was investigated using multiphoton excited autofluorescence imaging, while the effect of PRK on the arrangement of collagen fibers was monitored by second harmonic generation imaging. Without histological processing, multiphoton microscopy is able to characterize corneal damage and wound healing from PRK. Our results show that this technique has potential application in the clinical evaluation of corneal damage due to refractive surgery, and may be used to study the unwanted side effects of these procedures.
NASA Astrophysics Data System (ADS)
Marrugo, Andrés G.; Millán, María S.; Cristóbal, Gabriel; Gabarda, Salvador; Sorel, Michal; Sroubek, Filip
2012-06-01
Medical digital imaging has become a key element of modern health care procedures. It provides visual documentation and a permanent record for the patients, and, most importantly, the ability to extract information about many diseases. Modern ophthalmology thrives and develops on the advances in digital imaging and computing power. In this work we present an overview of recent image processing techniques proposed by the authors in the area of digital eye fundus photography. Our applications range from retinal image quality assessment to image restoration via blind deconvolution and visualization of structural changes in time between patient visits. All are proposed within a framework for improving and assisting medical practice and the forthcoming scenario of the information chain in telemedicine.
Proof-of-concept of a laser mounted endoscope for touch-less navigated procedures
Kral, Florian; Gueler, Oezguer; Perwoeg, Martina; Bardosi, Zoltan; Puschban, Elisabeth J; Riechelmann, Herbert; Freysinger, Wolfgang
2013-01-01
Background and Objectives During navigated procedures a tracked pointing device is used to define target structures in the patient and to visualize their position in a registered radiologic data set. When working with endoscopes in minimally invasive procedures, the target region is often difficult to reach, and changing instruments is disturbing at a challenging, crucial moment of the procedure. We developed a device for touch-less navigation during navigated endoscopic procedures. Materials and Methods A laser beam is delivered to the tip of a tracked endoscope, angled to its axis. Thereby the position of the laser spot in the video-endoscopic images changes according to the distance between the tip of the endoscope and the target structure. A mathematical function is defined by a calibration process and is used to calculate the distance between the tip of the endoscope and the target. The tracked tip of the endoscope and the calculated distance are used to visualize the laser spot in the registered radiologic data set. Results In comparison to the tracked instrument, the touch-less target definition with the laser spot yielded an additional error of 0.12 mm. The overall application error in this experimental setup with a plastic head was 0.61 ± 0.97 mm (95% CI −1.3 to +2.5 mm). Conclusion Integrating a laser in an endoscope and calculating the distance to a target structure by image processing of the video-endoscopic images is accurate. This technology eliminates the need for tracked probes intraoperatively and therefore allows navigation to be integrated seamlessly into clinical routine. However, it is an additional chain link in the sequence of computer-assisted surgery and thus influences the application error. Lasers Surg. Med. 45:377–382, 2013. © 2013 Wiley Periodicals, Inc. PMID:23737122
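The calibration step above relates the laser-spot position in the endoscopic image to a tip-to-target distance. A minimal sketch of such a mapping, assuming hypothetical calibration pairs and a polynomial model (the paper does not specify the functional form):

```python
import numpy as np

# Hypothetical calibration pairs: laser-spot pixel offset in the endoscopic
# image (px) vs. known tip-to-target distance (mm), as could be acquired
# during a calibration run with a tracked phantom.
spot_px = np.array([12.0, 25.0, 41.0, 60.0, 83.0])
dist_mm = np.array([5.0, 10.0, 15.0, 20.0, 25.0])

# Fit a low-order polynomial as the "mathematical function" relating spot
# position to distance; a quadratic is an assumption, not the paper's model.
coeffs = np.polyfit(spot_px, dist_mm, deg=2)

def spot_to_distance(px):
    """Estimate tip-to-target distance (mm) from the laser-spot offset (px)."""
    return float(np.polyval(coeffs, px))
```

At runtime the estimated distance, together with the tracked endoscope tip pose, would place the laser spot in the registered radiologic data set.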
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merritt, Z; Dave, J; Eschelman, D
Purpose: To investigate the effects of image receptor technology and dose reduction software on radiation dose estimates for the most frequently performed fluoroscopically-guided interventional (FGI) procedures at a tertiary health care center. Methods: IRB approval was obtained for retrospective analysis of FGI procedures performed in the interventional radiology suites between January 2011 and December 2015. This included procedures performed using image-intensifier (II) based systems which were subsequently replaced, flat-panel-detector (FPD) based systems which were later upgraded with ClarityIQ dose reduction software (Philips Healthcare), and a relatively new FPD system already equipped with ClarityIQ. Post-procedure, technologists entered system-reported cumulative air kerma (CAK) and kerma-area product (KAP; only KAP for II based systems) in the RIS; these values were analyzed. Data pre-processing included correcting typographical errors and cross-verifying CAK and KAP. The most frequent high- and low-dose FGI procedures were identified and the corresponding CAK and KAP values were compared. Results: Out of 27,251 procedures within this time period, the most frequent high- and low-dose procedures were chemo/immuno-embolization (n=1967) and abscess drainage (n=1821). Mean KAP for embolization and abscess drainage procedures were 260,657, 310,304 and 94,908 mGycm{sup 2}, and 14,497, 15,040 and 6307 mGycm{sup 2} using II-, FPD- and FPD with ClarityIQ- based systems, respectively. Statistically significant differences were observed in KAP values for embolization procedures with respect to the different systems, but for abscess drainage procedures significant differences were only noted between FPD systems and FPD systems with ClarityIQ (p<0.05). Mean CAK reduced significantly from 823 to 308 mGy and from 43 to 21 mGy for embolization and abscess drainage procedures, respectively, in transitioning to FPD systems with ClarityIQ (p<0.05).
Conclusion: While transitioning from II- to FPD- based systems was not associated with dose reduction for the most frequently performed FGI procedures, substantial dose reduction was noted with the newer systems and dose reduction software.
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
Optical imaging probes in oncology
Martelli, Cristina; Dico, Alessia Lo; Diceglie, Cecilia; Lucignani, Giovanni; Ottobrini, Luisa
2016-01-01
Cancer is a complex disease, characterized by alteration of different physiological molecular processes and cellular features. Keeping this in mind, the possibility of early identification and detection of specific tumor biomarkers by non-invasive approaches could improve early diagnosis and patient management. Different molecular imaging procedures provide powerful tools for detection and non-invasive characterization of oncological lesions. Clinical studies are mainly based on the use of computed tomography, nuclear-based imaging techniques and magnetic resonance imaging. Preclinical imaging in small animal models entails the use of dedicated instruments, and beyond the already cited imaging techniques, it also includes optical imaging studies. Optical imaging strategies are based on the use of luminescent or fluorescent reporter genes or injectable fluorescent or luminescent probes that provide the possibility to study tumor features by means of fluorescence and luminescence imaging. Currently, most of these probes are used only in animal models, but the possibility of applying some of them in the clinic as well is under evaluation. The importance of tumor imaging, the ease of use of optical imaging instruments, the commercial availability of a wide range of probes, as well as the continuous description of newly developed probes, demonstrate the significance of these applications. The aim of this review is to provide a complete description of the optical imaging procedures available for the non-invasive assessment of tumor features in oncological murine models. In particular, the characteristics of both commercially available and newly developed probes will be outlined and discussed. PMID:27145373
NASA Astrophysics Data System (ADS)
Strocchi, S.; Ghielmi, M.; Basilico, F.; Macchi, A.; Novario, R.; Ferretti, R.; Binaghi, E.
2016-03-01
This work quantitatively evaluates the effects induced by the susceptibility characteristics of materials commonly used in dental practice on the quality of head MR images in a clinical 1.5T device. The proposed evaluation procedure measures the image artifacts induced by susceptibility in MR images by providing an index consistent with the global degradation as perceived by experts. Susceptibility artifacts were evaluated in a near-clinical setup, using a phantom with susceptibility and geometric characteristics similar to those of a human head. We tested different dental materials, namely PAL Keramit, Ti6Al4V-ELI, Keramit NP, ILOR F and Zirconia, and used different clinical MR acquisition sequences, such as "classical" SE and fast, gradient, and diffusion sequences. The evaluation is designed as a matching process between reference and artifact-affected images recording the same scene. The extent of the degradation induced by susceptibility is then measured in terms of similarity with the corresponding reference image. The matching process involves a multimodal registration task and the use of an adequate, psychophysically validated similarity index based on the correlation coefficient. The proposed analyses are integrated within a computer-supported procedure that interactively guides the users through the different phases of the evaluation method. 2-dimensional and 3-dimensional indexes are used for each material and each acquisition sequence. From these, we derived a ranking of the materials by averaging the results obtained. Zirconia and ILOR F appear to be the best choice from the susceptibility-artifact point of view, followed, in order, by PAL Keramit, Ti6Al4V-ELI and Keramit NP.
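The degradation measurement above boils down to comparing an artifact-affected image with its reference. A minimal sketch of a correlation-coefficient similarity index, assuming the two images are already registered (the multimodal registration step is omitted):

```python
import numpy as np

def similarity_index(reference, degraded):
    """Correlation-coefficient similarity between a reference image and the
    same scene imaged with an artifact-inducing material in place.
    Returns 1.0 for identical images; lower values mean more degradation."""
    a = reference.astype(float).ravel()
    b = degraded.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Averaging such indexes over sequences, as the abstract describes, would yield the per-material ranking.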
Li, Yang; Foss, Catherine A; Pomper, Martin G; Yu, S Michael
2014-01-31
Collagen is a major structural component of the extracellular matrix that supports tissue formation and maintenance. Although collagen remodeling is an integral part of normal tissue renewal, excessive remodeling activity is involved in tumors, arthritis, and many other pathological conditions. During collagen remodeling, the triple helical structure of collagen molecules is disrupted by proteases in the extracellular environment. In addition, collagens present in many histological tissue samples are partially denatured by the fixation and preservation processes. Therefore, these denatured collagen strands can serve as effective targets for biological imaging. We previously developed a caged collagen mimetic peptide (CMP) that can be photo-triggered to hybridize with denatured collagen strands by forming the triple helical structure that is unique to collagens. The overall goals of this procedure are i) to image denatured collagen strands resulting from normal remodeling activities in vivo, and ii) to visualize collagens in ex vivo tissue sections using the photo-triggered caged CMPs. To achieve effective hybridization and successful in vivo and ex vivo imaging, fluorescently labeled caged CMPs are either photo-activated immediately before intravenous injection, or are directly activated on tissue sections. Normal skeletal collagen remodeling in nude mice and collagens in prefixed mouse cornea tissue sections are imaged in this procedure. The imaging method based on the CMP-collagen hybridization technology presented here could lead to deeper understanding of the tissue remodeling process, as well as allow development of new diagnostics for diseases associated with high collagen remodeling activity.
Automatic Nuclei Segmentation in H&E Stained Breast Cancer Histopathology Images
Veta, Mitko; van Diest, Paul J.; Kornegoor, Robert; Huisman, André; Viergever, Max A.; Pluim, Josien P. W.
2013-01-01
The introduction of fast digital slide scanners that provide whole slide images has led to a revival of interest in image analysis applications in pathology. Segmentation of cells and nuclei is an important first step towards automatic analysis of digitized microscopy images. We therefore developed an automated nuclei segmentation method that works with hematoxylin and eosin (H&E) stained breast cancer histopathology images, which represent regions of whole digital slides. The procedure can be divided into four main steps: 1) pre-processing with color unmixing and morphological operators, 2) marker-controlled watershed segmentation at multiple scales and with different markers, 3) post-processing for rejection of false regions and 4) merging of the results from multiple scales. The procedure was developed on a set of 21 breast cancer cases (subset A) and tested on a separate validation set of 18 cases (subset B). The evaluation was done in terms of both detection accuracy (sensitivity and positive predictive value) and segmentation accuracy (Dice coefficient). The mean estimated sensitivity for subset A was 0.875 (±0.092) and for subset B 0.853 (±0.077). The mean estimated positive predictive value was 0.904 (±0.075) and 0.886 (±0.069) for subsets A and B, respectively. For both subsets, the distribution of the Dice coefficients had a high peak around 0.9, with the vast majority of segmentations having values larger than 0.8. PMID:23922958
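Steps 1-3 of the segmentation procedure can be illustrated with a toy marker-controlled watershed on a synthetic grayscale image (single scale, no color unmixing; the function name and threshold choices are illustrative, not the authors' implementation):

```python
import numpy as np
from scipy import ndimage as ndi

def segment_nuclei(gray):
    """Toy marker-controlled watershed in the spirit of steps 1-3 of the
    pipeline: threshold dark nuclei, derive one marker per nucleus from the
    distance transform, and run a watershed on the inverted distance map."""
    # 1) pre-processing: binary mask of nuclei (here simply the dark objects)
    mask = gray < gray.mean()
    mask = ndi.binary_opening(mask, iterations=1)
    # 2) markers: peaks of the distance transform, labeled per nucleus
    dist = ndi.distance_transform_edt(mask)
    markers, _ = ndi.label(dist > 0.5 * dist.max())
    markers[~mask] = markers.max() + 1  # extra marker for the background
    # watershed on the inverted distance map (uint8, as watershed_ift requires)
    elevation = (255 * (1 - dist / (dist.max() + 1e-9))).astype(np.uint8)
    labels = ndi.watershed_ift(elevation, markers.astype(np.int16))
    # 3) post-processing: zero out the background region
    labels[labels == markers.max()] = 0
    return labels
```

The published method additionally unmixes the H&E stains, uses several scales and marker types, and merges the per-scale results; none of that is reproduced here.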
NASA Astrophysics Data System (ADS)
Lifshitz, Ronen; Kimchy, Yoav; Gelbard, Nir; Leibushor, Avi; Golan, Oleg; Elgali, Avner; Hassoon, Salah; Kaplan, Max; Smirnov, Michael; Shpigelman, Boaz; Bar-Ilan, Omer; Rubin, Daniel; Ovadia, Alex
2017-03-01
An ingestible capsule for colorectal cancer screening, based on ionizing-radiation imaging, has been developed and is in advanced stages of system stabilization and clinical evaluation. The imaging principle allows future patients using this technology to avoid bowel cleansing and to continue their normal life routine during the procedure. The Check-Cap capsule, or C-Scan® Cap, imaging principle is essentially based on reconstructing scattered radiation, while both the radiation source and the radiation detectors reside within the capsule. The radiation source is a custom-made radioisotope encased in a small canister, collimated into rotating beams. While traveling along the human colon, irradiation occurs from within the capsule towards the colon wall. Scattering of radiation occurs both inside and outside the colon segment; some of this radiation is scattered back and detected by sensors onboard the capsule. During the procedure, the patient receives small amounts of contrast agent as an addition to his/her normal diet. The presence of contrast agent inside the colon dictates that the dominant physical processes become Compton scattering and X-ray fluorescence (XRF), which differ mainly in the energy of the scattered photons. The detector readout electronics incorporates low-noise single photon counting channels, allowing separation between the products of these different physical processes. Separating between radiation energies essentially allows estimation of the distance from the capsule to the colon wall, hence structural imaging of the intraluminal surface. This allows imaging of structural protrusions into the colon volume, especially focusing on adenomas that may develop into colorectal cancer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fang, Y; Huang, H; Su, T
Purpose: Texture-based quantification of image heterogeneity has been a popular topic for imaging studies in recent years. As previous studies mainly focus on oncological applications, we report our recent efforts of applying such techniques to cardiac perfusion imaging. A fully automated procedure has been developed to perform texture analysis for measuring the image heterogeneity. Clinical data were used to evaluate the preliminary performance of such methods. Methods: Myocardial perfusion images of Thallium-201 scans were collected from 293 patients with suspected coronary artery disease. Each subject underwent a Tl-201 scan and a percutaneous coronary intervention (PCI) within three months. The PCI result was used as the gold standard for coronary ischemia of more than 70% stenosis. Each Tl-201 scan was spatially normalized to an image template for fully automatic segmentation of the LV. The segmented voxel intensities were then carried into the texture analysis with our open-source software Chang Gung Image Texture Analysis toolbox (CGITA). To evaluate the clinical performance of the image heterogeneity for detecting coronary stenosis, receiver operating characteristic (ROC) analysis was used to compute the overall accuracy, sensitivity and specificity as well as the area under the curve (AUC). Those indices were compared to those obtained from the commercially available semi-automatic software QPS. Results: With the fully automatic procedure to quantify heterogeneity from Tl-201 scans, we were able to achieve good discrimination with good accuracy (74%), sensitivity (73%), specificity (77%) and an AUC of 0.82. Such performance is similar to that obtained from the semi-automatic QPS software, which gives a sensitivity of 71% and a specificity of 77%.
Conclusion: Based on fully automatic procedures of data processing, our preliminary data indicate that the image heterogeneity of myocardial perfusion imaging can provide useful information for automatic determination of myocardial ischemia.
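The ROC-based evaluation described above can be sketched in a few lines; the rank-sum (Mann-Whitney) form of the AUC is used, and the scores, labels and threshold below are hypothetical:

```python
import numpy as np

def roc_summary(scores, labels, threshold):
    """Sensitivity/specificity at a fixed threshold plus the AUC (rank-sum
    form) of a heterogeneity index tested against a binary gold standard."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, bool)
    pred = scores >= threshold
    sens = np.mean(pred[labels])        # true-positive rate
    spec = np.mean(~pred[~labels])      # true-negative rate
    pos, neg = scores[labels], scores[~labels]
    # AUC = P(score_pos > score_neg), ties counted as one half
    diff = pos[:, None] - neg[None, :]
    auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    return sens, spec, auc
```

Sweeping the threshold over all observed scores would trace out the full ROC curve from which such summaries are read.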
Method for 3D noncontact measurements of cut trees package area
NASA Astrophysics Data System (ADS)
Knyaz, Vladimir A.; Vizilter, Yuri V.
2001-02-01
Progress in imaging sensors and computers creates the background for numerous 3D imaging applications across a wide variety of manufacturing activities. The timber industry has many demands for automated precise measurements. One of them is accurate volume determination for cut trees carried on a truck. The key point for volume estimation is determining the front area of the cut-tree package. To replace the slow and inaccurate manual measurements currently in practice, an experimental system for automated non-contact wood measurement was developed. The system includes two non-metric CCD video cameras, a PC as the central processing unit, frame grabbers, and original software for image processing and 3D measurements. The proposed measurement method is based on capturing a stereo pair of the front of the tree package and orthotransforming the image into the front plane. This technique allows the transformed image to be processed for circle-shape recognition and area calculation. The metric characteristics of the system are ensured by a special camera calibration procedure. The paper presents the developed method of 3D measurements, describes the hardware used for image acquisition and the software implementing the developed algorithms, and gives the productivity and precision characteristics of the system.
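The final area computation reduces to summing the circular trunk cross-sections recognized in the orthotransformed image. A minimal sketch, assuming the circle radii in pixels and the image scale are already known from the recognition and calibration steps:

```python
import math

def package_front_area(radii_px, mm_per_px):
    """Front area (mm^2) of a cut-tree package: the sum of the circular
    trunk cross-sections recognized in the orthotransformed front image.
    radii_px: recognized circle radii in pixels; mm_per_px: image scale."""
    return sum(math.pi * (r * mm_per_px) ** 2 for r in radii_px)
```

In the described system this front area, combined with the log length, would give the package volume.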
Code of Federal Regulations, 2014 CFR
2014-07-01
... sounds, images, or both, that are being transmitted, is ‘fixed’ for purposes of this title if a fixation... processing of Statements of Intent. (f) Effective date of restoration of copyright protection. (1) Potential...
Software for Verifying Image-Correlation Tie Points
NASA Technical Reports Server (NTRS)
Klimeck, Gerhard; Yagi, Gary
2008-01-01
A computer program enables assessment of the quality of tie points in the image-correlation processes of the software described in the immediately preceding article. Tie points are computed in mappings between corresponding pixels in the left and right images of a stereoscopic pair. The mappings are sometimes not perfect because image data can be noisy and parallax can cause some points to appear in one image but not the other. The present computer program relies on the availability of a left-to-right correlation map in addition to the usual right-to-left correlation map. The additional map must be generated, which doubles the processing time. Such increased time can now be afforded in the data-processing pipeline, since the time for map generation has been reduced from about 60 to 3 minutes by the parallelization discussed in the previous article. The parallel cluster processing, therefore, enabled this better science result. The first mapping is typically from a point (denoted by coordinates x,y) in the left image to a point (x',y') in the right image. The second mapping is from (x',y') in the right image to some point (x'',y'') in the left image. If (x,y) and (x'',y'') are identical, then the mapping is considered perfect. The perfect-match criterion can be relaxed by introducing an error window that admits round-off error and a small amount of noise. The mapping procedure can be repeated until all points in each image not connected to points in the other image are eliminated, so that what remains are verified correlation data.
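The round-trip verification described above can be sketched as follows; the map layout (an H×W×2 array of target coordinates) and the function name are assumptions for illustration:

```python
import numpy as np

def verify_tie_points(lr_map, rl_map, tol=1.0):
    """Round-trip check of stereo correlation maps.

    lr_map[y, x]   -> (x', y') in the right image (left-to-right map)
    rl_map[y', x'] -> (x'', y'') back in the left image (right-to-left map)
    A tie point (x, y) is verified when (x'', y'') lies within `tol` pixels
    of (x, y); `tol` is the error window admitting round-off and noise."""
    h, w = lr_map.shape[:2]
    valid = np.zeros((h, w), bool)
    for y in range(h):
        for x in range(w):
            xp, yp = lr_map[y, x]
            if not (0 <= int(yp) < rl_map.shape[0] and 0 <= int(xp) < rl_map.shape[1]):
                continue  # maps outside the right image: unverifiable
            xpp, ypp = rl_map[int(yp), int(xp)]
            valid[y, x] = (xpp - x) ** 2 + (ypp - y) ** 2 <= tol ** 2
    return valid
```

Points failing the check would be discarded, and the procedure could be repeated until only mutually connected, verified correlation data remain.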
Lam, France; Cladière, Damien; Guillaume, Cyndélia; Wassmann, Katja; Bolte, Susanne
2017-02-15
In the presented work we aimed at improving confocal imaging to obtain the highest possible resolution in thick biological samples, such as the mouse oocyte. We therefore developed an image processing workflow that improves the lateral and axial resolution of a standard confocal microscope. Our workflow comprises refractive index matching, the optimization of microscope hardware parameters and image restoration by deconvolution. We compare two different deconvolution algorithms, evaluate the necessity of denoising and establish the optimal image restoration procedure. We validate our workflow by imaging sub-resolution fluorescent beads and measuring the maximum lateral and axial resolution of the confocal system. Subsequently, we apply the parameters to the imaging and data restoration of fluorescently labelled meiotic spindles of mouse oocytes. We measure a resolution increase of approximately 2-fold in the lateral and 3-fold in the axial direction throughout a depth of 60 μm. This demonstrates that with our optimized workflow we reach a resolution that is comparable to 3D-SIM imaging, but with better depth penetration, for confocal images of beads and the biological sample. Copyright © 2016 Elsevier Inc. All rights reserved.
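As an illustration of the deconvolution step, here is a minimal Richardson-Lucy iteration; the abstract compares two unnamed algorithms, so this is an illustrative stand-in under the assumption of a known PSF, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, iterations=20):
    """Minimal 2-D Richardson-Lucy deconvolution: iteratively refines an
    estimate so that, blurred by the PSF, it reproduces the observed image.
    Assumes a nonnegative image and a normalized PSF."""
    est = np.full_like(image, image.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]  # adjoint of the blur operator
    for _ in range(iterations):
        blurred = convolve(est, psf, mode="reflect")
        ratio = image / (blurred + 1e-12)
        est *= convolve(ratio, psf_flip, mode="reflect")
    return est
```

In practice the PSF would be measured from the sub-resolution bead images mentioned above, and denoising would precede the iterations where necessary.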
Focus measure method based on the modulus of the gradient of the color planes for digital microscopy
NASA Astrophysics Data System (ADS)
Hurtado-Pérez, Román; Toxqui-Quitl, Carina; Padilla-Vivanco, Alfonso; Aguilar-Valdez, J. Félix; Ortega-Mendoza, Gabriel
2018-02-01
The modulus of the gradient of the color planes (MGC) is implemented to transform multichannel information to a grayscale image. This digital technique is used in two applications: (a) focus measurement during the autofocusing (AF) process and (b) extending the depth of field (EDoF) by means of multifocus image fusion. In the first case, the MGC procedure is based on an edge detection technique and is implemented in over 15 focus metrics that are typically used in digital microscopy. The MGC approach is tested on color images of histological sections for the selection of in-focus images. An appealing attribute of all the AF metrics working in the MGC space is their monotonic behavior even up to a magnification of 100×. An advantage of the MGC method is its computational simplicity and inherent parallelism. In the second application, a multifocus image fusion algorithm based on the MGC approach has been implemented on graphics processing units (GPUs). The resulting fused images are evaluated using a no-reference image quality metric. The proposed fusion method yields a high-quality image independently of faulty illumination during image acquisition. Finally, the three-dimensional visualization of the in-focus image is shown.
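One common formulation of the MGC transform, and a simple AF metric built on it, can be sketched as follows (the exact combination of channel gradients used by the authors may differ):

```python
import numpy as np

def mgc(image):
    """Modulus of the gradient of the color planes: combines per-channel
    gradients of an H x W x 3 image into one grayscale edge map, here as
    the root of the summed squared channel gradients."""
    image = image.astype(float)
    gy, gx = np.gradient(image, axis=(0, 1))
    return np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))

def focus_measure(image):
    """Simple AF metric on the MGC map: its mean rises as edges sharpen,
    so the in-focus frame of a z-stack maximizes this value."""
    return float(mgc(image).mean())
```

During autofocusing, such a metric would be evaluated per frame and the frame maximizing it selected; in multifocus fusion, per-pixel MGC values can drive the choice of source frame.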
The Effect of Underwater Imagery Radiometry on 3d Reconstruction and Orthoimagery
NASA Astrophysics Data System (ADS)
Agrafiotis, P.; Drakonakis, G. I.; Georgopoulos, A.; Skarlatos, D.
2017-02-01
The work presented in this paper investigates the effect of the radiometry of underwater imagery on automating the 3D reconstruction and the produced orthoimagery. The main aim is to investigate whether pre-processing of the underwater imagery improves the 3D reconstruction using automated SfM-MVS software or not. Since the processing of images either separately or in batch is a time-consuming procedure, it is critical to determine the necessity of implementing colour correction and enhancement before the SfM-MVS procedure or applying it directly to the final orthoimage when the orthoimagery is the deliverable. Two different test sites were used to capture imagery, ensuring different environmental conditions, depth and complexity. Three different image correction methods are applied: a very simple automated method using Adobe Photoshop, a developed colour correction algorithm using the CLAHE (Zuiderveld, 1994) method, and an implementation of the algorithm described in Bianco et al. (2015). The point clouds produced using the initial and the corrected imagery are then compared and evaluated.
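As a hedged illustration of radiometric pre-processing (not one of the three methods compared above), a simple gray-world colour correction counteracting the blue-green cast of underwater imagery might look like:

```python
import numpy as np

def gray_world_correct(image):
    """Gray-world colour correction for an H x W x 3 uint8 image: rescales
    each channel so the channel means match the overall mean, countering
    the wavelength-dependent attenuation (blue-green cast) of water."""
    image = image.astype(float)
    means = image.reshape(-1, 3).mean(axis=0)
    corrected = image * (means.mean() / (means + 1e-9))
    return np.clip(corrected, 0, 255).astype(np.uint8)
```

Whether to apply such a correction before SfM-MVS or only to the final orthoimage is exactly the trade-off the paper evaluates.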
Nesbit, Steven C.; Van Hoof, Alexander G.; Le, Chi C.; Dearworth, James R.
2015-01-01
Few laboratory exercises have been developed using the crayfish as a model for teaching how neural processing is done by sensory organs that detect light stimuli. This article describes the dissection procedures and methods for conducting extracellular recording from light responses of both the optic nerve fibers found in the animal’s eyestalk and from the caudal photoreceptor located in the ventral nerve cord. Instruction for ADInstruments’ data acquisition system is also featured for the data collection and analysis of responses. The comparison provides students a unique view on how spike activities measured from neurons code image-forming and non-image-forming processes. Results from the exercise show longer latency and lower frequency of firing by the caudal photoreceptor compared to optic nerve fibers to demonstrate evidence of different functions. After students learn the dissection, recording procedure, and the functional anatomy, they can develop their own experiments to learn more about the photoreceptive mechanisms and the sensory integration of modalities by these light-responsive interneurons. PMID:26557793
NASA Technical Reports Server (NTRS)
Bremmer, David M.; Hutcheson, Florence V.; Stead, Daniel J.
2005-01-01
A methodology to eliminate model reflection and system vibration effects from post processed particle image velocimetry data is presented. Reflection and vibration lead to loss of data, and biased velocity calculations in PIV processing. A series of algorithms were developed to alleviate these problems. Reflections emanating from the model surface caused by the laser light sheet are removed from the PIV images by subtracting an image in which only the reflections are visible from all of the images within a data acquisition set. The result is a set of PIV images where only the seeded particles are apparent. Fiduciary marks painted on the surface of the test model were used as reference points in the images. By locating the centroids of these marks it was possible to shift all of the images to a common reference frame. This image alignment procedure as well as the subtraction of model reflection are performed in a first algorithm. Once the images have been shifted, they are compared with a background image that was recorded under no flow conditions. The second and third algorithms find the coordinates of fiduciary marks in the acquisition set images and the background image and calculate the displacement between these images. The final algorithm shifts all of the images so that fiduciary mark centroids lie in the same location as the background image centroids. This methodology effectively eliminated the effects of vibration so that unbiased data could be used for PIV processing. The PIV data used for this work was generated at the NASA Langley Research Center Quiet Flow Facility. The experiment entailed flow visualization near the flap side edge region of an airfoil model. Commercial PIV software was used for data acquisition and processing. In this paper, the experiment and the PIV acquisition of the data are described. 
The methodology used to develop the algorithms for reflection and system vibration removal is stated, and the implementation, testing and validation of these algorithms are presented.
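The reflection-subtraction and fiducial-alignment steps can be sketched as follows; the threshold-based centroid detection is a simplification of locating the painted fiduciary marks, and the function names are illustrative:

```python
import numpy as np
from scipy import ndimage as ndi

def remove_reflection(frames, reflection_img):
    """First algorithm, step 1: subtract the reflection-only image from every
    PIV frame so only seeded particles remain (negative residues clipped)."""
    return [np.clip(f.astype(float) - reflection_img, 0, None) for f in frames]

def fiducial_shift(image, background, threshold):
    """Later algorithms, collapsed: displacement (dy, dx) between the
    fiduciary-mark centroid of an acquisition image and that of the no-flow
    background image; applying this shift aligns the two reference frames."""
    c_img = ndi.center_of_mass(image > threshold)
    c_bg = ndi.center_of_mass(background > threshold)
    return (c_bg[0] - c_img[0], c_bg[1] - c_img[1])
```

The returned shift could then be applied with `scipy.ndimage.shift` to move every frame onto the background's reference frame, mirroring the final alignment algorithm.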
An automated dose tracking system for adaptive radiation therapy.
Liu, Chang; Kim, Jinkoo; Kumarasiri, Akila; Mayyas, Essa; Brown, Stephen L; Wen, Ning; Siddiqui, Farzan; Chetty, Indrin J
2018-02-01
The implementation of adaptive radiation therapy (ART) into routine clinical practice is technically challenging and requires significant resources to perform and validate each process step. The objective of this report is to identify the key components of ART, to illustrate how a specific automated procedure improves efficiency, and to facilitate the routine clinical application of ART. Patient image data were exported from a clinical database and converted to an intermediate format for point-wise dose tracking and accumulation. The process was automated using in-house developed software containing three modularized components: an ART engine, user interactive tools, and integration tools. The ART engine conducts computing tasks using the following modules: data importing, image pre-processing, dose mapping, dose accumulation, and reporting. In addition, custom graphical user interfaces (GUIs) were developed to allow user interaction with select processes such as deformable image registration (DIR). A commercial scripting application programming interface was used to incorporate automated dose calculation for application in routine treatment planning. Each module was implemented as an independent program, written in C++ or C#, running in a distributed Windows environment, scheduled and monitored by the integration tools. The automated tracking system was retrospectively evaluated for 20 patients with prostate cancer and 96 patients with head and neck cancer, under institutional review board (IRB) approval. In addition, the system was evaluated prospectively using 4 patients with head and neck cancer. Altogether 780 prostate dose fractions and 2586 head and neck cancer dose fractions were processed, including DIR and dose mapping. On average, the daily cumulative dose was computed in 3 h, and the manual work was limited to 13 min per case, with approximately 10% of cases requiring an additional 10 min for image registration refinement.
An efficient and convenient dose tracking system for ART in the clinical setting is presented. The software and automated processes were rigorously evaluated and validated using patient image datasets. Automation of the various procedures has improved efficiency significantly, allowing for the routine clinical application of ART for improving radiation therapy effectiveness. Copyright © 2017 Elsevier B.V. All rights reserved.
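The dose tracking described above maps each daily fraction onto a reference grid and accumulates it point-wise. A minimal 2-D sketch of that accumulation step is given below, using integer nearest-neighbour displacement fields as a stand-in for full deformable image registration; the function name and data layout are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def accumulate_dose(daily_doses, deformations, shape):
    """Accumulate daily dose grids onto a reference grid.

    daily_doses  : list of 2-D dose arrays, one per fraction
    deformations : list of (dy, dx) integer displacement fields mapping each
                   reference voxel to its location in the daily grid
                   (a nearest-neighbour stand-in for full DIR)
    shape        : shape of the reference grid
    """
    ref = np.zeros(shape)
    ys, xs = np.indices(shape)
    for dose, (dy, dx) in zip(daily_doses, deformations):
        # Pull each daily dose value back to the reference voxel it deforms to.
        src_y = np.clip(ys + dy, 0, shape[0] - 1)
        src_x = np.clip(xs + dx, 0, shape[1] - 1)
        ref += dose[src_y, src_x]
    return ref
```

With identity deformation fields the function reduces to a plain per-voxel sum of the daily doses.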
Performance assessment of a data processing chain for THz imaging
NASA Astrophysics Data System (ADS)
Catapano, Ilaria; Ludeno, Giovanni; Soldovieri, Francesco
2017-04-01
TeraHertz (THz) imaging is attracting considerable attention as a very high resolution diagnostic tool in many application fields, including security, cultural heritage, material characterization and civil engineering diagnostics. This widespread use of THz waves is due to their non-ionizing nature, their capability of penetrating non-metallic opaque materials, as well as to technological advances, which have allowed the commercialization of compact, flexible and portable systems. However, the effectiveness of THz imaging depends strongly on the adopted data processing, which is aimed at improving the imaging performance of the hardware device. In particular, data processing is required to mitigate detrimental and unavoidable effects like noise and signal attenuation, as well as to correct for the sample surface topography. With respect to data processing, we have recently proposed a strategy involving three different steps aimed at reducing noise, filtering out undesired signal introduced by the adopted THz system and performing surface topography correction [1]. The first step concerns noise filtering and exploits a procedure based on the Singular Value Decomposition (SVD) [2] of the data matrix, which requires neither knowledge of the noise level nor the use of a reference signal. The second step aims at removing the undesired signal that we have found to be introduced by the adopted Z-Omega Fiber-Coupled Terahertz Time Domain (FICO) system. Indeed, when the system works in high-speed mode, an undesired low amplitude peak always occurs at the same time instant from the beginning of the observation time window and needs to be removed from the useful data matrix in order to avoid a wrong interpretation of the imaging results. The third step of the considered data processing chain is a topographic correction, which is needed in order to properly image the sample surface and its inner structure.
Such a procedure performs an automatic alignment of the first peak of the measured waveforms by exploiting a-priori information on the focus distance at which the specimen under test must be located during the measurement phase. The usefulness of the proposed data processing chain has been widely assessed in the last few months by surveying several specimens made of different materials and representative of objects of interest for civil engineering and cultural heritage diagnostics. At the conference, we will show the signal processing chain in detail and present several achieved results. REFERENCES [1] I. Catapano, F. Soldovieri, "A Data Processing Chain for Terahertz Imaging and Its Use in Artwork Diagnostics". J Infrared Milli Terahz Waves, pp. 13, Nov. 2016. [2] M. Bertero and P. Boccacci (1998), Introduction to Inverse Problems in Imaging, Bristol: Institute of Physics Publishing.
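The SVD-based noise-filtering step described above can be sketched in a few lines: the data matrix (traces by time samples) is decomposed and only the strongest singular components are retained. The choice of rank is an assumption here; in practice it would be driven by the singular value spectrum rather than fixed in advance.

```python
import numpy as np

def svd_denoise(data, rank):
    """Truncated-SVD noise filter for a THz data matrix (traces x samples).

    Keeps the `rank` strongest singular components; as in the chain's first
    step, it needs neither the noise level nor a reference signal.
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s[rank:] = 0.0            # discard weak components assumed to carry noise
    return (U * s) @ Vt
```

On a synthetic rank-one signal buried in noise, the truncated reconstruction lies closer to the clean signal than the raw data does.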
Beattie, Bradley J; Klose, Alexander D; Le, Carl H; Longo, Valerie A; Dobrenkov, Konstantine; Vider, Jelena; Koutcher, Jason A; Blasberg, Ronald G
2009-01-01
The procedures we propose make possible the mapping of two-dimensional (2-D) bioluminescence image (BLI) data onto a skin surface derived from a three-dimensional (3-D) anatomical modality [magnetic resonance (MR) or computed tomography (CT)] dataset. This mapping allows anatomical information to be incorporated into bioluminescence tomography (BLT) reconstruction procedures and, when applied using sources visible to both optical and anatomical modalities, can be used to evaluate the accuracy of those reconstructions. Our procedures, based on immobilization of the animal and a priori determined fixed projective transforms, should be more robust and accurate than previously described efforts, which rely on a poorly constrained retrospectively determined warping of the 3-D anatomical information. Experiments conducted to measure the accuracy of the proposed registration procedure found it to have a mean error of 0.36+/-0.23 mm. Additional experiments highlight some of the confounds that are often overlooked in the BLT reconstruction process, and for two of these confounds, simple corrections are proposed.
An improved image alignment procedure for high-resolution transmission electron microscopy.
Lin, Fang; Liu, Yan; Zhong, Xiaoyan; Chen, Jianghua
2010-06-01
Image alignment is essential for image processing methods such as through-focus exit-wavefunction reconstruction and image averaging in high-resolution transmission electron microscopy. Relative image displacements exist in any experimentally recorded image series due to specimen drift and image shifts, hence image alignment to correct the image displacements has to be done prior to any further image processing. The image displacement between two successive images is determined by the correlation function of the two relatively shifted images. Here it is shown that more accurate image alignment can be achieved by using an appropriate aperture to filter the high-frequency components of the images being aligned, especially for a crystalline specimen with little non-periodic information. For image series of crystalline specimens with little amorphous material, the radius of the filter aperture should be as small as possible, so long as it covers the innermost lattice reflections. Testing with an experimental through-focus series of Si[110] images, the accuracies of image alignment with different correlation functions are compared with respect to the error functions in through-focus exit-wavefunction reconstruction based on the maximum-likelihood method. Testing with image averaging over noisy experimental images from graphene and carbon-nanotube samples, clear and sharp crystal lattice fringes are recovered after applying optimal image alignment. Copyright 2010 Elsevier Ltd. All rights reserved.
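The core idea, cross-correlation restricted to a low-frequency aperture, can be sketched as follows. This is a generic Fourier-domain implementation, not the paper's code; the aperture is a simple circular mask in normalized frequency, and sub-pixel refinement is omitted.

```python
import numpy as np

def register_shift(img_a, img_b, aperture_radius):
    """Estimate the integer shift between two images by cross-correlation,
    keeping only spatial frequencies inside a small circular aperture
    (the low-pass filtering suggested for crystalline specimens).

    Returns (dy, dx) such that np.roll(img_b, (dy, dx), axis=(0, 1))
    aligns img_b with img_a.
    """
    Fa, Fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    fy = np.fft.fftfreq(img_a.shape[0])[:, None]
    fx = np.fft.fftfreq(img_a.shape[1])[None, :]
    mask = (fy ** 2 + fx ** 2) <= aperture_radius ** 2   # filter aperture
    cc = np.fft.ifft2(Fa * np.conj(Fb) * mask).real      # correlation surface
    idx = np.unravel_index(np.argmax(cc), cc.shape)
    # unwrap the circular peak position into a signed shift
    shift = [s if s <= n // 2 else s - n for s, n in zip(idx, cc.shape)]
    return tuple(shift)
```

Because every retained frequency component agrees in phase at the true displacement, the peak location is exact for integer shifts even after aggressive low-pass filtering.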
NASA Astrophysics Data System (ADS)
Fabbrini, L.; Messina, M.; Greco, M.; Pinelli, G.
2011-10-01
In the context of augmented integrity Inertial Navigation Systems (INS), recent technological developments have focused on landmark extraction from high-resolution synthetic aperture radar (SAR) images in order to retrieve aircraft position and attitude. The article puts forward a processing chain that can automatically detect linear landmarks on high-resolution SAR images and can also be successfully exploited in the context of augmented integrity INS. The processing chain uses constant false alarm rate (CFAR) edge detectors as the first step of the whole processing procedure. Our studies confirm that the ratio of averages (RoA) edge detector detects object boundaries more effectively than the Student T-test and the Wilcoxon-Mann-Whitney (WMW) test. Nevertheless, all these statistical edge detectors are sensitive to violation of the assumptions which underlie their theory. In addition to presenting a solution to this problem, we put forward a new post-processing algorithm useful for removing the main false alarms, selecting the most probable edge position, reconstructing broken edges and finally vectorizing them. SAR images from the "MSTAR clutter" dataset were used to prove the effectiveness of the proposed algorithms.
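The RoA detector mentioned above compares mean intensities of two windows on either side of a candidate edge. A minimal one-direction sketch is given below; window size and the horizontal-only orientation are simplifying assumptions (a real detector scans several orientations and thresholds the response at a CFAR-derived level).

```python
import numpy as np

def roa_edge_strength(img, half):
    """Ratio-of-averages edge response along the horizontal direction.

    For each pixel, compares the mean of the `half` columns to its left with
    the mean of the `half` columns to its right; the response max(r, 1/r)
    is large at boundaries and close to 1 in homogeneous speckle, which is
    what makes RoA amenable to CFAR thresholding.
    """
    h, w = img.shape
    out = np.ones((h, w))
    for x in range(half, w - half):
        left = img[:, x - half:x].mean(axis=1)
        right = img[:, x:x + half].mean(axis=1)
        r = (left + 1e-12) / (right + 1e-12)   # guard against zero means
        out[:, x] = np.maximum(r, 1.0 / r)
    return out
```

On a synthetic two-level image the response peaks exactly at the boundary column and stays near 1 inside homogeneous regions.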
New approach to gallbladder ultrasonic images analysis and lesions recognition.
Bodzioch, Sławomir; Ogiela, Marek R
2009-03-01
This paper presents a new approach to gallbladder ultrasonic image processing and analysis aimed at detecting disease symptoms in the processed images. First, the paper presents a new method of extracting gallbladder contours from USG images. A major stage in this filtration is segmenting and sectioning off the areas occupied by the organ. In most cases this procedure is based on filtration, which plays a key role in the process of diagnosing pathological changes. Unfortunately, ultrasound images are among the most troublesome to analyze owing to the echogenic inconsistency of the structures under observation. This paper provides an inventive algorithm for the holistic extraction of gallbladder image contours. The algorithm is based on rank filtration, as well as on the analysis of histogram sections of the tested organs. The second part concerns detecting lesion symptoms of the gallbladder. Automating a process of diagnosis always comes down to developing algorithms that analyze the object of such diagnosis and verify the occurrence of symptoms related to a given affection. Usually the final stage is to make a diagnosis based on the detected symptoms. This last stage can be carried out either through dedicated expert systems or through a more classic pattern analysis approach, such as using rules to determine the illness based on the detected symptoms. This paper discusses the pattern analysis algorithms for gallbladder image interpretation towards classification of the most frequent illness symptoms of this organ.
Medical imaging and registration in computer assisted surgery.
Simon, D A; Lavallée, S
1998-09-01
Imaging, sensing, and computing technologies that are being introduced to aid in the planning and execution of surgical procedures are providing orthopaedic surgeons with a powerful new set of tools for improving clinical accuracy, reliability, and patient outcomes while reducing costs and operating times. Current computer assisted surgery systems typically include a measurement process for collecting patient specific medical data, a decision making process for generating a surgical plan, a registration process for aligning the surgical plan to the patient, and an action process for accurately achieving the goals specified in the plan. Some of the key concepts in computer assisted surgery applied to orthopaedics are outlined, with a focus on the basic framework and underlying technologies. In addition, technical challenges and future trends in the field are discussed.
Image registration: enabling technology for image guided surgery and therapy.
Sauer, Frank
2005-01-01
Imaging looks inside the patient's body, exposing the patient's anatomy beyond what is visible on the surface. Medical imaging has a very successful history for medical diagnosis. It also plays an increasingly important role as enabling technology for minimally invasive procedures. Interventional procedures (e.g. catheter based cardiac interventions) are traditionally supported by intra-procedure imaging (X-ray fluoro, ultrasound). There is real-time feedback, but the images provide limited information. Surgical procedures are traditionally supported with pre-operative images (CT, MR). The image quality can be very good; however, the link between images and patient has been lost. For both cases, image registration can play an essential role: augmenting intra-op images with pre-op images, and mapping pre-op images to the patient's body. We will present examples of both approaches from an application oriented perspective, covering electrophysiology, radiation therapy, and neuro-surgery. Ultimately, as the boundaries between interventional radiology and surgery blur, the different methods for image guidance will also merge. Image guidance will draw upon a combination of pre-op and intra-op imaging together with magnetic or optical tracking systems, and enable precise minimally invasive procedures. The information is registered into a common coordinate system, and allows advanced methods for visualization such as augmented reality or advanced methods for therapy delivery such as robotics.
Automatic x-ray image contrast enhancement based on parameter auto-optimization.
Qiu, Jianfeng; Harold Li, H; Zhang, Tiezhi; Ma, Fangfang; Yang, Deshan
2017-11-01
Insufficient image contrast associated with radiation therapy daily setup x-ray images could negatively affect accurate patient treatment setup. We developed a method to perform automatic and user-independent contrast enhancement on 2D kilovoltage (kV) and megavoltage (MV) x-ray images. The goal was to provide tissue contrast optimized for each treatment site in order to support accurate patient daily treatment setup and the subsequent offline review. The proposed method processes the 2D x-ray images with an optimized image processing filter chain, which consists of a noise reduction filter and a high-pass filter followed by a contrast limited adaptive histogram equalization (CLAHE) filter. The most important innovation is to optimize the image processing parameters automatically to determine the required image contrast settings per disease site and imaging modality. Three major parameters controlling the image processing chain, i.e., the Gaussian smoothing weighting factor for the high-pass filter, the block size, and the clip limiting parameter for the CLAHE filter, were determined automatically using an interior-point constrained optimization algorithm. Fifty-two kV and MV x-ray images were included in this study. The results were manually evaluated and ranked with scores from 1 (worst, unacceptable) to 5 (significantly better than adequate and visually praiseworthy) by physicians and physicists. The average scores for the images processed by the proposed method, the CLAHE, and the best window-level adjustment were 3.92, 2.83, and 2.27, respectively. The percentages of processed images receiving a score of 5 were 48%, 29%, and 18%, respectively. The proposed method is able to outperform the standard image contrast adjustment procedures that are currently used in the commercial clinical systems.
When the proposed method is implemented in the clinical systems as an automatic image processing filter, it could be useful for allowing quicker and potentially more accurate treatment setup and facilitating the subsequent offline review and verification. © 2017 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
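The filter chain described above (noise reduction, then a high-pass step, then adaptive equalization) can be sketched with plain numpy. In this sketch, global histogram equalization stands in for CLAHE, and `alpha` plays the role of the Gaussian smoothing weighting factor for the high-pass step; all parameter values are illustrative, not the optimized values from the study.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian smoothing (the noise-reduction step)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 0, pad)
    return np.apply_along_axis(lambda m: np.convolve(m, k, 'valid'), 1, tmp)

def enhance(img, sigma=2.0, alpha=0.5, bins=256):
    """Noise reduction -> high-pass (unsharp boost) -> histogram equalization.

    Global equalization is a simplified stand-in for CLAHE; `alpha` weights
    the high-frequency detail added back after smoothing.
    """
    smooth = gaussian_blur(img, sigma)
    highpass = smooth + alpha * (smooth - gaussian_blur(smooth, 2 * sigma))
    # map intensities through the empirical CDF, yielding values in [0, 1]
    hist, edges = np.histogram(highpass, bins=bins)
    cdf = np.cumsum(hist) / highpass.size
    return np.interp(highpass, edges[1:], cdf)
```

In the paper, the three chain parameters are then tuned per site and modality by a constrained optimizer rather than fixed as here.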
Preliminary study of rib articulated model based on dynamic fluoroscopy images
NASA Astrophysics Data System (ADS)
Villard, Pierre-Frederic; Escamilla, Pierre; Kerrien, Erwan; Gorges, Sebastien; Trousset, Yves; Berger, Marie-Odile
2014-03-01
We present in this paper a preliminary study of rib motion tracking during Interventional Radiology (IR) fluoroscopy guided procedures. It consists of providing a physician with moving rib three-dimensional (3D) models projected onto the fluoroscopy plane during a treatment. The strategy is to help quickly recognize the target and the no-go areas, i.e. the tumor and the organs to avoid. The method consists in i) elaborating a kinematic model of each rib from a preoperative computerized tomography (CT) scan, ii) processing the on-line fluoroscopy image and iii) optimizing the parameters of the kinematic law such that the transformed 3D rib, projected onto the medical image plane, fits well with the previously processed image. The results show a visually good rib tracking that has been quantitatively validated by showing a periodic motion as well as a good synchronism between ribs.
On techniques for angle compensation in nonideal iris recognition.
Schuckers, Stephanie A C; Schmid, Natalia A; Abhyankar, Aditya; Dorairaj, Vivekanand; Boyce, Christopher K; Hornak, Lawrence A
2007-10-01
The popularity of the iris biometric has grown considerably over the past two to three years. Most research has been focused on the development of new iris processing and recognition algorithms for frontal view iris images. However, a few challenging directions in iris research have been identified, including processing of a nonideal iris and iris at a distance. In this paper, we describe two nonideal iris recognition systems and analyze their performance. The word "nonideal" is used in the sense of compensating for off-angle occluded iris images. The system is designed to process nonideal iris images in two steps: 1) compensation for off-angle gaze direction and 2) processing and encoding of the rotated iris image. Two approaches are presented to account for angular variations in the iris images. In the first approach, we use Daugman's integrodifferential operator as an objective function to estimate the gaze direction. After the angle is estimated, the off-angle iris image undergoes geometric transformations involving the estimated angle and is further processed as if it were a frontal view image. The encoding technique developed for a frontal image is based on the application of the global independent component analysis. The second approach uses an angular deformation calibration model. The angular deformations are modeled, and calibration parameters are calculated. The proposed method consists of a closed-form solution, followed by an iterative optimization procedure. The images are projected on the plane closest to the base calibrated plane. Biorthogonal wavelets are used for encoding to perform iris recognition. We use a special dataset of the off-angle iris images to quantify the performance of the designed systems. A series of receiver operating characteristics demonstrate various effects on the performance of the nonideal-iris-based recognition system.
NASA Astrophysics Data System (ADS)
Taboada, B.; Vega-Alvarado, L.; Córdova-Aguilar, M. S.; Galindo, E.; Corkidi, G.
2006-09-01
Characterization of multiphase systems occurring in fermentation processes is a time-consuming and tedious process when manual methods are used. This work describes a new semi-automatic methodology for the on-line assessment of diameters of oil drops and air bubbles occurring in a complex simulated fermentation broth. High-quality digital images were obtained from the interior of a mechanically stirred tank. These images were pre-processed to find segments of edges belonging to the objects of interest. The contours of air bubbles and oil drops were then reconstructed using an improved Hough transform algorithm which was tested in two, three and four-phase simulated fermentation model systems. The results were compared against those obtained manually by a trained observer, showing no significant statistical differences. The method was able to reduce the total processing time for the measurements of bubbles and drops in different systems by 21-50% and the manual intervention time for the segmentation procedure by 80-100%.
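The contour reconstruction above relies on a Hough transform for circular objects. A minimal circle-Hough accumulator is sketched below; it votes for candidate centres over a small set of radii, which captures the idea without the paper's improvements (edge-segment grouping, ellipse handling).

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Vote for circle centres from edge points (plain Hough, a simplified
    stand-in for the improved algorithm in the paper).

    edge_points : (N, 2) array of (y, x) edge coordinates
    shape       : accumulator (image) shape
    radii       : candidate radii to test
    Returns (cy, cx, r) with the highest vote count.
    """
    theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    best = (0, (0, 0, 0))
    for r in radii:
        acc = np.zeros(shape, dtype=int)
        # each edge point votes for all centres at distance r from it
        cy = np.rint(edge_points[:, 0, None] - r * np.sin(theta)).astype(int)
        cx = np.rint(edge_points[:, 1, None] - r * np.cos(theta)).astype(int)
        ok = (cy >= 0) & (cy < shape[0]) & (cx >= 0) & (cx < shape[1])
        np.add.at(acc, (cy[ok], cx[ok]), 1)
        peak = acc.max()
        if peak > best[0]:
            idx = np.unravel_index(acc.argmax(), shape)
            best = (peak, (idx[0], idx[1], r))
    return best[1]
```

For a synthetic ring of edge points, the votes concentrate sharply at the true centre only when the tested radius matches, which is what makes the peak selection robust.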
NASA Astrophysics Data System (ADS)
Champion, N.
2012-08-01
Contrary to aerial images, satellite images are often affected by the presence of clouds. Identifying and removing these clouds is one of the primary steps to perform when processing satellite images, as they may alter subsequent procedures such as atmospheric corrections, DSM production or land cover classification. The main goal of this paper is to present the cloud detection approach developed at the French Mapping agency. Our approach relies on the availability of multi-temporal satellite images (i.e. time series that generally contain between 5 and 10 images) and is based on a region-growing procedure. Seeds (corresponding to clouds) are firstly extracted through a pixel-to-pixel comparison between the images contained in the time series (the presence of a cloud is here assumed to be related to a high variation of reflectance between two images). Clouds are then delineated finely using a dedicated region-growing algorithm. The method, originally designed for panchromatic SPOT5-HRS images, is tested in this paper using time series with 9 multi-temporal satellite images. Our preliminary experiments show the good performance of our method. In the near future, the method will be applied to Pléiades images, acquired during the in-flight commissioning phase of the satellite (launched at the end of 2011). In that context, it is a particular goal of this paper to show to what extent and in what way our method can be adapted to this kind of imagery.
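The seed-then-grow scheme above can be sketched directly: seeds are pixels whose reflectance rises far above the temporal minimum (a crude proxy for the pixel-to-pixel comparison across the series), and regions grow into connected pixels above a looser threshold. The thresholds and the temporal-minimum reference are assumptions for illustration.

```python
import numpy as np
from collections import deque

def detect_clouds(series, seed_thresh, grow_thresh):
    """Per-image cloud masks from a multi-temporal series.

    series      : list of 2-D reflectance arrays (same scene, different dates)
    seed_thresh : excess reflectance over the temporal minimum marking a seed
    grow_thresh : looser excess threshold into which seeds may grow
    """
    stack = np.stack(series)           # (T, H, W)
    ref = stack.min(axis=0)            # cloud-free estimate per pixel
    masks = []
    for img in series:
        diff = img - ref
        seeds = diff > seed_thresh
        grow = diff > grow_thresh
        mask = seeds.copy()
        q = deque(zip(*np.nonzero(seeds)))
        while q:                       # 4-connected region growing
            y, x = q.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] \
                        and grow[ny, nx] and not mask[ny, nx]:
                    mask[ny, nx] = True
                    q.append((ny, nx))
        masks.append(mask)
    return masks
```

A bright blob present in only one image of the series is seeded at its brightest pixel and grown out to its full extent, while the cloud-free images stay unmasked.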
Early differential processing of material images: Evidence from ERP classification.
Wiebel, Christiane B; Valsecchi, Matteo; Gegenfurtner, Karl R
2014-06-24
Investigating the temporal dynamics of natural image processing using event-related potentials (ERPs) has a long tradition in object recognition research. In a classical Go-NoGo task two characteristic effects have been emphasized: an early task independent category effect and a later task-dependent target effect. Here, we set out to use this well-established Go-NoGo paradigm to study the time course of material categorization. Material perception has gained more and more interest over the years as its importance in natural viewing conditions has been ignored for a long time. In addition to analyzing standard ERPs, we conducted a single trial ERP pattern analysis. To validate this procedure, we also measured ERPs in two object categories (people and animals). Our linear classification procedure was able to largely capture the overall pattern of results from the canonical analysis of the ERPs and even extend it. We replicate the known target effect (differential Go-NoGo potential at frontal sites) for the material images. Furthermore, we observe task-independent differential activity between the two material categories as early as 140 ms after stimulus onset. Using our linear classification approach, we show that material categories can be differentiated consistently based on the ERP pattern in single trials around 100 ms after stimulus onset, independent of the target-related status. This strengthens the idea of early differential visual processing of material categories independent of the task, probably due to differences in low-level image properties and suggests pattern classification of ERP topographies as a strong instrument for investigating electrophysiological brain activity. © 2014 ARVO.
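The single-trial pattern analysis above applies a linear classifier to ERP topographies at each time point. A minimal sketch of the evaluation loop is given below, using a nearest-class-mean rule with leave-one-out cross-validation as a simple stand-in for the paper's linear classification procedure.

```python
import numpy as np

def loo_classify(X, y):
    """Leave-one-out linear classification of single-trial patterns.

    X : (n_trials, n_channels) ERP amplitudes at one time point
    y : (n_trials,) binary labels (0/1)
    Uses a nearest-class-mean rule; returns the cross-validated accuracy.
    """
    correct = 0
    for i in range(len(y)):
        keep = np.arange(len(y)) != i          # hold out trial i
        m0 = X[keep & (y == 0)].mean(axis=0)
        m1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += pred == y[i]
    return correct / len(y)
```

Running this at each time point of the epoch yields a decoding time course; above-chance accuracy from ~100 ms onward is the kind of evidence the study reports for material categories.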
Procedure for Automated Eddy Current Crack Detection in Thin Titanium Plates
NASA Technical Reports Server (NTRS)
Wincheski, Russell A.
2012-01-01
This procedure provides the detailed instructions for conducting Eddy Current (EC) inspections of thin (5-30 mils) titanium membranes with thickness and material properties typical of the development of Ultra-Lightweight diaphragm Tanks Technology (ULTT). The inspection focuses on the detection of part-through, surface breaking fatigue cracks with depths between approximately 0.002" and 0.007" and aspect ratios (a/c) of 0.2-1.0 using an automated eddy current scanning and image processing technique.
Spectral and textural processing of ERTS imagery. [Kansas
NASA Technical Reports Server (NTRS)
Haralick, R. M.; Bosley, R. J.
1974-01-01
A procedure is developed to simultaneously extract textural features from all bands of ERTS multispectral scanner imagery for automatic analysis. Multi-images lead to excessively large grey tone N-tuple co-occurrence matrices; therefore, neighboring grey N-tuple differences are measured and an ellipsoidally symmetric functional form is assumed for the co-occurrence distribution of multi-image greytone N-tuple differences. On the basis of past data the ellipsoidally symmetric approximation is shown to be reasonable. Initial evaluation of the procedure is encouraging.
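The key trick above is to replace the full co-occurrence matrix with a distribution of neighbouring grey-level differences. A single-band sketch of that reduction is shown below; the quantization level and the two derived features (contrast, homogeneity) are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

def difference_features(band, dy, dx, levels=8):
    """Texture features from neighbouring grey-level differences.

    Instead of building a full co-occurrence matrix, accumulate the
    histogram of quantized differences at displacement (dy, dx) and
    derive simple contrast and homogeneity measures from it.
    """
    q = np.floor(band / band.max() * (levels - 1)).astype(int)
    # carve out the two displaced views of the quantized image
    a = q[max(dy, 0):q.shape[0] + min(dy, 0), max(dx, 0):q.shape[1] + min(dx, 0)]
    b = q[max(-dy, 0):q.shape[0] + min(-dy, 0), max(-dx, 0):q.shape[1] + min(-dx, 0)]
    d = np.abs(a - b).ravel()
    p = np.bincount(d, minlength=levels) / d.size   # difference distribution
    k = np.arange(levels)
    contrast = float((p * k ** 2).sum())
    homogeneity = float((p / (1.0 + k)).sum())
    return contrast, homogeneity
```

A flat image yields zero contrast and full homogeneity, while a checkerboard yields maximal horizontal contrast; the multi-band version measures vector-valued N-tuple differences the same way.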
Imaging cell competition in Drosophila imaginal discs.
Ohsawa, Shizue; Sugimura, Kaoru; Takino, Kyoko; Igaki, Tatsushi
2012-01-01
Cell competition is a process in which cells with higher fitness ("winners") survive and proliferate at the expense of less fit neighbors ("losers"). It has been suggested that cell competition is involved in a variety of biological processes such as organ size control, tissue homeostasis, cancer progression, and the maintenance of stem cell populations. With the advent of a genetic mosaic technique, which enables the generation of fluorescently marked somatic clones in Drosophila imaginal discs, recent studies have revealed some aspects of the molecular mechanisms underlying cell competition. Now, with a live-imaging technique using ex vivo-cultured imaginal discs, we can dissect the spatiotemporal nature of competitive cell behaviors within multicellular communities. Here, we describe procedures and tips for live imaging of cell competition in Drosophila imaginal discs. Copyright © 2012 Elsevier Inc. All rights reserved.
Searching early bone metastasis on plain radiography by using digital imaging processing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaramillo-Nunez, A.; Perez-Meza, M.; Universidad de la Sierra Sur, C. P. 70800, Miahuatlan, Oax.
2012-10-23
Some authors mention that it is not possible to detect early bone metastasis on plain radiography. In this work we use digital image processing to analyze three radiographs taken from a patient with bone metastasis discomfort in the right shoulder. The time period between the first and second radiographs was approximately one month, and between the first and third, one year. This procedure is a first approach to determine whether, in this particular case, it was possible to detect early bone metastasis. The obtained results suggest that by carrying out digital processing it is possible to detect the metastasis, since the radiograph contains the information even though it cannot be observed visually.
Chain of evidence generation for contrast enhancement in digital image forensics
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Messina, Giuseppe; Strano, Daniela
2010-01-01
The quality of the images obtained by digital cameras has improved considerably since the early days of digital cameras. Unfortunately, it is not unusual in image forensics to find wrongly exposed pictures. This is mainly due to obsolete techniques or old technologies, but also due to backlight conditions. To bring out otherwise invisible details, a stretching of the image contrast is required. The forensic rules for producing evidence require a complete documentation of the processing steps, enabling the replication of the entire process. The automation of enhancement techniques is thus quite difficult and needs to be carefully documented. This work presents an automatic procedure to find contrast enhancement settings, allowing both image correction and automatic script generation. The technique is based on a preprocessing step which extracts the features of the image and selects correction parameters. The parameters are then saved through JavaScript code that is used in the second step of the approach to correct the image. The generated script is Adobe Photoshop compliant (Photoshop being largely used in image forensics analysis), thus permitting the replication of the enhancement steps. Experiments on a dataset of images are also reported, showing the effectiveness of the proposed methodology.
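The two-step idea above, derive correction parameters from the image, then emit a replayable script, can be sketched as follows. The percentile-based level selection and the emitted snippet are illustrative assumptions; the real generated script would call the Photoshop scripting API, which is only stubbed in a comment here.

```python
import numpy as np

def stretch_params(img, low_pct=1.0, high_pct=99.0):
    """Derive black/white input levels from intensity percentiles
    (the feature-extraction / parameter-selection step)."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return int(round(lo)), int(round(hi))

def levels_script(lo, hi):
    """Emit a small JavaScript snippet recording the chosen correction so
    the enhancement can be replayed and documented; the applyLevels call
    is a placeholder, not a real Photoshop API function."""
    return ("var lo = %d, hi = %d;\n"
            "// applyLevels(lo, hi)  -- placeholder for the Photoshop call\n"
            % (lo, hi))
```

Saving the script alongside the corrected image gives the chain of evidence the forensic workflow requires: anyone can re-run the same parameters on the original.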
Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen
2013-06-01
Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.
V-Sipal - a Virtual Laboratory for Satellite Image Processing and Analysis
NASA Astrophysics Data System (ADS)
Buddhiraju, K. M.; Eeti, L.; Tiwari, K. K.
2011-09-01
In this paper a virtual laboratory for Satellite Image Processing and Analysis (v-SIPAL) being developed at the Indian Institute of Technology Bombay is described. v-SIPAL comprises a set of experiments that are normally carried out by students learning digital processing and analysis of satellite images using commercial software. Currently, the experiments that are available on the server include Image Viewer, Image Contrast Enhancement, Image Smoothing, Edge Enhancement, Principal Component Transform, Texture Analysis by Co-occurrence Matrix method, Image Indices, Color Coordinate Transforms, Fourier Analysis, Mathematical Morphology, Unsupervised Image Classification, Supervised Image Classification and Accuracy Assessment. The virtual laboratory includes a theory module for each option of every experiment, a description of the procedure to perform each experiment, the menu to choose and perform the experiment, a module on interpretation of results when performed with a given image and pre-specified options, bibliography, links to useful internet resources and user-feedback. The user can upload his/her own images for performing the experiments and can also reuse outputs of one experiment in another experiment where applicable. Some of the other experiments currently under development include georeferencing of images, data fusion, feature evaluation by divergence and J-M distance, image compression, wavelet image analysis and change detection. Additions to the theory module include self-assessment quizzes, audio-video clips on selected concepts, and a discussion of elements of visual image interpretation. V-SIPAL is at the stage of internal evaluation within IIT Bombay and will soon be open to selected educational institutions in India for evaluation.
Reconstruction of biofilm images: combining local and global structural parameters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resat, Haluk; Renslow, Ryan S.; Beyenal, Haluk
2014-10-20
Digitized images can be used for quantitative comparison of biofilms grown under different conditions. Using biofilm image reconstruction, it was previously found that biofilms with a completely different look can have nearly identical structural parameters and that the most commonly utilized global structural parameters were not sufficient to uniquely define these biofilms. Here, additional local and global parameters are introduced to show that these parameters considerably increase the reliability of the image reconstruction process. Assessment using human evaluators indicated that the correct identification rate of the reconstructed images increased from 50% to 72% with the introduction of the new parameters into the reconstruction procedure. An expanded set of parameters especially improved the identification of biofilm structures with internal orientational features and of structures in which colony sizes and spatial locations varied. Hence, the newly introduced structural parameter sets helped to better classify the biofilms by incorporating finer local structural details into the reconstruction process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nugraha, Andri Dian; Adisatrio, Philipus Ronnie
2013-09-09
Seismic refraction surveying is a geophysical method useful for imaging the Earth's interior, particularly the near surface. One of the common problems in seismic refraction surveys is weak amplitude at far offsets due to attenuation. This makes it difficult to pick the first refraction arrival, and hence challenging to produce the near-surface image. Seismic interferometry is a new technique for manipulating seismic traces to obtain the Green's function between a pair of receivers. One of its uses is improving the quality of the first refraction arrival at far offsets. This research shows that we can estimate physical properties such as seismic velocity and thickness from virtual refraction processing. Virtual refraction can also enhance the far-offset signal amplitude, since a stacking procedure is involved. Our results show that super-virtual refraction processing produces a seismic image with a higher signal-to-noise ratio than the raw seismic image. In the end, the number of reliable first-arrival picks is also increased.
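The core interferometric step, correlating the recordings at two receivers to build a virtual (inter-receiver) trace, can be sketched in a few lines of NumPy; the lag of the correlation peak estimates the inter-receiver traveltime (the function name is ours, and real workflows stack such correlograms over many sources):

```python
import numpy as np

def virtual_trace(trace_a, trace_b):
    """Cross-correlate two receiver recordings; stacking such correlograms
    over many sources approximates the Green's function between the receivers."""
    xc = np.correlate(trace_b, trace_a, mode="full")
    lags = np.arange(-(len(trace_a) - 1), len(trace_b))
    return lags, xc
```

For a pulse that arrives at the second receiver five samples after the first, the correlation peaks at lag 5.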
Quantum imaging with incoherently scattered light from a free-electron laser
NASA Astrophysics Data System (ADS)
Schneider, Raimund; Mehringer, Thomas; Mercurio, Giuseppe; Wenthaus, Lukas; Classen, Anton; Brenner, Günter; Gorobtsov, Oleg; Benz, Adrian; Bhatti, Daniel; Bocklage, Lars; Fischer, Birgit; Lazarev, Sergey; Obukhov, Yuri; Schlage, Kai; Skopintsev, Petr; Wagner, Jochen; Waldmann, Felix; Willing, Svenja; Zaluzhnyy, Ivan; Wurth, Wilfried; Vartanyants, Ivan A.; Röhlsberger, Ralf; von Zanthier, Joachim
2018-02-01
The advent of accelerator-driven free-electron lasers (FEL) has opened new avenues for high-resolution structure determination via diffraction methods that go far beyond conventional X-ray crystallography methods. These techniques rely on coherent scattering processes that require the maintenance of first-order coherence of the radiation field throughout the imaging procedure. Here we show that higher-order degrees of coherence, displayed in the intensity correlations of incoherently scattered X-rays from an FEL, can be used to image two-dimensional objects with a spatial resolution close to or even below the Abbe limit. This constitutes a new approach towards structure determination based on incoherent processes, including fluorescence emission or wavefront distortions, generally considered detrimental for imaging applications. Our method is an extension of the landmark intensity correlation measurements of Hanbury Brown and Twiss to higher than second order, paving the way towards determination of structure and dynamics of matter in regimes where coherent imaging methods have intrinsic limitations.
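At second order, the Hanbury Brown-Twiss quantity underlying this approach is the normalized intensity correlation; a minimal NumPy sketch over repeated shots is given below (the generalization used in the paper correlates intensities at higher than second order, which this sketch does not attempt):

```python
import numpy as np

def g2(i1, i2):
    """Normalized second-order correlation g2 = <I1*I2> / (<I1><I2>),
    computed from per-shot intensities recorded at two detectors."""
    i1 = np.asarray(i1, float)
    i2 = np.asarray(i2, float)
    return float(np.mean(i1 * i2) / (np.mean(i1) * np.mean(i2)))
```

For statistically independent detectors g2 is about 1; correlated fluctuations push it above 1 (ideal thermal light gives g2(0) = 2).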
Preparing images for publication: part 2.
Bengel, Wolfgang; Devigus, Alessandro
2006-08-01
The transition from conventional to digital photography presents many advantages for authors and photographers in the field of dentistry, but also many complexities and potential problems. No uniform procedures for authors and publishers exist at present for producing high-quality dental photographs. This two-part article aims to provide guidelines for preparing images for publication and improving communication between these two parties. Part 1 provided information about basic color principles, factors that can affect color perception, and digital color management. Part 2 describes the camera setup, discusses how to take a photograph suitable for publication, and outlines steps for the image editing process.
Flat-panel cone-beam CT: a novel imaging technology for image-guided procedures
NASA Astrophysics Data System (ADS)
Siewerdsen, Jeffrey H.; Jaffray, David A.; Edmundson, Gregory K.; Sanders, W. P.; Wong, John W.; Martinez, Alvaro A.
2001-05-01
The use of flat-panel imagers for cone-beam CT signals the emergence of an attractive technology for volumetric imaging. Recent investigations demonstrate volume images with high spatial resolution and soft-tissue visibility and point to a number of logistical characteristics (e.g., open geometry, volume acquisition in a single rotation about the patient, and separation of the imaging and patient support structures) that are attractive to a broad spectrum of applications. Considering application to image-guided (IG) procedures - specifically IG therapies - this paper examines the performance of flat-panel cone-beam CT in relation to numerous constraints and requirements, including time (i.e., speed of image acquisition), dose, and field-of-view. The imaging and guidance performance of a prototype flat panel cone-beam CT system is investigated through the construction of procedure-specific tasks that test the influence of image artifacts (e.g., x-ray scatter and beam-hardening) and volumetric imaging performance (e.g., 3D spatial resolution, noise, and contrast) - taking two specific examples in IG brachytherapy and IG vertebroplasty. For IG brachytherapy, a procedure-specific task is constructed which tests the performance of flat-panel cone-beam CT in measuring the volumetric distribution of Pd-103 permanent implant seeds in relation to neighboring bone and soft-tissue structures in a pelvis phantom. For IG interventional procedures, a procedure-specific task is constructed in the context of vertebroplasty performed on a cadaverized ovine spine, demonstrating the volumetric image quality in pre-, intra-, and post-therapeutic images of the region of interest and testing the performance of the system in measuring the volumetric distribution of bone cement (PMMA) relative to surrounding spinal anatomy. Each of these tasks highlights numerous promising and challenging aspects of flat-panel cone-beam CT applied to IG procedures.
Recent progress in the development of ISO 19751
NASA Astrophysics Data System (ADS)
Farnand, Susan P.; Dalal, Edul N.; Ng, Yee S.
2006-01-01
A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial adjacency or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, widely applicable over the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality [1,2]. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 [3] presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes [4]. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.
Spectral imaging of histological and cytological specimens
NASA Astrophysics Data System (ADS)
Rothmann, Chana; Malik, Zvi
1999-05-01
Evaluation of cell morphology by bright field microscopy is the pillar of histopathological diagnosis. The need for quantitative and objective parameters for diagnosis has given rise to the development of morphometric methods. The development of spectral imaging for biological and medical applications introduced both fields to the large amounts of information that can be extracted from a single image. Spectroscopic analysis is based on the ability of a stained histological specimen to absorb, reflect, or emit photons in ways characteristic of its interactions with specific dyes. Spectral information obtained from a histological specimen is stored in a data cube: two dimensions are the spatial dimensions of a flat sample (x and y), and the third dimension is the spectrum, representing the light intensity at every wavelength. The spectral information stored in the cube can be further processed by morphometric analysis and quantitative procedures. One such procedure is spectral-similarity mapping (SSM), which enables the demarcation of areas occupied by the same type of material. SSM constructs new images of the specimen, revealing areas with similar stain-macromolecule characteristics and enhancing subcellular features. Spectral imaging combined with SSM reveals nuclear organization through the differentiation stages as well as in apoptotic and necrotic conditions, and specifically identifies the nucleoli domains.
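One plausible realization of such similarity mapping, hedged since the abstract does not specify the exact metric, scores each pixel's spectrum against a reference spectrum by normalized correlation (cosine similarity) and thresholds the result:

```python
import numpy as np

def similarity_map(cube, ref, threshold=0.98):
    """Mark pixels of a (rows, cols, bands) spectral cube whose spectra are
    similar (cosine similarity >= threshold) to a reference spectrum."""
    flat = cube.reshape(-1, cube.shape[-1]).astype(float)
    ref = np.asarray(ref, float)
    sim = (flat @ ref) / (np.linalg.norm(flat, axis=1) * np.linalg.norm(ref) + 1e-12)
    return (sim >= threshold).reshape(cube.shape[:2])
```

A pixel whose spectrum is a scalar multiple of the reference scores near 1 and is marked; a spectrally dissimilar pixel is not.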
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, like any experimentally acquired images, are affected by spoiling agents that degrade their final quality. Degradation caused by agents of a systematic character can be reduced by some kind of treatment, such as iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces cumbersome visual inspection with a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the first derivative of G as the processing progresses and of stopping it automatically when this derivative, within the data dispersion, reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
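A 1-D NumPy sketch of the idea follows, hedged since the paper's exact definition of G and its Fortran implementation are not reproduced here: Richardson-Lucy iterations that track a global histogram difference between successive estimates and halt when its change (discrete first derivative) falls to zero within tolerance:

```python
import numpy as np

def conv(x, k):
    return np.convolve(x, k, mode="same")

def hist_diff(a, b, bins=64):
    """Global difference G between the histograms of two iterates."""
    lo = min(a.min(), b.min()); hi = max(a.max(), b.max())
    ha, _ = np.histogram(a, bins, (lo, hi))
    hb, _ = np.histogram(b, bins, (lo, hi))
    return int(np.abs(ha - hb).sum())

def richardson_lucy(obs, psf, max_iter=50, tol=0):
    """RL deconvolution, halted when the change in G drops to <= tol."""
    est = np.full_like(obs, obs.mean())
    g_prev = None
    for i in range(1, max_iter + 1):
        ratio = obs / (conv(est, psf) + 1e-12)
        new = est * conv(ratio, psf[::-1])  # RL multiplicative update
        g = hist_diff(new, est)
        if g_prev is not None and abs(g - g_prev) <= tol:
            return new, i
        est, g_prev = new, g
    return est, max_iter
```

On a noiseless blurred spike, the iterations concentrate intensity back at the spike location while the stopping rule bounds the iteration count.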
NASA Astrophysics Data System (ADS)
Gogler, Slawomir; Bieszczad, Grzegorz; Krupinski, Michal
2013-10-01
Thermal imagers, and the infrared array sensors used in them, are subject to a calibration procedure and evaluation of their voltage sensitivity to incident radiation during the manufacturing process. The calibration procedure is especially important in so-called radiometric cameras, where accurate radiometric quantities, given in physical units, are of concern. Even though non-radiometric cameras are not held to such elevated standards, it is still important that the image faithfully represent temperature variations across the scene. Detectors used in a thermal camera are illuminated by infrared radiation transmitted through an infrared-transmitting optical system. An optical system exposed to a uniform Lambertian source often forms a non-uniform irradiation distribution in its image plane. In order to carry out an accurate non-uniformity correction, it is essential to correctly predict the irradiation distribution from a uniform source. In this article, a non-uniformity correction method is presented that takes into account the optical system's radiometry. Predictions of the irradiation distribution have been confronted with measured irradiance values. The presented radiometric model allows fast and accurate non-uniformity correction to be carried out.
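A common concrete realization of non-uniformity correction is the two-point (two-temperature) method, which fits a per-pixel gain and offset from two uniform reference exposures; a minimal NumPy sketch is below (the paper's method additionally models the optics' irradiance distribution, which this sketch omits):

```python
import numpy as np

def two_point_nuc(raw, resp_low, resp_high, t_low, t_high):
    """Per-pixel gain/offset correction from the array's responses to two
    uniform blackbody references at temperatures t_low and t_high."""
    gain = (t_high - t_low) / (resp_high - resp_low)
    offset = t_low - gain * resp_low
    return gain * raw + offset
```

If each pixel responds as r = a*T + b, the correction recovers the scene temperatures exactly from the two reference frames.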
Marescaux, Jacques; Soler, Luc
2004-06-01
Medical image processing leads to an improvement in patient care by guiding the surgical gesture. Three-dimensional models of patients generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning and surgical simulation, which offers the surgeon the opportunity to rehearse the surgical gesture before performing it for real. These two preoperative steps can be used intra-operatively because of the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a view of the patient in transparency and can also guide the surgeon, thanks to real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiological movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.
The edge detection method of the infrared imagery of the laser spot
NASA Astrophysics Data System (ADS)
Che, Jinxi; Zhang, Jinchun; Li, Zhongmin
2016-01-01
In jamming-effectiveness experiments in which a thermal infrared imager is interfered with by a CO2 laser, the acquired infrared imagery of the laser spot must be analyzed in order to evaluate the jamming effect. Because the laser-spot images obtained from the thermal imager are irregular, edge detection is an important processing step. The image edge is one of the most basic characteristics of an image and carries much of its information. Owing to thermal equilibrium, local temperature differences across an object are generally small, so infrared imagery is notably weak at reflecting local detail of the object. The heat-distribution information of the thermal image becomes far more valuable when combined with basic target information such as object size, relative position in the field of view, shape, and outline. Extracting the target edges is therefore an important step in processing infrared imagery, and it is a prerequisite for much subsequent processing. To extract the outline of the target from the original thermal image, and to overcome drawbacks such as low image contrast and severe noise interference, the edges of the thermal image must be detected and processed. The principles of the Roberts, Sobel, Prewitt, and Canny operators were analyzed, and the operators were then applied to edge detection on thermal images of laser spots obtained from experiments in which a CO2 laser jammed a thermal infrared imager. Their performance was compared on the basis of the detection results, and the characteristics of the operators were summarized, providing a reference for the choice of edge detection operators in future thermal imagery processing.
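Of the operators compared, the Sobel operator is the simplest to state concretely; a small NumPy sketch computing the gradient-magnitude edge map follows (the edge padding and the function name are our choices for illustration):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map using the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                   # vertical gradient
    h, w = img.shape
    pad = np.pad(np.asarray(img, float), 1, mode="edge")
    gx = np.zeros((h, w)); gy = np.zeros((h, w))
    for i in range(3):  # accumulate the correlation tap by tap
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)
```

On a vertical step image, the response is strong along the step and zero in the flat regions.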
Simulation of a complete X-ray digital radiographic system for industrial applications.
Nazemi, E; Rokrok, B; Movafeghi, A; Choopan Dastjerdi, M H
2018-05-19
Simulating X-ray images is of great importance in industry and medicine. Such simulation permits optimization of the parameters that affect image quality without the limitations of an experimental procedure. This study presents a novel methodology to simulate a complete industrial X-ray digital radiographic system, composed of an X-ray tube and a computed radiography (CR) image plate, using the Monte Carlo N-Particle eXtended (MCNPX) code. An industrial X-ray tube with a maximum voltage of 300 kV and a current of 5 mA was simulated. A three-layer uniform plate comprising a polymer overcoat layer, a phosphor layer, and a polycarbonate backing layer was also defined and simulated as the CR imaging plate. To model image formation in the image plate, the absorbed dose was first calculated in each pixel inside the phosphor layer of the CR imaging plate using the mesh tally in the MCNPX code, and then converted to a gray value using a mathematical relationship determined in a separate procedure. To validate the simulation results, an experimental setup was designed, and images of two step wedges made of aluminum and steel were captured experimentally and compared with the simulations. The results show that the simulated images are in good agreement with the experimental ones, demonstrating the ability of the proposed methodology to simulate an industrial X-ray imaging system. Copyright © 2018 Elsevier Ltd. All rights reserved.
An automated and universal method for measuring mean grain size from a digital image of sediment
Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.
2010-01-01
Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
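The intuition behind such calibration-free approaches can be illustrated in 1-D: the decay length of the image autocorrelation scales with texture (grain) size. The sketch below is a simplification for illustration, not the paper's actual estimator; it reports the first lag where the normalized autocorrelation drops below 1/e:

```python
import numpy as np

def autocorr_lengthscale(signal):
    """First lag at which the normalized autocorrelation falls below 1/e,
    a crude proxy for the dominant texture scale."""
    x = np.asarray(signal, float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags
    ac = ac / ac[0]
    below = np.nonzero(ac < 1.0 / np.e)[0]
    return int(below[0]) if below.size else len(x)
```

A coarse texture (long-period pattern) decorrelates over more lags than a fine one, so its length scale is larger.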
NASA Astrophysics Data System (ADS)
Hu, Jinyan; Li, Li; Yang, Yunfeng
2017-06-01
A hierarchical, successive-approximation registration method for non-rigid medical images based on thin-plate splines is proposed in this paper. There are two major novelties in the proposed method. First, hierarchical registration based on the wavelet transform is used: the approximation image of the wavelet transform is selected as the object to be registered. Second, a successive-approximation method is used to accomplish the non-rigid registration, i.e., local regions of the image pair are first registered roughly based on thin-plate splines, and the current rough registration result is then selected as the object to be registered in the following registration step. Experiments show that the proposed method is effective for registering non-rigid medical images.
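The thin-plate-spline core of such a method fits a smooth 2-D mapping that interpolates a set of control-point correspondences exactly; below is a self-contained NumPy sketch of a generic TPS solver (not the authors' code):

```python
import numpy as np

def _tps_kernel(d):
    """TPS radial kernel U(r) = r^2 log r, with U(0) = 0."""
    safe = np.where(d > 0, d, 1.0)  # log(1) = 0 handles the diagonal
    return d**2 * np.log(safe)

def tps_fit(src, dst):
    """Solve for TPS coefficients mapping control points src -> dst (both (n, 2))."""
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K; L[:n, n:] = P; L[n:, :n] = P.T
    Y = np.vstack([dst, np.zeros((3, 2))])
    return np.linalg.solve(L, Y)  # (n+3, 2) warp coefficients

def tps_apply(coef, src, pts):
    """Evaluate the fitted warp at arbitrary points pts (m, 2)."""
    U = _tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=-1))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]
```

By construction the fitted warp reproduces each control-point correspondence exactly, while bending smoothly in between.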
Quantitative Image Restoration in Bright Field Optical Microscopy.
Gutiérrez-Medina, Braulio; Sánchez Miranda, Manuel de Jesús
2017-11-07
Bright field (BF) optical microscopy is regarded as a poor method for observing unstained biological samples due to intrinsically low image contrast. We introduce quantitative image restoration in bright field (QRBF), a digital image processing method that restores out-of-focus BF images of unstained cells. Our procedure is based on deconvolution, using a point spread function modeled from theory. By comparing with reference images of bacteria observed in fluorescence, we show that QRBF faithfully recovers shape and enables quantifying the size of individual cells, even from a single input image. We applied QRBF in a high-throughput image cytometer to assess shape changes in Escherichia coli during hyperosmotic shock, finding size heterogeneity. We demonstrate that QRBF is also applicable to eukaryotic cells (yeast). Altogether, digital restoration emerges as a straightforward alternative to methods designed to generate contrast in BF imaging for quantitative analysis. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Cancer Risks Associated with External Radiation From Diagnostic Imaging Procedures
Linet, Martha S.; Slovis, Thomas L.; Miller, Donald L.; Kleinerman, Ruth; Lee, Choonsik; Rajaraman, Preetha; de Gonzalez, Amy Berrington
2012-01-01
The 600% increase in medical radiation exposure to the US population since 1980 has provided immense benefit, but potential future cancer risks to patients. Most of the increase is from diagnostic radiologic procedures. The objectives of this review are to summarize epidemiologic data on cancer risks associated with diagnostic procedures, describe how exposures from recent diagnostic procedures relate to radiation levels linked with cancer occurrence, and propose a framework of strategies to reduce radiation from diagnostic imaging in patients. We briefly review radiation dose definitions, mechanisms of radiation carcinogenesis, key epidemiologic studies of medical and other radiation sources and cancer risks, and dose trends from diagnostic procedures. We describe cancer risks from experimental studies, future projected risks from current imaging procedures, and the potential for higher risks in genetically susceptible populations. To reduce future projected cancers from diagnostic procedures, we advocate widespread use of evidence-based appropriateness criteria for decisions about imaging procedures, oversight of equipment to deliver reliably the minimum radiation required to attain clinical objectives, development of electronic lifetime records of imaging procedures for patients and their physicians, and commitment by medical training programs, professional societies, and radiation protection organizations to educate all stakeholders in reducing radiation from diagnostic procedures. PMID:22307864
Efficient airport detection using region-based fully convolutional neural networks
NASA Astrophysics Data System (ADS)
Xin, Peng; Xu, Yuelei; Zhang, Xulei; Ma, Shiping; Li, Shuai; Lv, Chao
2018-04-01
This paper presents a model for airport detection using region-based fully convolutional neural networks. To achieve fast detection with high accuracy, we shared the convolutional layers between the region proposal procedure and the airport detection procedure and used graphics processing units (GPUs) to speed up training and testing. Owing to the lack of labeled data, we transferred the convolutional layers of a ZF net pretrained on ImageNet to initialize the shared convolutional layers, and then retrained the model using an alternating optimization training strategy. The proposed model has been tested on an airport dataset consisting of 600 images. Experiments show that the proposed method can distinguish airports in our dataset from similar background scenes in almost real time with high accuracy, which is much better than traditional methods.
WE-E-12A-01: Medical Physics 1.0 to 2.0: MRI, Displays, Informatics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickens, D; Flynn, M; Peck, D
Medical Physics 2.0 is a bold vision for an existential transition of clinical imaging physics in the face of the new realities of value-based and evidence-based medicine, comparative effectiveness, and meaningful use. It speaks to how clinical imaging physics can expand beyond traditional insular models of inspection and acceptance testing, oriented toward compliance, towards team-based models of operational engagement, prospective definition and assurance of effective use, and retrospective evaluation of clinical performance. Organized into four sessions of the AAPM, this particular session focuses on three specific modalities as outlined below. MRI 2.0: This presentation will look into the future of clinical MR imaging and what the clinical medical physicist will need to be doing as the technology of MR imaging evolves. Many of the measurement techniques used today will need to be expanded to address the advent of higher-field imaging systems and dedicated imagers for specialty applications. Included will be the need to address quality assurance and testing metrics for multi-channel MR imagers and hybrid devices such as MR/PET systems. New pulse sequences and acquisition methods, increasing use of MR spectroscopy, and real-time guidance procedures will place the burden on the medical physicist to define and use new tools to properly evaluate these systems, but the clinical applications must be understood so that these tools are used correctly. Finally, new rules, clinical requirements, and regulations will mean that the medical physicist must actively work to keep her/his sites compliant and must work closely with physicians to ensure best performance of these systems. Informatics Display 1.0 to 2.0: Medical displays are an integral part of medical imaging operation. The DICOM and AAPM (TG18) efforts have led to clear definitions of performance requirements of monochrome medical displays that can be followed by medical physicists to ensure proper performance.
However, effective implementation of that oversight has been challenging due to the number and extent of medical displays in use at a facility. The advent of color displays and mobile displays has added additional challenges to the task of the medical physicist. This informatics display lecture first addresses the current display guidelines (the 1.0 paradigm) and further outlines the initiatives and prospects for color and mobile displays (the 2.0 paradigm). Informatics Management 1.0 to 2.0: Imaging informatics is part of every radiology practice today. Imaging informatics covers everything from the ordering of a study, through data acquisition and processing, display and archiving, and reporting of findings, to the billing for the services performed. The standardization of the processes used to manage the information, and methodologies to integrate these standards, are being developed and advanced continuously. These developments are done in an open forum, and imaging organizations and professionals all have a part in the process. In the Informatics Management presentation, the flow of information and the integration of the standards used in the processes will be reviewed. The role of radiologists and physicists in the process will be discussed. Current methods (the 1.0 paradigm) and evolving methods (the 2.0 paradigm) for validation of informatics systems function will also be discussed. Learning Objectives: Identify requirements for improving quality assurance and compliance tools for advanced and hybrid MRI systems. Identify the need for new quality assurance metrics and testing procedures for advanced systems. Identify new hardware systems and new procedures needed to evaluate MRI systems. Understand the components of current medical physics expectations for medical displays. Understand the role and prospect of medical physics for color and mobile display devices.
Understand different areas of imaging informatics and the methodology for developing informatics standards. Understand the current status of informatics standards and the role of physicists and radiologists in the process, and the current technology for validating the function of these systems.
Babu, Harish; Lagman, Carlito; Kim, Terrence T.; Grode, Marshall; Johnson, J. Patrick; Drazin, Doniel
2017-01-01
Background: Bertolotti's syndrome is characterized by enlargement of the transverse process at the most caudal lumbar vertebra, with a pseudoarticulation between the transverse process and the sacral ala. Here, we describe the use of intraoperative three-dimensional image-guided navigation in the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Case Descriptions: Two patients diagnosed with Bertolotti's syndrome who had undergone the above-mentioned procedure were identified. The patients were 17 and 38 years old and presented with severe, chronic low back pain that was resistant to conservative treatment. Imaging revealed lumbosacral transitional vertebrae at the level of L5-S1, consistent with Bertolotti's syndrome. Injections of the pseudoarticulations resulted in only temporary symptomatic relief. Thus, the patients subsequently underwent O-arm neuronavigational resection of the bony defects. Both patients experienced immediate pain resolution (documented in the postoperative notes) and remained asymptomatic 1 year later. Conclusion: Intraoperative three-dimensional imaging and navigation guidance facilitated the resection of anomalous transverse processes in two patients with Bertolotti's syndrome. Excellent outcomes were achieved in both patients. PMID:29026672
Assessment of Restoration Methods of X-Ray Images with Emphasis on Medical Photogrammetric Usage
NASA Astrophysics Data System (ADS)
Hosseinian, S.; Arefi, H.
2016-06-01
Nowadays, various medical X-ray imaging methods, such as digital radiography, computed tomography, and fluoroscopy, are used as important tools in diagnostic and operative processes, especially in computer- and robot-assisted surgeries. The procedures for extracting information from these images require appropriate deblurring and denoising of the pre- and intra-operative images in order to obtain more accurate information. This issue becomes more significant when the X-ray images are to be employed in photogrammetric processes for 3D reconstruction from multi-view X-ray images, since accurate data must be extracted from the images for 3D modelling, and the quality of the X-ray images directly affects the results of the algorithms. For restoration of X-ray images, it is essential to consider the nature and characteristics of this kind of image. X-ray images exhibit severe quantum noise due to the limited number of X-ray photons involved. Gaussian modelling assumptions are not appropriate for photon-limited images such as X-ray images, because of the nature of the signal-dependent quantum noise. These images are generally modelled by the Poisson distribution, the most common model for low-intensity imaging. In this paper, existing methods are evaluated. For this purpose, after demonstrating the properties of medical X-ray images, the more efficient and recommended methods for restoration of X-ray images are described and assessed. The approaches are then implemented on samples from different kinds of X-ray images. Considering the results, it is concluded that PURE-LET provides more effective and efficient denoising than the other methods examined in this research.
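A standard way to bridge Poisson statistics and Gaussian-based denoisers, consistent with the discussion above, is the Anscombe variance-stabilizing transform; a minimal sketch follows (the simple algebraic inverse shown here ignores the bias-correction terms of the exact unbiased inverse):

```python
import numpy as np

def anscombe(x):
    """Map Poisson-distributed counts to approximately unit-variance Gaussian data."""
    return 2.0 * np.sqrt(np.asarray(x, float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Algebraic inverse of the forward transform."""
    return (np.asarray(y, float) / 2.0) ** 2 - 3.0 / 8.0
```

Denoising then proceeds as: transform, apply a Gaussian denoiser, invert. The two functions above are exact algebraic inverses of each other.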
The retrodural space of Okada.
Murthy, Naveen S; Maus, Timothy P; Aprill, Charles
2011-06-01
The retrodural space of Okada is a potential space that can act as a conduit for the spread of inflammatory or infectious processes, connecting ipsilateral adjacent facet joints, contralateral adjacent facet joints, adjacent neural foramen, paraspinal musculature, and spinous process adventitial bursa (i.e., Baastrup disease). Awareness of these potential retrodural communications during diagnostic imaging interpretation and interventional spine injection procedures can play an important role in patient care and management.
Power, Jonathan D; Plitt, Mark; Kundu, Prantik; Bandettini, Peter A; Martin, Alex
2017-01-01
Head motion can be estimated at any point of fMRI image processing. Processing steps involving temporal interpolation (e.g., slice time correction or outlier replacement) often precede motion estimation in the literature. From first principles it can be anticipated that temporal interpolation will alter head motion in a scan. Here we demonstrate this effect and its consequences in five large fMRI datasets. Estimated head motion was reduced by 10-50% or more following temporal interpolation, and reductions were often visible to the naked eye. Such reductions make the data seem to be of improved quality. Such reductions also degrade the sensitivity of analyses aimed at detecting motion-related artifact and can cause a dataset with artifact to falsely appear artifact-free. These reduced motion estimates will be particularly problematic for studies needing estimates of motion in time, such as studies of dynamics. Based on these findings, it is sensible to obtain motion estimates prior to any image processing (regardless of subsequent processing steps and the actual timing of motion correction procedures, which need not be changed). We also find that outlier replacement procedures change signals almost entirely during times of motion and therefore have notable similarities to motion-targeting censoring strategies (which withhold or replace signals entirely during times of motion).
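The core effect reported above can be reproduced with a toy 1-D example. This is not the authors' pipeline; the motion trace and the 3-point moving average standing in for temporal interpolation are illustrative assumptions:

```python
def framewise_displacement(trace):
    """Sum of absolute frame-to-frame changes in a 1-D motion parameter (mm)."""
    return sum(abs(b - a) for a, b in zip(trace, trace[1:]))

def temporal_smooth(trace):
    """3-point moving average, standing in for interpolation-based resampling."""
    out = [trace[0]]
    for i in range(1, len(trace) - 1):
        out.append((trace[i - 1] + trace[i] + trace[i + 1]) / 3.0)
    out.append(trace[-1])
    return out

# Toy head-position trace with two abrupt movements (values in mm).
position = [0.0, 0.0, 0.4, 0.0, 0.0, 0.0, -0.3, -0.3, 0.0, 0.0]

fd_raw = framewise_displacement(position)
fd_smoothed = framewise_displacement(temporal_smooth(position))
print(fd_raw, round(fd_smoothed, 2))
```

Any temporally smoothing operation blunts the sharp excursions that motion estimates are built from, so displacement summed after smoothing is systematically lower, which is exactly why motion should be estimated before such steps.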
The View from Here: Emergence of Graphical Literacy
ERIC Educational Resources Information Center
Roberts, Kathryn L.; Brugar, Kristy A.
2017-01-01
The purpose of this study is to describe upper elementary students' understandings of four graphical devices that frequently occur in social studies texts: captioned images, maps, tables, and timelines. Using verbal protocol data collection procedures, we collected information on students' metacognitive processes when they were explicitly asked to…
Examples of Current and Future Uses of Neural-Net Image Processing for Aerospace Applications
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
Feed-forward artificial neural networks are very convenient for performing correlated interpolation of pairs of complex noisy data sets as well as detecting small changes in image data. Image-to-image, image-to-variable and image-to-index applications have been tested at Glenn. Early demonstration applications are summarized, including image-directed alignment of optics, tomography, flow-visualization control of wind-tunnel operations and structural-model-trained neural networks. A practical application is reviewed that employs neural-net detection of structural damage from interference fringe patterns. Both sensor-based and optics-only calibration procedures are available for this technique. These accomplishments have generated the knowledge necessary to suggest some other applications for NASA and Government programs. A tomography application is discussed to support Glenn's Icing Research tomography effort. The self-regularizing capability of a neural net is shown to predict the expected performance of the tomography geometry and to augment fast data processing. Other potential applications involve the quantum technologies. It may be possible to use a neural net as an image-to-image controller of optical tweezers used for diagnostics of isolated nanostructures. The image-to-image transformation properties also offer the potential for simulating quantum computing. Computer resources are detailed for implementing the black box calibration features of the neural nets.
GPU-Based Simulation of Ultrasound Imaging Artifacts for Cryosurgery Training.
Keelan, Robert; Shimada, Kenji; Rabin, Yoed
2017-02-01
This study presents an efficient computational technique for the simulation of ultrasound imaging artifacts associated with cryosurgery based on nonlinear ray tracing. This study is part of an ongoing effort to develop computerized training tools for cryosurgery, with prostate cryosurgery as a development model. The capability of performing virtual cryosurgical procedures on a variety of test cases is essential for effective surgical training. Simulated ultrasound imaging artifacts include reverberation and reflection of the cryoprobes in the unfrozen tissue, reflections caused by the freezing front, shadowing caused by the frozen region, and tissue property changes in repeated freeze-thaw procedures. The simulated artifacts appear to preserve the key features observed in a clinical setting. This study presents an example of how training may benefit from toggling between the undisturbed ultrasound image, the simulated temperature field, the simulated imaging artifacts, and an augmented hybrid presentation of the temperature field superimposed on the ultrasound image. The proposed method runs on a graphics processing unit at 100 frames per second, on a mid-range personal workstation, two orders of magnitude faster than a typical cryoprocedure. This performance is based on computation with C++ Accelerated Massive Parallelism and its interoperability with the DirectX rendering application programming interface.
NASA Astrophysics Data System (ADS)
Cilip, Christopher M.; Allaf, Mohamad E.; Fried, Nathaniel M.
2012-04-01
A noninvasive approach to vasectomy may eliminate male fear of complications related to surgery and increase its acceptance. Noninvasive laser thermal occlusion of the canine vas deferens has recently been reported. Optical coherence tomography (OCT) and high-frequency ultrasound (HFUS) are compared for monitoring laser thermal coagulation of the vas in an acute canine model. Bilateral noninvasive laser coagulation of the vas was performed in six dogs (n=12 vasa) using an ytterbium fiber laser with a wavelength of 1075 nm, incident power of 9.0 W, pulse duration of 500 ms, pulse rate of 1 Hz, and a 3-mm-diameter spot. Cryogen spray cooling was used to prevent skin burns during the procedure. An OCT system with an endoscopic probe and an HFUS system with a 20-MHz transducer were used to image the vas immediately before and after the procedure. Vasa were then excised and processed for gross and histologic analysis for comparison with the OCT and HFUS images. OCT provided high-resolution, superficial imaging of the compressed vas within the vas ring clamp, while HFUS provided deeper imaging of the vas held manually in the scrotal fold. Both OCT and HFUS are promising imaging modalities for real-time confirmation of vas occlusion during noninvasive laser vasectomy.
Rapid motion compensation for prostate biopsy using GPU.
Shen, Feimo; Narayanan, Ramkrishnan; Suri, Jasjit S
2008-01-01
Image-guided procedures have become routine in medicine. Due to the three-dimensional (3-D) structure of the target organs, two-dimensional (2-D) image acquisition is gradually being replaced by 3-D imaging. Specifically, in the diagnosis of prostate cancer, biopsy can be performed under 3-D transrectal ultrasound (TRUS) image guidance. Because prostatic cancers are multifocal, it is crucial to accurately guide biopsy needles towards planned targets. Further, the gland tends to move due to external physical disturbances, discomfort introduced by the procedure, or intrinsic peristalsis. As a result, the exact position of the gland must be rapidly updated so as to correspond with the originally acquired 3-D TRUS volume used for biopsy planning. A graphics processing unit (GPU) is used in this study to compute rapid updates, performing 3-D motion compensation via registration of the live 2-D image and the acquired 3-D TRUS volume. The parallel computational framework on the GPU is exploited, resulting in mean compute times of 0.46 seconds for updating the position of a live 2-D buffer image containing 91,000 pixels. A 2x sub-sampling resulted in a further improvement to 0.19 seconds. With an increase in GPU multiprocessors and sub-sampling, we observe that real-time motion compensation can be achieved.
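The motion-compensation update can be sketched, in drastically simplified form, as an exhaustive search over integer 2-D translations minimizing a sum-of-squared-differences cost. The actual system performs 2-D/3-D registration in parallel on a GPU; the images and shift here are illustrative:

```python
def circshift(img, dy, dx):
    """Circularly shift a 2-D list image down by dy and right by dx."""
    h, w = len(img), len(img[0])
    return [[img[(y - dy) % h][(x - dx) % w] for x in range(w)] for y in range(h)]

def ssd(a, b):
    """Sum of squared differences between two equally sized images."""
    return sum((pa - pb) ** 2 for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def register_translation(fixed, moving, max_shift=3):
    """Exhaustive integer-shift search; a GPU evaluates such costs in parallel."""
    best = None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cost = ssd(fixed, circshift(moving, dy, dx))
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

# A fixed "reference slice" with distinct pixel values, and a moved live image.
fixed = [[y * 8 + x for x in range(8)] for y in range(8)]
moving = circshift(fixed, 1, 2)  # target moved down 1 pixel, right 2 pixels
print(register_translation(fixed, moving))  # recovers the compensating shift
```

Each candidate shift is independent of the others, which is why this kind of search maps so naturally onto GPU parallelism.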
Coherent multiscale image processing using dual-tree quaternion wavelets.
Chan, Wai Lam; Choi, Hyeokho; Baraniuk, Richard G
2008-07-01
The dual-tree quaternion wavelet transform (QWT) is a new multiscale analysis tool for geometric image features. The QWT is a near shift-invariant tight frame representation whose coefficients sport a magnitude and three phases: two phases encode local image shifts while the third contains image texture information. The QWT is based on an alternative theory for the 2-D Hilbert transform and can be computed using a dual-tree filter bank with linear computational complexity. To demonstrate the properties of the QWT's coherent magnitude/phase representation, we develop an efficient and accurate procedure for estimating the local geometrical structure of an image. We also develop a new multiscale algorithm for estimating the disparity between a pair of images that is promising for image registration and flow estimation applications. The algorithm features multiscale phase unwrapping, linear complexity, and sub-pixel estimation accuracy.
Narayanan, Shrikanth
2009-01-01
We describe a method for unsupervised region segmentation of an image using its spatial frequency domain representation. The algorithm was designed to process large sequences of real-time magnetic resonance (MR) images containing the 2-D midsagittal view of a human vocal tract airway. The segmentation algorithm uses an anatomically informed object model, whose fit to the observed image data is hierarchically optimized using a gradient descent procedure. The goal of the algorithm is to automatically extract the time-varying vocal tract outline and the position of the articulators to facilitate the study of the shaping of the vocal tract during speech production. PMID:19244005
Dental digital radiographic imaging.
Mauriello, S M; Platin, E
2001-01-01
Radiographs are an important adjunct to providing oral health care for the total patient. Historically, radiographic images have been produced using film-based systems. However, in recent years, with the arrival of new technologies, many practitioners have begun to incorporate digital radiographic imaging into their practices. Since dental hygienists are primarily responsible for exposing and processing radiographs in the provision of dental hygiene care, it is imperative that they become knowledgeable on the use and application of digital imaging in patient care and record keeping. The purpose of this course is to provide a comprehensive overview of digital radiography in dentistry. Specific components addressed are technological features, diagnostic software, advantages and disadvantages, technique procedures, and legal implications.
NASA Astrophysics Data System (ADS)
Hafizt, M.; Manessa, M. D. M.; Adi, N. S.; Prayudha, B.
2017-12-01
Benthic habitat mapping using satellite data is a challenging task for practitioners and academics, as benthic objects are covered by a light-attenuating water column that obscures object discrimination. One common method to reduce this water-column effect is to use a depth-invariant index (DII) image. However, applying the correction in shallow coastal areas is challenging, because a dark object such as seagrass can have a very low pixel value, preventing its reliable identification and classification. This limitation can be overcome by applying the classification process separately to areas with different water depth levels. The water depth level can be extracted from satellite imagery using a Relative Water Depth Index (RWDI). This study proposes a new approach to improve mapping accuracy, particularly for dark benthic objects, by combining the DII of Lyzenga’s water-column correction method and the RWDI of Stumpf’s method. The research was conducted on Lintea Island, which has a high variation of benthic cover, using Sentinel-2A imagery. To assess the effectiveness of the proposed approach for benthic habitat mapping, two different classification procedures were implemented. The first is the commonly applied method in benthic habitat mapping, in which the DII image is used as input data for classifying the whole coastal area regardless of depth variation. The second is the proposed new approach, which begins by separating the study area into shallow and deep waters using the RWDI image. The shallow area was then classified using the sunglint-corrected image as input data, and the deep area was classified using the DII image. The final classification maps of the two areas were merged into a single benthic habitat map. A confusion matrix was then applied to evaluate the mapping accuracy of the final map.
The results show that the proposed mapping approach can map all benthic objects across all depth ranges and achieves better accuracy than the classification map produced using the DII alone.
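Lyzenga's depth-invariant index can be demonstrated with a two-band sketch (the attenuation coefficients, deep-water radiances, and bottom reflectance below are hypothetical): under the standard exponential water-column model, the depth term cancels, so the same bottom yields the same index at any depth.

```python
import math

# Simple radiative model: observed radiance L_i = Lsi_i + a_i * rB * exp(-2*k_i*z),
# with deep-water radiance Lsi, attenuation k, bottom reflectance factor rB, depth z.
def radiance(lsi, a, k, r_bottom, z):
    return lsi + a * r_bottom * math.exp(-2.0 * k * z)

def depth_invariant_index(l1, l2, lsi1, lsi2, k_ratio):
    """Lyzenga's DII for a band pair: logs of deep-water-corrected radiances."""
    return math.log(l1 - lsi1) - k_ratio * math.log(l2 - lsi2)

# Hypothetical parameters for a blue/green band pair over a sand bottom.
lsi1, lsi2 = 5.0, 3.0
a1, a2 = 40.0, 35.0
k1, k2 = 0.08, 0.12
k_ratio = k1 / k2

dii_shallow = depth_invariant_index(radiance(lsi1, a1, k1, 0.6, 2.0),
                                    radiance(lsi2, a2, k2, 0.6, 2.0),
                                    lsi1, lsi2, k_ratio)
dii_deep = depth_invariant_index(radiance(lsi1, a1, k1, 0.6, 10.0),
                                 radiance(lsi2, a2, k2, 0.6, 10.0),
                                 lsi1, lsi2, k_ratio)
print(round(dii_shallow, 6), round(dii_deep, 6))  # same bottom -> same index
```

Algebraically, the index reduces to ln(a1·rB) − (k1/k2)·ln(a2·rB), with no depth dependence, which is why pixels of the same bottom type cluster together after the correction.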
A physiology-based parametric imaging method for FDG-PET data
NASA Astrophysics Data System (ADS)
Scussolini, Mara; Garbarino, Sara; Sambuceti, Gianmario; Caviglia, Giacomo; Piana, Michele
2017-12-01
Parametric imaging is a compartmental approach that processes nuclear imaging data to estimate the spatial distribution of the kinetic parameters governing tracer flow. The present paper proposes a novel and efficient computational method for parametric imaging which is potentially applicable to several compartmental models of diverse complexity and which is effective in the determination of the parametric maps of all kinetic coefficients. We consider applications to [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) data and analyze the two-compartment catenary model describing standard FDG metabolization by a homogeneous tissue and the three-compartment non-catenary model representing renal physiology. We show uniqueness theorems for both models. The proposed imaging method starts from the reconstructed FDG-PET images of tracer concentration and preliminarily applies image processing algorithms for noise reduction and image segmentation. The optimization procedure solves pixel-wise the non-linear inverse problem of determining the kinetic parameters from dynamic concentration data through a regularized Gauss-Newton iterative algorithm. The reliability of the method is validated against synthetic data, for the two-compartment system, and experimental real data of murine models, for the renal three-compartment system.
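A minimal forward simulation of the irreversible two-tissue-compartment FDG model underlying such parametric maps can be sketched as follows. The rate constants and the decaying plasma input function are hypothetical, and the paper's regularized Gauss-Newton inversion is not shown:

```python
import math

def simulate_fdg(K1, k2, k3, t_end=60.0, dt=0.01):
    """Forward Euler integration of the irreversible two-compartment FDG model:
       dCf/dt = K1*Cp - (k2 + k3)*Cf,   dCm/dt = k3*Cf."""
    cf, cm, t = 0.0, 0.0, 0.0
    while t < t_end:
        cp = math.exp(-t / 20.0)  # hypothetical decaying plasma input function
        cf += dt * (K1 * cp - (k2 + k3) * cf)
        cm += dt * (k3 * cf)
        t += dt
    return cf, cm

cf, cm = simulate_fdg(K1=0.1, k2=0.15, k3=0.05)
print(round(cf, 4), round(cm, 4))
# Ki = K1*k3/(k2+k3) is the net uptake rate a parametric map would display.
print(round(0.1 * 0.05 / (0.15 + 0.05), 4))
```

Fitting inverts this forward model: the pixel-wise optimization adjusts (K1, k2, k3) until the simulated tissue curve Cf + Cm matches the measured dynamic concentration data.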
A portable high-definition electronic endoscope based on embedded system
NASA Astrophysics Data System (ADS)
Xu, Guang; Wang, Liqiang; Xu, Jin
2012-11-01
This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6-inch CMOS image sensor is used to acquire HD images of 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and a built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with more than 3 meters of working distance. The whole endoscope system can be powered by a lithium battery, with the advantages of miniature size, low cost and portability.
Digital-image processing and image analysis of glacier ice
Fitzpatrick, Joan J.
2013-01-01
This document provides a methodology for extracting grain statistics from 8-bit color and grayscale images of thin sections of glacier ice—a subset of physical properties measurements typically performed on ice cores. This type of analysis is most commonly used to characterize the evolution of ice-crystal size, shape, and intercrystalline spatial relations within a large body of ice sampled by deep ice-coring projects from which paleoclimate records will be developed. However, such information is equally useful for investigating the stress state and physical responses of ice to stresses within a glacier. The methods of analysis presented here go hand-in-hand with the analysis of ice fabrics (aggregate crystal orientations) and, when combined with fabric analysis, provide a powerful method for investigating the dynamic recrystallization and deformation behaviors of bodies of ice in motion. The procedures described in this document compose a step-by-step handbook for a specific image acquisition and data reduction system built in support of U.S. Geological Survey ice analysis projects, but the general methodology can be used with any combination of image processing and analysis software. The specific approaches in this document use the FoveaPro 4 plug-in toolset to Adobe Photoshop CS5 Extended but it can be carried out equally well, though somewhat less conveniently, with software such as the image processing toolbox in MATLAB, Image-Pro Plus, or ImageJ.
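As the document notes, the general methodology works with any image processing software. A bare-bones version of the grain-statistics step (connected-component labeling of a thresholded image) might look like this, with the tiny binary "section" being a made-up stand-in for a real thin-section image:

```python
def grain_areas(binary):
    """Label 4-connected grains in a 0/1 image and return their pixel areas."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

# Tiny thresholded thin-section stand-in: two "grains" of 3 and 4 pixels.
section = [[1, 1, 0, 0, 0],
           [1, 0, 0, 1, 1],
           [0, 0, 0, 1, 1]]
areas = grain_areas(section)
print(sorted(areas), sum(areas) / len(areas))  # per-grain areas and mean size
```

Grain-size distributions, and from them the evolution of crystal size down a core, are summary statistics computed over exactly these per-grain measurements.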
A robust real-time abnormal region detection framework from capsule endoscopy images
NASA Astrophysics Data System (ADS)
Cheng, Yanfen; Liu, Xu; Li, Huiping
2009-02-01
In this paper we present a novel method to detect abnormal regions in capsule endoscopy images. Wireless Capsule Endoscopy (WCE) is a recent technology in which a capsule with an embedded camera is swallowed by the patient to visualize the gastrointestinal tract. One challenge is that a single diagnostic procedure produces over 50,000 images, making the physicians' reviewing process expensive. Reviewing involves identifying images containing abnormal regions (tumor, bleeding, etc.) within this large image sequence. In this paper we construct a novel framework for robust, real-time abnormal region detection from large numbers of capsule endoscopy images. The detected potential abnormal regions can be labeled automatically for further physician review, thereby reducing the overall reviewing effort. The framework has the following advantages: 1) Trainable. Users can define and label any type of abnormal region they want to find; abnormal regions such as tumor and bleeding can be pre-defined and labeled using the graphical user interface tool we provide. 2) Efficient. Given the large volume of image data, detection speed is very important; our system detects very efficiently at different scales thanks to the integral image features we use. 3) Robust. After feature selection, we use a cascade of classifiers to further improve detection accuracy.
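The integral image features mentioned above rest on the summed-area table, which makes any rectangular region sum an O(1) operation regardless of region size; a minimal sketch (the sample image is arbitrary):

```python
def integral_image(img):
    """Summed-area table with an extra zero row/column (exclusive convention)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1][x0:x1] in O(1) using four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 6 + 7 + 10 + 11 = 34
```

Because every rectangle feature costs the same four lookups, a detector can evaluate features at many scales over 50,000-image sequences without rescanning pixels, which is what makes the cascade fast.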
Management of natural resources through automatic cartographic inventory
NASA Technical Reports Server (NTRS)
Rey, P.; Gourinard, Y.; Cambou, F. (Principal Investigator)
1973-01-01
The author has identified the following significant results. Significant results of the ARNICA program from August 1972 - January 1973 have been: (1) establishment of image to object correspondence codes for all types of soil use and forestry in northern Spain; (2) establishment of a transfer procedure between qualitative (remote identification and remote interpretation) and quantitative (numerization, storage, automatic statistical cartography) use of images; (3) organization of microdensitometric data processing and automatic cartography software; and (4) development of a system for measuring reflectance simultaneous with imagery.
NASA Technical Reports Server (NTRS)
Gilbert, Percy; Jones, Robert E.; Kramarchuk, Ihor; Williams, Wallace D.; Pouch, John J.
1987-01-01
Using a recently developed technology called thermal-wave microscopy, NASA Lewis Research Center has developed a computer controlled submicron thermal-wave microscope for the purpose of investigating III-V compound semiconductor devices and materials. This paper describes the system's design and configuration and discusses the hardware and software capabilities. Knowledge of the Concurrent 3200 series computers is needed for a complete understanding of the material presented. However, concepts and procedures are of general interest.
Imaging immune response of skin mast cells in vivo with two-photon microscopy
NASA Astrophysics Data System (ADS)
Li, Chunqiang; Pastila, Riikka K.; Lin, Charles P.
2012-02-01
Intravital multiphoton microscopy has provided insightful information on the dynamic processes of immune cells in vivo. However, the use of exogenous labeling agents limits its applications. To date there has been no method for functional imaging of mast cells, a population of innate tissue-resident immune cells. Mast cells are widely recognized as the effector cells in allergy. Recently their roles as immunoregulatory cells in certain innate and adaptive immune responses have been actively investigated. Here we report in vivo imaging of mouse skin mast cells with two-photon microscopy using endogenous tryptophan as the fluorophore. We studied the following processes. 1) Mast cell degranulation, the first step of the mast cell activation process, in which granules are released into peripheral tissue to trigger downstream reactions. 2) Mast cell reconstitution, a procedure commonly used to study mast cell function by comparing data from wild-type mice, mast cell-deficient mice, and mast cell-deficient mice reconstituted with bone marrow-derived mast cells (BMMCs). Imaging BMMC engraftment in tissue reveals mast cell development and the efficiency of BMMC reconstitution. We observed the reconstitution process for 6 weeks in the ear skin of mast cell-deficient Kit W-sh/W-sh mice by two-photon imaging. Our finding is the first instance of imaging mast cells in vivo with endogenous contrast.
Automatic brain MR image denoising based on texture feature-based artificial neural networks.
Chang, Yu-Ning; Chang, Herng-Hua
2015-01-01
Noise is one of the main sources of quality deterioration not only for visual inspection but also in computerized processing in brain magnetic resonance (MR) image analysis such as tissue classification, segmentation and registration. Accordingly, noise removal in brain MR images is important for a wide variety of subsequent processing applications. However, most existing denoising algorithms require laborious tuning of parameters that are often sensitive to specific image features and textures. Automation of these parameters through artificial intelligence techniques would be highly beneficial. In the present study, an artificial neural network associated with image texture feature analysis is proposed to establish a predictable parameter model and automate the denoising procedure. In the proposed approach, a total of 83 image attributes were extracted based on four categories: 1) Basic image statistics. 2) Gray-level co-occurrence matrix (GLCM). 3) Gray-level run-length matrix (GLRLM). 4) Tamura texture features. To obtain the ranking of discrimination in these texture features, a paired-samples t-test was applied to each individual image feature computed in every image. Subsequently, the sequential forward selection (SFS) method was used to select the best texture features according to the ranking of discrimination. The selected optimal features were further incorporated into a back propagation neural network to establish a predictable parameter model. A wide variety of MR images with various scenarios were adopted to evaluate the performance of the proposed framework. Experimental results indicated that this new automation system accurately predicted the bilateral filtering parameters and effectively removed the noise in a number of MR images. Compared to the manually tuned filtering process, our approach not only produced better denoised results but also saved significant processing time.
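One of the texture categories above, the gray-level co-occurrence matrix, can be computed in a few lines. This sketch uses a made-up two-level image and derives the Haralick contrast feature; it is not the paper's full 83-attribute pipeline:

```python
from collections import Counter

def glcm(img, dy=0, dx=1):
    """Normalized gray-level co-occurrence probabilities for one pixel offset."""
    h, w = len(img), len(img[0])
    counts = Counter()
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                counts[(img[y][x], img[ny][nx])] += 1
    total = sum(counts.values())
    return {pair: c / total for pair, c in counts.items()}

def contrast(p):
    """Haralick contrast: large when neighboring gray levels differ strongly."""
    return sum(prob * (i - j) ** 2 for (i, j), prob in p.items())

img = [[0, 0, 1],
       [0, 1, 1]]
p = glcm(img)
print(contrast(p))  # only the two (0,1) transitions contribute: 2/4 * 1 = 0.5
```

Features like this, computed per image, are what the t-test ranking and sequential forward selection operate on before the network predicts the filter parameters.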
NASA Astrophysics Data System (ADS)
Vho, Alice; Bistacchi, Andrea
2015-04-01
A quantitative analysis of fault-rock distribution is of paramount importance for studies of fault zone architecture, fault and earthquake mechanics, and fluid circulation along faults at depth. Here we present a semi-automatic workflow for fault-rock mapping on a Digital Outcrop Model (DOM). This workflow has been developed on a real case study: the strike-slip Gole Larghe Fault Zone (GLFZ), a fault zone exhumed from ca. 10 km depth and hosted in granitoid rocks of the Adamello batholith (Italian Southern Alps). Individual seismogenic slip surfaces generally show green cataclasites (cemented by the precipitation of epidote and K-feldspar from hydrothermal fluids) and more or less well-preserved pseudotachylytes (black when well preserved, greenish to white when altered). First, a digital model of the outcrop is reconstructed with photogrammetric techniques, using a large number of high-resolution digital photographs processed with the VisualSFM software. By using high-resolution photographs, the DOM can achieve a much higher resolution than LIDAR surveys, up to 0.2 mm/pixel. Image processing is then performed to map the fault-rock distribution with the ImageJ-Fiji package. Green cataclasites and epidote/K-feldspar veins can be separated from the host rock (tonalite) quite easily using spectral analysis. In particular, band ratio and principal component analysis have been tested successfully. The mapping of black pseudotachylyte veins is trickier because the differences between the pseudotachylyte and biotite spectral signatures are not appreciable. For this reason we have tested different morphological processing tools aimed at identifying (and subtracting) the tiny biotite grains. We propose a solution based on binary images involving a combination of size and circularity thresholds. Comparing the results with manually segmented images, we noticed that major problems occur only when pseudotachylyte veins are very thin and discontinuous.
After testing and refining the image analysis processing on some typical images, we recorded a macro with ImageJ-Fiji that processes all the images for a given DOM. As a result, the three different types of rock can be semi-automatically mapped on large DOMs using a simple and efficient procedure. This enables quantitative analyses of fault-rock distribution and thickness, fault trace roughness/curvature and length, fault zone architecture, and alteration halos due to hydrothermal fluid-rock interaction. To improve our workflow, additional or different morphological operators (e.g. perimeter/area ratio) could be integrated into our procedure to yield better resolution of small and thin pseudotachylyte veins.
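The size-and-circularity thresholding used to isolate biotite grains can be sketched on binary images. The toy image, the thresholds, and the edge-count perimeter estimate below are illustrative assumptions, not the authors' exact ImageJ-Fiji macro:

```python
import math

def blobs(binary):
    """Label 4-connected blobs; return (area, perimeter) per blob, where the
    perimeter counts pixel edges bordering background or the image edge."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                stack, area, perim = [(y, x)], 0, 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if not (0 <= ny < h and 0 <= nx < w) or not binary[ny][nx]:
                            perim += 1
                        elif not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                out.append((area, perim))
    return out

def keep(binary, min_area=4, min_circ=0.6):
    """Size + circularity thresholds (circularity = 4*pi*A / P**2)."""
    return [(a, p) for a, p in blobs(binary)
            if a >= min_area and 4 * math.pi * a / p ** 2 >= min_circ]

# A compact 3x3 "vein patch" next to a thin 1x5 "biotite grain" stand-in.
img = [[1, 1, 1, 0, 0, 0, 0, 0, 0],
       [1, 1, 1, 0, 1, 1, 1, 1, 1],
       [1, 1, 1, 0, 0, 0, 0, 0, 0]]
print(keep(img))  # the compact blob survives; the elongated one is filtered out
```

The compact blob has circularity 4π·9/12² ≈ 0.79 and passes, while the thin line scores 4π·5/12² ≈ 0.44 and is rejected, mirroring how elongated grains can be subtracted before vein mapping.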
Development of fluorescence based handheld imaging devices for food safety inspection
NASA Astrophysics Data System (ADS)
Lee, Hoyoung; Kim, Moon S.; Chao, Kuanglin; Lefcourt, Alan M.; Chan, Diane E.
2013-05-01
For sanitation inspection in food processing environments, fluorescence imaging can be a very useful method because many organic materials exhibit unique fluorescence emissions when excited by UV or violet radiation. Although some fluorescence-based automated inspection instrumentation has been developed for food products, there remains a need for devices that can assist on-site inspectors performing visual sanitation inspection of the surfaces of food processing/handling equipment. This paper reports the development of an inexpensive handheld imaging device designed to visualize fluorescence emissions and intended to help detect the presence of fecal contaminants, organic residues, and bacterial biofilms at multispectral fluorescence emission bands. The device consists of a miniature camera, multispectral (interference) filters, and high-power LED illumination. With WiFi communication, live inspection images from the device can be displayed on smartphones or tablet devices. This imaging device could be a useful tool for assessing the effectiveness of sanitation procedures and for helping processors minimize food safety risks or identify potential problem areas. This paper presents the design and development, including evaluation and optimization, of the hardware components of the imaging devices.
Geometric error characterization and error budgets. [thematic mapper
NASA Technical Reports Server (NTRS)
Beyer, E.
1982-01-01
Procedures used in characterizing geometric error sources for a spaceborne imaging system are described using the LANDSAT-D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross-track and along-track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% level.
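Independent geometric error contributions of the kind tabulated in such budgets are commonly combined as a root-sum-square; a minimal sketch with hypothetical contributions (not the actual thematic mapper numbers):

```python
import math

# Hedged illustration: hypothetical independent cross-track error sources, in
# pixels, combined as a root-sum-square to form the registration error budget.
error_sources = {
    "sensor_pointing": 0.20,
    "spacecraft_ephemeris": 0.15,
    "ground_processing": 0.10,
}

rss = math.sqrt(sum(e ** 2 for e in error_sources.values()))
print(round(rss, 3))  # combined budget, to be compared against the 90% spec
```

The RSS combination is appropriate only when the sources are statistically independent; correlated errors must instead be summed before squaring.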
Deblurring adaptive optics retinal images using deep convolutional neural networks.
Fei, Xiao; Zhao, Junlei; Zhao, Haoxin; Yun, Dai; Zhang, Yudong
2017-12-01
Adaptive optics (AO) can be used to compensate for ocular aberrations to achieve near diffraction-limited high-resolution retinal images. However, many factors, such as the limited aberration measurement and correction accuracy of AO, intraocular scatter, and imaging noise, degrade the quality of retinal images. Image post-processing is an indispensable and economical way to make up for the limitations of the AO retinal imaging procedure. In this paper, we propose a deep learning method to restore degraded retinal images for the first time. The method directly learns an end-to-end mapping between blurred and restored retinal images. The mapping is represented as a deep convolutional neural network that is trained to output high-quality images directly from blurry inputs without any preprocessing. The network was validated on synthetically generated retinal images as well as real AO retinal images. Assessment of the restored retinal images demonstrated that image quality was significantly improved.
NASA Astrophysics Data System (ADS)
Ma, Xu; Li, Yanqiu; Guo, Xuejia; Dong, Lisong
2012-03-01
Optical proximity correction (OPC) and phase-shifting mask (PSM) are the most widely used resolution enhancement techniques (RET) in the semiconductor industry. Recently, a set of OPC and PSM optimization algorithms has been developed to solve the inverse lithography problem; however, these are designed only for nominal imaging parameters, without sufficient attention to the process variations due to aberrations, defocus, and dose variation. The effects of process variations in practical optical lithography systems become more pronounced as the critical dimension (CD) continuously shrinks. On the other hand, lithography systems with larger NA (NA > 0.6) are now extensively used, rendering scalar imaging models inadequate to describe the vector nature of the electromagnetic field in current optical lithography systems. To tackle these problems, this paper focuses on developing gradient-based OPC and PSM optimization algorithms that are robust to process variations under a vector imaging model. To achieve this goal, an integrative and analytic vector imaging model is applied to formulate the optimization problem, where the effects of process variations are explicitly incorporated in the optimization framework. The steepest descent algorithm is used to optimize the mask iteratively. To improve the efficiency of the proposed algorithms, a set of algorithm acceleration techniques (AAT) is exploited during the optimization procedure.
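The abstract does not give the algorithmic details, but the core of gradient-based mask optimization can be sketched as projected steepest descent on a heavily simplified scalar imaging model: a Gaussian blur stands in for the paper's vector imaging model, and the kernel size, step size, and iteration count below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    # normalized 2D Gaussian, a toy stand-in for the optical system
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def convolve2d(img, k):
    # naive 'same' correlation with zero padding (kernel is symmetric,
    # so correlation equals convolution here)
    ks = k.shape[0] // 2
    padded = np.pad(img, ks)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def optimize_mask(target, n_iter=50, step=1.0):
    """Projected steepest descent on 0.5 * ||G*m - target||^2, mask in [0, 1]."""
    k = gaussian_kernel()
    m = convolve2d(target, k)             # gray-tone initial guess
    for _ in range(n_iter):
        residual = convolve2d(m, k) - target
        grad = convolve2d(residual, k)    # blur is symmetric: adjoint = same blur
        m = np.clip(m - step * grad, 0.0, 1.0)
    return m
```

A real inverse-lithography solver would replace the blur with the vector aerial-image model, add a resist threshold, and fold the process-variation terms into the cost; this sketch only shows the descent-with-projection structure.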
Place of modern imaging in brachytherapy planning.
Hellebust, T P
2018-06-01
Imaging has probably been the most important driving force in the development of brachytherapy treatments over the last 20 years. Due to the implementation of three-dimensional imaging, brachytherapy is nowadays a highly accurate and reliable treatment option for many cancer patients. To optimize the dose distribution in brachytherapy, the anatomy and the applicator(s) or sources must be correctly localised in the images. For computed tomography (CT) the latter criterion is easily fulfilled for most brachytherapy sites. However, for many sites, such as cervix and prostate, CT is not optimal for delineation since soft tissue is not adequately visualized and the tumor is not well discriminated. For cervical cancer, treatment planning based on magnetic resonance imaging (MRI) is recommended. Some centres also use MRI for postimplant dosimetry of permanent prostate seed implants and high dose rate prostate brachytherapy. Moreover, in so-called focal brachytherapy, where only part of the prostate is treated, multiparametric MRI is an excellent tool that can assist in defining the target volume. Applicator or source localization is challenging using MRI, but tools exist to assist this process. Also, geometrical distortions should be corrected or accounted for. Transrectal ultrasound is considered the gold standard for high dose rate prostate brachytherapy, and transrectal ultrasound-based brachytherapy procedures offer a method for interactive treatment planning. Reconstruction of the needles is sometimes challenging, especially identifying the needle tip. The accuracy of the reconstruction can be improved by measuring the residual needle length and by using a bi-planar transducer. Over the last decade, several groups worldwide have explored the use of transrectal and transabdominal ultrasound for cervical cancer brachytherapy. Since ultrasonography is widely available, offers fast image acquisition, and is a rather inexpensive modality, such development is interesting.
However, more work is needed to establish this as an adequate alternative for all phases of the treatment planning process. Studies using positron emission tomography imaging in combination with brachytherapy treatment planning are limited; however, the development of new tracers may offer new treatment approaches for brachytherapy in the future. Combining several image modalities will be the optimal solution in many situations, either during the same session or for different fractions. When several image modalities are combined, so-called image registration procedures are used, and it is important to understand the principles and limitations of such procedures. Copyright © 2018 Société française de radiothérapie oncologique (SFRO). Published by Elsevier Masson SAS. All rights reserved.
Platform for intraoperative analysis of video streams
NASA Astrophysics Data System (ADS)
Clements, Logan; Galloway, Robert L., Jr.
2004-05-01
Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near RT rates, both independently of the IIGS system and while running it.
NASA Astrophysics Data System (ADS)
Tesařová, M.; Zikmund, T.; Kaucká, M.; Adameyko, I.; Jaroš, J.; Paloušek, D.; Škaroupka, D.; Kaiser, J.
2016-03-01
Imaging of the increasingly complex cartilage in vertebrate embryos is one of the key tasks of developmental biology. It is especially important for studying shape-organizing processes during initial skeletal formation and growth. Advanced imaging techniques that reflect biological needs give a powerful impulse to push the boundaries of biological visualization. Recently, techniques for contrasting tissues and organs have improved considerably, extending traditional 2D imaging approaches to 3D. X-ray micro computed tomography (μCT), which allows 3D imaging of biological objects including their internal structures with a resolution in the micrometer range, in combination with contrasting techniques, seems to be the most suitable approach for non-destructive imaging of embryonic developing cartilage. Although there are many software-based ways to visualize 3D data sets, having a real solid model of the studied object may give novel opportunities to fully understand the shape-organizing processes in the developing body. In this feasibility study we demonstrate the full procedure of creating a real 3D object of a mouse embryo nasal capsule, i.e., the staining, the μCT scanning combined with advanced data processing, and the 3D printing.
Smartphone snapshot mapping of skin chromophores under triple-wavelength laser illumination.
Spigulis, Janis; Oshina, Ilze; Berzina, Anna; Bykov, Alexander
2017-09-01
Chromophore distribution maps are useful tools for skin malformation severity assessment and for monitoring of skin recovery after burns, surgeries, and other interventions. The chromophore maps can be obtained by processing several spectral images of skin, e.g., captured by hyperspectral or multispectral cameras over seconds or even minutes. To avoid motion artifacts and simplify the procedure, a single-snapshot technique for mapping melanin, oxyhemoglobin, and deoxyhemoglobin of in-vivo skin by a smartphone under simultaneous three-wavelength (448–532–659 nm) laser illumination is proposed and examined. Three monochromatic spectral images related to the illumination wavelengths were extracted from the smartphone camera RGB image data set with respect to crosstalk between the RGB detection bands. The spectral images were further processed according to Beer's law in a three-chromophore approximation. Photon absorption path lengths in skin at the exploited wavelengths were estimated by means of Monte Carlo simulations. The technique was validated clinically on three kinds of skin lesions: nevi, hemangiomas, and seborrheic keratosis. The design of the developed add-on laser illumination system, image-processing details, and the results of clinical measurements are presented and discussed.
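The three-chromophore Beer's-law inversion reduces, per pixel, to solving a 3×3 linear system: absorbance at each wavelength is a path-length-weighted sum of chromophore contributions. A minimal sketch follows; the extinction coefficients and path lengths are placeholder values for illustration, not the paper's numbers (a real map would use published absorption spectra and the Monte Carlo path-length estimates).

```python
import numpy as np

# Hypothetical extinction coefficients (rows: 448, 532, 659 nm;
# columns: melanin, oxyhemoglobin, deoxyhemoglobin) -- placeholders only.
E = np.array([[1.9, 0.9, 0.7],
              [1.1, 1.3, 1.2],
              [0.6, 0.1, 0.5]])
path = np.array([0.4, 0.6, 1.1])  # assumed photon path lengths (mm) per wavelength

def chromophore_map(reflectance):
    """Per-pixel three-chromophore Beer's-law inversion.

    reflectance: (3, H, W) stack of spectral images, values in (0, 1].
    Returns a (3, H, W) stack of relative chromophore concentrations.
    """
    A = -np.log(reflectance)            # absorbance at each wavelength
    M = E * path[:, None]               # fold path lengths into the system matrix
    conc = np.linalg.solve(M, A.reshape(3, -1))   # solve M @ c = A per pixel
    return conc.reshape(reflectance.shape)
```

Because the same 3×3 matrix applies to every pixel, one `solve` call inverts the whole image stack at once.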
MR imaging guidance for minimally invasive procedures
NASA Astrophysics Data System (ADS)
Wong, Terence Z.; Kettenbach, Joachim; Silverman, Stuart G.; Schwartz, Richard B.; Morrison, Paul R.; Kacher, Daniel F.; Jolesz, Ferenc A.
1998-04-01
Image guidance is one of the major challenges common to all minimally invasive procedures, including biopsy, thermal ablation, endoscopy, and laparoscopy. It is essential for (1) identifying the target lesion, (2) planning the minimally invasive approach, and (3) monitoring the therapy as it progresses. MRI is an ideal imaging modality for this purpose, providing high soft tissue contrast and multiplanar imaging capability with no ionizing radiation. An interventional/surgical MRI suite has been developed at Brigham and Women's Hospital which provides multiplanar imaging guidance during surgery, biopsy, and thermal ablation procedures. The 0.5T MRI system (General Electric Signa SP) features open vertical access, allowing intraoperative imaging to be performed. An integrated navigational system permits near real-time control of imaging planes and provides interactive guidance for positioning various diagnostic and therapeutic probes. MR imaging can also be used to monitor cryotherapy as well as high-temperature thermal ablation procedures using RF, laser, microwave, or focused ultrasound. Design features of the interventional MRI system are discussed, and techniques are described for interactive image acquisition and tracking of interventional instruments. Applications for interactive and near-real-time imaging are presented, as well as examples of specific procedures performed using MRI guidance.
In-Flight Flow Visualization Using Infrared Thermography
NASA Technical Reports Server (NTRS)
vanDam, C. P.; Shiu, H. J.; Banks, D. W.
1997-01-01
The feasibility of remote infrared thermography of aircraft surfaces during flight, to visualize the extent of laminar flow on a target aircraft, has been examined. In general, it was determined that such thermograms can be taken successfully using an existing airplane/thermography system (NASA Dryden's F-18 with infrared imaging pod) and that the transition pattern, and thus the extent of laminar flow, can be extracted from these thermograms. Depending on the in-flight distance between the F-18 and the target aircraft, the thermograms can have a spatial resolution of as little as 0.1 inches. The field of view provided by the present remote system is superior to that of prior stationary infrared thermography systems mounted in the fuselage or vertical tail of a subject aircraft. An additional advantage of the present experimental technique is that the target aircraft requires no or minimal modifications. An image processing procedure was developed which improves the signal-to-noise ratio of the thermograms. Problems encountered during the analog recording of the thermograms (banding of video images) made it impossible to evaluate the adequacy of the present imaging system and image processing procedure for detecting transition on untreated metal surfaces. The high reflectance, high thermal diffusivity, and low emittance of metal surfaces tend to degrade the images to an extent that it is very difficult to extract transition information from them. The application of a thin (0.005 inch) self-adhesive insulating film to the surface is shown to solve this problem satisfactorily. In addition to the problem of infrared-based transition detection on untreated metal surfaces, future flight tests will also concentrate on the visualization of other flow phenomena such as flow separation and reattachment.
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
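Of the four algorithms compared, phase correlation is the most compact to illustrate: the normalized cross-power spectrum of two images has an inverse FFT that peaks at their relative translation. A minimal integer-pixel sketch follows (the clinical system would add subpixel refinement, rotation handling, and the DRR/x-ray preprocessing the paper describes).

```python
import numpy as np

def phase_correlation(ref, moved):
    """Estimate the integer (dy, dx) shift such that rolling `ref`
    by that amount best matches `moved`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real         # impulse at the translation
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped peak indices to signed shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))
```

Because only the spectral phase is retained, the estimate is largely insensitive to global brightness and contrast differences between the reference and captured images.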
Teoh, Raymond; Johnson, Raleigh F; Nishino, Thomas K; Ethridge, Richard T
2007-01-01
The deep inferior epigastric perforator flap procedure has become a popular alternative for women who require breast reconstruction. One of the difficulties with this procedure is identifying perforator arteries large enough to ensure that the harvested tissue is well vascularized. Current techniques involve imaging the perforator arteries with computed tomography (CT) to produce a grid mapping the locations of the perforator arteries relative to the umbilicus. The aims were to compare the time it takes to produce a map of the perforators using either two-dimensional (2D) or three-dimensional (3D) CT, and to determine whether there is a benefit in using a 3D model. Patient CT abdomen and pelvis scans were acquired from a GE 64-slice scanner. CT image processing was performed with the GE 3D Advantage Workstation v4.2 software. Maps of the perforators were generated as both 2D and 3D representations. Perforators within a region 5 cm rostral and 7 cm caudal to the umbilicus were measured, and the times to perform these measurements using both 2D and 3D images were recorded with a stopwatch. Although the 3D method took longer than the 2D method (mean [+/- SD] time 1:51+/-0:35 min versus 1:08+/-0:16 min per perforator artery, respectively), producing a 3D image provides much more information than the 2D images alone. Additionally, an actual-sized 3D image can be printed out, removing the need to make measurements and produce a grid. Although it took less time to create a grid of the perforators using 2D axial CT scans, the 3D reconstruction of the abdomen allows plastic surgeons to better visualize the patient's anatomy and has definite clinical utility.
NASA Astrophysics Data System (ADS)
Shin, Kwangsoo; Choi, Jin Woo; Ko, Giho; Baik, Seungmin; Kim, Dokyoon; Park, Ok Kyu; Lee, Kyoungbun; Cho, Hye Rim; Han, Sang Ihn; Lee, Soo Hong; Lee, Dong Jun; Lee, Nohyun; Kim, Hyo-Cheol; Hyeon, Taeghwan
2017-07-01
Tissue adhesives have emerged as an alternative to sutures and staples for wound closure and reconnection of injured tissues after surgery or trauma. Owing to their convenience and effectiveness, these adhesives have received growing attention particularly in minimally invasive procedures. For safe and accurate applications, tissue adhesives should be detectable via clinical imaging modalities and be highly biocompatible for intracorporeal procedures. However, few adhesives meet all these requirements. Herein, we show that biocompatible tantalum oxide/silica core/shell nanoparticles (TSNs) exhibit not only high contrast effects for real-time imaging but also strong adhesive properties. Furthermore, the biocompatible TSNs cause much less cellular toxicity and less inflammation than a clinically used, imageable tissue adhesive (that is, a mixture of cyanoacrylate and Lipiodol). Because of their multifunctional imaging and adhesive property, the TSNs are successfully applied as a hemostatic adhesive for minimally invasive procedures and as an immobilized marker for image-guided procedures.
Image-guided interventional procedures in the dog and cat.
Vignoli, Massimo; Saunders, Jimmy H
2011-03-01
Medical imaging is essential for the diagnostic workup of many soft tissue and bone lesions in dogs and cats, but imaging modalities do not always allow the clinician to differentiate inflammatory or infectious conditions from neoplastic disorders. This review describes interventional procedures in dogs and cats for collection of samples for cytological or histopathological examinations under imaging guidance. It describes the indications and procedures for imaging-guided sampling, including ultrasound (US), computed tomography (CT), magnetic resonance imaging and fluoroscopy. US and CT are currently the modalities of choice in interventional imaging. Copyright © 2009 Elsevier Ltd. All rights reserved.
Video stereo-laparoscopy system
NASA Astrophysics Data System (ADS)
Xiang, Yang; Hu, Jiasheng; Jiang, Huilin
2006-01-01
Minimally invasive surgery (MIS) has contributed significantly to patient care by reducing the morbidity associated with more invasive procedures. MIS procedures have become standard treatment for gallbladder disease and some abdominal malignancies. The imaging system plays a major role in the evolving field of MIS. The image must have good resolution and large magnification and, in particular, must provide a depth cue while remaining flicker-free and of suitable brightness. A video stereo-laparoscopy system can meet these demands of surgeons. This paper introduces a 3D video laparoscope with the following characteristics: field frequency 100 Hz, depth space 150 mm, resolution 10 lp/mm. The working principle of the system is introduced in detail, and the optical system and time-division stereo-display system are described briefly. The system has a focusing imaging lens that forms an image on the CCD chip; the optical signal is converted into a video signal and, through A/D conversion in the image processing system, into a digital signal; the polarized images are then displayed on the monitor screen through liquid crystal shutters. Wearing polarized glasses, surgeons can view a flicker-free 3D image of the tissue or organ. The 3D video laparoscope system has been applied in the MIS field and praised by surgeons. Compared with a traditional 2D video laparoscopy system, it offers advantages such as reduced surgery time, fewer surgical complications, and shorter training time.
Automatic detection of blurred images in UAV image sets
NASA Astrophysics Data System (ADS)
Sieberth, Till; Wackrow, Rene; Chandler, Jim H.
2016-12-01
Unmanned aerial vehicles (UAVs) have become an interesting and active research topic in photogrammetry. Current research is based on images acquired by a UAV, which have a high ground resolution and good spectral and radiometric resolution due to the low flight altitude combined with a high-resolution camera. UAV image flights are also cost-effective and have become attractive for many applications, including change detection in small-scale areas. One of the main problems preventing full automation of data processing of UAV imagery is the degradation effect of blur caused by camera movement during image acquisition. This can be caused by the normal flight movement of the UAV as well as strong winds, turbulence, or sudden operator inputs. This blur disturbs the visual analysis and interpretation of the data, causes errors, and can degrade the accuracy of automatic photogrammetric processing algorithms. The detection and removal of these images is currently achieved manually, which is both time consuming and prone to error, particularly for large image sets. To increase the quality of data processing, an automated process is necessary, which must be both reliable and quick. This paper describes the development of an automatic filtering process based upon the quantification of blur in an image. Images with known blur are processed digitally to determine a quantifiable measure of image blur. The algorithm is required to process UAV images quickly and reliably, to relieve the operator from detecting blurred images manually. The newly developed method makes it possible to detect blur caused by linear camera displacement and is based on human detection of blur. Humans detect blurred images best by comparing them to other images, in order to establish whether an image is blurred or not. The developed algorithm simulates this procedure by creating an image for comparison using image processing.
Creating a comparable image internally makes the method independent of additional images. However, the calculated blur value, named SIEDS (saturation image edge difference standard-deviation), does not on its own provide an absolute number by which to judge whether an image is blurred. To achieve a reliable judgement of image sharpness, the SIEDS value has to be compared to other SIEDS values from the same dataset. The speed and reliability of the method were tested using a range of different UAV datasets. Two datasets are presented in this paper to demonstrate the effectiveness of the algorithm. The algorithm proves to be fast and the returned values agree with visual inspection, making the algorithm applicable for UAV datasets. Additionally, a close-range dataset was processed to determine whether the method is also useful for close-range applications. The results show that the method is also reliable for close-range images, which significantly extends the field of application for the algorithm.
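The abstract does not define SIEDS precisely, so the following is only a simplified stand-in for the idea it describes: create a comparison image internally by re-blurring, then quantify how much the edge content changes. A sharp image loses many edges when blurred; an already-blurred image changes little. As in the paper, the score is meaningful only relative to other images from the same dataset.

```python
import numpy as np

def edge_strength(img):
    # gradient magnitude from simple finite differences
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    return gx + gy

def mean_filter(img, k=3):
    # k x k box blur via shifted sums (edge-replicated padding)
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def blur_score(img, k=3):
    """Lower score = blurrier image (relative to others in the dataset)."""
    blurred = mean_filter(img, k)
    return np.std(edge_strength(img) - edge_strength(blurred))
```

A filtering pass would compute `blur_score` for every image in a flight's dataset and flag the low-scoring outliers for removal.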
Pattern Recognition in Optical Remote Sensing Data Processing
NASA Astrophysics Data System (ADS)
Kozoderov, Vladimir; Kondranin, Timofei; Dmitriev, Egor; Kamentsev, Vladimir
Computational procedures for retrieving land surface biophysical parameters imply that modeling techniques are available for describing the outgoing radiation, together with monitoring techniques for processing remote sensing data using radiances registered between the relevant optical sensors and the land surface objects called “patterns”. Pattern recognition techniques are a valuable approach to the processing of remote sensing data for images of the land surface - atmosphere system. Many simplified codes for the direct and inverse problems of atmospheric optics are considered applicable for processing imagery of low and middle spatial resolution. Authors who are not concerned with the accuracy of the final information products utilize these standard procedures. The emerging necessity of processing data of high spectral and spatial resolution given by imaging spectrometers puts forward newly defined pattern recognition techniques. The proposed tools, which combine different types of classifiers with parameter retrieval procedures for the forested environment, are maintained to have much wider application than image feature and object shape extraction, which relates to photometry and geometry in pixel-level reflectance representation of the forested land cover. The pixel fraction and reflectance of “end-members” (sunlit forest canopy, sunlit background, and shaded background for a particular view and solar illumination angle) are only a part of the listed techniques. It is assumed that each pixel views a collection of individual forest trees, and the pixel-level reflectance can thus be computed as a linear mixture of sunlit tree tops, sunlit background (or understory), and shadows.
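The linear mixture model described above can be sketched as a least-squares unmixing problem: each pixel's band-wise reflectance is modeled as endmember spectra weighted by their pixel fractions. The endmember spectra below are illustrative placeholders, and the sketch omits the non-negativity and sum-to-one constraints a real fraction estimate would impose.

```python
import numpy as np

# Hypothetical endmember spectra (rows: spectral bands; columns:
# sunlit canopy, sunlit background, shadow) -- illustrative values only.
S = np.array([[0.05, 0.10, 0.01],
              [0.45, 0.25, 0.03],
              [0.30, 0.35, 0.02],
              [0.60, 0.40, 0.05]])

def unmix(pixel):
    """Unconstrained least-squares estimate of endmember fractions.

    pixel: reflectance vector, one value per band.
    Solves S @ f ~= pixel for the fraction vector f.
    """
    f, *_ = np.linalg.lstsq(S, pixel, rcond=None)
    return f
```

With more bands than endmembers the system is overdetermined, which is why a least-squares fit rather than a direct solve is the natural formulation.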
Instead of these photometry and geometry constraints, improved models are developed for the functional description of outgoing spectral radiation, in which forest canopy parameters such as the vegetation biomass density for particular forest species and ages are embedded. This permits us to calculate the relationships between the registered radiances and the biomass densities (the direct problem of atmospheric optics). The next stage is to find solutions of this problem as cross-sections of the related curves in the multi-dimensional space given by the parameters of these models (the inverse problem). The typical solutions may not be mathematically unique, and a computational procedure is undertaken for their regularization by finding minima of the functional called “the energy for the particular class of forests”. The relevant optimization procedures serve to identify the likelihood between any registered set of data and the theoretical distributions, as well as to regularize the solution by employing derivative functions characterizing the neighborhood of the pixels for the related classes. As a result, we have elaborated a rigorous approach to optimizing spectral channels, based on searching for their most informative sets by combining the channels and finding correlations between them. A successive addition method is used with the calculation of the total probability error. The step-up method consists of fixing the level of the probability error that is not improved by further adding channels in the calculation scheme of the pattern recognition. The best distinguishable classes are recognized at the first stage of this procedure. The analytical technique called “cross-validation” is used at its second stage. This procedure consists of removing some data before classifier training begins, employing, for instance, the well-known “leave-one-out” strategy.
This strategy serves to characterize accuracy in addition to the standard confusion matrix between the modeling approach and the available ground-based observations, since the employed validation map may not be perfect or may need renewal. Such cross-validation, carried out for ensembles of airborne data from an imaging spectrometer produced in Russia, enables us to conclude that the forest classes on a test area are separated with high accuracy. The proposed approach is recommended for determining the needed set of ground-based measurements during field campaigns for the validation of remote sensing data processing and for the retrieval of such forest parameters as Net Primary Productivity, with an accuracy ensured by the computational procedures described here.
The ASTRODEEP Frontier Fields catalogues. I. Multiwavelength photometry of Abell-2744 and MACS-J0416
NASA Astrophysics Data System (ADS)
Merlin, E.; Amorín, R.; Castellano, M.; Fontana, A.; Buitrago, F.; Dunlop, J. S.; Elbaz, D.; Boucaud, A.; Bourne, N.; Boutsia, K.; Brammer, G.; Bruce, V. A.; Capak, P.; Cappelluti, N.; Ciesla, L.; Comastri, A.; Cullen, F.; Derriere, S.; Faber, S. M.; Ferguson, H. C.; Giallongo, E.; Grazian, A.; Lotz, J.; Michałowski, M. J.; Paris, D.; Pentericci, L.; Pilo, S.; Santini, P.; Schreiber, C.; Shu, X.; Wang, T.
2016-05-01
Context. The Frontier Fields survey is a pioneering observational program aimed at collecting photometric data, both from space (Hubble Space Telescope and Spitzer Space Telescope) and from ground-based facilities (VLT Hawk-I), for six deep fields pointing at clusters of galaxies and six nearby deep parallel fields, in a wide range of passbands. The analysis of these data is a natural outcome of the Astrodeep project, an EU collaboration aimed at developing methods and tools for extragalactic photometry and creating valuable public photometric catalogues. Aims: We produce multiwavelength photometric catalogues (from B to 4.5 μm) for the first two of the Frontier Fields, Abell-2744 and MACS-J0416 (plus their parallel fields). Methods: To detect faint sources even in the central regions of the clusters, we develop a robust and repeatable procedure that uses the public codes Galapagos and Galfit to model and remove most of the light contribution from both the brightest cluster members, and the intra-cluster light. We perform the detection on the processed HST H160 image to obtain a pure H-selected sample, which is the primary catalogue that we publish. We also add a sample of sources which are undetected in the H160 image but appear on a stacked infrared image. Photometry on the other HST bands is obtained using SExtractor, again on processed images after the procedure for foreground light removal. Photometry on the Hawk-I and IRAC bands is obtained using our PSF-matching deconfusion code t-phot. A similar procedure, but without the need for the foreground light removal, is adopted for the Parallel fields. Results: The procedure of foreground light subtraction allows for the detection and the photometric measurements of ~2500 sources per field. We deliver and release complete photometric H-detected catalogues, with the addition of the complementary sample of infrared-detected sources. 
All objects have multiwavelength coverage including B to H HST bands, plus K-band from Hawk-I, and 3.6-4.5 μm from Spitzer. A full and detailed treatment of photometric errors is included. We perform basic sanity checks on the reliability of our results. Conclusions: The multiwavelength photometric catalogues are publicly available and are ready to be used for scientific purposes. Our procedure allows for the detection of outshone objects near the bright galaxies, which, coupled with the magnification effect of the clusters, can reveal extremely faint high-redshift sources. A full analysis of photometric redshifts is presented in Paper II. The catalogues, together with the final processed images for all HST bands (as well as some diagnostic data and images), are publicly available and can be downloaded from the Astrodeep website at http://www.astrodeep.eu/frontier-fields/ and from a dedicated CDS webpage (http://astrodeep.u-strasbg.fr/ff/index.html). The catalogues are also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/590/A31
DOE Office of Scientific and Technical Information (OSTI.GOV)
Perrine, Kenneth A.; Hopkins, Derek F.; Lamarche, Brian L.
2005-09-01
Biologists and computer engineers at Pacific Northwest National Laboratory have specified, designed, and implemented a hardware/software system for performing real-time, multispectral image processing on a confocal microscope. This solution is intended to extend the capabilities of the microscope, enabling scientists to conduct advanced experiments on cell signaling and other kinds of protein interactions. FRET (fluorescence resonance energy transfer) techniques are used to locate and monitor protein activity. In FRET, it is critical that spectral images be precisely aligned with each other despite disturbances in the physical imaging path caused by imperfections in lenses and cameras, and expansion and contraction of materials due to temperature changes. The central importance of this work is therefore automatic image registration. This runs in a framework that guarantees real-time performance (processing pairs of 1024x1024, 8-bit images at 15 frames per second) and enables the addition of other types of advanced image processing algorithms such as image feature characterization. The supporting system architecture consists of a Visual Basic front-end containing a series of on-screen interfaces for controlling various aspects of the microscope and a script engine for automation. One of the controls is an ActiveX component written in C++ for handling the control and transfer of images. This component interfaces with a pair of LVDS image capture boards and a PCI board containing a 6-million gate Xilinx Virtex-II FPGA. Several types of image processing are performed on the FPGA in a pipelined fashion, including the image registration. The FPGA offloads work that would otherwise need to be performed by the main CPU and has a guaranteed real-time throughput. Image registration is performed in the FPGA by applying a cubic warp on one image to precisely align it with the other image. 
Before each experiment, an automated calibration procedure is run in order to set up the cubic warp. During image acquisitions, the cubic warp is evaluated by way of forward differencing. Unwanted pixelation artifacts are minimized by bilinear sampling. The resulting system is state-of-the-art for biological imaging. Precisely registered images enable the reliable use of FRET techniques. In addition, real-time image processing performance allows computed images to be fed back and displayed to scientists immediately, and the pipelined nature of the FPGA allows additional image processing algorithms to be incorporated into the system without slowing throughput.
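The forward-differencing evaluation mentioned above is what makes a cubic warp cheap in a pipelined FPGA: for a unit pixel step, each new sample of a cubic polynomial costs only three additions. The following is a minimal, hypothetical Python model of that idea (the coefficients and scanline loop are illustrative, not the PNNL implementation):

```python
def cubic_forward_diff(a0, a1, a2, a3, n):
    """Evaluate p(x) = a3*x^3 + a2*x^2 + a1*x + a0 at x = 0..n-1
    using forward differences with unit step: after initialization,
    each sample needs only three additions (no multiplies)."""
    p  = a0                 # p(0)
    d1 = a3 + a2 + a1       # first difference  p(1) - p(0)
    d2 = 6*a3 + 2*a2        # second difference at x = 0
    d3 = 6*a3               # third difference (constant for a cubic)
    out = []
    for _ in range(n):
        out.append(p)
        p  += d1
        d1 += d2
        d2 += d3
    return out
```

A hardware pipeline would hold `p`, `d1`, `d2` in registers and update them once per pixel clock, which is why this scheme suits real-time warping.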
NASA Astrophysics Data System (ADS)
Xue, Xinwei; Cheryauka, Arvi; Tubbs, David
2006-03-01
CT imaging in interventional and minimally-invasive surgery requires high-performance computing solutions that meet operating room demands, healthcare business requirements, and the constraints of a mobile C-arm system. The computational requirements of clinical procedures using CT-like data are increasing rapidly, mainly due to the need for rapid access to medical imagery during critical surgical procedures. The highly parallel nature of Radon transform and CT algorithms enables embedded computing solutions utilizing a parallel processing architecture to realize a significant gain of computational intensity with comparable hardware and program coding/testing expenses. In this paper, using a sample 2D and 3D CT problem, we explore the programming challenges and the potential benefits of embedded computing using commodity hardware components. We have analyzed the accuracy and performance results obtained on three computational platforms: a single CPU, a single GPU, and an FPGA-based solution. We have shown that hardware-accelerated CT image reconstruction can achieve levels of noise and clarity of feature similar to program execution on a CPU, while gaining a performance increase of one or more orders of magnitude. 3D cone-beam or helical CT reconstruction and a variety of volumetric image processing applications will benefit from similar accelerations.
Schwein, Adeline; Chinnadurai, Ponraj; Shah, Dipan J; Lumsden, Alan B; Bechara, Carlos F; Bismuth, Jean
2017-05-01
Three-dimensional image fusion of preoperative computed tomography (CT) angiography with fluoroscopy using intraoperative noncontrast cone-beam CT (CBCT) has been shown to improve endovascular procedures by reducing procedure length, radiation dose, and contrast media volume. However, patients with a contraindication to CT angiography (renal insufficiency, iodinated contrast allergy) may not benefit from this image fusion technique. The primary objective of this study was to evaluate the feasibility of magnetic resonance angiography (MRA) and fluoroscopy image fusion using noncontrast CBCT as a guidance tool during complex endovascular aortic procedures, especially in patients with renal insufficiency. All endovascular aortic procedures done under MRA image fusion guidance at a single center were retrospectively reviewed. The patients had moderate to severe renal insufficiency and underwent diagnostic contrast-enhanced magnetic resonance imaging after gadolinium or ferumoxytol injection. Relevant vascular landmarks electronically marked in MRA images were overlaid on real-time two-dimensional fluoroscopy for image guidance, after image fusion with noncontrast intraoperative CBCT. Technical success, time for image registration, procedure time, fluoroscopy time, number of digital subtraction angiography (DSA) acquisitions before stent deployment or vessel catheterization, and renal function before and after the procedure were recorded. The image fusion accuracy was qualitatively evaluated on a binary scale by three physicians after review of image data showing virtual landmarks from MRA on fluoroscopy. Between November 2012 and March 2016, 10 patients underwent endovascular procedures for aortoiliac aneurysmal disease or aortic dissection using MRA image fusion guidance. All procedures were technically successful. A paired t-test analysis showed no difference between preprocedural and postoperative renal function (P = .6). 
The mean time required for MRA-CBCT image fusion was 4:09 ± 01:31 min:sec. Total fluoroscopy time was 20.1 ± 6.9 minutes. Five of 10 patients (50%) underwent stent graft deployment without any predeployment DSA acquisition. Three of six vessels (50%) were cannulated under image fusion guidance without any precannulation DSA runs, and the remaining vessels were cannulated after one planning DSA acquisition. Qualitative evaluation showed 14 of 22 virtual landmarks (63.6%) from MRA overlaid on fluoroscopy were completely accurate, without the need for adjustment. Five of eight incorrect virtual landmarks (iliac and visceral arteries) resulted from vessel deformation caused by endovascular devices. Ferumoxytol or gadolinium-enhanced MRA imaging and image fusion with fluoroscopy using noncontrast CBCT is feasible and allows patients with renal insufficiency to benefit from optimal guidance during complex endovascular aortic procedures, while preserving their residual renal function. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Corner-point criterion for assessing nonlinear image processing imagers
NASA Astrophysics Data System (ADS)
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize this processing, which has adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to those of actual scene images, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the correct perception of the CP direction of one pixel minority value among the majority value of a 2×2 pixels block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CPs resolved on the target, and the CP transformation and its inverse transform, which make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and a visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with a real-signature test target, and conventional methods for the more linear part (displaying). 
The application to color imaging is proposed, with a discussion about the choice of the working color space depending on the type of image enhancement processing used.
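The 2×2 minority-pixel idea behind the CP transformation can be illustrated with a short sketch: scan every 2×2 block of a binary image, and where exactly one pixel differs from the other three, record its corner as the CP direction. The block scan and corner numbering below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def corner_points(img):
    """Find 2x2 blocks of a binary image holding exactly one
    minority pixel; return (row, col, corner) triples.
    Corner index 0..3 = TL, TR, BL, BR (hypothetical numbering)."""
    h, w = img.shape
    cps = []
    for r in range(h - 1):
        for c in range(w - 1):
            block = img[r:r+2, c:c+2]
            s = int(block.sum())
            if s in (1, 3):  # exactly one minority pixel in the block
                minority = 1 if s == 1 else 0
                corner = int(np.flatnonzero(block.ravel() == minority)[0])
                cps.append((r, c, corner))
    return cps
```

Comparing the triples extracted from a degraded image against those of the ground-truth image is then the basis of the localized PCR measurement.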
Ristivojević, Petar; Trifković, Jelena; Vovk, Irena; Milojković-Opsenica, Dušanka
2017-01-01
With the introduction of phytochemical fingerprint analysis as a method of screening complex natural products for the presence of the most bioactive compounds, the use of chemometric classification methods, the application of powerful scanning, image capturing and processing devices and algorithms, and advances in the development of novel stationary phases and various separation modalities, high-performance thin-layer chromatography (HPTLC) fingerprinting is becoming an attractive and fruitful field of separation science. Multivariate image analysis is crucial for proper data acquisition. In the current study, different image processing procedures were studied and compared in detail on the example of HPTLC chromatograms of plant resins. Obtained variables, such as the gray intensities of pixels along the solvent front, peak areas, and mean peak values, were used as input data and compared to obtain the best classification models. Important steps in image analysis (baseline removal, denoising, target peak alignment, and normalization) are pointed out. A numerical data set based on the mean values of selected bands and the intensities of pixels along the solvent front proved to be the most convenient for planar-chromatographic profiling, although it requires at least basic knowledge of image processing methodology, and could be proposed for further investigation in HPTLC fingerprinting. Copyright © 2016 Elsevier B.V. All rights reserved.
Digital image processing techniques for the analysis of fuel sprays global pattern
NASA Astrophysics Data System (ADS)
Zakaria, Rami; Bryanston-Cross, Peter; Timmerman, Brenda
2017-12-01
We studied the fuel atomization process of two fuel injectors to be fitted in a new small rotary engine design. The aim was to improve the efficiency of the engine by optimizing the fuel injection system. Fuel sprays were visualised by an optical diagnostic system. Images of fuel sprays were produced under various testing conditions, by changing the line pressure, nozzle size, injection frequency, etc. The atomisers were a high-frequency microfluidic dispensing system and a standard low flow-rate fuel injector. A series of image processing procedures were developed in order to acquire information from the laser-scattering images. This paper presents the macroscopic characterisation of Jet fuel (JP8) sprays. We observed the droplet density distribution, tip velocity, and spray-cone angle against line-pressure and nozzle-size. The analysis was performed for low line-pressure (up to 10 bar) and short injection period (1-2 ms). Local velocity components were measured by applying particle image velocimetry (PIV) on double-exposure images. The discharge velocity was lower in the micro dispensing nozzle sprays and the tip penetration slowed down at higher rates compared to the gasoline injector. The PIV test confirmed that the gasoline injector produced sprays with higher velocity elements at the centre and the tip regions.
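The PIV measurement on double-exposure images rests on locating the cross-correlation peak between interrogation windows taken from the two exposures; the peak offset gives the local displacement, which divided by the inter-exposure time yields velocity. A minimal FFT-based sketch (a generic PIV formulation, not the authors' software) is:

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the integer-pixel displacement between two
    interrogation windows via FFT cross-correlation.
    Returns (row_shift, col_shift) of win_b relative to win_a."""
    a = win_a - win_a.mean()          # remove the DC component
    b = win_b - win_b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap circular indices to signed shifts
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

A full PIV analysis would tile the image into many such windows and add sub-pixel peak interpolation, but the correlation step above is the core of the method.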
Underwater image enhancement based on the dark channel prior and attenuation compensation
NASA Astrophysics Data System (ADS)
Guo, Qingwen; Xue, Lulu; Tang, Ruichun; Guo, Lingrui
2017-10-01
To address the two main problems of underwater imaging, fog effects and color cast, an Improved Segmentation Dark Channel Prior (ISDCP) defogging method is proposed to correct the fog effects caused by the physical properties of water. Due to the strong refraction of light in the process of underwater imaging, fog effects lead to image blurring, while color cast is closely related to the different degrees of attenuation that light of different wavelengths undergoes while traveling through water. The proposed method integrates ISDCP and quantitative histogram stretching techniques into the image enhancement procedure. Firstly, a threshold value is set during the refinement process of the transmission maps to identify the original mismatching and to further conduct the differentiated defogging process. Secondly, a method of judging the propagation distance of light is adopted to obtain the degree of energy attenuation during propagation underwater. Finally, the image histogram is stretched quantitatively in the Red, Green, and Blue channels respectively, according to the degree of attenuation in each color channel. The proposed ISDCP method can reduce computational complexity and improve efficiency in terms of defogging effect to meet real-time requirements. Qualitative and quantitative comparison for several different underwater scenes reveals that the proposed method can significantly improve visibility compared with previous methods.
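The dark channel prior underlying ISDCP computes, for each pixel, the minimum over the color channels followed by a local minimum filter, and estimates a transmission map from it. The following is a minimal sketch of the standard prior only; the patch size and omega are conventional values, and the segmentation-based refinement that distinguishes ISDCP is not reproduced:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over the color axis, then a patch x patch
    local minimum filter (naive loop version for clarity)."""
    mins = img.min(axis=2)
    h, w = mins.shape
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    dark = np.empty_like(mins)
    for r in range(h):
        for c in range(w):
            dark[r, c] = padded[r:r+patch, c:c+patch].min()
    return dark

def transmission(img, atmos, omega=0.95, patch=15):
    """Coarse transmission estimate: t = 1 - omega * dark(I / A),
    where A is the estimated atmospheric (background) light."""
    return 1.0 - omega * dark_channel(img / atmos, patch)
```

Dehazing then recovers the scene radiance as `J = (I - A) / max(t, t0) + A`; ISDCP's contribution is in how the transmission map is segmented and refined before that step.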
A novel parallel architecture for local histogram equalization
NASA Astrophysics Data System (ADS)
Ohannessian, Mesrob I.; Choueiter, Ghinwa F.; Diab, Hassan
2005-07-01
Local histogram equalization is an image enhancement algorithm that has found wide application in the pre-processing stage of areas such as computer vision, pattern recognition and medical imaging. The computationally intensive nature of the procedure, however, is a main limitation when real time interactive applications are in question. This work explores the possibility of performing parallel local histogram equalization, using an array of special purpose elementary processors, through an HDL implementation that targets FPGA or ASIC platforms. A novel parallelization scheme is presented and the corresponding architecture is derived. The algorithm is reduced to pixel-level operations. Processing elements are assigned image blocks, to maintain a reasonable performance-cost ratio. To further simplify both processor and memory organizations, a bit-serial access scheme is used. A brief performance assessment is provided to illustrate and quantify the merit of the approach.
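A serial reference model of block-partitioned histogram equalization, the computation the paper parallelizes across processing elements, might look like the following (the block size is illustrative; the HDL design additionally uses bit-serial memory access, which is not modeled here):

```python
import numpy as np

def block_hist_eq(img, block=64):
    """Equalize each block x block tile of an 8-bit image
    independently. Each tile corresponds to the work assigned
    to one processing element in a parallel realization."""
    out = np.empty_like(img)
    h, w = img.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = img[r:r+block, c:c+block]
            hist = np.bincount(tile.ravel(), minlength=256)
            cdf = hist.cumsum()
            # Map intensities through the tile's normalized CDF
            lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
            out[r:r+block, c:c+block] = lut[tile]
    return out
```

Because every tile depends only on its own histogram, the tiles are trivially independent, which is exactly the property a processor-array implementation exploits.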
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ierardi, Anna Maria; Duka, Ejona; Radaelli, Alessandro
Aim: To evaluate the feasibility of image fusion (IF) of pre-procedural arterial-phase CT angiography or MR angiography with intra-procedural fluoroscopy for road-mapping in endovascular treatment of aorto-iliac steno-occlusive disease. Materials and Methods: Between September and November 2014, we prospectively evaluated 5 patients with chronic aorto-iliac steno-occlusive disease, who underwent endovascular treatment in the angiography suite. Fusion image road-mapping was performed using angiographic-phase CT images or MR images acquired before and intra-procedural unenhanced cone-beam CT. Radiation dose of the procedure, volume of intra-procedural iodinated contrast medium, fluoroscopy time, and overall procedural time were recorded. Reasons for potential fusion imaging inaccuracies were also evaluated. Results: Image co-registration and fusion guidance were feasible in all procedures. Mean radiation dose of the procedure was 60.21 Gy·cm² (range 55.02–63.75 Gy·cm²). The mean total procedure time was 32.2 min (range 27–38 min). The mean fluoroscopy time was 12 min and 3 s. The mean procedural iodinated contrast material dose was 24 mL (range 20–40 mL). Conclusions: IF gives interventional radiologists the opportunity to use new technologies in order to improve outcomes with a significant reduction of contrast media administration.
Real-time MRI guidance of cardiac interventions.
Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A
2017-10-01
Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. Level of Evidence: 1. Technical Efficacy: Stage 5. J. Magn. Reson. Imaging 2017;46:935-950. © 2017 International Society for Magnetic Resonance in Medicine.
Kutbay, Uğurhan; Hardalaç, Fırat; Akbulut, Mehmet; Akaslan, Ünsal; Serhatlıoğlu, Selami
2016-06-01
This study aims to investigate adjustable distant fuzzy c-means segmentation on carotid Doppler images, as well as quaternion-based convolution filters and saliency mapping procedures. We developed imaging software that simplifies the measurement of carotid artery intima-media thickness (IMT) on saliency mapping images. Additionally, specialists evaluated the present images and compared them with saliency mapping images. In the present research, we conducted imaging studies of 25 carotid Doppler images obtained by the Department of Cardiology at Fırat University. After implementing fuzzy c-means segmentation and quaternion-based convolution on all Doppler images, we obtained a model that can be analyzed easily by doctors using a bottom-up saliency model. These methods were applied to 25 carotid Doppler images and then interpreted by specialists. In the present study, we used color-filtering methods to obtain carotid color images. Saliency mapping was performed on the obtained images, and the carotid artery IMT was detected and interpreted on the images from both methods; the raw images are shown in the Results. These results were also evaluated using the mean square error (MSE) against the raw IMT images, and the method giving the best performance was Quaternion-Based Saliency Mapping (QBSM), with MSEs of 0.0014 and 0.000191 mm² for artery lumen diameters and plaque diameters in carotid arteries, respectively. We found that computer-based image processing methods applied to carotid Doppler images could aid doctors in their decision-making process. We developed software that could ease the process of measuring carotid IMT for cardiologists and help them to evaluate their findings.
Apramian, Tavis; Watling, Christopher; Lingard, Lorelei; Cristancho, Sayra
2015-10-01
Surgical research struggles to describe the relationship between procedural variations in daily practice and traditional conceptualizations of evidence. The problem has resisted simple solutions, in part, because we lack a solid understanding of how surgeons conceptualize and interact around variation, adaptation, innovation, and evidence in daily practice. This grounded theory study aims to describe the social processes that influence how procedural variation is conceptualized in the surgical workplace. Using the constructivist grounded theory methodology, semi-structured interviews with surgeons (n = 19) from four North American academic centres were collected and analysed. Purposive sampling targeted surgeons with experiential knowledge of the role of variations in the workplace. Theoretical sampling was conducted until a theoretical framework representing key processes was conceptually saturated. Surgical procedural variation was influenced by three key processes. Seeking improvement was shaped by having unsolved procedural problems, adapting in the moment, and pursuing personal opportunities. Orienting self and others to variations consisted of sharing stories of variations with others, taking stock of how a variation promoted personal interests, and placing trust in peers. Acting under cultural and material conditions was characterized by being wary, positioning personal image, showing the logic of a variation, and making use of academic resources to do so. Our findings include social processes that influence how adaptations are incubated in surgical practice and mature into innovations. This study offers a language for conceptualizing the sociocultural influences on procedural variations in surgery. Interventions to change how surgeons interact with variations on a day-to-day basis should consider these social processes in their design. © 2015 John Wiley & Sons, Ltd.
GIFTS SM EDU Radiometric and Spectral Calibrations
NASA Technical Reports Server (NTRS)
Tian, J.; Reisse, R. A.; Johnson, D. G.; Gazarik, J. J.
2007-01-01
The Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS) Sensor Module (SM) Engineering Demonstration Unit (EDU) is a high resolution spectral imager designed to measure infrared (IR) radiance using a Fourier transform spectrometer (FTS). The GIFTS instrument gathers measurements across the long-wave IR (LWIR), short/mid-wave IR (SMWIR), and visible spectral bands. The raw interferogram measurements are radiometrically and spectrally calibrated to produce radiance spectra, which are further processed to obtain atmospheric profiles via retrieval algorithms. This paper describes the processing algorithms involved in the calibration. The calibration procedures can be subdivided into three categories: the pre-calibration stage, the calibration stage, and finally, the post-calibration stage. Detailed derivations for each stage are presented in this paper.
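Two-point (hot/cold blackbody) radiometric calibration is the standard approach for FTS instruments of this kind: views of two reference blackbodies at known temperatures fix the instrument gain and offset, which are then applied to Earth-view spectra. The sketch below is a generic formulation of that idea, not the GIFTS pipeline itself; the names (raw spectra `spec_*`, Planck radiances `b_*`) are assumptions:

```python
import numpy as np

def two_point_calibrate(spec, spec_hot, spec_cold, b_hot, b_cold):
    """Generic two-point radiometric calibration of FTS spectra:
    the hot and cold blackbody views define a per-channel gain
    and offset mapping raw spectra to radiance."""
    gain = (b_hot - b_cold) / (spec_hot - spec_cold)
    return b_cold + (spec - spec_cold) * gain
```

In a real pipeline the raw quantities are complex spectra and the calibration also removes instrument self-emission phase, but the linear two-point mapping per spectral channel is the core of the radiometric step.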
A Design Verification of the Parallel Pipelined Image Processings
NASA Astrophysics Data System (ADS)
Wasaki, Katsumi; Harai, Toshiaki
2008-11-01
This paper presents a case study of the design and verification of a parallel, pipelined image processing unit based on an extended Petri net called a Logical Colored Petri net (LCPN). This net is suitable for Flexible Manufacturing System (FMS) modeling and for discussion of structural properties. LCPN is another family of colored place/transition net (CPN) with the addition of the following features: integer value assignment of marks, representation of firing conditions as formulae based on mark values, and coupling of output procedures with transition firing. Therefore, to study the behavior of a system modeled with this net, we provide a means of searching the reachability tree for markings.
SU-E-J-15: Automatically Detect Patient Treatment Position and Orientation in KV Portal Images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, J; Yang, D
2015-06-15
Purpose: In the course of radiation therapy, the complex information processing workflow can result in potential errors, such as incorrect or inaccurate patient setups. With automatic image checks and patient identification, such errors could be effectively reduced. For this purpose, we developed a simple and rapid image processing method to automatically detect the patient position and orientation in 2D portal images, so as to allow automatic checks of positions and orientations for patients' daily RT treatments. Methods: Based on the principle of portal image formation, a set of whole-body DRR images was reconstructed from multiple whole-body CT volume datasets, and fused together to be used as the matching template. To identify the patient setup position and orientation shown in a 2D portal image, the portal image was preprocessed (contrast enhancement, down-sampling and couch table detection), then matched to the template image to identify the laterality (left or right), position, orientation and treatment site. Results: Five days' worth of clinically qualified portal images were gathered randomly and processed by the automatic detection and matching method without any additional information. The detection results were visually checked by physicists. 182 of the 200 kV portal images were correctly detected, a rate of 91%. Conclusion: The proposed method can detect patient setup and orientation quickly and automatically. It only requires the image intensity information in kV portal images. This method can be useful in the framework of Electronic Chart Check (ECCK) to reduce the potential errors in the workflow of radiation therapy and so improve patient safety. In addition, the auto-detection results, such as the patient treatment site, position and orientation, could be useful to guide subsequent image processing procedures, e.g. verification of patient daily setup accuracy. 
This work was partially supported by a research grant from Varian Medical System.
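Matching a portal image against a fused template to recover orientation can be illustrated with normalized cross-correlation over flipped variants of the image. This sketch is purely illustrative: it omits the preprocessing steps the method uses (contrast enhancement, down-sampling, couch table detection) and reduces "orientation" to the four flip states for clarity:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def detect_orientation(portal, template):
    """Score the portal image against the template in four flip
    states and return the best-scoring one (illustrative only)."""
    candidates = {
        'as-is':   portal,
        'flip-ud': np.flipud(portal),
        'flip-lr': np.fliplr(portal),
        'rot180':  np.flipud(np.fliplr(portal)),
    }
    scores = {k: ncc(v, template) for k, v in candidates.items()}
    return max(scores, key=scores.get)
```

The real method additionally localizes the best match position within a whole-body template, which identifies the treatment site along with the orientation.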
Procedures for cryogenic X-ray ptychographic imaging of biological samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yusuf, M.; Zhang, F.; Chen, B.; ...
2017-01-12
Biological sample-preparation procedures have been developed for imaging human chromosomes under cryogenic conditions. A new experimental setup, developed for imaging frozen samples using beamline I13 at Diamond Light Source, is described. This paper describes the equipment and experimental procedures as well as the authors' first ptychographic reconstructions using X-rays.
Applying a visual language for image processing as a graphical teaching tool in medical imaging
NASA Astrophysics Data System (ADS)
Birchman, James J.; Tanimoto, Steven L.; Rowberg, Alan H.; Choi, Hyung-Sik; Kim, Yongmin
1992-05-01
Typical user interaction in image processing is with command line entries, pull-down menus, or text menu selections from a list, and as such is not generally graphical in nature. Although applying these interactive methods to construct more sophisticated algorithms from a series of simple image processing steps may be clear to engineers and programmers, it may not be clear to clinicians. A solution to this problem is to implement a visual programming language using visual representations to express image processing algorithms. Visual representations promote a more natural and rapid understanding of image processing algorithms by providing more visual insight into what the algorithms do than the interactive methods mentioned above can provide. Individuals accustomed to dealing with images will be more likely to understand an algorithm that is represented visually. This is especially true of referring physicians, such as surgeons in an intensive care unit. With the increasing acceptance of picture archiving and communications system (PACS) workstations and the trend toward increasing clinical use of image processing, referring physicians will need to learn more sophisticated concepts than simply image access and display. If the procedures that they perform commonly, such as window width and window level adjustment and image enhancement using unsharp masking, are depicted visually in an interactive environment, it will be easier for them to learn and apply these concepts. The software described in this paper is a visual programming language for image processing which has been implemented on the NeXT computer using NeXTstep user interface development tools and other tools in an object-oriented environment. The concept is based upon the description of a visual language titled 'Visualization of Vision Algorithms' (VIVA). Iconic representations of simple image processing steps are placed into a workbench screen and connected together into a dataflow path by the user. 
As the user creates and edits a dataflow path, more complex algorithms can be built on the screen. Once the algorithm is built, it can be executed, its results can be reviewed, and operator parameters can be interactively adjusted until an optimized output is produced. The optimized algorithm can then be saved and added to the system as a new operator. This system has been evaluated as a graphical teaching tool for window width and window level adjustment, image enhancement using unsharp masking, and other techniques.
Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.
Gutschwager, Berndt; Hollandt, Jörg
2015-12-20
We present a novel method of nonuniformity correction of imaging systems over a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and requires only sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images in relation to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal-plane-array responsivity is presented.
Image stack alignment in full-field X-ray absorption spectroscopy using SIFT_PyOCL.
Paleo, Pierre; Pouyet, Emeline; Kieffer, Jérôme
2014-03-01
Full-field X-ray absorption spectroscopy experiments allow the acquisition of millions of spectra within minutes. However, the construction of the hyperspectral image requires an image alignment procedure with sub-pixel precision. While image correlation algorithms were originally used for re-alignment by translation only, the Scale Invariant Feature Transform (SIFT) algorithm (which is by design robust to rotation, illumination change, translation and scaling) presents an additional advantage: the alignment can be limited to a region of interest of any arbitrary shape. In this context, a Python module named SIFT_PyOCL has been developed. It implements a parallel version of the SIFT algorithm in OpenCL, providing high-speed image registration and alignment on both processors and graphics cards. The performance of the algorithm allows online processing of large datasets.
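The translation-only correlation baseline that SIFT improves upon can be sketched with NumPy (SIFT_PyOCL itself runs the full SIFT pipeline in OpenCL; this sketch only illustrates the principle):

```python
import numpy as np

# FFT-based cross-correlation: the integer shift between two frames is the
# argmax of their circular cross-correlation.  This is the translation-only
# re-alignment the abstract contrasts with the SIFT approach.

def estimate_shift(ref, mov):
    """Integer (dy, dx) such that np.roll(mov, (dy, dx)) best matches ref."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame to negative displacements.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, shift=(-3, 5), axis=(0, 1))   # known displacement
dy, dx = estimate_shift(ref, mov)
```

Sub-pixel precision, as required here, is typically obtained by interpolating around the correlation peak; SIFT additionally tolerates rotation and scaling and can restrict matching to an arbitrary region of interest.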
Three-dimensional near-field MIMO array imaging using range migration techniques.
Zhuge, Xiaodong; Yarovoy, Alexander G
2012-06-01
This paper presents a 3-D near-field imaging algorithm formulated for a 2-D wideband multiple-input-multiple-output (MIMO) imaging array topology. The proposed MIMO range migration technique performs the image reconstruction procedure in the frequency-wavenumber domain. The algorithm is able to completely compensate for the curvature of the wavefront in the near field through a specifically defined interpolation process, and provides extremely high computational efficiency through application of the fast Fourier transform. The implementation aspects of the algorithm and the sampling criteria of a MIMO aperture are discussed. The image reconstruction performance and computational efficiency of the algorithm are demonstrated with both numerical simulations and measurements using 2-D MIMO arrays. Real-time 3-D near-field imaging can be achieved with a real-aperture array by applying the proposed MIMO range migration techniques.
Optimization of oncological {sup 18}F-FDG PET/CT imaging based on a multiparameter analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menezes, Vinicius O., E-mail: vinicius@radtec.com.br; Machado, Marcos A. D.; Queiroz, Cleiton C.
2016-02-15
Purpose: This paper describes a method to achieve consistent clinical image quality in {sup 18}F-FDG scans accounting for patient habitus, dose regimen, image acquisition, and processing techniques. Methods: Oncological PET/CT scan data for 58 subjects were evaluated retrospectively to derive analytical curves that predict image quality. Patient noise equivalent count rate and coefficient of variation (CV) were used as metrics in the analysis. Optimized acquisition protocols were identified and prospectively applied to 179 subjects. Results: The adoption of different schemes for three body mass ranges (<60 kg, 60–90 kg, >90 kg) allows improved image quality with both point spread function and ordered-subsets expectation maximization-3D reconstruction methods. The application of this methodology showed that CV improved significantly (p < 0.0001) in clinical practice. Conclusions: Consistent oncological PET/CT image quality on a high-performance scanner was achieved from an analysis of the relations existing between dose regimen, patient habitus, acquisition, and processing techniques. The proposed methodology may be used by PET/CT centers to develop protocols to standardize PET/CT imaging procedures and achieve better patient management and cost-effective operations.
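The mass-range scheme and the CV metric can be sketched as follows (the body-mass cut-offs are from the abstract; the acquisition times and ROI values are hypothetical):

```python
import statistics

# Protocol selection by body-mass range plus a coefficient-of-variation
# image-quality check.  The seconds-per-bed values are invented for
# illustration; only the three mass ranges come from the abstract.

def seconds_per_bed(mass_kg):
    """Pick an acquisition time per bed position from the patient's mass range."""
    if mass_kg < 60:
        return 90        # hypothetical protocol values
    if mass_kg <= 90:
        return 120
    return 180

def coefficient_of_variation(roi_values):
    """CV = standard deviation / mean over a uniform region of interest."""
    return statistics.stdev(roi_values) / statistics.mean(roi_values)

time_s = seconds_per_bed(72.0)
cv = coefficient_of_variation([100.0, 104.0, 96.0, 102.0, 98.0])
```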
On the Implementation of a Land Cover Classification System for SAR Images Using Khoros
NASA Technical Reports Server (NTRS)
Medina Revera, Edwin J.; Espinosa, Ramon Vasquez
1997-01-01
The Synthetic Aperture Radar (SAR) sensor is widely used to record data about the ground under all atmospheric conditions. SAR-acquired images have very good resolution, which necessitates the development of a classification system that processes SAR images to extract useful information for different applications. In this work, a complete system for land cover classification was designed and programmed using Khoros, a dataflow visual language environment, taking full advantage of the polymorphic data services that it provides. Image analysis was applied to SAR images to improve and automate the processes of recognition and classification of different regions such as mountains and lakes. Both unsupervised and supervised classification utilities were used. The unsupervised classification routines included several classification/clustering algorithms such as K-means, ISO2, Weighted Minimum Distance, and the Localized Receptive Field (LRF) training/classifier. Different texture analysis approaches such as invariant moments, fractal dimension and second-order statistics were implemented for supervised classification of the images. The results and conclusions for SAR image classification using the various unsupervised and supervised procedures are presented based on their accuracy and performance.
Benameur, S.; Mignotte, M.; Meunier, J.; Soucy, J. -P.
2009-01-01
Image restoration is usually viewed as an ill-posed problem in image processing, since there is no unique solution associated with it. The quality of the restored image depends closely on the constraints imposed on the characteristics of the solution. In this paper, we propose an original extension of the NAS-RIF restoration technique that uses information fusion as prior information, with application to SPECT medical imaging. The extension allows the restoration process to be constrained by efficiently incorporating, within the NAS-RIF method, a regularization term which stabilizes the inverse solution. Our restoration method is constrained by anatomical information extracted from a high-resolution anatomical modality such as magnetic resonance imaging (MRI). This structural anatomy-based regularization term uses the result of an unsupervised Markovian segmentation obtained after a preliminary registration step between the MRI and SPECT data volumes of each patient. The method was successfully tested on 30 pairs of brain MRI and SPECT acquisitions from different subjects and on Hoffman and Jaszczak SPECT phantoms. The experiments demonstrated that the method performs better, in terms of signal-to-noise ratio, than a classical supervised restoration approach using a Metz filter. PMID:19812704
An adaptive tensor voting algorithm combined with texture spectrum
NASA Astrophysics Data System (ADS)
Wang, Gang; Su, Qing-tang; Lü, Gao-huan; Zhang, Xiao-feng; Liu, Yu-huan; He, An-zhi
2015-01-01
An adaptive tensor voting algorithm combined with the texture spectrum is proposed. The image texture spectrum is used to derive the adaptive scale parameter of the voting field. The texture information then modifies both the attenuation coefficient and the attenuation field, so that the algorithm creates more significant and correct structures in the original image, in accordance with human visual perception. At the same time, the proposed method improves edge extraction quality, efficiently suppressing flocculent regions and making the image clearer. In an experiment on extracting pavement cracks, the original pavement image is processed by the proposed method combined with a significant-curve-feature threshold procedure, and the resulting image clearly reveals the faint crack signals submerged in the complicated background.
Alejo, L; Corredoira, E; Sánchez-Muñoz, F; Huerga, C; Aza, Z; Plaza-Núñez, R; Serrada, A; Bret-Zurita, M; Parrón, M; Prieto-Areyano, C; Garzón-Moll, G; Madero, R; Guibelalde, E
2018-04-09
Objective: The new 2013/59 EURATOM Directive (ED) demands dosimetric optimisation procedures without undue delay. The aim of this study was to optimise paediatric conventional radiology examinations in line with the ED without compromising the clinical diagnosis. Automatic dose management software (ADMS) was used to analyse 2678 studies of children from birth to 5 years of age, obtaining local diagnostic reference levels (DRLs) in terms of entrance surface air kerma. Since the local DRL for infant chest examinations exceeded the European Commission (EC) DRL, an optimisation was performed by decreasing the kVp and applying automatic exposure control. To assess image quality, an analysis of high-contrast spatial resolution (HCSR), signal-to-noise ratio (SNR) and figure of merit (FOM) was performed, as well as a blind test based on the generalised estimating equations method. For newborn chest examinations, the local DRL exceeded the EC DRL by 113%. After the optimisation, a reduction of 54% was obtained. No significant differences were found in the image quality blind test. A decrease in SNR (-37%) and HCSR (-68%), and an increase in FOM (42%), was observed. ADMS allows the fast calculation of local DRLs and the performance of optimisation procedures in babies without delay. However, physical and clinical analyses of image quality are still needed to ensure diagnostic integrity after the optimisation process. Advances in knowledge: ADMS are useful for detecting radiation protection problems and for performing optimisation procedures in paediatric conventional imaging without undue delay, as the ED requires.
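The local-DRL step can be sketched in a few lines (the third-quartile convention is the usual DRL definition; the kerma values below are invented):

```python
import statistics

# Local diagnostic reference level as the third quartile of the entrance
# surface air kerma (ESAK) distribution for one examination type, compared
# against a reference value as a percentage excess.

def local_drl(esak_values_uGy):
    """Third quartile (Q3) of the ESAK distribution."""
    return statistics.quantiles(esak_values_uGy, n=4)[2]

def percent_above(local, reference):
    """Percentage by which the local DRL exceeds the reference DRL."""
    return (local - reference) / reference * 100.0

esak = [45.0, 60.0, 52.0, 80.0, 70.0, 66.0, 58.0, 75.0]  # hypothetical, in microGy
drl = local_drl(esak)
excess = percent_above(drl, reference=50.0)
```

When the excess is positive, as in the newborn chest examinations reported above, an optimisation step (lower kVp, automatic exposure control) is triggered.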
Farace, P; Pontalti, R; Cristoforetti, L; Antolini, R; Scarpa, M
1997-11-01
This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning.
Accuracy Considerations in Image-guided Cardiac Interventions: Experience and Lessons Learned
Linte, Cristian A.; Lang, Pencilla; Rettmann, Maryam E.; Cho, Daniel S.; Holmes, David R.; Robb, Richard A.; Peters, Terry M.
2014-01-01
Motivation: Medical imaging and its application in interventional guidance has revolutionized the development of minimally invasive surgical procedures, leading to reduced patient trauma, fewer risks, and shorter recovery times. However, a frequently posed question with regard to an image guidance system is “how accurate is it?” On one hand, the accuracy challenge can be posed in terms of the tolerable clinical error associated with the procedure; on the other hand, accuracy is bound by the limitations of the system’s components, including modeling, patient registration, and surgical instrument tracking, all of which ultimately impact the overall targeting capabilities of the system. Methods: While these processes are not unique to any interventional specialty, this paper discusses them in the context of two different cardiac image-guidance platforms: a model-enhanced ultrasound platform for intracardiac interventions and a prototype system for advanced visualization in image-guided cardiac ablation therapy. Results: Pre-operative modeling techniques involving manual, semi-automatic and registration-based segmentation are discussed. The performance and limitations of clinically feasible approaches for patient registration, evaluated both in the laboratory and in the operating room, are presented. Our experience with two different magnetic tracking systems for instrument and ultrasound transducer localization is reported. Ultimately, the overall accuracy of the systems is discussed based on both in vitro and preliminary in vivo experience. Conclusion: While clinical accuracy is specific to a particular patient and procedure and highly dependent on the surgeon’s experience, the system’s engineering limitations are critical in determining whether the clinical requirements can be met. PMID:21671097
Discrimination of human and dog faces and inversion responses in domestic dogs (Canis familiaris).
Racca, Anaïs; Amadei, Eleonora; Ligout, Séverine; Guo, Kun; Meints, Kerstin; Mills, Daniel
2010-05-01
Although domestic dogs can respond to many facial cues displayed by other dogs and humans, it remains unclear whether they can differentiate individual dogs or humans based on facial cues alone and, if so, whether they would demonstrate the face inversion effect, a behavioural hallmark commonly used in primates to differentiate face processing from object processing. In this study, we first established the applicability of the visual paired comparison (VPC or preferential looking) procedure for dogs using a simple object discrimination task with 2D pictures. The animals demonstrated a clear looking preference for novel objects when simultaneously presented with prior-exposed familiar objects. We then adopted this VPC procedure to assess their face discrimination and inversion responses. Dogs showed a deviation from random behaviour, indicating discrimination capability when inspecting upright dog faces, human faces and object images; but the pattern of viewing preference was dependent upon image category. They directed longer viewing time at novel (vs. familiar) human faces and objects, but not at dog faces; instead, a longer viewing time at familiar (vs. novel) dog faces was observed. No significant looking preference was detected for inverted images regardless of image category. Our results indicate that domestic dogs can use facial cues alone to differentiate individual dogs and humans and that they exhibit a non-specific inversion response. In addition, the discrimination response by dogs to human and dog faces appears to differ with the type of face involved.
Live imaging of mitosis in the developing mouse embryonic cortex.
Pilaz, Louis-Jan; Silver, Debra L
2014-06-04
Although of short duration, mitosis is a complex and dynamic multi-step process fundamental for development of organs including the brain. In the developing cerebral cortex, abnormal mitosis of neural progenitors can cause defects in brain size and function. Hence, there is a critical need for tools to understand the mechanisms of neural progenitor mitosis. Cortical development in rodents is an outstanding model for studying this process. Neural progenitor mitosis is commonly examined in fixed brain sections. This protocol will describe in detail an approach for live imaging of mitosis in ex vivo embryonic brain slices. We will describe the critical steps for this procedure, which include: brain extraction, brain embedding, vibratome sectioning of brain slices, staining and culturing of slices, and time-lapse imaging. We will then demonstrate and describe in detail how to perform post-acquisition analysis of mitosis. We include representative results from this assay using the vital dye Syto11, transgenic mice (histone H2B-EGFP and centrin-EGFP), and in utero electroporation (mCherry-α-tubulin). We will discuss how this procedure can be best optimized and how it can be modified for study of genetic regulation of mitosis. Live imaging of mitosis in brain slices is a flexible approach to assess the impact of age, anatomy, and genetic perturbation in a controlled environment, and to generate a large amount of data with high temporal and spatial resolution. Hence this protocol will complement existing tools for analysis of neural progenitor mitosis.
NASA Astrophysics Data System (ADS)
Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert
2012-02-01
Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
Nitrosi, Andrea; Bertolini, Marco; Borasi, Giovanni; Botti, Andrea; Barani, Adriana; Rivetti, Stefano; Pierotti, Luisa
2009-12-01
Ideally, medical x-ray imaging systems should be designed to deliver maximum image quality at an acceptable radiation risk to the patient. Quality assurance procedures are employed to ensure that these standards are maintained. A quality control protocol for direct digital radiography (DDR) systems is described and discussed. Software to automatically process and analyze the required images was developed. In this paper, the initial results obtained on equipment of different DDR manufacturers were reported. The protocol was developed to highlight even small discrepancies in standard operating performance.
Monitoring the defoliation of hardwood forests in Pennsylvania using LANDSAT. [gypsy moth surveys
NASA Technical Reports Server (NTRS)
Dottavio, C. L.; Nelson, R. F.; Williams, D. L. (Principal Investigator)
1983-01-01
An automated system for conducting annual gypsy moth defoliation surveys using LANDSAT MSS data and digital processing techniques is described. A two-step preprocessing procedure was developed that uses multitemporal data sets representing forest canopy conditions before and after defoliation to create a digital image in which all nonforest cover types are eliminated or masked out of a LANDSAT image that exhibits insect defoliation. A temporal window for defoliation assessment was identified and a statewide data base was established. A data management system to interface image analysis software with the statewide data base was developed and a cost benefit analysis of this operational system was conducted.
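The two-step masking idea can be sketched as follows (the band values, vegetation index, and defoliation threshold are invented for illustration; the actual survey works on multitemporal LANDSAT MSS data):

```python
# Step 1: mask out pixels classified as nonforest in the pre-defoliation
# scene.  Step 2: difference the post-defoliation scene against the
# pre-defoliation scene so only canopy change remains.

def mask_and_difference(pre, post, forest_mask):
    """Per-pixel change for forest pixels, None for masked (nonforest) pixels."""
    return [
        (b - a) if keep else None
        for a, b, keep in zip(pre, post, forest_mask)
    ]

pre    = [0.60, 0.55, 0.20, 0.58]   # hypothetical vegetation-index values
post   = [0.30, 0.54, 0.21, 0.25]   # defoliated pixels drop sharply
forest = [True, True, False, True]  # pre-season forest/nonforest mask
change = mask_and_difference(pre, post, forest)
defoliated = [c is not None and c < -0.2 for c in change]
```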
A survey of GPU-based acceleration techniques in MRI reconstructions
Wang, Haifeng; Peng, Hanchuan; Chang, Yuchou
2018-01-01
Image reconstruction in magnetic resonance imaging (MRI) clinical applications has become increasingly complicated. However, diagnosis and treatment require very fast computational procedures. Modern graphics processing unit (GPU) platforms have made high-performance parallel computing available, and attractive to common consumers, for massively parallel reconstruction problems at commodity prices. GPUs have also become more and more important for reconstruction computations, especially as deep learning begins to be applied to MRI reconstruction. The motivation of this survey is to review the image reconstruction schemes of GPU computing for MRI applications and provide a summary reference for researchers in the MRI community. PMID:29675361
NASA Technical Reports Server (NTRS)
Saunders, R. S.; Spear, A. J.; Allin, P. C.; Austin, R. S.; Berman, A. L.; Chandlee, R. C.; Clark, J.; Decharon, A. V.; De Jong, E. M.; Griffith, D. G.
1992-01-01
Magellan started mapping the planet Venus on September 15, 1990, and after one cycle (one Venus day or 243 earth days) had mapped 84 percent of the planet's surface. This returned an image data volume greater than all past planetary missions combined. Spacecraft problems were experienced in flight. Changes in operational procedures and reprogramming of onboard computers minimized the amount of mapping data lost. Magellan data processing is the largest planetary image-processing challenge to date. Compilation of global maps of tectonic and volcanic features, as well as impact craters and related phenomena and surface processes related to wind, weathering, and mass wasting, has begun. The Magellan project is now in an extended mission phase, with plans for additional cycles out to 1995. The Magellan project will fill in mapping gaps, obtain a global gravity data set between mid-September 1992 and May 1993, acquire images at different view angles, and look for changes on the surface from one cycle to another caused by surface activity such as volcanism, faulting, or wind activity.
A spectral water index based on visual bands
NASA Astrophysics Data System (ADS)
Basaeed, Essa; Bhaskar, Harish; Al-Mualla, Mohammed
2013-10-01
Land-water segmentation is an important preprocessing step in a number of remote sensing applications such as target detection, environmental monitoring, and map updating. A Normalized Optical Water Index (NOWI) is proposed to accurately discriminate between land and water regions in multi-spectral satellite imagery from DubaiSat-1. NOWI exploits the spectral characteristics of water (using visible bands) and uses a non-linear normalization procedure that places strong emphasis on small changes in lower brightness values while guaranteeing that the segmentation process remains image-independent. The NOWI representation is validated through systematic experiments, evaluated using robust metrics, and compared against various supervised classification algorithms. Analysis has indicated that NOWI: a) is a pixel-based method that requires no global knowledge of the scene under investigation; b) can be easily implemented in parallel processing; c) is image-independent and requires no training; d) works in different environmental conditions; e) provides high accuracy and efficiency; and f) works directly on the input image without any form of pre-processing.
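The abstract does not reproduce the actual NOWI formula, so the stand-in index below (a normalized difference of two visible bands followed by a logarithmic stretch) only illustrates the kind of non-linear normalization described; the band choice and threshold are hypothetical:

```python
import math

# Toy per-pixel water index: a normalized band difference mapped through
# log1p, which, like the normalization described above, expands small
# changes at low brightness.  Not the published NOWI formula.

def toy_water_index(green, red):
    nd = (green - red) / (green + red + 1e-9)        # normalized difference
    return math.log1p(max(nd, 0.0)) / math.log(2.0)  # nonlinear stretch to [0, 1]

def is_water(green, red, threshold=0.25):
    """Pixel-based decision: no training, no global scene knowledge."""
    return toy_water_index(green, red) > threshold
```

Because the decision is purely per-pixel, a real implementation parallelizes trivially, matching properties a) and b) claimed above.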
Processing Ti-25Ta-5Zr Bioalloy via Anodic Oxidation Procedure at High Voltage
NASA Astrophysics Data System (ADS)
Ionita, Daniela; Grecu, Mihaela; Dilea, Mirela; Cojocaru, Vasile Danut; Demetrescu, Ioana
2011-12-01
The current paper reports the processing of Ti-25Ta-5Zr bioalloy via anodic oxidation in NH4BF4 solution under constant potentiostatic conditions at high voltage, to obtain properties more suitable for biomedical applications. The maximum efficiency of the procedure is reached at the highest applied voltage, at which the corrosion rate in Hank's solution is reduced approximately six-fold. The topography of the anodic layer has been studied using atomic force microscopy (AFM), and the results indicate that the anodic oxidation process increases the surface roughness. The AFM images also indicated a different porosity for the anodized surfaces. After anodizing, the hydrophilic character of the Ti-25Ta-5Zr samples increased. A good correlation was obtained between the corrosion rate derived from potentiodynamic curves and the corrosion rate from ion release analysis.
NASA Astrophysics Data System (ADS)
Munshi, Soumika; Datta, A. K.
2003-03-01
A technique for optically detecting the edge and skeleton of an image by defining shift operations for morphological transformation is described. A (2 × 2) source array, which acts as the structuring element of the morphological operations, casts four angularly shifted optical projections of the input image. The resulting dilated image, when superimposed with the complement of the input image, produces the edge image. For skeletonization, the source array casts four partially overlapping output images of the complemented (inverted) input image, and the resultant image is recorded by a CCD camera. This overlapped eroded image is again eroded and then dilated, producing an opened image. The difference between the eroded and opened images is then computed, resulting in a thinner image. This procedure is iterated, maintaining the connectivity conditions, until the difference image becomes zero. The technique has been optically implemented using a single spatial modulator and has the advantage of single-instruction parallel processing of the image. It has been tested on both binary and grey-scale images.
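The edge step maps naturally onto a digital sketch in which the four angular projections become one-pixel shifts (an assumed correspondence, for illustration only):

```python
import numpy as np

# Digital analogue of the optical edge step: the four angularly shifted
# projections of the 2x2 source array are modeled as four one-pixel shifts
# of a binary image; their union is the dilation, and ANDing the dilation
# with the complement of the input leaves the (outer) edge.

def dilate_by_shifts(img):
    """Union of four one-pixel shifts, emulating the 2x2 structuring element."""
    out = np.zeros_like(img)
    for dy, dx in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        out |= np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out

def edge_image(img):
    """Dilation superimposed with the complemented input yields the edge."""
    return dilate_by_shifts(img) & ~img

img = np.zeros((5, 5), dtype=bool)
img[1:3, 1:3] = True          # a 2x2 foreground square
edges = edge_image(img)       # the 5-pixel outer rim on the +y/+x sides
```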
Cloud Engineering Principles and Technology Enablers for Medical Image Processing-as-a-Service.
Bao, Shunxing; Plassard, Andrew J; Landman, Bennett A; Gokhale, Aniruddha
2017-04-01
Traditional in-house, laboratory-based medical imaging studies use hierarchical data structures (e.g., NFS file stores) or databases (e.g., COINS, XNAT) for storage and retrieval. The resulting performance from these approaches is, however, impeded by standard network switches since they can saturate network bandwidth during transfer from storage to processing nodes for even moderate-sized studies. To that end, a cloud-based "medical image processing-as-a-service" offers promise in utilizing the ecosystem of Apache Hadoop, which is a flexible framework providing distributed, scalable, fault tolerant storage and parallel computational modules, and HBase, which is a NoSQL database built atop Hadoop's distributed file system. Despite this promise, HBase's load distribution strategy of region split and merge is detrimental to the hierarchical organization of imaging data (e.g., project, subject, session, scan, slice). This paper makes two contributions to address these concerns by describing key cloud engineering principles and technology enhancements we made to the Apache Hadoop ecosystem for medical imaging applications. First, we propose a row-key design for HBase, which is a necessary step that is driven by the hierarchical organization of imaging data. Second, we propose a novel data allocation policy within HBase to strongly enforce collocation of hierarchically related imaging data. The proposed enhancements accelerate data processing by minimizing network usage and localizing processing to machines where the data already exist. Moreover, our approach is amenable to the traditional scan, subject, and project-level analysis procedures, and is compatible with standard command line/scriptable image processing software. 
Experimental results for an illustrative sample of imaging data reveal that our new HBase policy yields a three-fold improvement in the time to convert classic DICOM to NIfTI file formats when compared with the default HBase region split policy, and nearly a six-fold improvement over a commonly available network file system (NFS) approach, even for relatively small file sets. Moreover, file access latency is lower than with network-attached storage.
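The row-key idea can be sketched in a few lines (the field names and widths are assumptions for illustration, not the paper's exact design):

```python
# Hierarchical row keys: fixed-width, zero-padded components make
# lexicographic byte order coincide with hierarchical order
# (project > subject > session > scan > slice), so HBase keeps
# hierarchically related rows adjacent on disk.

def row_key(project, subject, session, scan, slice_idx):
    return "{:s}-{:04d}-{:03d}-{:03d}-{:05d}".format(
        project, subject, session, scan, slice_idx)

keys = [
    row_key("proj1", 2, 1, 1, 10),
    row_key("proj1", 1, 1, 1, 2),
    row_key("proj1", 1, 1, 1, 10),
]
ordered = sorted(keys)   # HBase stores rows in this byte order
```

Without the zero padding, slice 10 would sort before slice 2 and keys for one subject could scatter across regions, which is exactly the collocation problem the proposed allocation policy addresses.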
USDA-ARS?s Scientific Manuscript database
Cleaning and sanitation of production surfaces and equipment plays a critical role in lowering the risk of food borne illness associated with consumption of fresh-cut produce. Visual observation and sampling methods including ATP tests and cell culturing are commonly used to monitor the effectivenes...
The ability to effectively use remotely sensed data for environmental spatial analysis is dependent on understanding the underlying procedures and associated variances attributed to the data processing and image analysis technique. Equally important, also, is understanding the er...
Mental Visualization of Objects from Cross-Sectional Images
ERIC Educational Resources Information Center
Wu, Bing; Klatzky, Roberta L.; Stetten, George D.
2012-01-01
We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model, localizing cross-sections within a common frame of reference and spatiotemporal integration of cross sections into a hierarchical object…
Calcium (Ca2+) waves data calibration and analysis using image processing techniques
2013-01-01
Background: Calcium (Ca2+) propagates within tissues, serving as an important information carrier. In particular, cilia beat frequency in oviduct cells is partially regulated by Ca2+ changes. Thus, measuring the calcium density and characterizing the traveling wave play a key role in understanding biological phenomena. However, current methods to measure propagation velocities and other wave characteristics involve several manual or time-consuming procedures. This limits the amount of information that can be extracted and the statistical quality of the analysis. Results: Our work provides a framework based on image processing procedures that enables fast, automatic and robust characterization of data from two-filter fluorescence Ca2+ experiments. We calculate the mean velocity of the wave-front, and use theoretical models to extract meaningful parameters like wave amplitude, decay rate and time of excitation. Conclusions: Measurements done by different operators showed a high degree of reproducibility. The framework is also extended to single-filter fluorescence experiments, allowing higher sampling rates and thus increased accuracy in velocity measurements. PMID:23679062
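One step such a framework automates, the mean wave-front velocity, can be sketched as follows (the threshold and frames are synthetic, and a 1-D intensity profile stands in for the image sequence):

```python
# Locate the wave-front in each frame as the first position whose intensity
# exceeds a threshold, then fit position vs. time by least squares; the
# slope is the mean front velocity.

def front_position(frame, threshold):
    for i, v in enumerate(frame):
        if v > threshold:
            return i
    return None

def mean_velocity(frames, threshold, dt=1.0):
    pts = [(t * dt, front_position(f, threshold)) for t, f in enumerate(frames)]
    pts = [(t, x) for t, x in pts if x is not None]
    n = len(pts)
    mt = sum(t for t, _ in pts) / n
    mx = sum(x for _, x in pts) / n
    # least-squares slope = velocity (pixels per time unit)
    num = sum((t - mt) * (x - mx) for t, x in pts)
    den = sum((t - mt) ** 2 for t, _ in pts)
    return num / den

# Synthetic wave advancing 2 pixels per frame along a 20-pixel line.
frames = [[1.0 if i >= 2 * t else 0.0 for i in range(20)] for t in range(5)]
velocity = mean_velocity(frames, threshold=0.5)
```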
NASA Astrophysics Data System (ADS)
Krasilenko, Vladimir G.; Nikolsky, Alexander I.; Zaitsev, Alexandr V.; Voloshin, Victor M.
2001-03-01
Historical information on the origins of the algebra-logical apparatus of `equivalental algebra', which unifies neural network (NN) theory, linear algebra, and generalized neurobiology extended to the matrix case, is reviewed, and a survey of `equivalental models' of neural networks and associative memory is given. New, modified matrix-tensor neurological equivalental models (MTNLEMs) with double adaptive-equivalental weighing (DAEW) are proposed for spatially non-invariant recognition (SNIR) and space-invariant recognition (SIR) of 2D images (patterns). It is shown that the MTNLEMs with DAEW are the most general: they can describe NN processes both within the frames of known paradigms and within a new `equivalental' paradigm of the non-interaction type, and the computation they require reduces to two-step and multi-step algorithms, with step-by-step matrix-tensor procedures for SNIR and procedures for determining space-dependent equivalental functions from two images for SIR.
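A toy illustration of the normalized equivalence operation at the heart of such associative-memory models; the specific measure and the pattern set are assumptions for illustration, not the authors' formulation:

```python
import numpy as np

def equivalence(a, b):
    """Normalized equivalence of two images in [0, 1] (1 = identical).

    A toy stand-in for the equivalental operations of the surveyed
    models; inputs are assumed to be scaled to [0, 1].
    """
    return 1.0 - float(np.mean(np.abs(a - b)))

def recognize(probe, memory):
    """Associative recall: index of the stored pattern most equivalent to the probe."""
    scores = [equivalence(probe, m) for m in memory]
    return int(np.argmax(scores))

# Two stored 4x4 binary patterns and a noisy probe of the first one.
memory = [np.eye(4), np.ones((4, 4)) - np.eye(4)]
probe = memory[0].copy()
probe[0, 1] = 1.0          # one flipped pixel
best = recognize(probe, memory)
```

Despite the corrupted pixel, the probe remains far more equivalent to the first stored pattern than to the second, so recall succeeds.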
Wavelet imaging cleaning method for atmospheric Cherenkov telescopes
NASA Astrophysics Data System (ADS)
Lessard, R. W.; Cayón, L.; Sembroski, G. H.; Gaidos, J. A.
2002-07-01
We present a new method of image cleaning for imaging atmospheric Cherenkov telescopes. The method is based on the utilization of wavelets to identify noise pixels in images of gamma-ray and hadronic induced air showers. This method selects more signal pixels with Cherenkov photons than traditional image processing techniques. In addition, the method is equally efficient at rejecting pixels with noise alone. The inclusion of more signal pixels in an image of an air shower allows for a more accurate reconstruction, especially at lower gamma-ray energies that produce low levels of light. We present the results of Monte Carlo simulations of gamma-ray and hadronic air showers which show improved angular resolution using this cleaning procedure. Data from the Whipple Observatory's 10-m telescope are utilized to show the efficacy of the method for extracting a gamma-ray signal from the background of hadronic generated images.
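The idea of separating correlated shower light from uncorrelated pixel noise in the wavelet domain can be sketched with a one-level Haar transform; the wavelet choice, thresholds, and synthetic image below are illustrative assumptions, not the authors' cleaning algorithm:

```python
import numpy as np

def haar2(a):
    """One level of the orthonormal 2-D Haar transform (LL + 3 detail bands)."""
    ll = (a[0::2, 0::2] + a[0::2, 1::2] + a[1::2, 0::2] + a[1::2, 1::2]) / 2
    lh = (a[0::2, 0::2] + a[0::2, 1::2] - a[1::2, 0::2] - a[1::2, 1::2]) / 2
    hl = (a[0::2, 0::2] - a[0::2, 1::2] + a[1::2, 0::2] - a[1::2, 1::2]) / 2
    hh = (a[0::2, 0::2] - a[0::2, 1::2] - a[1::2, 0::2] + a[1::2, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    out = np.empty((ll.shape[0] * 2, ll.shape[1] * 2))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def clean(image, k=3.0):
    """Flag signal pixels after soft-thresholding the detail bands."""
    ll, lh, hl, hh = haar2(image)
    sigma = np.median(np.abs(hh)) / 0.6745          # robust noise estimate
    soft = lambda d: np.sign(d) * np.maximum(np.abs(d) - 2 * sigma, 0)
    denoised = ihaar2(ll, soft(lh), soft(hl), soft(hh))
    return denoised > k * sigma                     # candidate Cherenkov pixels

# Toy camera frame: a compact rectangular "shower" on unit Gaussian noise.
rng = np.random.default_rng(1)
img = rng.normal(0, 1, (64, 64))
img[28:36, 20:44] += 8.0
mask = clean(img)
```

Because isolated noise fluctuations carry most of their energy in the detail bands while the extended shower survives in the approximation band, the mask retains signal pixels and rejects noise-only pixels.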
PIZZARO: Forensic analysis and restoration of image and video data.
Kamenicky, Jan; Bartos, Michal; Flusser, Jan; Mahdian, Babak; Kotera, Jan; Novozamsky, Adam; Saic, Stanislav; Sroubek, Filip; Sorel, Michal; Zita, Ales; Zitova, Barbara; Sima, Zdenek; Svarc, Petr; Horinek, Jan
2016-07-01
This paper introduces a set of methods for image and video forensic analysis. They were designed to help assess image and video credibility and origin and to restore and increase image quality by diminishing unwanted blur, noise, and other possible artifacts. The motivation came from the best practices used in criminal investigations utilizing images and/or videos. The determination of the image source, the verification of the image content, and image restoration were identified as the most important issues whose automation can facilitate criminalists' work. Novel theoretical results complemented with existing approaches (LCD re-capture detection and denoising) were implemented in the PIZZARO software tool, which comprises image processing functionality as well as reporting and archiving functions to ensure the repeatability of image analysis procedures and thus fulfill the formal aspects of image/video analysis work. A comparison of the newly proposed methods with state-of-the-art approaches is shown. Real use cases are presented, which illustrate the functionality of the developed methods and demonstrate their applicability in different situations. The use cases as well as the method design were developed in tight cooperation between scientists from the Institute of Criminalistics, the National Drug Headquarters of the Criminal Police and Investigation Service of the Police of the Czech Republic, and image processing experts from the Czech Academy of Sciences. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Yagahara, Ayako; Yokooka, Yuki; Jiang, Guoqian; Tsuji, Shintarou; Fukuda, Akihisa; Nishimoto, Naoki; Kurowarabi, Kunio; Ogasawara, Katsuhiko
2018-03-01
Describing complex mammography examination processes is important for improving the quality of mammograms. It is often difficult for experienced radiologic technologists to explain the process because their techniques depend on their experience and intuition. In our previous study, we analyzed the process using a new bottom-up hierarchical task analysis and identified key components of the process. Leveraging the results of the previous study, the purpose of this study was to construct a mammographic examination process ontology to formally describe the relationships between the process and image evaluation criteria to improve the quality of mammograms. First, we identified and created root classes: task, plan, and clinical image evaluation (CIE). Second, we described an "is-a" relation referring to the result of the previous study and the structure of the CIE. Third, the procedural steps in the ontology were described using the new properties: "isPerformedBefore," "isPerformedAfter," and "isPerformedAfterIfNecessary." Finally, the relationships between tasks and CIEs were described using the "isAffectedBy" property to represent the influence of the process on image quality. In total, there were 219 classes in the ontology. By introducing new properties related to the process flow, a sophisticated mammography examination process could be visualized. In relationships between tasks and CIEs, it became clear that the tasks affecting the evaluation criteria related to positioning were greater in number than those for image quality. We developed a mammographic examination process ontology that makes knowledge explicit for a comprehensive mammography process. Our research will support education and help promote knowledge sharing about mammography examination expertise.
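The ordering properties ("isPerformedBefore" etc.) impose a partial order on tasks from which a valid examination sequence can be derived. The task and criterion names below are hypothetical stand-ins, not classes from the actual 219-class ontology:

```python
from graphlib import TopologicalSorter

# Hypothetical task and criterion names for illustration only.
is_performed_before = {            # task -> tasks that must follow it
    "VerifyPatientID": ["PositionBreast"],
    "PositionBreast": ["ApplyCompression"],
    "ApplyCompression": ["Expose"],
    "Expose": [],
}
is_affected_by = {                 # CIE criterion -> tasks influencing it
    "PectoralMuscleVisible": ["PositionBreast", "ApplyCompression"],
}

# Invert the relation into predecessor form and derive one valid order.
predecessors = {task: set() for task in is_performed_before}
for task, successors in is_performed_before.items():
    for succ in successors:
        predecessors[succ].add(task)
order = list(TopologicalSorter(predecessors).static_order())
```

Topological sorting of the "isPerformedBefore" graph is one simple way such an ontology can be used to visualize or validate the examination workflow.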
Insight, working through, and practice: the role of procedural knowledge.
Rosenblatt, Allan
2004-01-01
A conception of insight is proposed, based on a systems and information-processing framework and using current neuroscience concepts, as an integration of information that results in a new symbolization of experience with a significant change in self-image and a transformation of non-declarative procedural knowledge into declarative knowledge. Since procedural memory and knowledge, seen to include emotional and relationship issues, is slow to change, durable emotional and behavioral change often requires repeated practice, a need not explicitly addressed in standard psychoanalytic technique. Working through is thus seen as also encompassing nondynamic factors. The application of these ideas to therapeutic technique suggests possible therapeutic interventions beyond interpretation. An illustrative clinical vignette is presented.
Wilkins, Ruth; Flegal, Farrah; Knoll, Joan H.M.; Rogan, Peter K.
2017-01-01
Accurate digital image analysis of abnormal microscopic structures relies on high quality images and on minimizing the rates of false positive (FP) and negative objects in images. Cytogenetic biodosimetry detects dicentric chromosomes (DCs) that arise from exposure to ionizing radiation, and determines radiation dose received based on DC frequency. Improvements in automated DC recognition increase the accuracy of dose estimates by reclassifying FP DCs as monocentric chromosomes or chromosome fragments. We also present image segmentation methods to rank high quality digital metaphase images and eliminate suboptimal metaphase cells. A set of chromosome morphology segmentation methods selectively filtered out FP DCs arising primarily from sister chromatid separation, chromosome fragmentation, and cellular debris. This reduced FPs by an average of 55% and was highly specific to these abnormal structures (≥97.7%) in three samples. Additional filters selectively removed images with incomplete, highly overlapped, or missing metaphase cells, or with poor overall chromosome morphologies that increased FP rates. Image selection is optimized and FP DCs are minimized by combining multiple feature based segmentation filters and a novel image sorting procedure based on the known distribution of chromosome lengths. Applying the same image segmentation filtering procedures to both calibration and test samples reduced the average dose estimation error from 0.4 Gy to <0.2 Gy, obviating the need to first manually review these images. This reliable and scalable solution enables batch processing for multiple samples of unknown dose, and meets current requirements for triage radiation biodosimetry of high quality metaphase cell preparations. PMID:29026522
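The image-sorting idea, ranking metaphase images by how well their segmented object lengths match the known chromosome-length distribution, can be sketched as follows. The expected-length values and the scoring function are illustrative assumptions, not the published method's calibration:

```python
import numpy as np

# Illustrative relative lengths for 24 chromosome types; the published
# method uses the known cytogenetic length distribution.
EXPECTED = np.linspace(1.0, 0.2, 24)

def image_rank_score(object_lengths):
    """Score how well segmented object lengths match the expected distribution.

    Lower is better; images with fragmented or overlapped chromosomes
    deviate strongly and can be filtered out before dose estimation.
    """
    obs = np.sort(np.asarray(object_lengths, dtype=float))[::-1]
    n = min(len(obs), len(EXPECTED))
    obs_n = obs[:n] / obs[:n].max()
    return float(np.mean(np.abs(obs_n - EXPECTED[:n])))

good = image_rank_score(np.linspace(100, 20, 24))   # matches expected shape
bad = image_rank_score(np.full(46, 30.0))           # uniform fragments
```

Thresholding such a score is one way to eliminate suboptimal metaphase cells automatically instead of reviewing images manually.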
TH-A-18A-01: Innovation in Clinical Breast Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, B; Yang, K; Yaffe, M
Several novel modalities have been or are on the verge of being introduced into the breast imaging clinic. These include tomosynthesis imaging, dedicated breast CT, contrast-enhanced digital mammography, and automated breast ultrasound, all of which are covered in this course. Tomosynthesis and dedicated breast CT address the problem of tissue superimposition that limits mammography screening performance, by improved or full resolution of the 3D breast morphology. Contrast-enhanced digital mammography provides functional information that allows for visualization of tumor angiogenesis. 3D breast ultrasound has high sensitivity for tumor detection in dense breasts, but the imaging exam was traditionally performed by radiologists. In automated breast ultrasound, the scan is performed in an automated fashion, making for a more practical imaging tool that is now used as an adjunct to digital mammography in breast cancer screening. This course will provide medical physicists with an in-depth understanding of the imaging physics of each of these four novel imaging techniques, as well as the rationale and implementation of QC procedures. Further, basic clinical applications and workflow issues will be discussed. Learning Objectives: To be able to describe the underlying physical and physiological principles of each imaging technique, and to understand the corresponding imaging acquisition process. To be able to describe the critical system components and their performance requirements. To understand the rationale and implementation of quality control procedures, as well as regulatory requirements for systems with FDA approval. To learn about clinical applications and understand the risks and benefits/strengths and weaknesses of each modality in terms of clinical breast imaging.
Optimized suppression of coherent noise from seismic data using the Karhunen-Loève transform
NASA Astrophysics Data System (ADS)
Montagne, Raúl; Vasconcelos, Giovani L.
2006-07-01
Signals obtained in land seismic surveys are usually contaminated with coherent noise, among which the ground roll (Rayleigh surface waves) is of major concern for it can severely degrade the quality of the information obtained from the seismic record. This paper presents an optimized filter based on the Karhunen-Loève transform for processing seismic images contaminated with ground roll. In this method, the contaminated region of the seismic record, to be processed by the filter, is selected in such way as to correspond to the maximum of a properly defined coherence index. The main advantages of the method are that the ground roll is suppressed with negligible distortion of the remnant reflection signals and that the filtering procedure can be automated. The image processing technique described in this study should also be relevant for other applications where coherent structures embedded in a complex spatiotemporal pattern need to be identified in a more refined way. In particular, it is argued that the method is appropriate for processing optical coherence tomography images whose quality is often degraded by coherent noise (speckle).
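Since the Karhunen-Loève transform of a multi-trace record can be computed via the singular value decomposition, ground-roll suppression reduces to removing the leading (most coherent) components. The sketch below is a minimal SVD-based version under simplifying assumptions; the paper's coherence-index window selection is omitted:

```python
import numpy as np

def klt_suppress(section, k=1):
    """Remove the k most coherent components (Karhunen-Loève / SVD filter).

    Ground roll is highly coherent across traces, so it concentrates in
    the leading singular vectors; subtracting them leaves the reflections.
    """
    u, s, vt = np.linalg.svd(section, full_matrices=False)
    coherent = (u[:, :k] * s[:k]) @ vt[:k, :]
    return section - coherent

# Toy record: an identical low-frequency "ground roll" on every trace
# plus a weak random stand-in for the reflection signal.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
roll = np.sin(2 * np.pi * 5 * t)              # coherent noise, same on each trace
signal = 0.1 * rng.normal(size=(24, 200))     # stand-in for reflections
record = signal + roll                        # 24 traces x 200 samples
filtered = klt_suppress(record, k=1)
```

The coherent sinusoid dominates the first singular component and is removed almost entirely, while the incoherent "reflections" are barely distorted, the property the paper optimizes via its coherence index.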
Hwang, Jae Youn; Wachsmann-Hogiu, Sebastian; Ramanujan, V. Krishnan; Ljubimova, Julia; Gross, Zeev; Gray, Harry B.; Medina-Kauwe, Lali K.; Farkas, Daniel L.
2012-01-01
Purpose Several established optical imaging approaches have been applied, usually in isolation, to preclinical studies; however, truly useful in vivo imaging may require a simultaneous combination of imaging modalities to examine dynamic characteristics of cells and tissues. We developed a new multimode optical imaging system designed to be application-versatile, yielding high-sensitivity, high-specificity molecular imaging. Procedures We integrated several optical imaging technologies, including fluorescence intensity, spectral, lifetime, intravital confocal, two-photon excitation, and bioluminescence imaging, into a single system that enables functional multiscale imaging in animal models. Results The approach offers a comprehensive imaging platform for kinetic, quantitative, and environmental analysis of highly relevant information, with micro-to-macroscopic resolution. Applied to small animals in vivo, this provides superior monitoring of processes of interest, represented here by chemo-/nanoconstruct therapy assessment. Conclusions This new system is versatile and can be optimized for various applications, of which cancer detection and targeted treatment are emphasized here. PMID:21874388
NASA Technical Reports Server (NTRS)
1990-01-01
Although not available to all patients with narrowed arteries, balloon angioplasty has expanded dramatically since its introduction with an estimated further growth to 562,000 procedures in the U.S. alone by 1992. Growth has fueled demand for higher quality imaging systems that allow the cardiologist to be more accurate and increase the chances of a successful procedure. A major advance is the Digital Cardiac Imaging (DCI) System designed by Philips Medical Systems International, Best, The Netherlands and marketed in the U.S. by Philips Medical Systems North America Company. The key benefit is significantly improved real-time imaging and the ability to employ image enhancement techniques to bring out added details. Using a cordless control unit, the cardiologist can manipulate images to make immediate assessment, compare live x-ray and roadmap images by placing them side-by-side on monitor screens, or compare pre-procedure and post procedure conditions. The Philips DCI improves the cardiologist's precision by expanding the information available to him.
Radiation levels and image quality in patients undergoing chest X-ray examinations
NASA Astrophysics Data System (ADS)
de Oliveira, Paulo Márcio Campos; do Carmo Santana, Priscila; de Sousa Lacerda, Marco Aurélio; da Silva, Teógenes Augusto
2017-11-01
Patient dose monitoring for different radiographic procedures has been used as a parameter to evaluate the performance of radiology services; skin entrance absorbed dose values for each type of examination have been internationally established and recommended with the aim of patient protection. In this work, a methodology for dose evaluation was applied to three diagnostic services: one with conventional film processing and two with digital computed radiography processing techniques. The x-ray beam parameters were selected and the dose quantities (specifically, entrance surface dose and incident air kerma) were evaluated based on images approved against the European criteria for postero-anterior (PA) and lateral (LAT) incidences. Data were collected from 200 patients, corresponding to 200 PA and 100 LAT incidences. Results showed that the dose distributions in the three diagnostic services were very different; the best relation between dose and image quality was found in the institution with chemical film processing. This work contributed to disseminating the radiation protection culture by emphasizing the need for continuous dose reduction without losing diagnostic image quality.
John, Susan D; Moore, Quentin T; Herrmann, Tracy; Don, Steven; Powers, Kevin; Smith, Susan N; Morrison, Greg; Charkot, Ellen; Mills, Thalia T; Rutz, Lois; Goske, Marilyn J
2013-10-01
Transition from film-screen to digital radiography requires changes in radiographic technique and workflow processes to ensure that the minimum radiation exposure is used while maintaining diagnostic image quality. Checklists have been demonstrated to be useful tools for decreasing errors and improving safety in several areas, including commercial aviation and surgical procedures. The Image Gently campaign, through a competitive grant from the FDA, developed a checklist for technologists to use during the performance of digital radiography in pediatric patients. The checklist outlines the critical steps in digital radiography workflow, with an emphasis on steps that affect radiation exposure and image quality. The checklist and its accompanying implementation manual and practice quality improvement project are open source and downloadable at www.imagegently.org. The authors describe the process of developing and testing the checklist and offer suggestions for using the checklist to minimize radiation exposure to children during radiography. Copyright © 2013 American College of Radiology. All rights reserved.
Ströhl, Florian; Kaminski, Clemens F
2015-01-16
We demonstrate the reconstruction of images obtained by multifocal structured illumination microscopy, MSIM, using a joint Richardson-Lucy, jRL-MSIM, deconvolution algorithm, which is based on an underlying widefield image-formation model. The method is efficient in the suppression of out-of-focus light and greatly improves image contrast and resolution. Furthermore, it is particularly well suited for the processing of noise corrupted data. The principle is verified on simulated as well as experimental data and a comparison of the jRL-MSIM approach with the standard reconstruction procedure, which is based on image scanning microscopy, ISM, is made. Our algorithm is efficient and freely available in a user friendly software package.
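The classic single-view Richardson-Lucy update underlying the approach can be sketched in a few lines; the joint multifocal variant of the paper sums contributions over all views, which is omitted here, and the PSF and test scene are illustrative assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Classic Richardson-Lucy deconvolution (single view).

    Multiplicative update: est <- est * [ (observed / (est * psf)) * psf_mirror ],
    where * denotes convolution. Preserves non-negativity and flux.
    """
    est = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(est, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        est *= fftconvolve(ratio, psf_mirror, mode="same")
    return est

# Toy example: two point sources blurred by a Gaussian PSF.
x = np.arange(15) - 7
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 8.0)
psf /= psf.sum()
truth = np.zeros((64, 64))
truth[20, 20] = truth[40, 44] = 1.0
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf, iterations=50)
```

After the iterations the blurred spots contract back toward points, illustrating the contrast and resolution gain the paper reports for its jRL-MSIM reconstructions.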
Imaging of dynamic ion signaling during root gravitropism.
Monshausen, Gabriele B
2015-01-01
Gravitropic signaling is a complex process that requires the coordinated action of multiple cell types and tissues. Ca(2+) and pH signaling are key components of gravitropic signaling cascades and can serve as useful markers to dissect the molecular machinery mediating plant gravitropism. To monitor dynamic ion signaling, imaging approaches combining fluorescent ion sensors and confocal fluorescence microscopy are employed, which allow the visualization of pH and Ca(2+) changes at the level of entire tissues, while also providing high spatiotemporal resolution. Here, I describe procedures to prepare Arabidopsis seedlings for live cell imaging and to convert a microscope for vertical stage fluorescence microscopy. With this imaging system, ion signaling can be monitored during all phases of the root gravitropic response.
Improved Reconstruction of Radio Holographic Signal for Forward Scatter Radar Imaging
Hu, Cheng; Liu, Changjiang; Wang, Rui; Zeng, Tao
2016-01-01
Forward scatter radar (FSR), as a specially configured bistatic radar, is provided with the capabilities of target recognition and classification by the Shadow Inverse Synthetic Aperture Radar (SISAR) imaging technology. This paper mainly discusses the reconstruction of radio holographic signal (RHS), which is an important procedure in the signal processing of FSR SISAR imaging. Based on the analysis of signal characteristics, the method for RHS reconstruction is improved in two parts: the segmental Hilbert transformation and the reconstruction of mainlobe RHS. In addition, a quantitative analysis of the method’s applicability is presented by distinguishing between the near field and far field in forward scattering. Simulation results validated the method’s advantages in improving the accuracy of RHS reconstruction and imaging. PMID:27164114
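The core Hilbert-transform step of RHS reconstruction, recovering the amplitude envelope of a modulated signal via the analytic signal, can be sketched as follows. The test waveform is a stand-in for a received FSR signal, and the paper's segmental variant and mainlobe handling are not reproduced:

```python
import numpy as np
from scipy.signal import hilbert

# Amplitude-modulated test signal standing in for the received waveform.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 1.0 * t)   # slow amplitude modulation
carrier = np.cos(2 * np.pi * 50.0 * t)
signal = envelope * carrier

# The magnitude of the analytic signal recovers the amplitude envelope,
# the first step toward reconstructing the radio holographic signal.
analytic = hilbert(signal)
recovered = np.abs(analytic)
```

Away from the record edges the recovered envelope matches the true modulation closely; the segmental transformation in the paper addresses exactly the regions where a single global transform breaks down.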
Childhood Ependymoma Treatment
… a neuro exam or a neurologic exam. MRI (magnetic resonance imaging) with gadolinium: a procedure that uses a magnet … the picture. This procedure is also called nuclear magnetic resonance imaging (NMRI). Lumbar puncture: a procedure used to collect …
Reducing uncertainty on satellite image classification through spatiotemporal reasoning
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Nikolakaki, Natassa; Psillakis, Periklis; Miliaresis, George; Xanthakis, Michail
2014-05-01
The natural habitat constantly endures both inherent natural and human-induced influences. Remote sensing has been providing monitoring-oriented solutions for the natural Earth surface by offering a series of tools and methodologies that contribute to prudent environmental management. Processing and analysis of multi-temporal satellite images for observing land changes often include classification and change-detection techniques. These error-prone procedures are influenced mainly by the distinctive characteristics of the study areas, the limitations of the remote sensing systems, and the image analysis processes. The present study takes advantage of the temporal continuity of multi-temporal classified images to reduce classification uncertainty, based on reasoning rules. More specifically, pixel groups that temporally oscillate between classes are liable to misclassification or indicate problematic areas, whereas constant pixel-group growth indicates a pressure-prone area. Computational tools are developed to disclose the alterations in land use dynamics and offer a spatial reference for the pressures that land use classes endure and impose on each other. Moreover, by revealing areas that are susceptible to misclassification, we propose specific target-site selection for training during supervised classification. The underlying objective is to contribute to the understanding and analysis of anthropogenic and environmental factors that influence land use changes. The developed algorithms have been tested on a Landsat satellite image time series depicting the National Park of Ainos in Kefallinia, Greece, where the unique Abies cephalonica grows. Along with the minor changes and pressures indicated in the test area due to harvesting and other human interventions, the developed algorithms successfully captured fire incidents that have been historically confirmed. Overall, the results show that the suggested procedures can contribute to reducing classification uncertainty and support existing knowledge regarding the pressures among land-use changes.
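The oscillation-based reasoning rule can be sketched by counting class transitions per pixel across the classified time series; the threshold and toy stack below are illustrative choices, not the study's calibrated rules:

```python
import numpy as np

def transition_count(class_stack):
    """Count class changes per pixel across a multi-temporal classified stack.

    Pixels that oscillate between classes (high counts) are flagged as
    misclassification candidates; steady growth shows few transitions.
    """
    changes = class_stack[1:] != class_stack[:-1]
    return changes.sum(axis=0)

# Toy 5-date, 3x3 classification: one oscillating pixel, one plausible change.
stack = np.zeros((5, 3, 3), dtype=int)
stack[:, 1, 1] = [0, 1, 0, 1, 0]   # flips every date -> suspect
stack[:, 0, 0] = [0, 0, 1, 1, 1]   # single transition -> plausible change
counts = transition_count(stack)
suspect = counts >= 3              # threshold is an illustrative choice
```

Flagged pixels can then serve both as misclassification candidates and as proposed training sites for the next supervised classification pass.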
Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)
NASA Astrophysics Data System (ADS)
Ferayorni, Andrew; Beard, Andrew; Cole, Wes; Gregory, Scott; Wöeger, Friedrich
2016-08-01
The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first-light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early, thus saving cost during integration, test, and commissioning on the summit. The VBI team recently completed a bottom-up end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom-up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First, the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second, the real-time motion controls are used to drive the mechanisms to verify that speed and timing synchronization requirements are met. Next, the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the instrument engineering graphical user interface (GUI). In this paper we present the VBI bottom-up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. We will also discuss the benefits realized through completion of this testing and share lessons learned from the bottom-up testing process.
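A level-1 mechanism check of the kind described (positioning accuracy and repeatability against a PSD reading) can be sketched as a simple pass/fail routine; the tolerances and readings are hypothetical, not DKIST requirements:

```python
import statistics

def check_repeatability(readings, target, accuracy_tol, repeat_tol):
    """Pass/fail check of mechanism positioning, as in a level-1 bench test.

    readings: PSD positions (mm) from repeated moves to the same target.
    Accuracy compares the mean to the commanded target; repeatability
    bounds the spread of the readings.
    """
    accuracy_ok = abs(statistics.mean(readings) - target) <= accuracy_tol
    repeat_ok = statistics.pstdev(readings) <= repeat_tol
    return accuracy_ok and repeat_ok

# Hypothetical bench data: one mechanism within tolerance, one not.
ok = check_repeatability([10.002, 9.998, 10.001, 10.000], 10.0, 0.01, 0.01)
bad = check_repeatability([10.2, 9.8, 10.1, 9.9], 10.0, 0.01, 0.01)
```

Automating such checks is what lets the "test early and test often" philosophy run the same procedures repeatedly through construction.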
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
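One of the corrections the paper devises procedures for, removing fixed spatial nonuniformity such as vignetting, is conventionally done by flat-field correction. The sketch below uses a synthetic vignetted sensor and standard dark/flat frames as illustrative assumptions, not the authors' calibration protocol:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Standard flat-field correction to remove vignetting and uneven gain."""
    gain = flat - dark
    return (raw - dark) * gain.mean() / np.maximum(gain, 1e-9)

# Toy sensor: uniform scene seen through a radially darkening (vignetted) lens.
yy, xx = np.mgrid[0:64, 0:64]
vignette = 1.0 - 0.4 * ((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 32 ** 2)
dark = np.full((64, 64), 2.0)          # dark frame (sensor offset)
scene = np.full((64, 64), 100.0)       # uniform target
raw = scene * vignette + dark
flat = 100.0 * vignette + dark         # image of a uniform white target
corrected = flat_field_correct(raw, dark, flat)
```

After correction the uniform scene is rendered uniform again, which is the prerequisite for quantitative comparisons of brightness across the field of view.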
NASA Astrophysics Data System (ADS)
Cilip, Christopher M.; Allaf, Mohamad E.; Fried, Nathaniel M.
2012-02-01
A noninvasive approach to vasectomy may eliminate male fear of complications related to surgery and increase its acceptance. Noninvasive laser thermal occlusion of the canine vas deferens has recently been reported. In this study, optical coherence tomography (OCT) and high-frequency ultrasound (HFUS) are compared for monitoring laser thermal coagulation of the vas in an acute canine model. Bilateral noninvasive laser coagulation of the vas was performed in 6 dogs (n=12 vasa) using an ytterbium fiber laser at a wavelength of 1075 nm, an incident power of 9.0 W, a pulse duration of 500 ms, a pulse rate of 1 Hz, and a 3-mm-diameter spot. Cryogen spray cooling was used to prevent skin burns during the procedure. An OCT system with an endoscopic probe and an HFUS system with a 20-MHz transducer were used to image the vas immediately before and after the procedure. Vasa were then excised and processed for gross and histologic analysis for comparison with the OCT and HFUS images. OCT provided high-resolution, superficial imaging of the compressed vas within the vas ring clamp, while HFUS provided deeper imaging of the vas held manually in the scrotal fold. Both OCT and HFUS are promising imaging modalities for real-time confirmation of vas occlusion during noninvasive laser vasectomy.